Publications of Interest
The Publications of Interest section contains bibliographical citations, abstracts (when available), and links for specific topics and research problems of interest to the Science of Security community.
How recent are these publications?
These bibliographies cover recent scholarly research that has been presented or published within the past year. Some entries update work presented in previous years; others address new topics.
How are topics selected?
The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness to current researchers.
How can I submit or suggest a publication?
Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.
Submissions and suggestions may be sent to: news@scienceofsecurity.net
(ID#:15-9551)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests for removal of links or modifications to specific citations via email to news@scienceofsecurity.net. Please include the ID# of the specific citation in your correspondence.
Acoustic Fingerprints 2015
Acoustic fingerprints can be used to identify an audio sample or quickly locate similar items in an audio database. As a security tool, fingerprints offer a modality of biometric identification of a user. Current research is exploring various aspects and applications, including the use of these fingerprints for mobile device security, antiforensics, use of image processing techniques, and client side embedding. The research work cited here was presented in 2015.
Tsai, T.J.; Friedland, G.; Anguera, X., "An Information-Theoretic Metric of Fingerprint Effectiveness," in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 340-344, 19-24 April 2015. doi: 10.1109/ICASSP.2015.7177987
Abstract: Audio fingerprinting refers to the process of extracting a robust, compact representation of audio which can be used to uniquely identify an audio segment. Works in the audio fingerprinting literature generally report results using system-level metrics. Because these systems are usually very complex, the overall system-level performance depends on many different factors. So, while these metrics are useful in understanding how well the entire system performs, they are not very useful in knowing how good or bad the fingerprint design is. In this work, we propose a metric of fingerprint effectiveness that decouples the effect of other system components such as the search mechanism or the nature of the database. The metric is simple, easy to compute, and has a clear interpretation from an information theory perspective. We demonstrate that the metric correlates directly with system-level metrics in assessing fingerprint effectiveness, and we show how it can be used in practice to diagnose the weaknesses in a fingerprint design.
Keywords: audio coding; audio signal processing; copy protection; signal representation; audio fingerprinting literature; audio representation extraction; audio segment; fingerprint effectiveness; information theoretic metric; search mechanism; system level metrics; system level performance; Accuracy; Databases; Entropy; Information rates; Noise measurement; Signal to noise ratio; audio fingerprint; copy detection (ID#: 15-8805)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7177987&isnumber=7177909
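The abstract's information-theoretic framing can be illustrated with a toy calculation: if matching fingerprints differ by a bit-flip rate p, the information retained per fingerprint bit is bounded by the capacity of a binary symmetric channel, 1 - H(p). The sketch below is an illustration of that framing only, not the paper's actual metric, and the bit values are invented:

```python
import math

def bsc_capacity_per_bit(p):
    """Information retained per bit across a binary symmetric channel
    with flip probability p (in bits per fingerprint bit)."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h

def fingerprint_effectiveness(clean_bits, degraded_bits):
    """Bit-flip rate between matching fingerprints, and the resulting
    information retained per fingerprint bit."""
    flips = sum(a != b for a, b in zip(clean_bits, degraded_bits))
    p = flips / len(clean_bits)
    return p, bsc_capacity_per_bit(p)

clean    = [1, 0, 1, 1, 0, 0, 1, 0]
degraded = [1, 0, 0, 1, 0, 0, 1, 1]   # two of eight bits flipped
p, eff = fingerprint_effectiveness(clean, degraded)
print(p)               # 0.25
print(round(eff, 3))   # 0.189
```

A higher per-bit information rate indicates a more effective fingerprint design, independent of the search mechanism or database used around it.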
Szlosarczyk, Sebastian; Schulte, Andrea, "Voice Encrypted Recognition Authentication - VERA," in Next Generation Mobile Applications, Services and Technologies, 2015 9th International Conference on, pp. 270-274, 9-11 Sept. 2015. doi: 10.1109/NGMAST.2015.74
Abstract: We propose VERA - an authentication scheme where sensitive data on mobile phones can be secured, or services locked, by the user's voice. Our algorithm makes use of acoustic fingerprints to identify the personalized voice. The security of the algorithm depends on the discrete logarithm problem in Z_N, where N is a safe prime. Further, we evaluate two practical examples on Android devices where our scheme is used: first, the encryption of any data(set); second, locking a mobile phone. Voice is the basis for both fields.
Keywords: Acoustics; Authentication; Encryption; Mobile handsets; Protocols; Android; acoustic fingerprint; authentication; biometrics; cryptography; encryption; voice (ID#: 15-8806)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7373254&isnumber=7373199
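As a rough illustration of the abstract's construction, a stable voice template can be mapped to a group element g^x mod N for a safe prime N; recovering the exponent requires solving a discrete logarithm. The sketch below uses a deliberately toy-sized safe prime, an assumed generator, and a byte string standing in for the acoustic fingerprint; it is not the VERA protocol itself:

```python
import hashlib

# Toy-sized safe prime N = 2q + 1 (q = 509 is prime) and an assumed
# generator; a real deployment would use a large standardized group.
N = 1019
G = 2

def voice_locked_key(fingerprint_bytes: bytes) -> int:
    """Map an acoustic-fingerprint template to the group element
    G^x mod N; recovering x means solving a discrete logarithm."""
    x = int.from_bytes(hashlib.sha256(fingerprint_bytes).digest(), "big")
    return pow(G, x, N)

enrolled = voice_locked_key(b"stable acoustic fingerprint of the user")
attempt  = voice_locked_key(b"stable acoustic fingerprint of the user")
print(enrolled == attempt)  # True: the same voice template unlocks again
```

The practical difficulty such schemes face is upstream of this step: the acoustic fingerprint must be quantized stably enough that the same speaker reproduces the same template.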
Casagranda, P.; Sapino, M.L.; Candan, K.S., "Audio Assisted Group Detection Using Smartphones," in Multimedia & Expo Workshops (ICMEW), 2015 IEEE International Conference on, pp. 1-6, June 29 2015-July 3 2015. doi: 10.1109/ICMEW.2015.7169764
Abstract: In this paper we introduce a novel technique to discover groups of users sharing the same environment: a room, an office, a car. Using a smartphone device, we propose a method based on the joint usage of GPS and acoustic fingerprints, greatly improving the precision of GPS-only group detection. To reach this objective, we use a novel variation of an existing audio fingerprinting algorithm with good noise tolerance, assessing it under several conditions. The method is shown to be especially effective for groups of listeners of audio and audiovisual content. We finally propose an application of the method to deliver content recommendations for a specific use case: hybrid content radio, an adaptive radio service discussed in the European Broadcasting Union, which enriches traditional broadcast linear radio with personalized and context-aware audio content.
Keywords: Global Positioning System; audio signal processing; mobile computing; mobility management (mobile radio); object detection; radio broadcasting; smart phones; European Broadcasting Union; GPS; acoustic fingerprints; audio assisted group detection; audio fingerprinting algorithm; audio visual content; broadcast linear radio; content recommendations; context-aware audio content; hybrid content radio; noise tolerance; smartphone device; Global Positioning System; Audience discovery; audio fingerprinting; contextual recommendation; group detection; group recommendation; hybrid content radio (ID#: 15-8807)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7169764&isnumber=7169738
Sankupellay, M.; Towsey, M.; Truskinger, A.; Roe, P., "Visual Fingerprints of the Acoustic Environment: The Use of Acoustic Indices to Characterise Natural Habitats," in Big Data Visual Analytics (BDVA), 2015, pp. 1-8, 22-25 Sept. 2015. doi: 10.1109/BDVA.2015.7314306
Abstract: Acoustic recordings play an increasingly important role in monitoring terrestrial environments. However, due to rapid advances in technology, ecologists are accumulating more audio than they can listen to. Our approach to this big-data challenge is to visualize the content of long-duration audio-recordings by calculating acoustic indices. These are statistics which describe the temporal-spectral distribution of acoustic energy and reflect content of ecological interest. We combine spectral indices to produce false-color spectrogram images. These not only reveal acoustic content but also facilitate navigation. An additional analytic challenge is to find appropriate descriptors to summarize the content of 24-hour recordings, so that it becomes possible to monitor long-term changes in the acoustic environment at a single location and to compare the acoustic environments of different locations. We describe a 24-hour 'acoustic-fingerprint' which shows some preliminary promise.
Keywords: Big Data; acoustic signal processing; data visualisation; national security; Big-data; acoustic data visualisation; acoustic energy; acoustic environment; acoustic recording; false-color spectrogram image; temporal-spectral distribution; visual fingerprint; Acoustics; Digital audio players; Entropy; Indexes; Meteorology; Monitoring; Spectrogram (ID#: 15-8808)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7314306&isnumber=7314277
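A single acoustic index of the kind the abstract describes can be sketched in a few lines; normalized spectral entropy, for instance, separates tonal frames (birdsong, insects) from noise-like frames (wind, rain). The paper combines several such indices into false-color spectrograms; the spectra below are invented for illustration:

```python
import math

def spectral_entropy(power_spectrum):
    """Normalized spectral entropy of one analysis frame:
    near 0 for a pure tone, 1.0 for spectrally flat noise."""
    total = sum(power_spectrum)
    probs = [p / total for p in power_spectrum if p > 0]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(len(power_spectrum))

# Invented frames: energy concentrated in one bin vs. spread evenly.
tonal = [0.001] * 7 + [10.0]
noise = [1.0] * 8
print(spectral_entropy(noise))         # 1.0
print(spectral_entropy(tonal) < 0.1)   # True
```

Mapping three such indices to the red, green, and blue channels of an image, one column per minute, yields the kind of 24-hour false-color "acoustic fingerprint" the paper proposes.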
Lu, Yao; Chang, Ye; Tang, Ning; Qu, Hemi; Pang, Wei; Zhang, Daihua; Zhang, Hao; Duan, Xuexin, "Concentration-Independent Fingerprint Library of Volatile Organic Compounds Based on Gas-Surface Interactions by Self-Assembled Monolayer Functionalized Film Bulk Acoustic Resonator Arrays," in SENSORS, 2015 IEEE, pp. 1-4, 1-4 Nov. 2015. doi: 10.1109/ICSENS.2015.7370506
Abstract: This paper reported a novel e-nose type gas sensor based on film bulk acoustic resonator (FBAR) array in which each sensor is functionalized individually by different organic monolayers. Such hybrid sensors have been successfully demonstrated for VOCs selective detections. Two concentration-independent fingerprints (adsorption energy constant and desorption rate) were obtained from the adsorption isotherms (Ka, K1, K2) and kinetic analysis (koff) with four different amphiphilic self-assembled monolayers (SAMs) coated on high frequency FBAR transducers (4.44 GHz). The multi-parameter fingerprints regardless of concentration effects compose a recognition library and improve the selectivity of VOCs.
Keywords: Adsorption; Film bulk acoustic resonators; Fingerprint recognition; Kinetic theory; Out of order; Silicon; Transducers; e-nose; Adsorption analysis; Concentration-independent; FBAR; SAMs; VOCs (ID#: 15-8809)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7370506&isnumber=7370096
Ondel, L.; Anguera, X.; Luque, J., "MASK+: Data-Driven Regions Selection For Acoustic Fingerprinting," in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 335-339, 19-24 April 2015. doi: 10.1109/ICASSP.2015.7177986
Abstract: Acoustic fingerprinting is the process of deterministically obtaining a compact representation of an audio segment, used to compare multiple audio files or to efficiently search for a file within a big database. Recently, we proposed a novel fingerprint named MASK (Masked Audio Spectral Keypoints) that encodes the relationship between pairs of spectral regions around a single spectral energy peak into a binary representation. In the original proposal the configuration of location and size of the region pairs was determined manually to optimally encode how energy flows around the spectral peak. Such manual selection has always been considered a weakness in the process as it might not be adapted to the actual data being represented. In this paper we address this problem by proposing an unsupervised, data-driven method based on mutual information theory to automatically define an optimal MASK fingerprint structure. Audio retrieval experiments optimizing for data distorted with additive Gaussian white noise show that the proposed method is much more robust than the original MASK and a well-known acoustic fingerprint.
Keywords: AWGN; audio coding; audio databases; information retrieval; optimisation; signal representation; MASK+; Masked Audio Spectral Keypoints; acoustic fingerprinting; additive Gaussian white noise; audio files; audio retrieval experiments; audio segment; binary representation; compact representation; data-driven region selection; mutual information theory; optimal MASK fingerprint structure; spectral energy; spectral regions; Acoustics; Distortion; Mutual information; Noise measurement; Robustness; Signal to noise ratio; Audio fingerprinting; content recognition (ID#: 15-8810)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7177986&isnumber=7177909
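The data-driven selection step can be illustrated with the mutual-information criterion: a region pair whose bit survives distortion shares high mutual information with its clean version, while a pair destroyed by noise shares none, so the former should be kept in the fingerprint. A toy sketch (not the MASK feature itself; the bit patterns are constructed for illustration):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (bits) between two aligned binary sequences."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

clean  = [1, 0, 1, 1, 0, 0, 1, 0] * 4
stable = clean[:]
stable[0] ^= 1                                       # survives: one flip in 32
noisy  = [b ^ (i % 2) for i, b in enumerate(clean)]  # destroyed: half flipped
print(mutual_information(clean, stable) > mutual_information(clean, noisy))  # True
```

Ranking candidate region-pair configurations by this score, estimated on real distorted audio, is what replaces the manual configuration of the original MASK.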
Fung, S.; Yipeng Lu; Hao-Yen Tang; Tsai, J.M.; Daneman, M.; Boser, B.E.; Horsley, D.A., "Theory and Experimental Analysis of Scratch Resistant Coating for Ultrasonic Fingerprint Sensors," in Ultrasonics Symposium (IUS), 2015 IEEE International, pp. 1-4, 21-24 Oct. 2015. doi: 10.1109/ULTSYM.2015.0150
Abstract: Ultrasonic imaging for fingerprint applications offers better tolerance of external conditions and high spatial resolution compared to typical optical and solid state sensors respectively. Similar to existing fingerprint sensors, the performance of ultrasonic imagers is sensitive to physical damage. Therefore it is important to understand the theory behind transmission and reflection effects of protective coatings for ultrasonic fingerprint sensors. In this work, we present the analytical theory behind effects of transmitting ultrasound through a thin film of scratch resistant material. Experimental results indicate transmission through 1 μm of Al2O3 is indistinguishable from the non-coated cover substrate. Furthermore, pulse echo measurements of 5 μm thick Al2O3 show ultrasound pressure reflection increases in accordance with both theory and finite element simulation. Consequently, feasibility is demonstrated of ultrasonic transmission through a protective layer with greatly mismatched acoustic impedance when sufficiently thin. This provides a guide for designing sensor protection when using materials of vastly different acoustic impedance values.
Keywords: acoustic impedance; fingerprint identification; finite element analysis; ultrasonic transducers; ultrasonic transmission; acoustic impedance; finite element simulation; pulse echo measurements; scratch resistant coating; ultrasonic fingerprint sensors; ultrasonic transmission; ultrasound pressure reflection; Acoustic measurements; Acoustics; Aluminum oxide; Coatings; Sensors; Substrates; Ultrasonic imaging; piezoelectric micromachined ultrasound transducers; ultrasonic transducers; ultrasonic transmission (ID#: 15-8811)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7329365&isnumber=7329057
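The acoustic theory behind the coating result can be sketched numerically: the input impedance of a layer backed by a load determines the reflection seen by the sensor, and for a layer much thinner than a wavelength that reflection approaches the bare-interface value. The impedance and frequency values below are illustrative assumptions, not the paper's measured parameters:

```python
import cmath
import math

def input_impedance(z_layer, z_load, thickness, wavelength):
    """Acoustic input impedance looking into a coating layer backed by a load."""
    k = 2 * math.pi / wavelength
    t = cmath.tan(k * thickness)
    return z_layer * (z_load + 1j * z_layer * t) / (z_layer + 1j * z_load * t)

def reflection(z_source, z_in):
    """Magnitude of the pressure reflection coefficient at the interface."""
    return abs((z_in - z_source) / (z_in + z_source))

# Illustrative values only (impedances in MRayl): a stiff, high-impedance
# coating between the sensor stack and skin, interrogated near 20 MHz.
z_sensor, z_coat, z_skin = 19.8, 44.0, 1.6
wavelength = 10800.0 / 20e6      # ~540 um acoustic wavelength in the coating

bare  = reflection(z_sensor, z_skin + 0j)
thin  = reflection(z_sensor, input_impedance(z_coat, z_skin, 1e-6, wavelength))
thick = reflection(z_sensor, input_impedance(z_coat, z_skin, 5e-6, wavelength))
print(abs(thin - bare) < 1e-3)   # True: a 1 um layer is nearly invisible
print(thin < thick)              # True: reflection grows with thickness
```

This reproduces the qualitative finding: a sufficiently thin protective layer transmits almost as well as no coating, even with a large acoustic-impedance mismatch.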
Hoople, J.; Kuo, J.; Abdel-moneum, M.; Lal, A., "Chipscale GHZ Ultrasonic Channels for Fingerprint Scanning," in Ultrasonics Symposium (IUS), 2015 IEEE International, pp. 1-4, 21-24 Oct. 2015. doi: 10.1109/ULTSYM.2015.0027
Abstract: In this paper we present 1-3 GHz frequency ultrasonic interrogation of surface ultrasonic impedances. The chipscale and CMOS integration of GHz transducers can enable surface identification imaging for many applications. We use aluminum nitride piezoelectric thin films driven at maximum amplitudes of 4 Vpp to launch and measure pulse packets. In this paper we first use the contrast in ultrasonic impedance between air and skin to create an image of a fingerprint. As a second application we directly measure the reflection coefficient for different liquids to demonstrate the ability to measure the ultrasonic impedance and distinguish between three different liquids. Using a rubber phantom, the image of a portion of a fingerprint is captured by measuring changes in signal levels at the resonance frequency of the piezoelectric transducers (2.7 GHz). Reflected amplitude waves from air and skin differ by factors of 1.8-2. The measurements for three different liquids (water, isopropyl alcohol, and acetone) show that the three liquids have sufficiently different acoustic impedances to be able to identify them.
Keywords: CMOS image sensors; aluminium compounds; fingerprint identification; phantoms; piezoelectric thin films; piezoelectric transducers; surface impedance; ultrasonic transducers; AlN; CMOS integration; aluminum nitride piezoelectric thin films; chipscale GHz ultrasonic channels; fingerprint scanning; frequency 1 GHz to 3 GHz; piezoelectric transducers; rubber phantom; surface ultrasonic impedances; Acoustics; Aluminum nitride; CMOS integrated circuits; Fingerprint recognition; Impedance; Reflection coefficient; Transducers; AlN; Fingerprint; MEMS (ID#: 15-8812)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7329436&isnumber=7329057
Hon, Tsz-Kin; Wang, Lin; Reiss, Joshua D.; Cavallaro, Andrea, "Fine Landmark-Based Synchronization of Ad-Hoc Microphone Arrays," in Signal Processing Conference (EUSIPCO), 2015 23rd European, pp. 1331-1335, Aug. 31 2015-Sept. 4 2015. doi: 10.1109/EUSIPCO.2015.7362600
Abstract: We use audio fingerprinting to solve the synchronization problem between multiple recordings from an ad-hoc array consisting of randomly placed wireless microphones or handheld smartphones. Synchronization is crucial when employing conventional microphone array techniques such as beam-forming and source localization. We propose a fine audio landmark fingerprinting method that detects the time difference of arrivals (TDOAs) of multiple sources in the acoustic environment. By estimating the maximum and minimum TDOAs, the proposed method can accurately calculate the unknown time offset between a pair of microphone recordings. Experimental results demonstrate that the proposed method significantly improves the synchronization accuracy of conventional audio fingerprinting methods and achieves comparable performance to the generalized cross-correlation method.
Keywords: Array signal processing; Feature extraction; Microphone arrays; Signal processing algorithms; Synchronization; Time-frequency analysis; Synchronization; audio fingerprinting; microphone array (ID#: 15-8813)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7362600&isnumber=7362087
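The core alignment step, estimating the unknown time offset between two recordings of the same scene, can be sketched with a plain cross-correlation search over candidate lags. The paper's landmark-fingerprint method is more elaborate (and handles multiple sources via TDOA extremes); the signals below are invented:

```python
def best_offset(ref, delayed, max_lag):
    """Offset (in samples) that best aligns `delayed` with `ref`,
    chosen by maximizing cross-correlation over candidate lags."""
    def score(lag):
        return sum(ref[i] * delayed[i + lag]
                   for i in range(len(ref))
                   if 0 <= i + lag < len(delayed))
    return max(range(-max_lag, max_lag + 1), key=score)

# Invented signals: the second microphone starts recording 3 samples late.
mic_a = [0, 1, 3, -2, 4, 1, 0, -1, 2, 0]
mic_b = [0, 0, 0] + mic_a
print(best_offset(mic_a, mic_b, max_lag=5))  # 3
```

With the offset known, conventional array techniques such as beamforming and source localization can then be applied to the ad-hoc recordings.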
Kohout, J.; Pevny, T., "Unsupervised Detection of Malware in Persistent Web Traffic," in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 1757-1761, 19-24 April 2015. doi: 10.1109/ICASSP.2015.7178272
Abstract: Persistent network communication can be found in many instances of malware. In this paper, we analyse the possibility of leveraging low variability of persistent malware communication for its detection. We propose a new method for capturing statistical fingerprints of connections and employ outlier detection to identify the malicious ones. Emphasis is put on using minimal information possible to make our method very lightweight and easy to deploy. Anomaly detection is commonly used in network security, yet to our best knowledge, there are not many works focusing on the persistent communication itself, without making further assumptions about its purpose.
Keywords: Internet; computer network security; invasive software telecommunication traffic; anomaly detection; network security; outlier detection; persistent malware communication; persistent network communication; persistent web traffic; statistical fingerprints; unsupervised detection; Companies; Detection algorithms; Detectors; Histograms; Joints; Malware; Servers; malware; outlier detection; persistent communication (ID#: 15-8814)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178272&isnumber=7177909
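The idea of flagging low-variability persistent connections via outlier detection can be sketched with a robust modified z-score over a single per-connection statistic. The paper's actual connection fingerprint is richer; the feature values and host names below are invented:

```python
import statistics

def outliers(feature_by_conn, threshold=3.5):
    """Flag connections whose feature deviates strongly from the rest,
    using a robust modified z-score (median absolute deviation)."""
    values = list(feature_by_conn.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9
    return [conn for conn, v in feature_by_conn.items()
            if 0.6745 * abs(v - med) / mad > threshold]

# Variance of inter-request intervals per connection (illustrative):
# beaconing malware tends to be far more regular than human browsing.
interval_variance = {"host-a": 9.1, "host-b": 11.4, "host-c": 8.7,
                     "host-d": 10.2, "c2-server": 0.02}
print(outliers(interval_variance))  # ['c2-server']
```

Because the detector is unsupervised, it needs no prior knowledge of malware families, only enough benign traffic for the majority behavior to dominate.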
Tang, H.; Lu, Y.; Fung, S.; Tsai, J.M.; Daneman, M.; Horsley, D.A.; Boser, B.E., "Pulse-Echo Ultrasonic Fingerprint Sensor on a Chip," in Solid-State Sensors, Actuators and Microsystems (TRANSDUCERS), 2015 Transducers - 2015 18th International Conference on, pp. 674-677, 21-25 June 2015. doi: 10.1109/TRANSDUCERS.2015.7181013
Abstract: A fully-integrated ultrasonic fingerprint sensor based on pulse-echo imaging is presented. The device consists of a 24×8 Piezoelectric Micromachined Ultrasonic Transducer (PMUT) array bonded at the wafer level to custom readout electronics fabricated in a 180-nm CMOS process. The proposed top-driving bottom-sensing technique minimizes signal attenuation due to the large parasitics associated with high-voltage transistors. With 12V driving signal strength, the sensor takes 24μs to image a 2.3mm by 0.7mm section of a fingerprint.
Keywords: CMOS image sensors; integrated circuit bonding; micromachining; microsensors; piezoelectric transducers; pulse measurement; readout electronics; sensor arrays; ultrasonic transducer arrays; ultrasonic variables measurement; CMOS process; PMUT array; high-voltage transistor; piezoelectric micromachined ultrasonic transducer array; pulse-echo imaging; pulse-echo ultrasonic fingerprint sensor on a chip; readout electronics; signal attenuation; size 180 mum; time 24 mus; top-driving bottom-sensing technique; voltage 12 V; wafer level bonding; Acoustics; Aluminum nitride; Arrays; Electrodes; Fingerprint recognition; Micromechanical devices; Transducers; Fingerprint sensor; MEMS-CMOS integration; PMUT; Ultrasound transducer (ID#: 15-8815)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7181013&isnumber=7180834
Qi Yan; Rui Yang; Jiwu Huang, "Copy-Move Detection of Audio Recording with Pitch Similarity," in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 1782-1786, 19-24 April 2015. doi: 10.1109/ICASSP.2015.7178277
Abstract: The widespread availability of audio editing software has made it very easy to create forgeries without perceptual trace. Copy-move is one of the most popular audio forgeries. It is very important to identify audio recordings with duplicated segments. However, copy-move detection in digital audio with sample-by-sample comparison is invalid due to post-processing after forgeries. In this paper we present a method based on pitch similarity to detect copy-move forgeries. We use a robust pitch tracking method to extract the pitch of every syllable and calculate the similarities of these pitch sequences. Then we can use the similarities to detect copy-move forgeries of digital audio recordings. Experimental results show that our method is feasible and efficient.
Keywords: audio recording; counterfeit goods; audio editing software; audio forgeries; copy-move detection; digital audio recording; pitch sequences; pitch tracking; post-processing; Audio databases; Audio recording; Fingerprint recognition; Forgery; Image segmentation; Robustness; Security; Audio forensics; Audio forgeries; Copy-Move detection; Pitch similarity (ID#: 15-8816)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178277&isnumber=7177909
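The similarity measure at the heart of the method can be sketched as a normalized cross-correlation of syllable pitch contours: a duplicated segment correlates near 1 even after post-processing perturbs individual samples, while unrelated syllables do not. The pitch tracks below are invented for illustration:

```python
import math

def pitch_similarity(seq_a, seq_b):
    """Normalized cross-correlation of two equal-length pitch contours."""
    mean_a = sum(seq_a) / len(seq_a)
    mean_b = sum(seq_b) / len(seq_b)
    da = [x - mean_a for x in seq_a]
    db = [x - mean_b for x in seq_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den

original = [180, 185, 192, 200, 196, 188]   # pitch track of one syllable (Hz)
copied   = [180, 185, 192, 200, 197, 188]   # same syllable, lightly post-processed
other    = [140, 150, 170, 150, 140, 145]   # an unrelated syllable
print(pitch_similarity(original, copied) > 0.99)   # True
print(pitch_similarity(original, other) < 0.5)     # True
```

This is why the approach survives post-processing that defeats sample-by-sample comparison: the pitch contour is a far more stable feature than the raw waveform.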
Nagano, H.; Mukai, R.; Kurozumi, T.; Kashino, K., "A Fast Audio Search Method Based on Skipping Irrelevant Signals By Similarity Upper-Bound Calculation," in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 2324-2328, 19-24 April 2015. doi: 10.1109/ICASSP.2015.7178386
Abstract: In this paper, we describe an approach to accelerate fingerprint techniques by skipping the search for irrelevant sections of the signal and demonstrate its application to the divide and locate (DAL) audio fingerprint method. The search result for the applied method, DAL3, is the same as that of DAL mathematically. Experimental results show that DAL3 can reduce the computational cost of DAL to approximately 25% for the task of music signal retrieval.
Keywords: acoustic signal processing; audio signal processing; fingerprint identification; musical acoustics; DAL audio fingerprint method; divide-and-locate audio fingerprint method; fast audio search method; finger print technology; music signal retrieval; similarity upper-bound calculation; skipping irrelevant signals; Acceleration; Accuracy; Computational efficiency; Databases; Fingerprint recognition; Histograms; Multiple signal classification; Audio fingerprint; audio search; information retrieval (ID#: 15-8817)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178386&isnumber=7177909
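The skipping idea can be illustrated on a simplified fingerprint search: while scanning windows of a binary stream, an upper bound on the achievable similarity (matches so far plus all remaining bits) lets the scan abandon a window early without changing the result. This is a toy analogue of upper-bound pruning, not the DAL3 algorithm itself:

```python
def search_with_skipping(stream, query, threshold):
    """Scan `stream` for windows whose Hamming similarity to `query`
    reaches `threshold`, abandoning a window as soon as even perfect
    agreement on its remaining bits could not reach the threshold."""
    n, hits, comparisons = len(query), [], 0
    for start in range(len(stream) - n + 1):
        matches = 0
        for i in range(n):
            comparisons += 1
            matches += stream[start + i] == query[i]
            if matches + (n - 1 - i) < threshold:
                break  # upper bound too low: skip the rest of this window
        else:
            if matches >= threshold:
                hits.append(start)
    return hits, comparisons

stream = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
query  = [0, 0, 1, 0, 1]
hits, comparisons = search_with_skipping(stream, query, threshold=4)
print(hits)                  # [4, 7]
print(comparisons < 8 * 5)   # True: fewer comparisons than the full scan
```

Because the bound is exact, the pruned search returns the same hits as the exhaustive one, mirroring the paper's claim that DAL3 is mathematically equivalent to DAL.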
Xu, G.; Meng, Z.; Lin, J.; Deng, C.; Carson, P.; Fowlkes, J.; Tomlins, S.; Siddiqui, J.; Davis, M.; Kunju, L.; Wang, X., "In Vivo Biopsy by Photoacoustic/US Based Tissue Characterization," in Ultrasonics Symposium (IUS), 2015 IEEE International, pp. 1-4, 21-24 Oct. 2015. doi: 10.1109/ULTSYM.2015.0216
Abstract: Our recent research has demonstrated that the frequency domain power distribution of radio-frequency (RF) photoacoustic (PA) signals contains the microscopic information of the optically absorbing materials in the sample. In this research, we sought methods of systematically analyzing the PA measurement from biological tissues and the feasibility of evaluating tissue chemical and microstructural features for potential tissue characterization. By performing a PA scan over a broad spectrum covering the optical fingerprints of specific relevant chemical components, and then transforming the radio-frequency signals into the frequency domain, a 2D spectrogram, namely a physio-chemical spectrogram (PCS), can be generated. The PCS contains rich diagnostic information allowing quantification of not only contents but also histological microfeatures of various chemical components in tissue. Comprehensive analysis of the PCS, namely photoacoustic physio-chemical analysis (PAPCA), could reveal the histopathology information in tissue and holds the potential to achieve comprehensive and accurate tissue characterization.
Keywords: bio-optics; biological tissues; biomedical ultrasonics; photoacoustic effect; biological tissues; biopsy; chemical components; frequency domain power distribution; optically absorbing materials; photoacoustic physiochemical analysis; photoacousticUS based tissue characterization; physiochemical spectrogram; radiofrequency photoacoustic signals; Acoustics; Biomedical optical imaging; Chemicals; Fingerprint recognition; Lipidomics; Liver; Microscopy; fatty liver; multi-spectral; photoacoustic imaging; prostate cancer; spectral analysis; tissue characterization (ID#: 15-8818)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7329159&isnumber=7329057
Ruizhe Li; Chang-Tsun Li; Yu Guan, "A Compact Representation of Sensor Fingerprint for Camera Identification and Fingerprint Matching," in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 1777-1781, 19-24 April 2015. doi: 10.1109/ICASSP.2015.7178276
Abstract: Sensor Pattern Noise (SPN) has been proved as an effective fingerprint of imaging devices to link pictures to the cameras that acquired them. In practice, forensic investigators usually extract this camera fingerprint from large image block to improve the matching accuracy because large image blocks tend to contain more SPN information. As a result, camera fingerprints usually have a very high dimensionality. However, the high dimensionality of fingerprint will incur a costly computation in the matching phase, thus hindering many interesting applications which require an efficient real-time camera matching. To solve this problem, an effective feature extraction method based on PCA and LDA is proposed in this work to compress the dimensionality of camera fingerprint. Our experimental results show that the proposed feature extraction algorithm could greatly reduce the size of fingerprint and enhance the performance in term of Receiver Operating Characteristic (ROC) curve of several existing methods.
Keywords: data compression; feature extraction; fingerprint identification; image enhancement; image forensics; image matching; image representation; image sensors; principal component analysis; LDA; PCA; camera fingerprint dimensionality compression; camera fingerprint extraction; camera identification; compact sensor fingerprint representation; feature extraction method; fingerprint matching; fingerprint size reduction; forensic investigators; image blocks; imaging devices; matching accuracy improvement; receiver operating characteristic curve; sensor pattern noise; Cameras; Principal component analysis; Digital forensics; PCA denoising; Photo-response nonuniformity noise; Sensor pattern noise (ID#: 15-8819)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178276&isnumber=7177909
Horsley, David A.; Rozen, Ofer; Lu, Yipeng; Shelton, Stefon; Guedes, Andre; Przybyla, Richard; Tang, Hao-Yen; Boser, Bernhard E., "Piezoelectric Micromachined Ultrasonic Transducers for Human-Machine Interfaces and Biometric Sensing," in SENSORS, 2015 IEEE, pp. 1-4, 1-4 Nov. 2015. doi: 10.1109/ICSENS.2015.7370564
Abstract: Improvements in thin-film piezoelectric materials such as AlN and PZT enable piezoelectric micromachined transducers that are superior to existing capacitive transducers. This paper presents the basic design equations, equivalent circuit model, and fabrication processes for piezoelectric micromachined ultrasonic transducers (PMUTs) operating in fluid or air. Relative to conventional ultrasonic transducers, PMUTs have the advantages of small size, low cost, low power consumption, and compatibility with integrated circuit manufacturing methods. These advantages enable PMUTs to be used in new applications such as human-machine interfaces and ultrasonic fingerprint sensors.
Keywords: Acoustics; Aluminum nitride; Electrodes; III-V semiconductor materials; Impedance; Resonant frequency; Silicon; MEMS; PMUT; piezoelectric sensors; ultrasonic transducers (ID#: 15-8820)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7370564&isnumber=7370096
Valsesia, D.; Coluccia, G.; Bianchi, T.; Magli, E., "Scale-Robust Compressive Camera Fingerprint Matching with Random Projections," in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 1697-1701, 19-24 April 2015. doi: 10.1109/ICASSP.2015.7178260
Abstract: Recently, we demonstrated that random projections can provide an extremely compact representation of a camera fingerprint without significantly affecting the matching performance. In this paper, we propose a new construction that makes random projections of camera fingerprints scale-robust. The proposed method maps the compressed fingerprint of a rescaled image to the compressed fingerprint of the original image, rescaled by the same factor. In this way, fingerprints obtained from rescaled images can be directly matched in the compressed domain, which is much more efficient than existing scale-robust approaches. Experimental results on the publicly available Dresden database show that the proposed technique is robust to a wide range of scale transformations. Moreover, robustness can be further improved by providing reference scales in the database, with a small additional storage cost.
Keywords: data compression; fingerprint identification; image coding; image matching; image sensors; photoresponse nonuniformity; random projections; scale-robust compressive camera fingerprint matching; Cameras; Databases; Forensics; Image coding; Robustness; Sensors; PRNU; random projections (ID#: 15-8821)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178260&isnumber=7177909
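The compression step can be sketched directly: projecting long fingerprint vectors through a seeded random ±1 matrix preserves correlations well enough to separate same-camera from different-camera pairs in the compressed domain. The scale-robust construction in the paper adds more machinery on top; the data below are synthetic:

```python
import random

def random_projection(vec, dim, seed=0):
    """Compress a long fingerprint with a seeded +/-1 random matrix, so
    both sides of a comparison can project independently yet consistently."""
    rng = random.Random(seed)
    rows = [[rng.choice((-1, 1)) for _ in vec] for _ in range(dim)]
    return [sum(s * v for s, v in zip(row, vec)) for row in rows]

def ncc(a, b):
    """Normalized correlation between two vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den

rng = random.Random(42)
fingerprint = [rng.gauss(0, 1) for _ in range(2000)]        # camera PRNU estimate
same_cam    = [x + rng.gauss(0, 0.5) for x in fingerprint]  # noisy re-estimate
other_cam   = [rng.gauss(0, 1) for _ in range(2000)]        # different camera

dim = 128
same  = ncc(random_projection(fingerprint, dim), random_projection(same_cam, dim))
other = ncc(random_projection(fingerprint, dim), random_projection(other_cam, dim))
print(same > other)  # True: correlation survives the compression
```

Sharing only the seed, rather than the projection matrix, keeps the storage overhead of the compressed fingerprints minimal.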
Ling Zou; Qianhua He; Xiaohui Feng, "Cell Phone Verification from Speech Recordings using Sparse Representation," in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 1787-1791, 19-24 April 2015. doi: 10.1109/ICASSP.2015.7178278
Abstract: Source recording device recognition is an important emerging research field of digital media forensics. Most of the prior literature focuses on the recording device identification problem. In this study we propose a source cell phone verification scheme based on sparse representation. We employed Gaussian supervectors (GSVs) based on Mel-frequency cepstral coefficients (MFCCs) extracted from the speech recordings to characterize the intrinsic fingerprint of the cell phone. For the sparse representation, both an exemplar-based dictionary and a dictionary learned by the K-SVD algorithm were examined for this problem. Evaluation experiments were conducted on a corpus consisting of speech recordings made by 14 cell phones. The achieved equal error rate (EER) demonstrated the feasibility of the proposed scheme.
Keywords: Gaussian processes; audio recording; cepstral analysis; digital forensics; error statistics; signal representation; smart phones; speech recognition; vectors; EER; Gaussian supervectors; K-SVD algorithm; MFCC; Mel-frequency cepstral coefficients; dictionary learning; digital media forensic; equal error rate; exemplar based dictionary; recording device identification problem; source cell phone verification; sparse representation; speech recording device recognition; Cellular phones; Dictionaries; Feature extraction; Forensics; Measurement; Speech; Speech recognition; Digital audio forensic; Gaussian supervector; Source cell phone verification; Sparse representation (ID#: 15-8822)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178278&isnumber=7177909
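The sparse-representation step in the abstract above can be illustrated with a minimal sketch: exemplar supervectors from each phone form the dictionary columns, a query is sparsely coded (here with a simple orthogonal matching pursuit), and the phone whose atoms yield the smallest reconstruction residual wins. The toy random vectors stand in for the GSV/MFCC features, which are not reproduced here.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: sparse-code y over dictionary D."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        cols = D[:, support]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        residual = y - cols @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def classify(D, labels, y, k=3):
    """Pick the class whose dictionary atoms reconstruct y with least residual."""
    x = omp(D, y, k)
    best, best_err = None, np.inf
    for c in set(labels):
        xc = np.where(np.array(labels) == c, x, 0.0)
        err = np.linalg.norm(y - D @ xc)
        if err < best_err:
            best, best_err = c, err
    return best

# Toy stand-ins for per-phone Gaussian supervectors: two phones,
# five noisy exemplars each, unit-normalized as dictionary atoms.
rng = np.random.default_rng(0)
proto = {0: rng.normal(size=64), 1: rng.normal(size=64)}
atoms, labels = [], []
for phone, p in proto.items():
    for _ in range(5):
        v = p + 0.1 * rng.normal(size=64)
        atoms.append(v / np.linalg.norm(v))
        labels.append(phone)
D = np.column_stack(atoms)
query = proto[1] + 0.1 * rng.normal(size=64)
predicted = classify(D, labels, query)
```

Classification by class-wise residual is the standard sparse-representation-classification recipe; the paper's verification task would additionally threshold the residual to produce an accept/reject decision.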
Vezzoli, E.; Dzidek, B.; Sednaoui, T.; Giraud, F.; Adams, M.; Lemaire-Semail, B., "Role of Fingerprint Mechanics and Non-Coulombic Friction in Ultrasonic Devices," in World Haptics Conference (WHC), 2015 IEEE, pp. 43-48, 22-26 June 2015. doi: 10.1109/WHC.2015.7177689
Abstract: Ultrasonic vibration of a plate can be used to modulate the friction of a finger pad sliding on a surface. This modulation can modify the user's perception of the touched object and induce the perception of textured materials. In the current paper, an elastic model of fingerprint ridges is developed. A friction reduction phenomenon based on non-Coulombic friction is evaluated with this model. A comparison with experimental data is then carried out to assess the validity of the proposed model and analysis.
Keywords: friction; haptic interfaces; ultrasonic devices; vibrations; elastic model; finger pad sliding friction; finger print ridges; fingerprint mechanics; friction reduction phenomenon; nonCoulombic friction; textured materials; ultrasonic devices; ultrasonic vibration; user perception; Acoustics; Actuators; Fingers; Force; Friction; Springs; Vibrations (ID#: 15-8823)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7177689&isnumber=7177662
Papalambrou, A.; Karadimas, D.; Gialelis, J.; Voyiatzis, A.G., "A Versatile Scalable Smart Waste-Bin System Based on Resource-Limited Embedded Devices," in Emerging Technologies & Factory Automation (ETFA), 2015 IEEE 20th Conference on, pp. 1-8, 8-11 Sept. 2015. doi: 10.1109/ETFA.2015.7301466
Abstract: This work presents the architecture, modelling, simulation, and physical implementation of a versatile, scalable system for use in common-type waste-bins that can perform and transmit accurate fill-level estimates while consuming minimal power and consisting of low-cost embedded components. The sensing units are based on ultrasonic sensors that provide ranging information which is translated to fill-level estimations based on extensive simulations in MATLAB and physical experiments. At the heart of the proposed implementation lies RFID technology with active RFID tags retrieving information and controlling the sensors and RFID readers receiving and interpreting information. Statistical processing of the simulation in combination with physical experiments and field tests verified that the system works accurately and efficiently with a tiny data-load fingerprint.
Keywords: embedded systems; radiofrequency identification; refuse disposal; statistical analysis; ultrasonic transducers; MATLAB; RFID readers; RFID technology; active RFID tags; architecture; data-load fingerprint; fill-level estimations; low-cost embedded components; minimal power consumption; modelling; physical implementation; resource-limited embedded devices; sensing units; sensors control; simulation; statistical processing; ultrasonic sensors; urban solid waste; versatile scalable smart waste-bin system; Accuracy; Acoustics; Active RFID tags; Estimation; Mobile communication; active RFID tag; smart-cities; sustainability; ultrasonic sensors; urban solid waste; waste-bin fill-level estimation (ID#: 15-8824)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301466&isnumber=7301399
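The fill-level translation at the core of the sensing units can be sketched simply: an ultrasonic ping measures the distance from a lid-mounted sensor down to the waste surface, and the fill fraction follows from the bin geometry. The function names and the median filter below are illustrative assumptions, not the paper's MATLAB model.

```python
def fill_level(distance_cm, bin_depth_cm, sensor_offset_cm=0.0):
    """Convert an ultrasonic range reading (sensor at the lid, looking down)
    into a fill-level fraction clamped to [0, 1]."""
    usable = bin_depth_cm - sensor_offset_cm
    level = 1.0 - (distance_cm - sensor_offset_cm) / usable
    return max(0.0, min(1.0, level))

def fill_level_filtered(readings_cm, bin_depth_cm):
    """Median-filter several pings to suppress spurious echoes before estimating."""
    ordered = sorted(readings_cm)
    median = ordered[len(ordered) // 2]
    return fill_level(median, bin_depth_cm)
```

Reporting only the filtered estimate (a single small number per bin) is consistent with the "tiny data-load fingerprint" the system aims for.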
Saad Zaghloul, Z.; Bayoumi, M., "Adaptive Neural Matching Online Spike Sorting VLSI Chip Design for Wireless BCI Implants," in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 977-981, 19-24 April 2015. doi: 10.1109/ICASSP.2015.7178115
Abstract: Controlling the surrounding world by just the power of our thoughts has always seemed to be a fictional dream. With recent advancements in technology and research, this dream has become a reality for some through the use of a Brain Computer/Machine Interface (BCI/BMI). One of the most important goals of BCI is to enable handicapped people to control artificial limbs. Some research has proposed wireless implants that do not require a chronic wound in the skull. However, the communications consume a high bandwidth and power that exceeds the allowed limits (8-10 mW). This study proposes and implements a modified version of real-time spike sorting for wireless BCI [4] that simplifies and uses less computation via an adaptive neural structure, which makes it simpler, faster, and power and area efficient. The system was implemented and simulated using ModelSim and Cadence, with ideal-case and worst-case accuracy of 100% and 91.7%, respectively. The chip layout occupies 0.704 mm² with a power consumption of 4.7 mW, and was synthesized in 45 nm technology using Synopsys.
Keywords: brain-computer interfaces; integrated circuit design; neural chips; prosthetics; Cadence; Modalism; Synopsys; adaptive neural matching online spike sorting VLSI chip design; artificial limbs; brain computer/machine interface; power 4.7 mW; power 8 mW to 10 mW; power consumption; size 45 nm; wireless BCI implants; wireless implants; Bandwidth; Fingerprint recognition; Implants; Neurons; Sorting; Wireless communication; Wireless sensor networks; Adaptive; BCI/BMI; Spike Sorting; VLSI; layout (ID#: 15-8825)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178115&isnumber=7177909
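The adaptive neural matching idea can be illustrated in software as adaptive template matching: each detected spike is assigned to the nearest stored template, and matched templates are nudged toward the incoming waveform so they track slow drift. This sketch is not the VLSI design; the thresholds, learning rate, and synthetic spikes are assumed values.

```python
import numpy as np

def sort_spikes(waveforms, templates, learn_rate=0.1, match_thresh=2.0):
    """Assign each spike waveform to the nearest template (Euclidean distance).
    Matched templates adapt by a running average; waveforms matching no
    template are labeled -1 (unknown unit)."""
    templates = [np.asarray(t, dtype=float).copy() for t in templates]
    labels = []
    for w in waveforms:
        dists = [np.linalg.norm(w - t) for t in templates]
        i = int(np.argmin(dists))
        if dists[i] <= match_thresh:
            templates[i] += learn_rate * (w - templates[i])  # adapt the template
            labels.append(i)
        else:
            labels.append(-1)
    return labels, templates

# Two synthetic units with opposite-polarity spikes, plus one outlier.
rng = np.random.default_rng(1)
t0 = np.sin(np.linspace(0.0, np.pi, 32))
t1 = -t0
waves = [t0 + 0.05 * rng.normal(size=32),
         t1 + 0.05 * rng.normal(size=32),
         3.0 * np.ones(32)]                 # matches neither unit
labels, adapted = sort_spikes(waves, [t0, t1])
```

Sorting on-implant in this way means only unit labels, not raw waveforms, need to cross the wireless link, which is how spike sorting reduces bandwidth and power.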
Varges da Silva, M.; Marana, A.N.; Paulino, A.A., "On the Importance of Using High Resolution Images, Third Level Features and Sequence of Images for Fingerprint Spoof Detection," in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 1807-1811, 19-24 April 2015. doi: 10.1109/ICASSP.2015.7178282
Abstract: The successful and widespread deployment of biometric systems brings on a new challenge: the spoofing, which involves presenting an artificial or fake biometric trait to the biometric systems so that unauthorized users can gain access to places and/or information. We propose a fingerprint spoof detection method that uses a combination of information available from pores, statistical features and fingerprint image quality to classify the fingerprint images into live or fake. Our spoof detection algorithm combines these three types of features to obtain an average accuracy of 97.3% on a new database (UNESP-FSDB) that contains 4,800 images of live and fake fingerprints. An analysis is performed that considers some issues such as image resolution, pressure by the user, sequence of images and level of features.
Keywords: fingerprint identification; image resolution; image sequences; biometric systems; fingerprint spoof detection; high resolution images; image resolution; image sequence; third level features; Accuracy; Biomedical imaging; Classification algorithms; Fingerprint recognition; Image resolution; Iris recognition; Biometrics; fingerprint; pores; security; spoof detection (ID#: 15-8826)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178282&isnumber=7177909
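The fusion of pore, statistical, and image-quality information can be illustrated with a generic score-fusion sketch; the weights, bias, and logistic combination below are invented for illustration and are not the classifier used in the paper.

```python
import math

def live_probability(pore_score, stat_score, quality_score,
                     weights=(0.4, 0.3, 0.3), bias=-0.5):
    """Fuse three per-image feature scores (each normalized to [0, 1]) into a
    single liveness probability via a weighted sum through a logistic."""
    z = sum(w * s for w, s in
            zip(weights, (pore_score, stat_score, quality_score))) + bias
    return 1.0 / (1.0 + math.exp(-z))

def is_live(scores, threshold=0.5):
    """Binary live/fake decision from the fused probability."""
    return live_probability(*scores) >= threshold

live_p = live_probability(0.9, 0.8, 0.9)   # strong pores, plausible statistics
fake_p = live_probability(0.1, 0.2, 0.1)   # weak third-level detail
```

Averaging the decision over a sequence of images, as the paper's analysis suggests, would further smooth out per-frame pressure and quality variation.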
Honghai Yu; Moulin, P., "SNR Maximization Hashing for Learning Compact Binary Codes," in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 1692-1696, 19-24 April 2015. doi: 10.1109/ICASSP.2015.7178259
Abstract: In this paper, we propose a novel robust hashing algorithm based on signal-to-noise ratio (SNR) maximization to learn binary codes. We first motivate SNR maximization for robust hashing in a statistical model, under which maximizing SNR minimizes the robust hashing error probability. A globally optimal solution can be obtained by solving a generalized eigenvalue problem. The proposed algorithm is tested on both synthetic and real datasets, showing significant performance gain over existing hashing algorithms.
Keywords: binary codes; eigenvalues and eigenfunctions; error statistics; optimisation; SNR maximization hashing; compact binary codes; generalized eigenvalue problem; novel robust hashing algorithm; robust hashing error probability; signal-to-noise ratio maximization; statistical model; Arrays; Fingerprint recognition; Music; Robustness; Signal to noise ratio; Training; Robust hashing; SNR maximization; content identification; generalized eigenproblem (ID#: 15-8827)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178259&isnumber=7177909
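The globally optimal solution via a generalized eigenvalue problem can be sketched as follows, assuming known signal and noise covariances S and N; whitening through a Cholesky factor of N is one standard way to solve S v = λ N v, and the binary code is the sign pattern of the resulting projections. The toy covariances are assumptions for the demo.

```python
import numpy as np

def snr_max_projections(S, N, n_bits):
    """Solve the generalized eigenproblem S v = lambda N v by whitening with a
    Cholesky factor of the noise covariance N; the top eigenvectors are the
    projections that maximize signal-to-noise ratio."""
    L = np.linalg.cholesky(N)
    Linv = np.linalg.inv(L)
    M = Linv @ S @ Linv.T
    _, evecs = np.linalg.eigh(M)        # eigenvalues in ascending order
    top = evecs[:, ::-1][:, :n_bits]    # highest-SNR directions first
    return Linv.T @ top

def hash_bits(W, x):
    """Binary code for x: the sign pattern of its projections."""
    return tuple(int(v >= 0) for v in (W.T @ x))

# Toy covariances: signal concentrated along the first axis, white noise.
d = 8
u = np.zeros(d); u[0] = 1.0
S = 10.0 * np.outer(u, u) + 0.1 * np.eye(d)
N = np.eye(d)
W = snr_max_projections(S, N, 1)
code_clean = hash_bits(W, u)
code_scaled = hash_bits(W, 0.5 * u)   # sign-based codes survive amplitude changes
```

Because the code depends only on signs, high-SNR projections make the bits robust to the content-preserving distortions that fingerprinting must tolerate.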
Ouali, C.; Dumouchel, P.; Gupta, V., "Efficient Spectrogram-Based Binary Image Feature for Audio Copy Detection," in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 1792-1796, 19-24 April 2015. doi: 10.1109/ICASSP.2015.7178279
Abstract: This paper presents the latest improvements on our Spectro system that detects transformed duplicate audio content. We propose a new binary image feature derived from a spectrogram matrix by using a threshold based on the average of the spectral values. We quantize this binary image by applying a tile of fixed size and computing the sum of each small square in the tile. Fingerprints of each binary image encode the positions of the selected tiles. Evaluation on TRECVID 2010 CBCD data shows that this new feature significantly improves the Spectro system for transformations that add irrelevant speech to the audio. Compared to a state-of-the-art audio fingerprinting system, the proposed method reduces the minimal Normalized Detection Cost Rate (min NDCR) by 33%, improves localization accuracy by 28% and results in 40% fewer missed queries.
Keywords: feature extraction; matrix algebra; TRECVID 2010 CBCD data; audio copy detection; audio fingerprinting system; efficient spectrogram-based binary image feature; minimal normalized detection cost rate; spectrogram matrix; Feature extraction; Fingerprint recognition; Graphics processing units; Multimedia communication; Robustness; Spectrogram; Speech; Content-based copy detection; TRECVID; audio fingerprints; spectrogram (ID#: 15-8828)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7178279&isnumber=7177909
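The fingerprint construction described above (mean-threshold binarization, tile sums, position encoding) can be sketched with NumPy; the tile size and number of selected tiles below are illustrative parameters, not the Spectro system's actual values.

```python
import numpy as np

def spectro_fingerprint(spec, tile=4, n_select=4):
    """Binarize a spectrogram against its mean, sum the binary image over
    non-overlapping tile x tile squares, and encode the positions of the
    densest tiles as the fingerprint."""
    binary = (spec > spec.mean()).astype(int)
    h, w = binary.shape[0] // tile, binary.shape[1] // tile
    sums = binary[:h * tile, :w * tile].reshape(h, tile, w, tile).sum(axis=(1, 3))
    flat = sums.ravel()
    top = np.argsort(flat)[::-1][:n_select]
    return frozenset((int(i) // w, int(i) % w) for i in top)

# Toy spectrogram with energy concentrated in one corner.
spec = np.zeros((8, 8))
spec[:4, :4] = 1.0
fp = spectro_fingerprint(spec, tile=4, n_select=1)

# A small added disturbance elsewhere leaves the densest tile unchanged.
spec_noisy = spec.copy()
spec_noisy[7, 7] = 0.4
fp_noisy = spectro_fingerprint(spec_noisy, tile=4, n_select=1)
```

Encoding only tile positions, rather than raw spectral values, is what gives this style of fingerprint its robustness to additive interference such as overlaid speech.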
Automated Response Actions 2015 |
A recurring problem in cybersecurity is the need to automate systems to reduce human effort and error and to be able to react rapidly and accurately to an intrusion or insertion. The articles cited here describe a number of interesting approaches related to the Science of Security hard topics, including resilience and composability.
Ossenbuhl, S.; Steinberger, J.; Baier, H., "Towards Automated Incident Handling: How to Select an Appropriate Response against a Network-Based Attack?," in IT Security Incident Management & IT Forensics (IMF), 2015 Ninth International Conference on, pp. 51-67, 18-20 May 2015. doi: 10.1109/IMF.2015.13
Abstract: The increasing number of network-based attacks has evolved into one of the top concerns responsible for network infrastructure and service outages. In order to counteract these threats, computer networks are monitored to detect malicious traffic and initiate suitable reactions. However, initiating a suitable reaction is a process of selecting an appropriate response related to the identified network-based attack. The process of selecting a response requires taking into account the economics of a reaction, e.g., risks and benefits. The literature describes several response selection models, but they are not widely adopted. In addition, these models and their evaluation are often not reproducible due to closed testing data. In this paper, we introduce a new response selection model, called REASSESS, that mitigates network-based attacks by incorporating an intuitive response selection process that evaluates the negative and positive impacts associated with each countermeasure. We compare REASSESS with the response selection models of IE-IRS, ADEPTS, CS-IRS, and TVA and show that REASSESS is able to select the most appropriate response to an attack in consideration of the positive and negative impacts, and thus reduces the effects caused by a network-based attack. Further, we show that REASSESS is aligned with the NIST incident life cycle. We expect REASSESS to help organizations select the most appropriate response measure against a detected network-based attack, and hence contribute to mitigating such attacks.
Keywords: computer network security; telecommunication traffic; ADEPTS; CS-IRS; IE-IRS; NIST incident life cycle; REASSESS; TVA; automated incident handling; closed testing data; computer network monitoring; malicious traffic detection; network infrastructure; network-based attack; network-based attacks; reaction initiation; response selection models; service outages; Adaptation models; Biological system modeling; Delays; Internet; NIST; Network topology; Security; automatic mitigation; cyber security; intrusion response systems; network security (ID#: 15-8904)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7195806&isnumber=7195793
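The core idea of weighing negative against positive impacts can be sketched as a net-impact ranking over candidate countermeasures; the scoring scheme and example responses below are illustrative assumptions, not the actual REASSESS model.

```python
def select_response(candidates):
    """Rank candidate countermeasures by net impact (positive minus negative)
    and return the best; ties break toward the less disruptive option."""
    def net(c):
        return c["positive"] - c["negative"]
    return max(candidates, key=lambda c: (net(c), -c["negative"]))

# Hypothetical countermeasures for a detected network-based attack,
# scored on benefit (mitigation) and cost (service disruption).
responses = [
    {"name": "drop-traffic", "positive": 9.0, "negative": 6.0},  # blocks attack, disrupts service
    {"name": "rate-limit",   "positive": 7.0, "negative": 2.0},  # partial mitigation, low disruption
    {"name": "alert-only",   "positive": 2.0, "negative": 0.5},  # no disruption, attack continues
]
best = select_response(responses)
```

The point of such a ranking is exactly the paper's: the strongest mitigation is not automatically the most appropriate one once its negative impact on legitimate service is counted.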
Goldman, R.P.; Burstein, M.; Benton, J.; Kuter, U.; Mueller, J.; Robertson, P.; Cerys, D.; Hoffman, A.; Bobrow, R., "Active Perception for Cyber Intrusion Detection and Defense," in Self-Adaptive and Self-Organizing Systems Workshops (SASOW), 2015 IEEE International Conference on, pp. 92-101, 21-25 Sept. 2015.doi: 10.1109/SASOW.2015.20
Abstract: This paper describes an automated process of active perception for cyber defense. Our approach is informed by theoretical ideas from decision theory and recent research results in neuroscience. Our cognitive agent allocates computational and sensing resources to (approximately) optimize its Value of Information. To do this, it draws on models to direct sensors towards phenomena of greatest interest to inform decisions about cyber defense actions. By identifying critical network assets, the organization's mission measures interest (and value of information). This model enables the system to follow leads from inexpensive, inaccurate alerts with targeted use of expensive, accurate sensors. This allows the deployment of sensors to build structured interpretations of situations. From these, an organization can meet mission-centered decision-making requirements with calibrated responses proportional to the likelihood of true detection and degree of threat.
Keywords: decision theory; security of data; active perception; cognitive agent; critical network assets; cyber intrusion defense; cyber intrusion detection; decision theory; direct sensors; mission-centered decision-making; neuroscience; value of information; Context; Malware; Sensor phenomena and characterization; Servers; Visualization; Workstations; IDS correlation; active perception; cyber defense; intrusion detection (ID#: 15-8905)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7306563&isnumber=7306518
Patil, Sushant; Parmar, Urvil; Karegaonkar, Rohit, "Automated Software Testing for High-End Touch Screen Automotive Displays," in Transportation Electrification Conference (ITEC), 2015 IEEE International, pp. 1-4, 27-29 Aug. 2015. doi: 10.1109/ITEC-India.2015.7386880
Abstract: Current methodologies for virtual and automated testing on touch screen displays are limited, since minor screen changes and display software updates cause the automated tests to fail and create a need to fix them. Adding new items to a drop-down list, moving buttons, and reusing or converting test scripts for similar devices are cumbersome jobs. The general approach of testing displays manually is not feasible due to the sluggishness, monotony and repetitiveness of the work, which can cause manual errors in different volumes at different times; it also reduces the option of batch execution of tests 24/7. The automated approaches for testing such displays through XY-coordinate pointing on screen, taking screen captures, and using optical character recognition for verification are unreliable and affect testing performance. This makes automating a test difficult and incurs a high amount of risk as well as high maintenance costs. This paper offers recent ideas and their implementation for software testing on touch screen displays, which are widely used in sophisticated passenger cars as well as in off-highway equipment like tractors, construction and forestry machines. In this method of testing the display's user interface, we use "Test Events" which are linked with different objects/icons on displays. Every user action, like selecting a menu item from a dropdown, pressing a button, swiping, or turning pages, is considered a unique action, and these actions are then bundled with unique Object ID data to create events on the display. These user events are simulated automatically and the response is monitored and logged for identifying and analyzing software defects. The architecture is based on a "Data Driven Model" where test data is separated from the script in order to handle any changes in a quick and agile way. Data such as unique references to objects and pools on the screen are placed in an SQL database, which enables data access from different locations by multiple users. Though the approach can be widely applied to the majority of touch screen applications, we restrict the scope of this paper to the automotive domain due to the standards and protocols used during implementation.
Keywords: Automation; Computer architecture; Databases; Manuals; Servers; Software; Testing (ID#: 15-8906)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7386880&isnumber=7386861
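The "Test Events" architecture described above separates the test data from the driver script; a minimal Python sketch of that data-driven pattern follows, with a hypothetical FakeDisplay standing in for the head unit (the paper's actual SQL-backed tooling is not reproduced here).

```python
# Each test "event" couples a UI object id with a user action; the data rows
# live apart from the driver, so screen changes only touch the data, not the script.
TEST_DATA = [
    {"object_id": "btn_volume_up", "action": "press",  "expect": "volume_raised"},
    {"object_id": "menu_radio",    "action": "select", "expect": "radio_shown"},
]

class FakeDisplay:
    """Stand-in for the display under test: maps (object, action) to a response."""
    RESPONSES = {
        ("btn_volume_up", "press"):  "volume_raised",
        ("menu_radio",    "select"): "radio_shown",
    }
    def send_event(self, object_id, action):
        return self.RESPONSES.get((object_id, action), "no_response")

def run_suite(display, test_data):
    """Fire each event, capture the response, and report pass/fail per row."""
    results = []
    for row in test_data:
        response = display.send_event(row["object_id"], row["action"])
        results.append({"object_id": row["object_id"],
                        "passed": response == row["expect"]})
    return results

results = run_suite(FakeDisplay(), TEST_DATA)
```

Because verification is by event response rather than screen capture or OCR, moving a button or restyling a menu requires updating a data row, not rewriting the driver.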
Kumar, R.; Kumar, S., "Automated Fault Tolerant System for Control Computational Power in Desktop Grid," in Advance Computing Conference (IACC), 2015 IEEE International, pp. 818-821, 12-13 June 2015. doi: 10.1109/IADCC.2015.7154820
Abstract: Fault-tolerant resource consumption in desktop grids is a motivating area of research. This paper focuses on the dimensions of fault-tolerant resource usage, especially the available computational power. Desktop grid resources are accountable for the generation of computational power, and the Alchemi desktop middleware is used to collect computational power from diverse machines in a Microsoft Windows based environment. Failures and faults on the execution side can create serious problems, with a direct impact on computational power in a real-time environment, so control over the available computational power in the presence of faults is essential in grid middleware. This problem has not been addressed so far: the Alchemi desktop grid middleware provides only a manual procedure for controlling computational power in a real-time environment, with no automated mechanism available. This research work proposes, designs and develops an automated framework for the Alchemi grid middleware that can take control of the available computational power in a real-time environment at the time of a fault in execution processes. The framework was tested in a real-time environment; results show that it responds quickly, detecting a defective process machine and correcting the fault in milliseconds, which helps maintain the level of available computational power. This work thereby replaces the manual procedure for controlling computational power with an automated method for quick action in the case of execution-side faults.
Keywords: fault tolerant computing; grid computing; middleware; power aware computing; real-time systems; Alchemi desktop middleware; Microsoft Window based environment; alchemi desktop grid; automated fault tolerant system; automated mechanism; control computational power; desktop grid resource; fault tolerant resource consumption; grid middleware; real time environment; Fault tolerance; Fault tolerant systems; Grid computing; Local area networks; Middleware; Process control; Real-time systems; Alche; Computational Power; Middleware; fault Tolerant (ID#: 15-8907)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154820&isnumber=7154658
Tasdighi, M.; Kezunovic, M., "Impact Analysis of Network Topology Change on Transmission Distance Relay Settings," in Power & Energy Society General Meeting, 2015 IEEE, pp. 1-5, 26-30 July 2015. doi: 10.1109/PESGM.2015.7286152
Abstract: One big challenge raised by frequent topology change in today's power system is assessing the system protection security and dependability afterwards. This paper reviews the setting algorithm for the distance relays and proposes an automated setting calculation module. The calculation procedure is broken down into blocks which could be processed in parallel in order to improve the computation speed. The module could be used to assess the system protection vulnerabilities following a topology change in instances when multiple switching actions are done in response to occurrence of cascading faults or as a result of intentional control action. The module performance is tested on New England 39-bus and IEEE 118-bus systems. A sensitivity analysis in the form of N-2 contingency impact on the network relay settings is conducted on both test systems.
Keywords: IEEE standards; power system faults; power system protection; power system security; relay protection; sensitivity analysis; IEEE 118-bus system; N-2 contingency; New England 39-bus system; cascading fault occurrence; network topology analysis; power system protection; power system security; sensitivity analysis; transmission distance relay setting; Circuit faults; Impedance; Network topology; Protective relaying; Switches; Topology; N-2 contingency; Power system protection security and dependability phase distance settings; relay ranking; topology control; vulnerability (ID#: 15-8908)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7286152&isnumber=7285590
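The setting recalculation the module automates rests on step-distance reach rules; a minimal sketch using common textbook percentages follows (the paper's actual setting algorithm and margins are not reproduced, and the zone-3 factor here is an assumed value).

```python
def zone_settings(z_line_ohm, z_shortest_next_ohm):
    """Textbook step-distance reaches: zone 1 under-reaches the protected line,
    zone 2 covers the line plus a margin into the shortest adjacent line, and
    zone 3 provides remote backup over the full adjacent line."""
    return {
        "zone1": 0.8 * z_line_ohm,
        "zone2": z_line_ohm + 0.5 * z_shortest_next_ohm,
        "zone3": 1.2 * (z_line_ohm + z_shortest_next_ohm),
    }

# Topology change: if switching removes the previous shortest adjacent line,
# the zone-2 and zone-3 reaches must be recomputed - the recalculation the
# paper's automated module performs after each topology change.
before = zone_settings(8.0, 6.0)
after = zone_settings(8.0, 10.0)   # next-shortest adjacent line is now longer
```

Because zones 2 and 3 depend on the adjacent-line impedance, any N-2 contingency that alters the shortest adjacent path changes the correct settings, which is why the paper runs a contingency-wise sensitivity analysis.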
Songfan Yang; Le An; Kafai, M.; Bhanu, B., "To Skip or Not to Skip? A Dataset of Spontaneous Affective Response of Online Advertising (SARA) for Audience Behavior Analysis," in Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on, vol. 1, pp. 1-8, 4-8 May 2015. doi: 10.1109/FG.2015.7163153
Abstract: In marketing and advertising research, “zapping” is defined as the action when a viewer skips a commercial advertisement. Researchers analyze audience's behavior in order to prevent zapping, which helps advertisers to design effective commercial advertisements. Since emotions can be used to engage consumers, in this paper, we leverage automated facial expression analysis to understand consumers' zapping behavior. To this end, we collect 612 sequences of spontaneous facial expression videos by asking 51 participants to watch 12 advertisements from three different categories, namely Car, Fast Food, and Running Shoe. In addition, the participants also provide self-reported reasons of zapping. We adopt a data-driven approach to formulate a zapping/non-zapping binary classification problem. With an in-depth analysis of expression response, specifically smile, we show a strong correlation between zapping behavior and smile response. We also show that the classification performance of different ad categories correlates with the ad's intention for amusement. The video dataset and self-reports are available upon request for the research community to study and analyze the viewers' behavior from their facial expressions.
Keywords: advertising data processing; face recognition; image classification; image sequences; video signal processing; SARA; audience behavior analysis; automated facial expression analysis; car; consumer zapping behavior; data-driven approach; effective commercial advertisement design; fast food; marketing research; running shoe; smile response; spontaneous affective response of online advertising dataset; spontaneous facial expression video sequences; video dataset; zapping-nonzapping binary classification problem; Advertising; Data collection; Face; Face recognition; Footwear; Videos; YouTube (ID#: 15-8909)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163153&isnumber=7163073
Azuma, S.-I.; Nakamoto, T.; Izumi, S.; Kitao, T.; Maruta, I., "Randomized Automated Demand Response for Real-Time Pricing," in Innovative Smart Grid Technologies Conference (ISGT), 2015 IEEE Power & Energy Society, pp. 1-5, 18-20 Feb. 2015. doi: 10.1109/ISGT.2015.7131807
Abstract: Automated demand response (ADR) is an essential technology for demand management such as real-time pricing. A major issue for ADR is the design of the units which control electric supply to each electric device according to the price, so as to benefit both the supplier and consumer sides. Although a universal design principle would be useful for this issue, none has been established so far. This paper thus attempts to derive a design principle for ADR units for real-time pricing and proposes ADR units based on it. First, as a design principle, it is clarified that heterogeneity of the consumer-side actions is essential to control the total electric power consumption. Based on this, we propose randomized ADR units to artificially produce the heterogeneity. In each proposed unit, a random number is generated and electric power is provided to the connected electric device only if a price-dependent condition is satisfied for the resulting random number. The proposed units enable the consumers to automatically buy electricity at a low price and allow the supplier to control the total consumption. They also guarantee the scalability of the resulting real-time pricing system.
Keywords: demand side management; power consumption; pricing; demand management; electric supply control; random number generation; randomized ADR unit design principle; randomized automated demand response; real-time pricing; total electric power consumption control; Conferences; Load management; Power supplies; Pricing; Real-time systems; Smart grids (ID#: 15-8910)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7131807&isnumber=7131775
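The randomized unit described above reduces to a few lines: each unit draws a random number and connects its device only when a price-dependent condition holds, so the aggregate load tracks the acceptance probability. The linear acceptance curve below is an assumed example, not the paper's design.

```python
import random

def adr_unit(price, demand_kw, rng):
    """One randomized ADR unit: draw a uniform random number and supply the
    attached device only if the draw satisfies a price-dependent condition."""
    p_on = max(0.0, min(1.0, 1.5 - price))   # assumed acceptance curve: cheaper -> likelier on
    return demand_kw if rng.random() < p_on else 0.0

def total_consumption(price, units_kw, seed=0):
    """Aggregate draw of a population of independent randomized units."""
    rng = random.Random(seed)
    return sum(adr_unit(price, d, rng) for d in units_kw)

units = [1.0] * 1000                               # a thousand identical 1 kW devices
low_price_load = total_consumption(0.6, units)     # ~90% of units switch on
high_price_load = total_consumption(1.4, units)    # ~10% of units switch on
```

The randomness supplies the heterogeneity the design principle calls for: identical units respond differently to the same price, so total consumption varies smoothly with price instead of switching all-or-nothing.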
Akhoun, I.; Bestel, J.; Pracht, P.; El-Zir, E.; Van-den-Abbeele, T., "Automated Classification of Electrically-Evoked Compound Action Potentials," in Neural Engineering (NER), 2015 7th International IEEE/EMBS Conference on, pp. 687-690, 22-24 April 2015. doi: 10.1109/NER.2015.7146716
Abstract: Electrically-evoked compound action potentials (ECAPs) are an objective measure of the peripheral neural encoding of electrical stimulation delivered by cochlear implants (CIs) at the auditory nerve level. ECAPs play a key role in automated CI fitting and outcome diagnosis, as long as the presence of a genuine ECAP is accurately detected automatically. A combination of ECAP amplitude and signal-to-noise ratio is shown to efficiently detect true responses, by comparing them to subjective visual expert judgments. Corresponding optimal thresholds were calculated from Receiver-Operating-Characteristic curves. This was conducted separately on three artifact rejection methods: alternate polarity, masker-probe and modified masker-probe. This model resulted in a sensitivity and specificity error of 3.3% in learning, 3.5% in testing and 5.0% in verification. It was found that the following combinations of ECAP amplitude and signal-to-noise ratio are accurate predictors: 22 μV and 1.3 dB SNR thresholds for alternate polarity, 35 μV and -0.2 dB for masker-probe, and 44 μV and -0.2 dB for modified masker-probe.
Keywords: bioelectric potentials; cochlear implants; medical signal processing; neurophysiology; sensitivity analysis; signal classification; signal denoising; ECAP amplitudes; artifact rejection methods; auditory nerve level; automated CI fitting; automated classification; cochlear implants; electrical stimulation; electrically-evoked compound action potentials; modified-masker-probe; outcome diagnosis; peripheral neural encoding; receiver-operating characteristic curves; signal-to-noise ratio; specificity error; subjective visual expert judgments; Biomedical measurement; Current measurement; Pollution measurement; Sensitivity; Signal to noise ratio; Testing; Visualization; biomedical signal processing; cochlear implants; data mining (ID#: 15-8911)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7146716&isnumber=7146535
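The reported operating points can be turned into a small detector sketch; the threshold values come from the abstract, but treating amplitude and SNR as a joint (AND) condition is an assumption of this illustration.

```python
# Per-method detection thresholds reported in the abstract:
# (minimum ECAP amplitude in microvolts, minimum SNR in dB)
THRESHOLDS = {
    "alternate_polarity":    (22.0, 1.3),
    "masker_probe":          (35.0, -0.2),
    "modified_masker_probe": (44.0, -0.2),
}

def is_genuine_ecap(method, amplitude_uv, snr_db):
    """Flag a recording as a true ECAP when both predictors clear the
    method-specific thresholds (the conjunction is assumed here)."""
    amp_min, snr_min = THRESHOLDS[method]
    return amplitude_uv >= amp_min and snr_db >= snr_min
```

In an automated fitting workflow, such a detector would gate which measurements feed the fitting algorithm, replacing the subjective visual expert judgment the thresholds were calibrated against.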
Varnavsky, A.N., "Automated System for Correction of Functional State of Production Workers," in Control and Communications (SIBCON), 2015 International Siberian Conference on, pp. 1-4, 21-23 May 2015. doi: 10.1109/SIBCON.2015.7147315
Abstract: The article discusses the creation of an automated system for correction of negative functional states of production workers on the basis of the USB-6008 data acquisition device and the LabVIEW programming environment from National Instruments. The USB-6008 is used to acquire the worker's bioelectric signals, which are analyzed by a virtual instrument to assess the presence of negative states and issue the necessary corrective action. A version of the implementation of the automated system is described.
Keywords: medical signal processing; occupational health; virtual instrumentation; LabVIEW National Instruments programming environment; USB-6008 application equipment; automated system; bioelectric signal acquisition; negative functional state correction; production workers; virtual instrument; Fatigue; Instruments; Productivity; Programming environments; Skin; Stress; USB-6008; corrective action; galvanic skin response; the correction of functional state; the negative functional status; virtual instrument (ID#: 15-8912)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7147315&isnumber=7146959
Mobasheri, A.; Bakillah, M., "Towards a Unified Infrastructure for Automated Management and Integration of Heterogeneous Geo-Datasets in Disaster Response," in Geoscience and Remote Sensing Symposium (IGARSS), 2015 IEEE International, pp. 4570-4573, 26-31 July 2015. doi: 10.1109/IGARSS.2015.7326845
Abstract: Disaster response actors and decision makers need to perform several tasks and make decisions in a short time. Handling such tasks requires access to sufficient, relevant and up-to-date datasets. Some of these datasets are static, such as the road network infrastructure and maps of buildings, while several other kinds of required information are dynamic and change during the occurrence of the disaster. Such information may include the number of casualties, wind speed, wind direction, road obstacles, etc. Semantic integration of various sources of information is the key to making efficient and fast actions by the actors in the field as well as top-level decision makers. In this paper, we elaborate on the research challenges of data integration from multiple heterogeneous sources by proposing the system architecture of ASSIST (Access, Semantic Search and Integration Service and Translation). The paper concludes with a discussion of future work on this smart service.
Keywords: disasters; emergency management; geographic information systems; geophysics computing; roads; wind; ASSIST system architecture; Access, Semantic Search and Integration Service and Translation; automated disaster management; building map; disaster occurrence; disaster response; geo-dataset; road network infrastructure; road obstacle; wind direction; wind speed; Disaster management; Floods; Geospatial analysis; Real-time systems; Semantics; Spatial databases; Wireless sensor networks; Data integration; Disaster response; Geo-Sensor web; Semantic Web; VGI (ID#: 15-8913)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7326845&isnumber=7325670
Fuller, T.R.; Deane, G.E., "Creating Complex Applications via Self-Adapting Autonomous Agents in an Intelligent System Framework," in Self-Adaptive and Self-Organizing Systems (SASO), 2015 IEEE 9th International Conference on, pp. 164-165, 21-25 Sept. 2015. doi: 10.1109/SASO.2015.27
Abstract: In this paper, we present a process, developed over years of practical commercial use, where applications accomplishing a wide variety of complex tasks are created from a common framework, through the use, recombination and iterative refinement of autonomous agents in a Multi-Agent Intelligent System. Driven by a need to solve real-world problems, our focus is to make businesses run more efficiently in an increasingly complex world of systems and software that must work together seamlessly. By listening closely to our customers' problems, we discovered points of commonality, as well as patterns of anomalies related to the flow of data through communication channels and data processing systems, including accounting, inventory, customer relationship management, scheduling systems and many more. We solved their problems through the creation of an Intelligent System, where we defined and implemented software agents that were highly configurable, responsive in real-time and usable in various settings. Autonomous agents adhere to a standard format of three major components: the goal or triggering criteria, the action, and the adaptation response. Agents run within a common Intelligent System framework and agent libraries provide a vast set of component behaviors to build applications from. Agents have one or more of the following component behaviors: sensory aware, geo-position aware, temporally aware, API aware, device aware, and many more. Additionally, there are manager-level agents whose goal is to keep the overall system in balance, through dynamic resource allocation on a system level. To prove the viability of this process, we present a variety of applications representing wide ranging behaviors, many with overlapping agents, created via this approach, all of which are in active commercial use.
Finally, we discuss future enhancements toward self-organization, where end users express their requirements declaratively to solve larger business needs, resulting in the automatic instantiation of a solution specific intelligent system.
Keywords: application program interfaces; business data processing; customer relationship management; multi-agent systems; resource allocation; scheduling; software agents; API aware behavior; agent libraries; business needs; businesses; complex applications; component behaviors; customer relationship management; device aware behavior; dynamic resource allocation; geo-position aware behavior; intelligent system framework; iterative refinement; manager-level agents; multiagent intelligent system; scheduling systems; self-adapting autonomous agents; sensory aware behavior; software agents; solution specific intelligent system; temporally aware behavior; Conferences; application framework; artificial intelligence; automated integration; autonomous agent; dynamic processing; multi-agent intelligent system; resource allocation (ID#: 15-8914)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7306609&isnumber=7306581
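The three-component agent format described in the abstract (triggering criteria, action, adaptation response) can be sketched roughly as follows; the class shape and the inventory-freshness example are illustrative assumptions, not code from the paper.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Agent:
    """Minimal autonomous agent: trigger predicate, action, adaptation response."""
    name: str
    trigger: Callable[[dict], bool]      # goal / triggering criteria
    action: Callable[[dict], Any]        # what the agent does when triggered
    adapt: Callable[[dict, Any], None]   # adaptation response after acting
    fired: int = 0

    def step(self, state: dict) -> None:
        if self.trigger(state):
            result = self.action(state)
            self.adapt(state, result)
            self.fired += 1

# A "temporally aware" agent that flags stale inventory data.
agent = Agent(
    name="inventory-freshness",
    trigger=lambda s: s["age_minutes"] > 30,
    action=lambda s: {"alert": f"inventory data {s['age_minutes']} min old"},
    adapt=lambda s, r: s.update(alerts=s.get("alerts", []) + [r["alert"]]),
)

state = {"age_minutes": 45}
agent.step(state)  # 45 > 30, so the agent fires and records an alert
```

A manager-level agent, in this sketch, would simply be another `Agent` whose trigger inspects system-wide load and whose action reallocates resources.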
Meraoumia, A.; Chitroub, S.; Bouridane, A., "An Automated Ear Identification System Using Gabor Filter Responses," in New Circuits and Systems Conference (NEWCAS), 2015 IEEE 13th International, pp. 1-4, 7-10 June 2015. doi: 10.1109/NEWCAS.2015.7182085
Abstract: Some years ago, several biometric technologies came to be considered mature enough to serve as new tools for security, and ear-based person identification is one of them. This technology provides a reliable, low-cost and user-friendly solution for a range of access control applications. In this paper, we propose an efficient online personal identification system based on ear images. For this purpose, the identification algorithm extracts a specific set of features for each ear. Based on the Gabor filter response, three ear features are used in order to extract different and complementary information: the phase, the modulus, and a combination of the real and imaginary parts. Using these features, several combinations are tested in the fusion phase in order to achieve an optimal multi-representation system leading to better identification accuracy. The experimental results show that the system yields the best performance for identifying a person and is able to provide a high degree of biometric system security.
Keywords: Gabor filters; biometrics (access control); ear; feature extraction; image fusion; Gabor filter response; automated ear identification system; biometrics-based system security; ear-based person identification; feature extraction; fusion phase; multirepresentation system; online personal identification system; Accuracy; Biomedical imaging; Biometrics (access control); Databases; Ear; Feature extraction; System performance; Biometrics; Data fusion; Ear; Gabor filter; Identification (ID#: 15-8915)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7182085&isnumber=7181973
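A rough sketch of the three Gabor-derived feature types the abstract mentions (phase, modulus, and the real/imaginary combination), reduced to one dimension for brevity; the filter parameters and toy intensity profile are invented for illustration.

```python
import cmath
import math

def gabor_response(signal, omega=0.5, sigma=2.0, radius=4):
    """Complex 1-D Gabor filter response at each sample of `signal`."""
    offsets = range(-radius, radius + 1)
    # Gaussian envelope times complex sinusoid
    kernel = [cmath.exp(-0.5 * (t / sigma) ** 2 + 1j * omega * t) for t in offsets]
    out = []
    for i in range(len(signal)):
        acc = 0j
        for k, t in zip(kernel, offsets):
            j = i + t
            if 0 <= j < len(signal):
                acc += signal[j] * k
        out.append(acc)
    return out

def features(signal):
    """Three complementary feature sets: phase, modulus, real+imaginary parts."""
    resp = gabor_response(signal)
    phase = [cmath.phase(z) for z in resp]
    modulus = [abs(z) for z in resp]
    real_imag = [(z.real, z.imag) for z in resp]
    return phase, modulus, real_imag

# Toy "ear" intensity profile standing in for a row of an ear image
profile = [math.sin(0.4 * n) for n in range(32)]
phase, modulus, ri = features(profile)
```

The fusion phase described in the abstract would then combine matching scores computed separately on each of these three representations.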
Kumar, G.; Saini, H., "Secure Composition of ECC-PAKE Protocol for Multilayer Consensus Using Signcryption," in Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on, pp. 740-745, 4-6 April 2015. doi: 10.1109/CSNT.2015.91
Abstract: The manuscript provides a derivation approach based on a challenge-response, session-specific protocol, where similar methods apply to alternative algorithms such as Diffie-Hellman, RSA and Elliptic Curve Cryptography; the only change is in the primitive generators. It further describes key generation for password-authenticated key exchange for multilayer consensus, after which a signcryption approach is applied that logically combines the computational cost and communication cost into a single stride. The proposed methodology using signcryption potentially reduces the overall computation time needed for key generation and signature. The results of the multilayer consensus key generation approach are tested with SPAN and the AVISPA (Automated Validation of Internet Security Protocols and Applications) tool.
Keywords: cryptographic protocols; Diffie-Hellman; ECC-PAKE protocol; RSA; SPAN; automated validation of internet security protocol architecture tool; challenge response session specific protocol; elliptic curve cryptography; multilayer consensus; password authenticated key exchange; primitive generators; secure composition; signcryption; Elliptic curve cryptography; Encryption; Nonhomogeneous media; Protocols; Challenge-Response; ECC; MCEPAK; PDS; Secure composition; Signcryption (ID#: 15-8916)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7280017&isnumber=7279856
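The abstract notes that the derivation applies across Diffie-Hellman, RSA and ECC with only the primitive generator changing. A minimal classic Diffie-Hellman exchange conveys that shared idea; the group parameters below are toy values chosen for illustration, not the paper's (which uses elliptic-curve generators and adds signcryption on top).

```python
import secrets

# Toy multiplicative-group parameters: a Mersenne prime modulus and a small
# generator. Real deployments use standardized groups or elliptic curves.
P = 2 ** 127 - 1
G = 5

def keypair():
    """Private exponent and corresponding public value G^priv mod P."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

a_priv, a_pub = keypair()   # party A
b_priv, b_pub = keypair()   # party B

# Each side combines its private key with the peer's public value;
# both arrive at the same shared secret G^(a*b) mod P.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
```

Swapping the generator and group operation (e.g. for an elliptic-curve point) leaves this structure intact, which is the portability the abstract alludes to.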
Al-Ali, Zaid; Al-Duwairi, Basheer; Al-Hammouri, Ahmad T., "Handling System Overload Resulting from DDoS Attacks and Flash Crowd Events," in Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, pp. 512-512, 3-5 Nov. 2015. doi: 10.1109/CSCloud.2015.66
Abstract: This paper presents a system that provides mitigation for DDoS attacks as a service, and is capable of handling flash crowd events at the same time. Providing DDoS protection as a service represents an important solution especially for Websites that have limited resources with no infrastructure in place for defense against these attacks. The proposed system is composed of two main components: (i) The distributed CAPTCHA service, which comprises a large number of powerful nodes geographically and suitably distributed in the Internet acting as a large distributed firewall, and (ii) The HTTP redirect module, which is a stateless HTTP server that redirects Web requests destined to the targeted Webserver to one of the CAPTCHA nodes. The CAPTCHA node can then segregate legitimate clients from automated attacks by requiring them to solve a challenge. Upon successful response, legitimate clients (humans) are forwarded through a given CAPTCHA node to the Webserver.
Keywords: Ash; CAPTCHAs; Computer crime; Conferences; Relays; Servers (ID#: 15-8917)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371531&isnumber=7371418
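The stateless HTTP redirect module can be sketched as a deterministic mapping from client to CAPTCHA node, so the redirector keeps no per-request state; the node URLs and the hashing choice are assumptions for illustration, not details from the paper.

```python
import hashlib

# Hypothetical pool of geographically distributed CAPTCHA nodes
CAPTCHA_NODES = [
    "https://c1.example.net",
    "https://c2.example.net",
    "https://c3.example.net",
]

def redirect_location(client_ip: str, target: str) -> str:
    """Stateless redirect step: deterministically map a client to one
    CAPTCHA node so repeated requests land on the same challenge server."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    node = CAPTCHA_NODES[int.from_bytes(digest[:4], "big") % len(CAPTCHA_NODES)]
    return f"{node}/challenge?target={target}"

loc = redirect_location("203.0.113.7", "www.victim-site.example")
```

Because the mapping is a pure function of the client address, any replica of the redirector produces the same `Location` header, which is what keeps it stateless under load.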
Balfour, R.E., "Building The “Internet of Everything” (IoE) for First Responders," in Systems, Applications and Technology Conference (LISAT), 2015 IEEE Long Island, pp. 1-6, 1-1 May 2015. doi: 10.1109/LISAT.2015.7160172
Abstract: The “Internet of Everything” (IoE) describes the “bringing together of people, process, data, and things to make networked connections more relevant and valuable than ever before”. IoE encompasses both machine-to-machine (M2M) and Internet-of-Things (IoT) technologies, and it is the pervasiveness of IoE that can be leveraged to achieve many things for many people, including first responders. The emerging IoE will continue to evolve over the next ten years and beyond, but the IoT can happen now, with automated M2M communications bringing first responder communications and situational awareness to the leading-edge of IoE-leveraged technology - exactly where they belong as they risk their lives to protect and save others. Presented here are a number of technological capabilities that are critical to achieving the IoE, especially for first responders and emergency managers, including (a) Security; (b) a global M2M standard; (c) powerful four-dimensional M2M applications; and (d) Data Privacy and trust. For advanced security, Software Defined network Perimeters (SDP) can provide the critical functionality to protect and secure M2M nodes in an ad-hoc M2M IoT/IoE network. Without a secure, dynamic, M2M network, the vision of an emergency responder instantly communicating with a “smart building” would not be feasible. But with SDP, it can, and will, happen. SDP enables an ad-hoc, secure M2M network to rapidly deploy and “hide in plain sight”. In an emergency response situation, this is exactly what we need. For M2M/IoT to go mobile and leverage global IoE capabilities anywhere (which is what emergency responders need as emergency locations are somewhat unpredictable and change every day), a global industry standard must be, and is being, developed: oneM2M.
And the existing fourDscape® technology/platform could quickly support a oneM2M system structure that can be deployed in the short term, with the fourDscape browser providing powerful M2M IoT/IoE applications and 4D visualizations. Privacy-by-design principles can also be applied, and other critical related issues beyond privacy can be addressed (i.e., once privacy is achieved and available IoE sensors/data can be leveraged), such as trusting, scaling, hacking, and securing M2M IoT/IoE devices and systems. Without a full package of IoE innovation that embraces the very public IoE world in a very private and secure way, and that can continue to evolve in parallel with emerging commercial IoE technology, first responders would not be able to leverage the commercial state-of-the-art in the short term and in the years to come. Current technology innovation can change that.
Keywords: Internet of Things; computer crime; data privacy; data visualisation; innovation management; software defined networking; trusted computing; 4D visualizations; Internet of Everything; Internet-of-Things technologies; IoE pervasiveness; IoT technologies; M2M network security; SDP; ad-hoc M2M IoT/IoE network; ad-hoc network; automated M2M communications; data privacy; emergency responder; emergency response situation; four-dimensional M2M applications; fourDscape browser; global IoE capabilities; global M2M standard; global industry standard; hacking; machine-to-machine; oneM2M system structure; privacy-by-design principles; responder communications; situational awareness; smart building; software defined network perimeters; technology innovation; trust; Ad hoc networks; Buildings; Computer architecture; Mobile communication; Security; Tablet computers; Internet-of-Everything; Internet-of-Things; IoE; IoT; M2M; Machine-to-Machine; PbD; Privacy-by-Design; SDP; Software Defined Network Perimeters; fourDscape (ID#: 15-8918)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160172&isnumber=7160171
Hegarty, Rob; Haggerty, John, "SlackStick: Signature-Based File Identification for Live Digital Forensics Examinations," in Intelligence and Security Informatics Conference (EISIC), 2015 European, pp. 24-29, 7-9 Sept. 2015. doi: 10.1109/EISIC.2015.28
Abstract: A digital forensics investigation may involve procedures for both live forensics and for gathering evidence from a device in a forensics laboratory. Due to the focus on capturing volatile data during a live forensics investigation, tools have been developed that are aimed at capturing specific data surrounding state information. However, there may be circumstances whereby non-volatile data analysis, such as the identification of files of interest, is also required. In such an investigation, the ability to use file-wise, or hash, signatures is precluded due to pre-processing requirements by the forensics tools. Therefore, this paper presents SlackStick, a novel automated approach run from a USB memory device for the identification of files of interest or non-volatile evidence triage using an alternative signature scheme. Moreover, the approach may be used by inexpert users during a first-response phase of an investigation. The results of the case study presented in this paper demonstrate the applicability of the approach.
Keywords: Computers; Digital forensics; File systems; Object recognition; Operating systems; Digital forensics; file signatures; live investigations (ID#: 15-8919)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7379719&isnumber=7379706
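The abstract does not spell out SlackStick's alternative signature scheme, but a common block-hash approach gives the flavor of signature-based identification of file content on raw media; the block size and hashing choices below are assumptions.

```python
import hashlib

BLOCK = 512  # sector-sized blocks; the size is illustrative

def block_signatures(data: bytes) -> set:
    """Signatures for every full block of a file of interest."""
    return {hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data) - BLOCK + 1, BLOCK)}

def scan(raw: bytes, sigs: set) -> list:
    """Offsets in a raw image whose blocks match a known signature."""
    return [i for i in range(0, len(raw) - BLOCK + 1, BLOCK)
            if hashlib.sha256(raw[i:i + BLOCK]).hexdigest() in sigs]

# Synthetic "file of interest" embedded in a synthetic disk image
contraband = bytes(range(256)) * 8                     # 2048 bytes
image = b"\x00" * 1024 + contraband + b"\xff" * 1024
hits = scan(image, block_signatures(contraband))
```

Unlike whole-file hashes, block signatures need no pre-processing of the live file system and can flag content even when only fragments remain, which matches the live-triage motivation of the paper.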
Knirsch, F.; Engel, D.; Frincu, M.; Prasanna, V., "Model-Based Assessment for Balancing Privacy Requirements and Operational Capabilities in the Smart Grid," in Innovative Smart Grid Technologies Conference (ISGT), 2015 IEEE Power & Energy Society, pp. 1-5, 18-20 Feb. 2015. doi: 10.1109/ISGT.2015.7131805
Abstract: The smart grid changes the way energy is produced and distributed. In addition, both energy and information are exchanged bidirectionally among participating parties. Heterogeneous systems therefore have to cooperate effectively in order to achieve a common high-level use case, such as smart metering for billing or demand response for load curtailment. Furthermore, a substantial amount of personal data is often needed to achieve that goal. Capturing and processing personal data in the smart grid increases customer concerns about privacy, and certain statutory and operational requirements regarding privacy-aware data processing and storage have to be met. An increase in privacy constraints, however, often limits the operational capabilities of the system. In this paper, we present an approach that automates the process of finding an optimal balance between privacy requirements and operational requirements in a smart grid use case and application scenario. This is achieved by formally describing use cases in an abstract model and by finding an algorithm that determines the optimal balance by forward mapping privacy and operational impacts. For this optimal balancing algorithm, both a numeric approximation and, if feasible, an analytic assessment are presented and investigated. The system is evaluated by applying the tool to a real-world use case from the University of Southern California (USC) microgrid.
Keywords: approximation theory; distributed power generation; power generation protection; power system security; smart power grids; USC microgrid; University of Southern California; billing; common high-level use case; demand response; forward mapping privacy; heterogeneous systems; load curtailment; model-based assessment; numeric approximation; operational capabilities; operational requirement; optimal balancing algorithm; privacy requirements; privacy-aware data processing; privacy-aware data storage; smart grid; smart metering; Data privacy; Mathematical model; Merging; Numerical models; Privacy; Security; Smart grids (ID#: 15-8920)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7131805&isnumber=7131775
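The numeric approximation of the optimal balance can be sketched as a one-dimensional scan over a privacy level; the impact curves below are invented for illustration and are not the paper's model.

```python
def balance(privacy_impact, operational_impact, steps=1000):
    """Numeric approximation of the optimal balance: scan privacy levels
    p in [0, 1] and keep the one maximizing the combined score."""
    best_p, best_score = 0.0, float("-inf")
    for i in range(steps + 1):
        p = i / steps
        score = privacy_impact(p) + operational_impact(p)
        if score > best_score:
            best_p, best_score = p, score
    return best_p, best_score

# Illustrative curves: privacy benefit grows linearly with p, while
# operational capability degrades quadratically as constraints tighten.
p_opt, _ = balance(lambda p: 2 * p, lambda p: 1 - 2 * p ** 2)
```

For these curves the analytic optimum is p = 0.5 (setting the derivative 2 - 4p to zero), which the numeric scan recovers; the paper's analytic assessment plays the same cross-checking role.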
Ferreira, A.; Lenzini, G., "An Analysis of Social Engineering Principles in Effective Phishing," in Socio-Technical Aspects in Security and Trust (STAST), 2015 Workshop on, pp. 9-16, 13-13 July 2015. doi: 10.1109/STAST.2015.10
Abstract: Phishing is a widespread practice and a lucrative business. It is invasive and hard to stop: a company needs to worry about all emails that all employees receive, while an attacker only needs a response from one key person, e.g., someone responsible for finance or human resources, to cause a lot of damage. Some research has looked into what elements make phishing so successful. Many of these elements recall strategies that have been studied as principles of persuasion, scams and social engineering. This paper identifies, from the literature, the elements which reflect the effectiveness of phishing, and manually quantifies them within a phishing email sample. Most elements recognised as more effective in phishing commonly use persuasion principles such as authority and distraction. This insight could help better automate the identification of phishing emails and devise more appropriate countermeasures against them.
Keywords: computer crime; social aspects of automation; unsolicited e-mail; authority; distraction; effective phishing; persuasion principles; phishing emails identification; scams; social engineering principles; Decision making; Electronic mail; Internet; Psychology; Security; Social network services; classification; phishing emails; principles of persuasion; social engineering (ID#: 15-8921)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7351971&isnumber=7351960
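A crude sketch of how persuasion-principle cues might be counted to help automate phishing identification, as the abstract suggests; the cue lexicon below is invented for illustration and is not the paper's coding scheme.

```python
# Illustrative cue lexicon keyed by persuasion principle
CUES = {
    "authority": ["bank", "administrator", "official", "it department"],
    "distraction": ["urgent", "immediately", "act now", "limited time"],
}

def persuasion_profile(email_text: str) -> dict:
    """Count lexicon hits per persuasion principle in an email body."""
    text = email_text.lower()
    return {principle: sum(text.count(term) for term in terms)
            for principle, terms in CUES.items()}

email = ("URGENT: your bank administrator requires you to act now "
         "to verify your account immediately.")
profile = persuasion_profile(email)
```

A real classifier would feed such counts, alongside other features, into a trained model rather than thresholding them directly.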
Proudfoot, J.G.; Jenkins, J.L.; Burgoon, J.K.; Nunamaker, J.F., "Deception is in the Eye of the Communicator: Investigating Pupil Diameter Variations in Automated Deception Detection Interviews," in Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, pp. 97-102, 27-29 May 2015. doi: 10.1109/ISI.2015.7165946
Abstract: Deception is pervasive, often leading to adverse consequences for individuals, organizations, and society. Information systems researchers are developing tools and evaluating sensors that can be used to augment human deception judgments. One sensor exhibiting particular promise is the eye tracker. Prior work evaluating eye trackers for deception detection has focused on the detection and interpretation of brief eye behavior variations in response to stimuli (e.g., images) or interview questions. However, research is needed to understand how eye behaviors evolve over the course of an interaction with a deception detection system. Using latent growth curve modeling, we test how pupil diameter evolves over one's interaction with a deception detection system. The results indicate that pupil diameter changes over the course of a deception detection interaction, and that these trends are indicative of deception during the interaction, regardless of whether incriminating target items are shown.
Keywords: behavioural sciences computing; gaze tracking; image sensors; object detection; automated deception detection interviews; communicator eye; deception detection interaction; deception detection system; eye behavior variations; eye stimuli; eye tracker; human deception judgments; information systems; latent growth curve modeling; pupil diameter variations; sensor; Accuracy; Analytical models; Information systems; Interviews; Organizations; Sensors; deception detection systems; eye tracking; latent growth curve modeling; pupil diameter (ID#: 15-8922)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165946&isnumber=7165923
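A linear trend fitted to a pupil-diameter series is a much-simplified stand-in for latent growth curve modeling, but it shows the kind of over-time trend the study analyzes; the readings below are hypothetical.

```python
def linear_trend(samples):
    """Ordinary least-squares slope and intercept of a time series —
    a simple proxy for the growth trajectory a latent growth model fits."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Hypothetical pupil-diameter readings (mm) sampled over an interview
readings = [3.1, 3.2, 3.2, 3.4, 3.5, 3.7, 3.8]
slope, intercept = linear_trend(readings)
```

In the study, it is the shape of such per-interviewee trajectories (not single-point reactions) that carries the deception signal.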
Khandal, D.; Somwanshi, D., "A Novel Cost Effective Access Control and Auto Filling Form System Using QR Code," in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, pp. 1-5, 10-13 Aug. 2015. doi: 10.1109/ICACCI.2015.7275575
Abstract: QR codes store information in two-dimensional grids which can be decoded quickly. The work proposed here deals with the Quick Response (QR) code, extending its encoding and decoding implementation to design a new articulated user authentication and access control mechanism. The work also proposes a new simultaneous registration system for offices and organizations. The proposed system retrieves a candidate's information from their QR identification code and transfers the data to the digital application form, while granting authentication to authorized QR images from the database. The system can improve quality of service and thus increase the productivity of any organization.
Keywords: QR codes; authorisation; cryptography; decoding; image coding; information retrieval; information storage; quality of service; QR identification code; articulated user authentication design; authorized QR image; auto filling form system; candidate information retrieval; cost effective access control system; data transfer; decoding implementation; digital application form; encoding implementation; information storage; offices; organizations; quality of service improvement; quick response code; registration system; two-dimensional grid; Decoding; Handwriting recognition; IEC; ISO; Image recognition; Magnetic resonance imaging; Monitoring; Authentication; Automated filling form; Code Reader; Embedded system; Encoding-Decoding; Proteus; QR codes; Security (ID#: 15-8923)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275575&isnumber=7275573
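The auto-filling step can be sketched as mapping a decoded QR payload onto form fields; the `key=value;` payload layout is an assumption for illustration, and a real system would first authenticate the code against the authorized-image database, as the abstract describes.

```python
def autofill(qr_payload: str, form_fields: list) -> dict:
    """Map a decoded QR payload of 'key=value' pairs onto a digital
    application form, keeping only the fields the form asks for."""
    record = dict(pair.split("=", 1)
                  for pair in qr_payload.split(";") if "=" in pair)
    return {f: record.get(f, "") for f in form_fields}

# Hypothetical payload; a deployed system would define its own schema
payload = "name=A. Candidate;id=EMP-0042;dept=Research"
form = autofill(payload, ["name", "id", "dept", "phone"])
```

Fields absent from the code (here, `phone`) are left blank for manual entry rather than guessed.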
Compiler Security 2015 |
Much of software security focuses on applications, but compiler security should also be an area of concern. Compilers can “correct” secure coding in the name of efficient processing. The works cited here look at various approaches and issues in compiler security. These works were presented in 2015.
D'Silva, V.; Payer, M.; Song, D., "The Correctness-Security Gap in Compiler Optimization," in Security and Privacy Workshops (SPW), 2015 IEEE, pp. 73-87, 21-22 May 2015. doi: 10.1109/SPW.2015.33
Abstract: There is a significant body of work devoted to testing, verifying, and certifying the correctness of optimizing compilers. The focus of such work is to determine if source code and optimized code have the same functional semantics. In this paper, we introduce the correctness-security gap, which arises when a compiler optimization preserves the functionality of source code but violates a security guarantee that it makes. We show with concrete code examples that several standard optimizations, which have been formally proved correct, inhabit this correctness-security gap. We analyze this gap and conclude that it arises due to techniques that model the state of the program but not the state of the underlying machine. We propose a broad research programme whose goal is to identify, understand, and mitigate the impact of security errors introduced by compiler optimizations. Our proposal includes research in testing, program analysis, theorem proving, and the development of new, accurate machine models for reasoning about the impact of compiler optimizations on security.
Keywords: optimising compilers; program diagnostics; program testing; reasoning about programs; security of data; theorem proving; compiler optimization; correctness certification; correctness testing; correctness verification; correctness-security gap; functional semantics; machine model; optimized code; optimizing compiler; program analysis; program state; program testing; reasoning; security error; security guarantee; source code; theorem proving; Cryptography; Optimization; Optimizing compilers; Semantics; Standards; Syntactics; compiler optimization; formal correctness; security (ID#: 15-8829)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163211&isnumber=7163193
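A classic instance of this gap is dead-store elimination removing a final scrub of key material: the optimized program is functionally equivalent (its return value is unchanged) but leaves the secret in memory. A toy optimizer over an invented three-address mini-IR illustrates the effect; this is not the paper's formalism.

```python
def dead_store_elimination(program):
    """Drop assignments whose target is never read afterwards — the
    textbook optimization that removes a final 'scrub the key' store."""
    kept = []
    live = set()
    for target, expr, reads in reversed(program):
        if target in live or target == "ret":   # "ret" is always observable
            kept.append((target, expr, reads))
            live.discard(target)
            live.update(reads)
        # else: dead store, silently removed
    return list(reversed(kept))

# (target, expression, variables-read): encrypt, return, then zeroize key
program = [
    ("out", "encrypt(key, msg)", {"key", "msg"}),
    ("ret", "out", {"out"}),
    ("key", "0", set()),   # security-relevant scrub of the secret key
]
optimized = dead_store_elimination(program)
scrub_survived = any(t == "key" and e == "0" for t, e, _ in optimized)
```

The scrub is removed because no later statement reads `key`; the optimizer's program-state model cannot see the machine-state guarantee (no secret left in memory) that the store was meant to provide.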
Agosta, G.; Barenghi, A.; Pelosi, G.; Scandale, M., "Information Leakage Chaff: Feeding Red Herrings to Side Channel Attackers," in Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, pp. 1-6, 8-12 June 2015. doi: 10.1145/2744769.2744859
Abstract: A prominent threat to embedded systems security is represented by side-channel attacks: they have proven effective in breaching confidentiality, violating trust guarantees and IP protection schemes. State-of-the-art countermeasures reduce the leaked information to prevent the attacker from retrieving the secret key of the cipher. We propose an alternate defense strategy augmenting the regular information leakage with false targets, quite like chaff countermeasures against radars, hiding the correct secret key among a volley of chaff targets. This in turn feeds the attacker with a large number of invalid keys, which can be used to trigger an alarm whenever the attack attempts a content forgery using them, thus providing a reactive security measure. We realized an LLVM compiler pass able to automatically apply the proposed countermeasure to software implementations of block ciphers. We provide effectiveness and efficiency results on an AES implementation running on an ARM Cortex-M4 showing performance overheads comparable with state-of-the-art countermeasures.
Keywords: cryptography; program compilers; trusted computing; AES implementation; ARM Cortex-M4; IP protection schemes; LLVM compiler; confidentiality breaching; content forgery; defense strategy; embedded system security; information leakage chaff; reactive security measure; side channel attackers; software implementations; trust guarantees; Ciphers; Correlation; Optimization; Software; Switches; Embedded Security; Side Channel Attacks; Software Countermeasures (ID#: 15-8830)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167217&isnumber=7167177
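The chaff idea can be sketched as key checking that distinguishes the real key, decoy (chaff) keys that raise an alarm, and everything else; the API below is an invented illustration, not the paper's LLVM pass.

```python
import secrets

def make_chaff(real_key: bytes, n: int = 7) -> set:
    """Decoy keys hidden alongside the real one; an attacker recovering
    key material from side-channel leakage may land on a chaff value."""
    return {secrets.token_bytes(len(real_key)) for _ in range(n)}

def check_key(candidate: bytes, real_key: bytes, chaff: set) -> str:
    """Reactive measure: valid use, alarm (a chaff key signals an attack
    attempting forgery), or plain rejection."""
    if candidate == real_key:
        return "valid"
    if candidate in chaff:
        return "ALARM: chaff key used"
    return "invalid"

real = secrets.token_bytes(16)
chaff = make_chaff(real)
decoy = next(iter(chaff))
```

The defensive value is exactly that chaff keys look plausible to the attacker but are detectable by the defender the moment they are used.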
Prasad, T.S.; Kisore, N.R., "Application of Hidden Markov Model for Classifying Metamorphic Virus," in Advance Computing Conference (IACC), 2015 IEEE International, pp. 1201-1206, 12-13 June 2015. doi: 10.1109/IADCC.2015.7154893
Abstract: The computer virus is a rapidly evolving threat to the computing community. These viruses fall into different categories, and it is generally believed that metamorphic viruses are extremely difficult to detect. Metamorphic virus generating kits are readily available, with which potentially dangerous viruses can be created with very little knowledge or skill. Classification of computer viruses is very important for effective detection of malware using antivirus software. It is also necessary for building and applying the right software patch to overcome a security vulnerability. Recent research on Hidden Markov Model (HMM) analysis has shown that it is a more effective tool than other techniques, such as machine learning, in detecting computer viruses and classifying them. In this paper, we present a classification technique based on the Hidden Markov Model for computer virus classification. We trained multiple HMMs with 500 malware files belonging to different virus families as well as compilers. Once trained, the models were used to classify new malware of their kind efficiently.
Keywords: computer viruses; hidden Markov models; invasive software; pattern classification; HMM analysis; antivirus software; compilers; computer virus classification; hidden Markov model; malware files; metamorphic virus classification; security vulnerability; software patch; Computational modeling; Computers; Hidden Markov models; Malware; Software; Training; Viruses (medical); Hidden Markov Model; Malware Classification; Metamorphic Malware; N-gram (ID#: 15-8831)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7154893&isnumber=7154658
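Classification with per-family HMMs amounts to scoring a sample under each trained model with the forward algorithm and picking the family with the highest likelihood. A minimal sketch over a two-symbol "opcode" alphabet; the model numbers are invented for illustration, not trained parameters.

```python
import math

def forward_log_likelihood(obs, init, trans, emit):
    """Forward algorithm: log-probability of an observation sequence
    under one family's HMM."""
    states = range(len(init))
    alpha = [init[s] * emit[s][obs[0]] for s in states]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in states) * emit[s][o]
                 for s in states]
    return math.log(sum(alpha))

# Two toy 2-state models (init, transition, emission) over opcodes {0, 1}
family_a = ([0.6, 0.4], [[0.7, 0.3], [0.4, 0.6]], [[0.9, 0.1], [0.2, 0.8]])
family_b = ([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], [[0.1, 0.9], [0.1, 0.9]])

sample = [0, 0, 1, 0, 0]  # mostly opcode 0, so family_a should fit better
best = max([("family_a", family_a), ("family_b", family_b)],
           key=lambda kv: forward_log_likelihood(sample, *kv[1]))[0]
```

In practice each family's model is trained (e.g. with Baum-Welch) on opcode sequences extracted from known samples of that family.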
Maldonado-Lopez, F.A.; Calle, E.; Donoso, Y., "Detection and Prevention of Firewall-Rule Conflicts on Software-Defined Networking," in Reliable Networks Design and Modeling (RNDM), 2015 7th International Workshop on, pp. 259-265, 5-7 Oct. 2015. doi: 10.1109/RNDM.2015.7325238
Abstract: Software-Defined Networking (SDN) is a different approach to managing a network by software. It can use well-defined software expressions and predicates to regulate network behavior. Current SDN controllers, such as Floodlight, offer a framework to develop, test and run applications that control network operation, including the firewall function. However, they are not able to validate firewall policies or detect conflicts, nor do they avoid contradictory configurations on network devices. Some compilers only detect conflicts for a subset of the language and hence cannot detect conflicts involving rules that contradict security controls. This paper presents FireWell, our framework based on Alloy. FireWell is able to model firewall policies as formal predicates to validate, detect and prevent conflicts in firewall policies. In addition, we present the implementation of FireWell and test it using the Floodlight controller and firewall application.
Keywords: computer network management; firewalls; floodlighting; software defined networking; FireWell; SDN; contradictory configuration avoidance; firewall-rule conflict detection; firewall-rule conflict prevention; floodlight controller; network management; security control; software defined networking; Metals; Network topology; Ports (Computers); Protocols; Semantics; Shadow mapping; Topology; Conflict detection; model checking; policy-based network management; protocol verification (ID#: 15-8832)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7325238&isnumber=7324297
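One conflict class such validation must catch is shadowing: an earlier rule with the opposite action covers everything a later rule matches, so the later rule can never take effect. A minimal sketch with an invented rule representation (FireWell itself expresses this as Alloy predicates):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src: str      # "*" means any source
    dst: str      # "*" means any destination
    port: int     # 0 means any port
    action: str   # "allow" or "deny"

def covers(general: Rule, specific: Rule) -> bool:
    """True if `general` matches every packet `specific` matches."""
    return (general.src in ("*", specific.src)
            and general.dst in ("*", specific.dst)
            and general.port in (0, specific.port))

def shadowing_conflicts(rules):
    """Pairs (earlier, later) where the earlier rule shadows the later
    one with a contradictory action."""
    return [(e, l)
            for i, e in enumerate(rules)
            for l in rules[i + 1:]
            if covers(e, l) and e.action != l.action]

policy = [
    Rule("*", "10.0.0.5", 0, "deny"),              # blanket deny to host
    Rule("192.168.1.9", "10.0.0.5", 80, "allow"),  # shadowed: unreachable
]
conflicts = shadowing_conflicts(policy)
```

A formal approach like FireWell generalizes this pairwise check into predicates a model finder can verify exhaustively over the whole policy.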
Carrozza, G.; Cinque, M.; Giordano, U.; Pietrantuono, R.; Russo, S., "Prioritizing Correction of Static Analysis Infringements for Cost-Effective Code Sanitization," in Software Engineering Research and Industrial Practice (SER&IP), 2015 IEEE/ACM 2nd International Workshop on, pp. 25-31, 17-17 May 2015. doi: 10.1109/SERIP.2015.13
Abstract: Static analysis is a widely adopted technique in the industrial development of software systems. It allows code compliance with predefined programming rules to be checked automatically. When applied to large software systems, sanitizing the code in an efficient way requires careful guidance, as the analysis can yield a high number of (more or less relevant) rule infringements. We report the results of a static analysis study conducted on several industrial software systems developed by SELEX ES, a large manufacturer of software-intensive mission-critical systems. We analyzed results for a set of 156 software components developed at SELEX ES; based on them, we developed and experimented with an approach to prioritize the components and violated rules to correct, for cost-effective code sanitization. The results highlight the benefits that can be achieved in terms of quality targets and incurred cost.
Keywords: program compilers; program diagnostics; program verification; safety-critical software; software development management; SELEX ES; code compliance; cost effective code sanitization; industrial software system development; prioritize components; software components development; software intensive mission-critical system; static analysis; Companies; Encoding; Programming; Resource management; Security; Software; Standards; critical systems; defect analysis; effort allocation; industrial study; static analysis (ID#: 15-8833)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210490&isnumber=7210477
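Prioritization of components and violated rules can be sketched as ranking findings by expected benefit per unit of correction cost; the weights and costs below are illustrative, not SELEX ES data.

```python
def prioritize(findings):
    """Rank static-analysis findings by expected quality gain
    (severity x violation count) per unit of correction effort."""
    return sorted(findings,
                  key=lambda f: f["severity"] * f["violations"] / f["fix_cost"],
                  reverse=True)

findings = [
    {"component": "nav",   "rule": "unchecked-return", "severity": 3, "violations": 40,  "fix_cost": 10},
    {"component": "comms", "rule": "buffer-bound",     "severity": 5, "violations": 12,  "fix_cost": 4},
    {"component": "ui",    "rule": "naming",           "severity": 1, "violations": 200, "fix_cost": 50},
]
ranked = prioritize(findings)
```

The point of such a ratio is that a rule with many low-severity hits (here, `naming`) can rank below a few severe, cheap-to-fix violations.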
Abdellaoui, Z.; Ben Mbarek, I.; Bouhouch, R.; Hasnaoui, S., "DDS Middleware on FlexRay Network: Simulink Blockset Implementation of Wheel's Sub-blocks and its Adaptation to DDS Concept," in Intelligent Signal Processing (WISP), 2015 IEEE 9th International Symposium on, pp. 1-6, 15-17 May 2015. doi: 10.1109/WISP.2015.7139166
Abstract: Driven by the search for improved vehicle safety, security, and reliability, the challenges in the automotive sector have continued to increase in order to meet these requirements. In this context, we have implemented a vehicle Simulink blockset model. The proposed blockset corresponds to the Society of Automotive Engineers (SAE) benchmark model, which is normally connected over the CAN bus; we extended it to the FlexRay bus. We chose the Embedded Matlab tool for implementing this blockset. It permits us to generate C code for the different blocks in order to validate the vehicle system design. In this paper, we focus on the Simulink blockset implementation of the wheels model with its different sub-blocks. We then identify the DDS Data Readers and Data Writers adapted to this blockset using the FlexRay network.
Keywords: C language; automobiles; automotive engineering; controller area networks; embedded systems; field buses; middleware; program compilers; protocols; road safety; vehicular ad hoc networks; wheels; C code generation; CAN bus; DDS data readers; DDS middleware; FlexRay bus; FlexRay network; SAE benchmark model; Society of Automotive Engineers; automotive sector; data writers; embedded Matlab tool; vehicle Simulink blockset model; vehicle reliability; vehicle safety; vehicle security; vehicle system design; wheel subblocks; wheels model; Benchmark testing; Data models; Mathematical model; Software packages; Suspensions; Vehicles; Wheels; DDS; Embedded MATLAB; FlexRay; SAE Benchmark; Simulink Blockset; Wheels (ID#: 15-8834)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7139166&isnumber=7139146
Chang Liu; Xiao Shaun Wang; Nayak, K.; Yan Huang; Shi, E., "ObliVM: A Programming Framework for Secure Computation," in Security and Privacy (SP), 2015 IEEE Symposium on, pp. 359-376, 17-21 May 2015. doi: 10.1109/SP.2015.29
Abstract: We design and develop ObliVM, a programming framework for secure computation. ObliVM offers a domain specific language designed for compilation of programs into efficient oblivious representations suitable for secure computation. ObliVM offers a powerful, expressive programming language and user-friendly oblivious programming abstractions. We develop various showcase applications such as data mining, streaming algorithms, graph algorithms, genomic data analysis, and data structures, and demonstrate the scalability of ObliVM to bigger data sizes. We also show how ObliVM significantly reduces development effort while retaining competitive performance for a wide range of applications in comparison with hand-crafted solutions. We are in the process of open-sourcing ObliVM and our rich libraries to the community (www.oblivm.com), offering a reusable framework to implement and distribute new cryptographic algorithms.
Keywords: cryptography; programming; specification languages; ObliVM programming framework; cryptographic algorithms; domain specific language; program compilation; programming abstraction; secure computation; Cryptography; Libraries; Logic gates; Program processors; Programming; Protocols; Compiler; Oblivious Algorithms; Oblivious RAM; Programming Language; Secure Computation; Type System (ID#: 15-8835)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163036&isnumber=7163005
Skalicky, S.; Lopez, S.; Lukowiak, M.; Schmidt, A.G., "A Parallelizing Matlab Compiler Framework and Run Time for Heterogeneous Systems," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 232-237, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.51
Abstract: Compute-intensive applications place ever increasing data processing requirements on hardware systems. Many of these applications have only recently become feasible thanks to the increasing computing power of modern processors. The Matlab language is uniquely situated to support the description of these compute-intensive scientific applications, and consequently has been continuously improved to provide increasing computational support in the form of multithreading for CPUs and utilizing accelerators such as GPUs and FPGAs. Moreover, taking advantage of the computational support in these heterogeneous systems, from the problem domain down to the computer architecture, necessitates a wide breadth of knowledge and understanding. In this work, we present a framework for the development of compute-intensive scientific applications in Matlab using heterogeneous processor systems. We investigate systems containing CPUs, GPUs, and FPGAs. We leverage the capabilities of Matlab and supplement them by automating the mapping, scheduling, and parallel code generation. Our experimental results on a set of benchmarks show speedups of 20x to 60x compared to the standard Matlab CPU environment, with minimal effort required on the part of the user.
Keywords: graphics processing units; mathematics computing; multi-threading; parallel architectures; parallelising compilers; FPGA; GPU; Matlab compiler framework; Matlab language; compute-intensive scientific application; computer architecture; heterogeneous processor system; heterogeneous system; multithreading; parallel code generation; standard Matlab CPU environment; Data transfer; Field programmable gate arrays; Kernel; MATLAB; Message systems; Processor scheduling; Scheduling; Heterogeneous computing; Matlab; compiler (ID#: 15-8836)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336169&isnumber=7336120
Qining Lu; Farahani, M.; Jiesheng Wei; Thomas, A.; Pattabiraman, K., "LLFI: An Intermediate Code-Level Fault Injection Tool for Hardware Faults," in Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, pp. 11-16, 3-5 Aug. 2015. doi: 10.1109/QRS.2015.13
Abstract: Hardware errors are becoming more prominent with shrinking feature sizes; however, tolerating them exclusively in hardware is expensive. Researchers have explored software-based techniques for building applications that are resilient to hardware faults. However, software-based error resilience techniques need configurable and accurate fault injection techniques to evaluate their effectiveness. In this paper, we present LLFI, a fault injector that works at the LLVM compiler's intermediate representation (IR) level of the application. LLFI is highly configurable and can be used to inject faults into selected targets in the program in a fine-grained manner. We demonstrate the utility of LLFI by using it to perform fault injection experiments on nine programs, and study the effect of different injection choices on their resilience, namely instruction type, register target, and number of bits flipped. We find that these parameters have a marked effect on the evaluation of overall resilience.
Keywords: software fault tolerance; LLFI; error resilient applications; fault injection techniques; fine-grained manner; hardware errors; hardware faults; intermediate code-level fault injection tool; intermediate representation level; software based error resilience techniques; software-based techniques; Benchmark testing; Computer crashes; Hardware; Instruments; Registers; Resilience; Software (ID#: 15-8837)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272909&isnumber=7272893
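The core operation behind this style of instruction-level fault injection can be sketched in a few lines. The snippet below is not LLFI's actual API; it is a hypothetical illustration of the single-bit-flip model the paper evaluates, corrupting one instruction's result in a chosen bit:

```c
#include <stdint.h>

/* Illustrative core of bit-flip fault injection: flip one chosen bit
 * of an intermediate value, as a tool like LLFI might do for a
 * selected instruction's result. */
uint32_t flip_bit(uint32_t value, unsigned bit)
{
    return value ^ (UINT32_C(1) << (bit % 32));
}

/* An "injected" add: compute the correct result, then corrupt it in
 * the requested bit position before it flows to later instructions. */
uint32_t faulty_add(uint32_t a, uint32_t b, unsigned bit)
{
    return flip_bit(a + b, bit);
}
```

In a real injector this corruption is applied at the IR level to a randomly selected dynamic instruction instance, so the resilience study can vary instruction type, register target, and bit count as the abstract describes.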
Hataba, M.; Elkhouly, R.; El-Mahdy, A., "Diversified Remote Code Execution Using Dynamic Obfuscation of Conditional Branches," in Distributed Computing Systems Workshops (ICDCSW), 2015 IEEE 35th International Conference on, pp. 120-127, June 29 2015-July 2 2015. doi: 10.1109/ICDCSW.2015.37
Abstract: Information leakage via timing side-channel attacks is one of the main threats targeting code that executes on remote platforms such as the cloud computing environment. These attacks can be further leveraged to reverse-engineer or even tamper with the running code. In this paper, we propose a security obfuscation technique that helps make the generated code more resistant to these attacks by increasing logical complexity to hinder the formulation of a solid hypothesis about code behavior. More importantly, this software solution is portable and generic, and does not require special setup or hardware or software modifications. In particular, we consider mangling the control flow inside a program by converting a random set of conditional branches into linear code using an if-conversion transformation. Moreover, our method exploits dynamic compilation technology to continually and randomly alter the branches. All of this mangling should diversify code execution, so it becomes difficult for an attacker to infer timing correlations through statistical analysis. We extend the LLVM JIT compiler to provide an initial investigation of this approach. This makes our system applicable to a wide variety of programming languages and hardware platforms. We have studied the system using a simple test program and selected benchmarks from the standard SPEC CPU 2006 suite with different input loads and experimental setups. Initial results show significant changes in the program's control flow and hence data dependences, resulting in noticeably different execution times even for the same input data, thereby complicating such attacks. More notably, the performance penalty is within reasonable margins.
Keywords: cloud computing; program compilers; security of data; LLVM JIT compiler; cloud computing environment; conditional branches; diversified remote code execution; dynamic compilation technology; dynamic obfuscation; information leakage; security obfuscation technique; standard SPEC CPU 2006 suite; statistical analysis; timing side-channel attacks; Benchmark testing; Cloud computing; Hardware; Optimization; Program processors; Runtime; Security; If-; JIT Compilation; Obfuscation; Side-Channels (ID#: 15-8838)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165094&isnumber=7165001
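The if-conversion at the heart of this obfuscation replaces a conditional branch with straight-line code. The sketch below is a hand-written source-level illustration of the idea, not the paper's LLVM pass; the mask-based select stands in for the predicated code the JIT would emit:

```c
#include <stdint.h>

/* Branchy original: execution time may correlate with the condition,
 * which is what a timing side channel exploits. */
uint32_t select_branchy(int cond, uint32_t a, uint32_t b)
{
    if (cond)
        return a;
    return b;
}

/* If-converted form: the branch becomes linear code.  A computed
 * all-ones/all-zeros mask picks the result without a jump. */
uint32_t select_linear(int cond, uint32_t a, uint32_t b)
{
    uint32_t mask = (uint32_t)-(cond != 0); /* 0xFFFFFFFF if cond, else 0 */
    return (a & mask) | (b & ~mask);
}
```

The paper's scheme applies this transformation to a random, dynamically changing subset of branches, so the same input can take noticeably different execution paths across runs.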
Seuschek, H.; Rass, S., "Side-Channel Leakage Models for RISC Instruction Set Architectures from Empirical Data," in Digital System Design (DSD), 2015 Euromicro Conference on, pp. 423-430, 26-28 Aug. 2015. doi: 10.1109/DSD.2015.117
Abstract: Side-channel attacks are currently among the most serious threats for embedded systems. Popular countermeasures to mitigate the impact of such attacks are masking schemes, where secret intermediate values are split in two or more values by virtue of secret sharing. Processing the secret happens on separate execution paths, which are executed on the same central processing unit (CPU). In case of unwanted correlations between different registers inside the CPU the shared secret may leak out through a side-channel. This problem is particularly evident on low cost embedded systems, such as nodes for the Internet of Things (IoT), where cryptographic algorithms are often implemented in pure software on a reduced instruction set computer (RISC). On such an architecture, all data manipulation operations are carried out on the contents of the CPU's register file. This means that all intermediate values of the cryptographic algorithm at some stage pass through the register file. Towards avoiding unwanted correlations and leakages thereof, special care has to be taken in the mapping of the registers to intermediate values of the algorithm. In this work, we describe an empirical study that reveals effects of unintended unmasking of masked intermediate values and thus leaking secret values. The observed phenomena are related to the leakage of masked hardware implementations caused by glitches in the combinatorial path of the circuit but the effects are abstracted to the level of the instruction set architecture on a RISC CPU. Furthermore, we discuss countermeasures to have the compiler thwart such leakages.
Keywords: cryptography; embedded systems; program compilers; reduced instruction set computing; RISC CPU;RISC instruction set architectures; central processing unit; compiler; cryptographic algorithm; data manipulation operations; embedded systems; masked hardware implementations; masking schemes; secret sharing; side-channel attacks; side-channel leakage models; Central Processing Unit; Computer architecture; Correlation; Cryptography; Hamming distance; Reduced instruction set computing; Registers (ID#: 15-8839)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7302305&isnumber=7302233
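The masking scheme the paper analyzes splits a secret into shares so that neither share alone is informative; the leakage it studies arises when the shares end up correlated inside the CPU's register file. A minimal Boolean-masking sketch (illustrative only; a real implementation would draw `random_mask` from a cryptographic RNG):

```c
#include <stdint.h>

/* Split a secret byte into two Boolean shares.  Neither share alone
 * reveals the secret; XORing them recombines it.  The paper's concern
 * is that correlations between the registers holding the two shares
 * can leak the secret through a side channel. */
uint8_t share0(uint8_t secret, uint8_t random_mask)
{
    (void)secret;              /* the first share is just the mask */
    return random_mask;
}

uint8_t share1(uint8_t secret, uint8_t random_mask)
{
    return secret ^ random_mask;
}

/* Recombination: only done where the cleartext value is truly needed. */
uint8_t unmask(uint8_t s0, uint8_t s1)
{
    return s0 ^ s1;
}
```

The register-allocation countermeasures the paper discusses amount to ensuring the compiler never maps both shares (or a share and the mask) to registers whose contents correlate in a way the side channel can observe.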
Papadakis, M.; Yue Jia; Harman, M.; Le Traon, Y., "Trivial Compiler Equivalence: A Large Scale Empirical Study of a Simple, Fast and Effective Equivalent Mutant Detection Technique," in Software Engineering (ICSE), 2015 IEEE/ACM 37th IEEE International Conference on, vol. 1, pp. 936-946, 16-24 May 2015. doi: 10.1109/ICSE.2015.103
Abstract: Identifying equivalent mutants remains the largest impediment to the widespread uptake of mutation testing. Despite being researched for more than three decades, the problem remains. We propose Trivial Compiler Equivalence (TCE) a technique that exploits the use of readily available compiler technology to address this long-standing challenge. TCE is directly applicable to real-world programs and can imbue existing tools with the ability to detect equivalent mutants and a special form of useless mutants called duplicated mutants. We present a thorough empirical study using 6 large open source programs, several orders of magnitude larger than those used in previous work, and 18 benchmark programs with hand-analysis equivalent mutants. Our results reveal that, on large real-world programs, TCE can discard more than 7% and 21% of all the mutants as being equivalent and duplicated mutants respectively. A human-based equivalence verification reveals that TCE has the ability to detect approximately 30% of all the existing equivalent mutants.
Keywords: formal verification; program compilers; program testing; TCE technique; duplicated mutants; human-based equivalence verification; mutant detection technique; mutation testing; trivial compiler equivalence technology; Benchmark testing; Java; Optimization; Scalability; Syntactics (ID#: 15-8840)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194639&isnumber=7194545
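The intuition behind TCE can be shown with a hypothetical mutant pair: the technique compiles both the original program and the mutant and declares the mutant equivalent when the optimized binaries are byte-identical. The pair below illustrates the kind of mutant this catches (it is not code from the paper):

```c
/* Original program fragment. */
int double_orig(int a)
{
    return a * 2;
}

/* Mutant: '*' replaced by '<<'.  Semantically equivalent for
 * non-negative inputs of interest, so an optimizing compiler
 * typically emits identical machine code for both functions --
 * exactly the case TCE detects by byte-comparing compiled binaries. */
int double_mutant(int a)
{
    return a << 1;
}
```

No static reasoning about equivalence is needed: if `cc -O2` produces the same bytes for both whole programs, the mutant is trivially equivalent and can be discarded before any test execution.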
Husak, M.; Velan, P.; Vykopal, J., "Security Monitoring of HTTP Traffic Using Extended Flows," in Availability, Reliability and Security (ARES), 2015 10th International Conference on, pp. 258-265, 24-27 Aug. 2015. doi: 10.1109/ARES.2015.42
Abstract: In this paper, we present an analysis of HTTP traffic in a large-scale environment which uses network flow monitoring extended by parsing HTTP requests. In contrast to previously published analyses, we were the first to classify patterns of HTTP traffic which are relevant to network security. We described three classes of HTTP traffic which contain brute-force password attacks, connections to proxies, HTTP scanners, and web crawlers. Using the classification, we were able to detect up to 16 previously undetectable brute-force password attacks and 19 HTTP scans per day in our campus network. The activity of proxy servers and web crawlers was also observed. Symptoms of these attacks may be detected by other methods based on traditional flow monitoring, but detection using the analysis of HTTP requests is more straightforward. We, thus, confirm the added value of extended flow monitoring in comparison to the traditional method.
Keywords: computer network security; program compilers; telecommunication traffic; transport protocols; HTTP request parsing; HTTP traffic; brute-force password attack; network flow monitoring; network security monitoring; Crawlers; IP networks; Monitoring; Protocols; Security; Web servers (ID#: 15-8841)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299924&isnumber=7299862
Dewey, D.; Reaves, B.; Traynor, P., "Uncovering Use-After-Free Conditions in Compiled Code," in Availability, Reliability and Security (ARES), 2015 10th International Conference on, pp. 90-99, 24-27 Aug. 2015. doi: 10.1109/ARES.2015.61
Abstract: Use-after-free conditions occur when an execution path of a process accesses an incorrectly deallocated object. Such access is problematic because it may potentially allow for the execution of arbitrary code by an adversary. However, while increasingly common, such flaws are rarely detected by compilers in even the most obvious instances. In this paper, we design and implement a static analysis method for the detection of use-after-free conditions in binary code. Our new analysis is similar to available expression analysis and traverses all code paths to ensure that every object is defined before each use. Failure to achieve this property indicates that an object is improperly freed and potentially vulnerable to compromise. After discussing the details of our algorithm, we implement a tool and run it against a set of enterprise-grade, publicly available binaries. We show that our tool can not only catch textbook and recently released in-situ examples of this flaw, but that it has also identified 127 additional use-after-free conditions in a search of 652 compiled binaries in the Windows system32 directory. In so doing, we demonstrate not only the power of this approach in combating this increasingly common vulnerability, but also the ability to identify such problems in software for which the source code is not necessarily publicly available.
Keywords: software engineering; Windows system32 directory; binary code; compiled code; static analysis method; use-after-free conditions; Algorithm design and analysis; Binary codes; Object recognition; Runtime; Security; Software; Visualization; Binary Decompilation; Software Security; Static Analysis (ID#: 15-8842)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299902&isnumber=7299862
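The flaw class is easy to state in code. The fragment below is a textbook illustration (not taken from the paper's corpus): the analysis requires that on every path each object be defined before use, and the commented-out line is the violation it would flag:

```c
#include <stdlib.h>
#include <string.h>

/* 'buf' is used only between its malloc and its free, so every use
 * sees a defined object.  The commented line after free() is the
 * textbook use-after-free that the paper's available-expression-style
 * analysis detects: on that path the object is no longer defined. */
int demo_no_uaf(void)
{
    char out[16];
    char *buf = malloc(6);
    if (!buf)
        return 0;
    strcpy(buf, "hello");
    strcpy(out, buf);          /* OK: buf is still defined here */
    free(buf);
    /* strcpy(out, buf);         use-after-free: buf was deallocated */
    return strcmp(out, "hello") == 0;
}
```

The paper's contribution is detecting this property in compiled binaries, where no source-level pointer names survive, by traversing all code paths of the disassembled program.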
Catherine S, M.; George, G., "S-Compiler: A Code Vulnerability Detection Method," in Electrical, Electronics, Signals, Communication and Optimization (EESCO), 2015 International Conference on, pp. 1-4, 24-25 Jan. 2015. doi: 10.1109/EESCO.2015.7254018
Abstract: Nowadays, security breaches are increasing greatly in number. This is one of the major threats faced by most organisations, and it usually leads to massive losses. The major cause of these breaches is often vulnerabilities in software products. Many tools are available to detect such vulnerabilities, but detecting and correcting vulnerabilities during the development phase would be more beneficial. Though there are many standard secure coding practices to be followed in the development phase, software developers fail to utilize them, and this leads to an unsecured end product. The difficulty of manually analyzing vulnerabilities in source code is what led to the evolution of automated analysis tools. Static and dynamic analysis are the two complementary methods used to detect vulnerabilities in the development phase. Static analysis scans the source code, which eliminates the need to execute the code, but it has many false positives and false negatives. On the other hand, dynamic analysis tests the code by running it along with test cases. The proposed approach integrates static and dynamic analysis. This eliminates the false-positive and false-negative problem of the existing practices and helps developers correct their code in the most efficient way. It deals with common buffer overflow vulnerabilities and vulnerabilities from the Common Weakness Enumeration (CWE). The whole scenario is implemented as a web interface.
Keywords: source coding; telecommunication security; S-compiler; automated analysis tools; code vulnerability detection method; common weakness enumeration; false negatives; false positives; source code; Buffer overflows; Buffer storage; Encoding; Forensics; Information security; Software; Buffer overflow; Dynamic analysis; Secure coding; Static analysis (ID#: 15-8843)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7254018&isnumber=7253613
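The buffer overflow class the tool targets comes down to unchecked copies into fixed-size buffers. A minimal sketch of the checked pattern such a tool would steer developers toward (hypothetical helper names; this is not the S-Compiler's actual interface):

```c
#include <string.h>

/* CWE-120-style flaw: strcpy() into a fixed buffer with no length
 * check.  copy_bounded() is the corrected variant: a static pass can
 * verify the guard exists, and the guard itself is the dynamic test
 * -- mirroring the paper's combined static/dynamic approach. */
int copy_bounded(char *dst, size_t dstsz, const char *src)
{
    if (strlen(src) >= dstsz)
        return 0;              /* would overflow: reject */
    strcpy(dst, src);          /* safe: length verified above */
    return 1;
}

/* Self-contained demonstration: a fitting copy succeeds, an
 * oversized one is rejected instead of overflowing. */
int copy_bounded_demo(void)
{
    char buf[8];
    return copy_bounded(buf, sizeof buf, "hi") == 1 &&
           copy_bounded(buf, sizeof buf, "far-too-long") == 0;
}
```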
Saito, T.; Miyazaki, H.; Baba, T.; Sumida, Y.; Hori, Y., "Study on Diffusion of Protection/Mitigation against Memory Corruption Attack in Linux Distributions," in Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, pp. 525-530, 8-10 July 2015. doi: 10.1109/IMIS.2015.73
Abstract: Memory corruption attacks that exploit software vulnerabilities have become a serious problem on the Internet. Effective protection and/or mitigation technologies aimed at countering these attacks are currently provided with operating systems, compilers, and libraries. Unfortunately, the attacks continue. One reason for this state of affairs is the uneven diffusion of the latest (and thus most potent) protection and/or mitigation technologies: attackers are likely to have found ways of circumventing the most well-known older versions, causing them to lose effectiveness. Therefore, in this paper, we explore the diffusion of relatively new technologies and analyze the results of a survey of Linux distributions.
Keywords: Linux; security of data; Internet; Linux distributions; memory corruption attack mitigation; memory corruption attack protection; software vulnerabilities; Buffer overflows; Geophysical measurement techniques; Ground penetrating radar; Kernel; Libraries; Linux; Anti-thread; Buffer Overflow; Diffusion of countermeasure techniques; Memory corruption attacks (ID#: 15-8844)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7285008&isnumber=7284886
Crane, S.; Liebchen, C.; Homescu, A.; Davi, L.; Larsen, P.; Sadeghi, A.-R.; Brunthaler, S.; Franz, M., "Readactor: Practical Code Randomization Resilient to Memory Disclosure," in Security and Privacy (SP), 2015 IEEE Symposium on, pp. 763-780, 17-21 May 2015. doi: 10.1109/SP.2015.52
Abstract: Code-reuse attacks such as return-oriented programming (ROP) pose a severe threat to modern software. Designing practical and effective defenses against code-reuse attacks is highly challenging. One line of defense builds upon fine-grained code diversification to prevent the adversary from constructing a reliable code-reuse attack. However, all solutions proposed so far are either vulnerable to memory disclosure or are impractical for deployment on commodity systems. In this paper, we address the deficiencies of existing solutions and present the first practical, fine-grained code randomization defense, called Readactor, resilient to both static and dynamic ROP attacks. We distinguish between direct memory disclosure, where the attacker reads code pages, and indirect memory disclosure, where attackers use code pointers on data pages to infer the code layout without reading code pages. Unlike previous work, Readactor resists both types of memory disclosure. Moreover, our technique protects both statically and dynamically generated code. We use a new compiler-based code generation paradigm that uses hardware features provided by modern CPUs to enable execute-only memory and hide code pointers from leakage to the adversary. Finally, our extensive evaluation shows that our approach is practical -- we protect the entire Google Chromium browser and its V8 JIT compiler -- and efficient with an average SPEC CPU2006 performance overhead of only 6.4%.
Keywords: online front-ends; program compilers; Google Chromium browser; ROP; Readactor; V8 JIT compiler; code randomization; code-reuse attacks; compiler-based code generation paradigm; memory disclosure; return-oriented programming; Hardware; Layout; Operating systems; Program processors; Security; Virtual machine monitors (ID#: 15-8845)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163059&isnumber=7163005
Jeehong Kim; Young Ik Eom, "Fast and Space-Efficient Defense Against Jump-Oriented Programming Attacks," in Big Data and Smart Computing (BigComp), 2015 International Conference on, pp. 7-10, 9-11 Feb. 2015. doi: 10.1109/35021BIGCOMP.2015.7072839
Abstract: Recently, Jump-Oriented Programming (JOP) attacks have become widespread in various systems, including servers, desktops, and smart devices. A JOP attack rearranges existing code snippets in a program to make gadget sequences, and hijacks the control flow of the program by chaining and executing gadget sequences consecutively. However, existing defense schemes have limitations such as high execution overhead, high binary size increase, and low applicability. In this paper, to solve these problems, we introduce target shepherding, a fast and space-efficient defense against general JOP attacks. Our defense scheme generates, at compile time, monitoring code that determines whether the target is legitimate just before each indirect jump instruction, and then checks at run time whether the control flow has been subverted by a JOP attack. We achieve very low run-time overhead with a very small increase in file size. In our experimental results, the performance overhead is 2.36% and the file size overhead is 5.82% with secure execution.
Keywords: program compilers; security of data; JOP attack; code snippets; compile time; control flow; file size overhead; gadget sequences; indirect jump instruction; jump-oriented programming attack; monitoring code generation; performance overhead; program hijacks control flow; run-time overhead; space-efficient defense; target shepherding; Law; Monitoring; Programming; Registers; Security; Servers; Code Reuse Attack; Jump-oriented Programming; Return-oriented Programming; Software Security (ID#: 15-8846)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7072839&isnumber=7072806
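Target shepherding inserts a legitimacy check before every indirect jump. The sketch below mimics that check at the source level with a whitelist of function pointers; the real scheme instruments compiled code at each indirect jump, so the names and structure here are illustrative only:

```c
#include <stddef.h>

typedef int (*handler_fn)(int);

int inc(int x) { return x + 1; }
int dbl(int x) { return x * 2; }
int rogue(int x) { return x; }   /* stands in for a JOP gadget */

/* Legitimate indirect-jump targets, as recorded at compile time. */
static handler_fn legit_targets[] = { inc, dbl };

/* Monitoring code conceptually emitted before each indirect jump:
 * transfer control only if the target is in the legitimate set;
 * otherwise report the hijack attempt (-1 here). */
int shepherded_call(handler_fn target, int arg)
{
    for (size_t i = 0; i < sizeof legit_targets / sizeof legit_targets[0]; i++)
        if (legit_targets[i] == target)
            return target(arg);
    return -1;   /* control-flow subversion detected */
}
```

Because the check is compiled inline before each indirect jump rather than maintained in a separate shadow structure, the run-time and binary-size costs stay low, which matches the overheads the abstract reports.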
Chia-Nan Kao; I-Ju Liao; Yung-Cheng Chang; Che-Wei Lin; Nen-Fu Huang; Rong-Tai Liu; Hsien-Wei Hung, "A Retargetable Multiple String Matching Code Generation for Embedded Network Intrusion Detection Platforms," in Communication Software and Networks (ICCSN), 2015 IEEE International Conference on, pp. 93-99, 6-7 June 2015. doi: 10.1109/ICCSN.2015.7296134
Abstract: The common means of defense for network security systems is to block intrusions by matching signatures, so intrusion-signature matching is the critical operation. However, small and medium-sized enterprise (SME) or Small Office Home Office (SOHO) network security systems may not have sufficient resources to maintain good matching performance with full rule sets. Code generation is a technique used to convert data structures or instructions into other forms to obtain greater benefits within execution environments. This study analyzes intrusion detection system (IDS) signatures and finds character occurrence to be significantly uneven. Based on this property, this study designs a method to generate string-matching source code from the state table of the Aho-Corasick (AC) algorithm for embedded network intrusion detection platforms. The generated source code requires less memory and relies not only on table lookup but also on the ability of the processor. This method can improve performance through compiler optimization and contributes to the application of network processors and DSP-like platforms. In our evaluation, this method requires only 20% of the memory and achieves 86% of the performance of the original AC algorithm on clean traffic.
Keywords: computer network security; digital signatures; program compilers; string matching; AC algorithm; DSP-like based platforms; character occurrence discovery; data structures; embedded network intrusion detection platforms; intrusion detection system signatures; intrusion-signature matching; network security systems; optimization compilation; processor ability; retargetable multiple string matching code generation; table lookup; Arrays; Intrusion detection; Memory management; Optimization; Switches; Table lookup; Thyristors; Code Generation; Intrusion Detection System; String Matching (ID#: 15-8847)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7296134&isnumber=7296115
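The idea of specializing a state table into source code can be shown with a hand-written miniature. The snippet below is not the paper's generator output; it illustrates, for two hypothetical signatures ("evil" and "exec"), how uneven character occurrence lets the hot first-byte test become a cheap switch instead of a generic table walk:

```c
#include <string.h>

/* "Compiled" multi-pattern matcher for the hypothetical signatures
 * "evil" and "exec".  Only 'e' can start a match, so the common case
 * falls through the switch cheaply -- the kind of specialization a
 * code generator could derive from an AC state table. */
int match_count(const char *text)
{
    int hits = 0;
    for (const char *p = text; *p; p++) {
        switch (*p) {
        case 'e':
            if (strncmp(p, "evil", 4) == 0 || strncmp(p, "exec", 4) == 0)
                hits++;
            break;
        default:
            break;   /* most bytes never start a signature */
        }
    }
    return hits;
}
```

A real generator would emit one such dispatch per AC state, trading the state table's memory for straight-line code the compiler can optimize, which is where the reported 80% memory saving comes from.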
Costello, C.; Fournet, C.; Howell, J.; Kohlweiss, M.; Kreuter, B.; Naehrig, M.; Parno, B.; Zahur, S., "Geppetto: Versatile Verifiable Computation," in Security and Privacy (SP), 2015 IEEE Symposium on, pp. 253-270, 17-21 May 2015. doi: 10.1109/SP.2015.23
Abstract: Cloud computing sparked interest in Verifiable Computation protocols, which allow a weak client to securely outsource computations to remote parties. Recent work has dramatically reduced the client's cost to verify the correctness of their results, but the overhead to produce proofs remains largely impractical. Geppetto introduces complementary techniques for reducing prover overhead and increasing prover flexibility. With MultiQAPs, Geppetto reduces the cost of sharing state between computations (e.g., for MapReduce) or within a single computation by up to two orders of magnitude. Via a careful choice of cryptographic primitives, Geppetto's instantiation of bounded proof bootstrapping improves on prior bootstrapped systems by up to five orders of magnitude, albeit at some cost in universality. Geppetto also efficiently verifies the correct execution of proprietary (i.e., secret) algorithms. Finally, Geppetto's use of energy-saving circuits brings the prover's costs more in line with the program's actual (rather than worst-case) execution time. Geppetto is implemented in a full-fledged, scalable compiler and runtime that consume LLVM code generated from a variety of source C programs and cryptographic libraries.
Keywords: cloud computing; computer bootstrapping; cryptographic protocols; program compilers; program verification; Geppetto; LLVM code generation; QAPs; bootstrapped systems; bounded proof bootstrapping; cloud computing; compiler; correctness verification; cryptographic libraries; cryptographic primitives; energy-saving circuits; outsource computation security; prover flexibility; prover overhead reduction; source C programs; verifiable computation protocols; Cryptography; Generators; Libraries; Logic gates; Protocols; Random access memory; Schedules (ID#: 15-8848)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163030&isnumber=7163005
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Cross Layer Security 2015 |
Protocol architectures traditionally followed strict layering principles to ensure interoperability, rapid deployment, and efficient implementation. But a lack of coordination between layers limits the performance of these architectures. More important, the lack of coordination may introduce security vulnerabilities and potential threat vectors. The literature cited here addresses the problems and opportunities available for cross layer security published in 2015.
Dakhore, S.; Lohiya, P., "Location Aware Selective Unlocking & Secure Verification Safer Card for Enhancing RFID Security by Using SHA-3," in Advances in Computing and Communication Engineering (ICACCE), 2015 Second International Conference on, pp. 477-482, 1-2 May 2015. doi: 10.1109/ICACCE.2015.65
Abstract: In this paper, we report a new approach for providing security as well as privacy to the corporate user. With the help of a location-sensing mechanism using GPS, we can avoid unauthorized reading and relay attacks on RFID systems. For example, the location-sensing mechanism with an RFID card is used for location-specific applications such as opening the door of an ATM cash-transfer van. Only after reaching the pre-specified location (the ATM) does the RFID card become active and accept the fingerprint of the registered person. In this way we get stronger cross-layer security. The SHA-3 algorithm is used to avoid the collision effect (due to fraudulent fingerprints) on the server side.
Keywords: Global Positioning System; banking; cryptography; fingerprint identification; mobility management (mobile radio); radiofrequency identification; relay networks (telecommunication); smart cards; telecommunication security; ATM cash transfer van; GPS; Global Positioning System; RFID card; RFID security; RFID system; SHA-3 algorithm; Secure Hash Algorithm 3; cross layer security; fingerprint; location aware selective unlocking; location sensing mechanism; location specific application; relay attacks; secure verification; Fingerprint recognition; Global Positioning System; Privacy; Radiofrequency identification; Relays; Security; Servers; Java Development Kit (JDK); Location Aware Selective Unlocking; RFID; Secure Hash Algorithm (ID#: 15-8881)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7306732&isnumber=7306547
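The scheme's use of SHA-3 for server-side fingerprint matching can be sketched as follows; the function names and the byte-string templates are illustrative, not the authors' implementation:

```python
import hashlib

def enroll(template: bytes) -> str:
    """Store only the SHA3-256 digest of the fingerprint template."""
    return hashlib.sha3_256(template).hexdigest()

def verify(candidate: bytes, stored_digest: str) -> bool:
    """Accept only if the candidate template hashes to the enrolled digest."""
    return hashlib.sha3_256(candidate).hexdigest() == stored_digest

stored = enroll(b"registered-user-minutiae")
print(verify(b"registered-user-minutiae", stored))  # True
print(verify(b"fraudulent-minutiae", stored))       # False
```

Because only the digest is stored, a server-side compromise does not directly leak the raw fingerprint template.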
Umar, I.A.; Hanapi, Z.M.; Sali, A.; Zulkarnain, Z.A., "A Forwarding Strategy for DWSIGF Routing Protocol," in IT Convergence and Security (ICITCS), 2015 5th International Conference on, pp. 1-4, 24-27 Aug. 2015. doi: 10.1109/ICITCS.2015.7292917
Abstract: Routing protocols in a Wireless Sensor Network (WSN) are responsible for propagating and coordinating information transfer from one end of the network to the other. Dynamic Window Secured Implicit Geographic Forwarding (DWSIGF) is a robust, cross layer, security bound routing protocol that propagates information in a multi-hop network using the greedy and random forwarding strategies. These strategies are known for their poor resistance to interference and erratic behavior in path selection. In this paper, we propose a forwarding strategy that uses an optimal distance to mitigate these problems. The optimal distance is computed based on the path loss coefficient and the energy dissipated in the hardware (sensor). Extensive simulations have been conducted to evaluate the performance of the proposed approach. The results illustrate that the proposed approach performs better than the compared strategies in terms of packet delivery ratio and energy consumption.
Keywords: routing protocols; telecommunication security; wireless sensor networks; DWSIGF routing protocol; WSN; cross layer; dynamic window secured implicit geographic forwarding; energy consumption; forwarding strategy; hardware sensor; information transfer; interference; multihop network; optimal distance; packet delivery ratio; path loss coefficient; path selection; security bound routing protocol; wireless sensor network; Energy consumption; Hardware; Routing; Routing protocols; Security; Wireless sensor networks (ID#: 15-8882)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7292917&isnumber=7292885
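The paper computes its optimal forwarding distance from the path-loss coefficient and the hardware energy dissipation. One common way to do this (a generic first-order radio model, not necessarily the authors' exact formulation) minimizes energy per meter of progress:

```python
def optimal_hop_distance(e_elec: float, e_amp: float, n: float) -> float:
    """Hop distance minimizing energy per meter of progress for the
    first-order radio model E(d) = 2*e_elec + e_amp*d**n (requires n > 1).
    Setting d/dd [E(d)/d] = 0 gives d_opt = (2*e_elec/((n-1)*e_amp))**(1/n)."""
    return (2.0 * e_elec / (e_amp * (n - 1.0))) ** (1.0 / n)

# Illustrative constants: 50 nJ/bit electronics, 100 pJ/bit/m^2, path loss n = 2
d = optimal_hop_distance(50e-9, 100e-12, 2.0)
print(round(d, 1))  # 31.6 (meters)
```

Hops much shorter than d_opt waste electronics energy; hops much longer waste amplifier energy, which is the trade-off the forwarding strategy exploits.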
Ward, Jon R.; Younis, Mohamed, "A Cross-Layer Defense Scheme for Countering Traffic Analysis Attacks in Wireless Sensor Networks," in Military Communications Conference, MILCOM 2015 - 2015 IEEE, pp. 972-977, 26-28 Oct. 2015. doi: 10.1109/MILCOM.2015.7357571
Abstract: In most Wireless Sensor Network (WSN) applications the sensors forward their readings to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary's attack. Even if a WSN employs conventional security mechanisms such as encryption and authentication, an adversary may apply traffic analysis techniques to locate the BS. This motivates a significant need for improved BS anonymity to protect the identity, role, and location of the BS. Published anonymity-boosting techniques mainly focus on a single layer of the communication protocol stack and assume that changes in the protocol operation will not be detectable. In fact, existing single-layer techniques may not be able to protect the network if the adversary could guess what anonymity measure is being applied by identifying which layer is being exploited. In this paper we propose combining physical-layer and network-layer techniques to boost the network resilience to anonymity attacks. Our cross-layer approach avoids the shortcomings of the individual single-layer schemes and allows a WSN to effectively mask its behavior and simultaneously misdirect the adversary's attention away from the BS's location. We confirm the effectiveness of our cross-layer anti-traffic analysis measure using simulation.
Keywords: Array signal processing; Computer security; Measurement; Protocols; Sensors; Wireless sensor networks; anonymity; location privacy; wireless sensor networks (ID#: 15-8883)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357571&isnumber=7357245
Bhattacharyya, A.; Bose, T.; Bandyopadhyay, S.; Ukil, A.; Pal, A., "LESS: Lightweight Establishment of Secure Session: A Cross-Layer Approach Using CoAP and DTLS-PSK Channel Encryption," in Advanced Information Networking and Applications Workshops (WAINA), 2015 IEEE 29th International Conference on, pp. 682-687, 24-27 March 2015. doi: 10.1109/WAINA.2015.52
Abstract: A secure yet lightweight protocol for communication over the Internet is a pertinent problem for constrained environments in the context of Internet of Things (IoT) / Machine to Machine (M2M) applications. This paper extends the initial approaches published in [1], [2] and presents a novel cross-layer lightweight implementation to establish a secure channel. It distributes the responsibility of communication over the secure channel between the application and transport layers. Secure session establishment is performed using a payload-embedded challenge-response scheme over the Constrained Application Protocol (CoAP) [3]. The record encryption mechanism of Datagram Transport Layer Security (DTLS) [4] with Pre-Shared Key (PSK) [5] is used for encrypted exchange of application layer data. The secure session credentials derived from the application layer are used for encrypted exchange over the transport layer. The solution is designed in such a way that it can easily be integrated with an existing system deploying CoAP over DTLS-PSK. The proposed method is robust under different security attacks such as replay, DoS and chosen-ciphertext attacks. The improved performance of the proposed solution is established with comparative results and analysis.
Keywords: Internet; cryptography; CoAP; DTLS; DTLS-PSK channel encryption; DoS; Internet; LESS; M2M applications; PSK; cipher text; constrained application protocol; constrained environments; cross layer approach; datagram transport layer security; encrypted exchange; layer data application; lightweight establishment of secure session; lightweight protocol; machine to machine applications; pre-shared key; record encryption mechanism; replay attack; secure channel; security attacks; transport layer; transport layers; Bandwidth; Encryption; Internet; Payloads; Servers; CoAP; DTLS; IoT; M2M; lightweight; pre-shared-key; secure session (ID#: 15-8884)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7096256&isnumber=7096097
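A payload-embedded challenge-response scheme of the kind the abstract describes can be sketched as follows; the key-derivation details and names here are illustrative assumptions, not the LESS specification:

```python
import hashlib
import hmac
import os
from typing import Optional

PSK = b"pre-shared-master-secret"  # provisioned out of band (hypothetical)

def make_response(challenge: bytes, psk: bytes = PSK) -> bytes:
    """Client proves knowledge of the PSK without ever sending it."""
    return hmac.new(psk, challenge, hashlib.sha256).digest()

def derive_session_key(challenge: bytes, response: bytes,
                       psk: bytes = PSK) -> Optional[bytes]:
    """Server verifies the response in constant time; on success both
    sides derive the same session key to feed the record-encryption layer."""
    expected = hmac.new(psk, challenge, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, response):
        return None  # reject: failed challenge
    return hmac.new(psk, b"session" + challenge, hashlib.sha256).digest()

challenge = os.urandom(16)          # fresh nonce defeats replay
resp = make_response(challenge)
key = derive_session_key(challenge, resp)
print(key is not None)  # True
```

A fresh random challenge per session is what gives the replay resistance claimed in the abstract.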
Marve, T.K.; Sambhe, N.U., "A Review on Cross Layer Intrusion Detection System in Wireless Ad Hoc Network," in Electrical, Computer and Communication Technologies (ICECCT), 2015 IEEE International Conference on, pp. 1-4, 5-7 March 2015. doi: 10.1109/ICECCT.2015.7226109
Abstract: A wireless ad-hoc network is a collection of small, randomly dispersed devices deployed in large numbers that provide essential functions such as monitoring physical and environmental conditions and providing efficient, reliable wireless communication. Ad-hoc networks are vulnerable to various types of security threats and attacks. Various ways are possible to protect wireless ad-hoc networks from attacks and threats; the most widely used solution is an intrusion detection system (IDS) that suits the security needs and characteristics of ad-hoc networks for efficient and effective performance against intrusion. In this paper we propose a cross layer intrusion detection system (CIDS) which overcomes demerits of traditional IDSs, such as false positives. The cross layer design framework exploits the information available across different layers of the protocol stack by triggering two levels of detection that use knowledge of network and node conditions to determine node behavior and enhance detection accuracy.
Keywords: ad hoc networks; routing protocols; security of data; telecommunication security; wireless channels; cross layer intrusion detection system; environmental condition; physical condition; protocol stack; reliable communication; security attack; security threat; small randomly dispersed device; wireless ad hoc network; Jamming; Monitoring; Threat model; cross layer intrusion detection system (CIDS); intrusion detection system (ID#: 15-8885)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7226109&isnumber=7225915
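The two-level detection idea, corroborating anomalies across layers before raising an alert, can be sketched as follows (the metrics and thresholds are hypothetical, not the CIDS design):

```python
def layer_score(observed: float, mean: float, std: float) -> float:
    """Normalized deviation of a per-layer metric from its learned profile."""
    return abs(observed - mean) / std if std else 0.0

def cross_layer_alert(mac_score: float, net_score: float,
                      threshold: float = 3.0) -> bool:
    """Two-level detection: raise an alert only when BOTH the MAC-layer
    and the network-layer metrics deviate, which is what cuts the
    false positives a single-layer IDS would raise."""
    return mac_score > threshold and net_score > threshold

mac = layer_score(observed=95.0, mean=40.0, std=10.0)  # e.g. RTS frames/s
net = layer_score(observed=0.9, mean=0.2, std=0.05)    # e.g. route-error rate
print(cross_layer_alert(mac, net))  # True
```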
Liyang Zhang; Melodia, T., "Hammer and Anvil: The Threat of a Cross-Layer Jamming-Aided Data Control Attack in Multihop Wireless Networks," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 361-369, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346847
Abstract: This paper considers potential risks to data security in multi-hop infrastructureless wireless networks where cross-layer routing protocols are used. We show that an adversary, as long as it controls a few of the nodes, and with the help of a few assisting jammers, can extend control over a significant portion of the data in the network even with very simple strategies and limited resources, by creating a so-called “wormhole” even without off-band links. We refer to this jamming-assisted data control threat as hammer and anvil attack. We model a prototype of the hammer and anvil attack in a wireless sensor network scenario with distributed cross-layer routing protocols. We show through extensive performance evaluation that the attack poses a serious threat to the resulting data security, and we provide observations that can be helpful in fine-tuning the attack, as well as in designing defense mechanisms against it.
Keywords: jamming; routing protocols; telecommunication security; wireless sensor networks; cross-layer jamming-aided data control attack; data security; distributed cross-layer routing protocols; hammer-and-anvil attack; multihop infrastructureless wireless networks; wireless sensor network scenario; wormhole; Delays; Jamming; Routing; Routing protocols; Security; Wireless sensor networks (ID#: 15-8886)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346847&isnumber=7346791
Hossain, Akbar; Sarkar, Nurul I, "Cross Layer Rendezvous in Cognitive Radio Ad-Hoc Networks," in Telecommunication Networks and Applications Conference (ITNAC), 2015 International, pp. 149-154, 18-20 Nov. 2015. doi: 10.1109/ATNAC.2015.7366804
Abstract: Rendezvous in cognitive radio networks (CRNs) enables cognitive radio (CR) users to find common channels and establish a communication link. Due to the dynamic radio environment, rendezvous on a predetermined common control channel (CCC) is limited by a single point of failure, congestion and security. Channel hopping (CH) provides an efficient solution to achieve rendezvous in cognitive radio ad-hoc networks (CRAHNs). In this paper, a cross layer CH rendezvous protocol is proposed which uses the channel preference of a CR user to establish a communication link. The channel preference of a CR user is determined by channel ranking based on PU and CR activities, which are physical layer parameters. We formulate the channel ranking as a linear optimization problem based on channel availability under collision constraints. Thereby, alongside channel quantity, we integrate channel quality into the design of the CH rendezvous protocol. Simulation results show that the proposed channel ranking based channel hopping (CRCH) scheme outperforms similar CH schemes in terms of average time-to-rendezvous (ATTR) and the degree of overlap in the asymmetric channel scenario.
Keywords: Ad hoc networks; Cognitive radio; Cross layer design; Protocols; Sensors; Yttrium (ID#: 15-8887)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7366804&isnumber=7366770
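A channel ranking of the kind described, scoring channels by PU activity, CR contention and link quality, might look like the sketch below. The multiplicative weighting is an illustrative assumption, not the paper's linear program:

```python
def rank_channels(pu_busy: dict, cr_load: dict, quality: dict) -> list:
    """Score each channel by PU availability (1 - PU duty cycle), discounted
    by CR contention and weighted by normalized link quality; return channel
    IDs sorted best-first, i.e. the hopping-preference order."""
    scores = {c: (1.0 - pu_busy[c]) * (1.0 - cr_load[c]) * quality[c]
              for c in pu_busy}
    return sorted(scores, key=scores.get, reverse=True)

pu = {1: 0.7, 2: 0.2, 3: 0.4}   # primary-user duty cycles (made up)
cr = {1: 0.1, 2: 0.5, 3: 0.2}   # contending CR load (made up)
q  = {1: 0.9, 2: 0.8, 3: 0.95}  # normalized link quality (made up)
print(rank_channels(pu, cr, q))  # [3, 2, 1]
```

Both CR users visiting higher-ranked channels more often is what shortens the expected time-to-rendezvous.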
Resner, D.; Frohlich, A.A., "Design Rationale of a Cross-Layer, Trustful Space-Time Protocol for Wireless Sensor Networks," in Emerging Technologies & Factory Automation (ETFA), 2015 IEEE 20th Conference on, pp. 1-8, 8-11 Sept. 2015. doi: 10.1109/ETFA.2015.7301413
Abstract: In this paper, we introduce a cross-layer, application-oriented communication protocol for Wireless Sensor Networks (WSN). TSTP - Trustful Space-Time Protocol - integrates most services recurrently needed by WSN applications: Medium Access Control (MAC), spatial localization, geographic routing, time synchronization and security, and is tailored for geographical monitoring applications. By integrating shared data from multiple services into a single network layer, TSTP is able to eliminate replication of information across services, and achieve a very small overhead in terms of control messages. For instance, spatial localization data is shared by the MAC and routing scheme, the location estimator, and the application itself. Application-orientation allows synergistic co-operation of services and allows TSTP to deliver functionality efficiently while eliminating the need for additional, heterogeneous software layers that usually come with an integration cost.
Keywords: access protocols; routing protocols; synchronisation; telecommunication security; wireless sensor networks; MAC; TSTP; WSN; cross-layer application-oriented communication protocol; geographic routing; geographical monitoring application; medium access control; spatial localization; time synchronization; trustful space-time protocol; wireless sensor network; Clocks; Peer-to-peer computing; Protocols; Routing; Security; Synchronization; Wireless sensor networks; Application-oriented; Cross-Layer; Geographic; Protocol; Space-Time; Trustful; Wireless Sensor Networks (ID#: 15-8888)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301413&isnumber=7301399
Khandaker, M.R.A.; Kai-Kit Wong, "Simultaneous Information and Power Transfer in MISO Interference Systems," in Signal and Information Processing (ChinaSIP), 2015 IEEE China Summit and International Conference on, pp. 596-600, 12-15 July 2015. doi: 10.1109/ChinaSIP.2015.7230473
Abstract: This paper considers simultaneous wireless information and power transfer (SWIPT) in multiple-input single-output (MISO) interference systems in presence of energy harvesting nodes. We investigate the interference temperature minimization problem while satisfying signal-to-interference-and-noise ratio (SINR) and energy harvesting thresholds at the information and energy receivers, respectively. The objective is to improve the received SINR as well as to reduce cross-link information leakage in order to improve physical-layer security. The formulation leads to a non-convex problem which we solve using semidefinite relaxation (SDR) technique. A rank-constrained optimization algorithm is proposed and a rank reduction procedure is developed in order to achieve a lower rank solution. Interestingly, we show that the SDR is in fact tight and an optimal rank-one solution can be developed in certain scenarios. Numerical simulations are performed to demonstrate the effectiveness of the proposed algorithm.
Keywords: concave programming; energy harvesting; minimisation; numerical analysis; radio receivers; radiofrequency interference; radiofrequency power transmission; telecommunication power management; telecommunication security; MISO interference system; SDR technique; SINR; SINR improvement; SWIPT; cross-link information leakage reduction; energy harvesting node; energy harvesting threshold; energy receiver; information receiver; interference temperature minimization problem; multiple input single output interference system; nonconvex problem; numerical simulation; physical layer security improvement; rank reduction procedure; rank-constrained optimization algorithm; semidefinite relaxation technique; signal-to-interference-and-noise ratio; simultaneous wireless information and power transfer; Array signal processing; Energy harvesting; Interference; Receivers; Signal to noise ratio; Transmitters; Wireless communication (ID#: 15-8889)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7230473&isnumber=7230339
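In generic notation (not necessarily the authors' exact symbols), the optimization the abstract describes takes roughly the following semidefinite-relaxed form, with transmit covariances \(\mathbf{W}_k\) and the rank-one constraints dropped:

```latex
\begin{aligned}
\min_{\{\mathbf{W}_k \succeq 0\}} \quad
  & \sum_{k} \operatorname{tr}\!\left(\mathbf{G}\,\mathbf{W}_k\right)
  && \text{(interference temperature)}\\
\text{s.t.} \quad
  & \frac{\operatorname{tr}(\mathbf{H}_k \mathbf{W}_k)}
         {\sum_{j \neq k}\operatorname{tr}(\mathbf{H}_k \mathbf{W}_j) + \sigma_k^2}
    \ge \gamma_k
  && \text{(SINR at information receivers)}\\
  & \zeta \sum_{k} \operatorname{tr}(\mathbf{Q}_m \mathbf{W}_k) \ge \eta_m
  && \text{(energy-harvesting thresholds)}
\end{aligned}
```

Here \(\mathbf{H}_k\), \(\mathbf{Q}_m\) and \(\mathbf{G}\) are channel matrices to the information receivers, energy receivers and the interference-measurement point, \(\zeta\) the harvesting efficiency, and \(\gamma_k\), \(\eta_m\) the SINR and harvesting thresholds; the paper's contribution is showing when this relaxation is tight, i.e. when a rank-one solution exists.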
Wrona, Konrad; Oudkerk, Sander, "Integrated Content-Based Information Security for Future Military Systems," in Military Communications Conference, MILCOM 2015 - 2015 IEEE, pp. 1230-1235, 26-28 Oct. 2015. doi: 10.1109/MILCOM.2015.7357614
Abstract: Future military operations require versatile and integrated mechanisms for enforcement of the security policies in all three domains of information protection: confidentiality, integrity and availability. We discuss challenges and use cases related to enforcement of integrity and availability policies in federated mission environments and we demonstrate how the concept of Content-based Protection and Release (CPR) can be extended to support such policies. Furthermore, we present an approach to cross-layer enforcement of the CPR policies and introduce a proof-of-concept implementation of the CPR enforcement mechanisms in a software-defined networking environment.
Keywords: Bridges; Chlorine; Computer security; Military communication; Sensitivity; TCPIP; Access control; communication system security; data security; information security; software-defined networking (ID#: 15-8890)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357614&isnumber=7357245
Dutt, Nikil; Jantsch, Axel; Sarma, Santanu, "Self-aware Cyber-Physical Systems-on-Chip," in Computer-Aided Design (ICCAD), 2015 IEEE/ACM International Conference on, pp. 46-50, 2-6 Nov. 2015. doi: 10.1109/ICCAD.2015.7372548
Abstract: Self-awareness has a long history in biology, psychology, medicine, and more recently in engineering and computing, where self-aware features are used to enable adaptivity to improve a system's functional value, performance and robustness. With complex many-core Systems-on-Chip (SoCs) facing the conflicting requirements of performance, resiliency, energy, heat, cost, security, etc. - in the face of highly dynamic operational behaviors coupled with process, environment, and workload variabilities - there is an emerging need for self-awareness in these complex SoCs. Unlike traditional MultiProcessor Systems-on-Chip (MPSoCs), self-aware SoCs must deploy an intelligent co-design of the control, communication, and computing infrastructure that interacts with the physical environment in real-time in order to modify the system's behavior so as to adaptively achieve desired objectives and Quality-of-Service (QoS). Self-aware SoCs require a combination of ubiquitous sensing and actuation, health-monitoring, and statistical model-building to enable the SoC's adaptation over time and space. After defining the notion of self-awareness in computing, this paper presents the Cyber-Physical System-on-Chip (CPSoC) concept as an exemplar of a self-aware SoC that intrinsically couples on-chip and cross-layer sensing and actuation using a sensor-actuator rich fabric to enable self-awareness.
Keywords: Computational modeling; Computer architecture; Context; Predictive models; Sensors; Software; System-on-chip (ID#: 15-8891)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7372548&isnumber=7372533
Shutang You; Lin Zhu; Yong Liu; Hesen Liu; Yilu Liu; Shankar, M.; Robertson, R.; King, T., "A Survey on Next-Generation Power Grid Data Architecture," in Power & Energy Society General Meeting, 2015 IEEE, pp. 1-5, 26-30 July 2015. doi: 10.1109/PESGM.2015.7286394
Abstract: The operation and control of power grids will increasingly rely on data. A high-speed, reliable, flexible and secure data architecture is a prerequisite of the next-generation power grid. This paper summarizes the challenges in collecting and utilizing power grid data, and then provides a reference data architecture for future power grids. Based on the data architecture deployment, related research on data architecture is reviewed and summarized in several categories including data measurement/actuation, data transmission, data service layer, data utilization, as well as two cross-cutting issues, interoperability and cyber security. Research gaps and future work are also presented.
Keywords: power grids; power system control; power system interconnection; power system reliability; power system security; security of data; data measurement; data service layer; data transmission; data utilization; next-generation power grid data architecture; power grid control; power grid operation; Computer architecture; Interoperability; Security; Smart grids; Standards; Smart grid; data architecture; information system; survey (ID#: 15-8892)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7286394&isnumber=7285590
Carbino, T.J.; Temple, M.A.; Bihl, T.J., "Ethernet Card Discrimination using Unintentional Cable Emissions and Constellation-Based Fingerprinting," in Computing, Networking and Communications (ICNC), 2015 International Conference on, pp. 369-373, 16-19 Feb. 2015. doi: 10.1109/ICCNC.2015.7069371
Abstract: Improved network security is addressed using device dependent physical-layer (PHY) based fingerprints from Ethernet cards to augment traditional MAC-based ID verification. The investigation uses unintentional Ethernet cable emissions and device fingerprints comprised of Constellation-Based, Distinct Native Attribute (CB-DNA) features. Near-field collection probe derivative effects dictated the need for developing a two-dimensional (2D) binary constellation for demodulation and CB-DNA extraction. Results show that the 2D constellation provides reliable demodulation (bit estimation) and device discrimination using symbol cluster statistics for CB-DNA. Bit Error Rate (BER) and Cross-Manufacturer Discrimination (CMD) results are provided for 16 devices from 4 different manufactures. Device discrimination is assessed using both Nearest Neighbor (NN) and Multiple Discriminant Analysis, Maximum Likelihood (MDA/ML) classifiers. Overall results are promising and include CMD average classification accuracy of %C = 76.73% (NN) and %C = 91.38% (MDA/ML).
Keywords: computer network security; demodulation; error statistics; fingerprint identification; local area networks; 2D constellation; BER; CB-DNA extraction; CMD; Ethernet card discrimination; MAC-based ID verification; MDA-ML classifier; PHY; bit error rate; bit estimation; constellation-based distinct native attribute feature; constellation-based fingerprinting; cross-manufacturer discrimination; demodulation; device dependent physical-layer; multiple discriminant analysis-maximum likelihood classifier; near-field collection probe; nearest neighbor analysis; network security; symbol cluster statistic; two-dimensional binary constellation; unintentional cable emission; Artificial neural networks; Constellation diagram; Demodulation; Fingerprint recognition; Probes; Radio frequency; Security (ID#: 15-8893)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069371&isnumber=7069279
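The NN classifier stage can be illustrated minimally: each device is represented by a vector of constellation cluster statistics, and a probe fingerprint is assigned to the closest reference. The feature values below are made up, not the paper's CB-DNA features:

```python
import numpy as np

def nn_classify(fingerprint: np.ndarray, references: np.ndarray) -> int:
    """Nearest neighbor: the predicted device is the index of the reference
    fingerprint with smallest Euclidean distance to the probe."""
    return int(np.argmin(np.linalg.norm(references - fingerprint, axis=1)))

# Hypothetical reference fingerprints for 3 devices (4 cluster-statistic features)
refs = np.array([[0.1, 0.9, 0.2, 0.8],
                 [0.5, 0.5, 0.5, 0.5],
                 [0.9, 0.1, 0.8, 0.2]])
probe = np.array([0.12, 0.88, 0.22, 0.79])
print(nn_classify(probe, refs))  # 0
```

The paper's MDA/ML classifier improves on this by modeling within-class scatter rather than raw distances, which is consistent with its higher reported accuracy (91.38% vs. 76.73%).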
Zainudin, M.N.Shah; Sulaiman, Md Nasir; Mustapha, Norwati; Perumal, Thinagaran, "Activity Recognition Based on Accelerometer Sensor Using Combinational Classifiers," in Open Systems (ICOS), 2015 IEEE Conference on, pp. 68-73, 24-26 Aug. 2015. doi: 10.1109/ICOS.2015.7377280
Abstract: Nowadays people can easily contact each other by using smartphones. Most smartphones are embedded with inertial sensors such as accelerometers, gyroscopes, magnetic sensors, GPS and vision sensors. Many researchers now use these sensors together with machine learning algorithms to recognize human activities, not only in the fields of medical diagnosis, forecasting and security, but also for better living. Activity recognition using various smartphone sensors can be considered one of the crucial tasks that needs to be studied. In this paper, we propose combination classifier models consisting of J48, Multi-Layer Perceptron and Logistic Regression to capture the smoothest activity with the highest result frequency using a vote algorithm. The aim of this study is to evaluate the performance of recognizing six activities using an ensemble approach. A public accelerometer dataset obtained from the Wireless Sensor Data Mining (WISDM) lab was used in this study. The classification results were validated using a 10-fold cross validation algorithm in order to make sure all the experiments perform well.
Keywords: Accelerometers; Classification algorithms; Feature extraction; Gyroscopes; Hidden Markov models; Robot sensing systems; Support vector machines; accelerometer; activity; classification; sensors (ID#: 15-8894)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7377280&isnumber=7377263
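The vote step of the ensemble can be sketched in a few lines; the classifier outputs and activity labels below are illustrative:

```python
from collections import Counter

def majority_vote(predictions: list) -> str:
    """Combine per-classifier labels (e.g. from J48, MLP and Logistic
    Regression) by plurality; ties go to the first-seen label."""
    return Counter(predictions).most_common(1)[0][0]

# Two of three base classifiers agree, so the ensemble outputs "walking"
print(majority_vote(["walking", "walking", "jogging"]))  # walking
```

With three heterogeneous base classifiers, the vote smooths out per-classifier misreads on borderline activity windows, which is the effect the abstract refers to as capturing the "smoothest" activity.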
Ke Meng; Hui Zeng; Hongmei Deng; Hongjun Li, "Delay/Disruption-Tolerant Network (DTN) Network Management for Space Networking," in Aerospace Conference, 2015 IEEE, pp. 1-8, 7-14 March 2015. doi: 10.1109/AERO.2015.7119086
Abstract: To ensure reliable communication in the next-generation space networks, a novel network management system is needed to support greater levels of autonomy and possess greater awareness of the environment and knowledge of assets. Toward this, we developed a security-enhanced autonomous network management (SEANM) approach for space networks through network monitoring, network analysis, cross-layer negotiation, and network adaptation. In our approach, bundle-based delay/disruption-tolerant networking (DTN) is used as the underlying networking technology. Our approach allows the system to adaptively reconfigure its network elements based upon awareness of network conditions, policies, and mission requirements. Although SEANM is generically applicable to any radio network, for validation it has been prototyped and evaluated on two specific networks - a commercial off-the-shelf hardware testbed using IEEE 802.11 WiFi devices, and a military radio testbed using JTRS AN/PRC-154 Rifleman Radio platforms. Through tests, it has been shown that our solution provides autonomous network management resulting in reliable communications in delay/disruption-prone environments.
Keywords: delay tolerant networks; next generation networks; wireless LAN; DTN network management; IEEE 802.11 WiFi devices; JTRS AN PRC-154 Rifleman radio platforms; SEANM; cross-layer negotiation; delay-tolerant management system; disruption-tolerant network management system; military radio testbed; network adaptation; network monitoring; security-enhanced autonomous network management approach; space networking; Artificial intelligence; Biomedical monitoring; Low earth orbit satellites; Monitoring; Servers; Visualization; Welding (ID#: 15-8895)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7119086&isnumber=7118873
Bittl, S., "Efficient Distribution of Static or Slowly Changing Configuration Parameters in VANETs," in Reliable Networks Design and Modeling (RNDM), 2015 7th International Workshop on, pp. 301-306, 5-7 Oct. 2015. doi: 10.1109/RNDM.2015.7325244
Abstract: Vehicular ad hoc networks (VANETs) based on Car2X communication technologies are about to enter mass production in the next years. Thereby, bandwidth efficiency is a core point of concern due to sharing of a single control channel among many participating stations with high mobility. Up to now, neighborhood aware content dissemination has only been considered for VANET security mechanisms, but not for other protocol layers. Thus, we show that extending on demand distribution of fixed or slowly changing data sets to all layers can reduce delay until full cooperative awareness about cooperating stations is achieved. Moreover, the developed strategy is able to reduce average bandwidth requirements. Thereby, the management entity foreseen in currently standardized VANET frameworks is used to coordinate content dissemination between different protocol layers. A simulation based evaluation is provided, which shows good performance of the proposed mechanism within the current ETSI ITS framework.
Keywords: telecommunication security; vehicular ad hoc networks; wireless channels; Car2X communication technology; VANET security mechanism; bandwidth efficiency; content dissemination; delay reduction; demand distribution; single control channel sharing; slowly changing configuration parameter efficient distribution; static configuration parameter efficient distribution; vehicular ad hoc network; Computer aided manufacturing; Containers; Cross layer design; Delays; Security; Vehicular ad hoc networks (ID#: 15-8896)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7325244&isnumber=7324297
Nurain, N.; Mostakim, M.; Al Islam, A.B.M.A., "Towards Empirical Study Based Mathematical Modeling for Throughput of MANETs," in Networking Systems and Security (NSysS), 2015 International Conference on, pp. 1-6, 5-7 Jan. 2015. doi: 10.1109/NSysS.2015.7043524
Abstract: Mathematical modeling for throughput of MANETs that considers the impact of different layers in the protocol stack, in addition to that of different network parameters, remains unexplored till now, even though such modeling is considered the fastest and most cost-effective tool for evaluating the performance of a network. Therefore, in this paper, we attempt to develop a mathematical model for throughput of MANETs considering both of these aspects. In addition, we also focus on developing mathematical models for delivery ratio and drop ratio, as these metrics limit the maximum throughput of a network. In our analysis, we perform rigorous simulation utilizing ns-2 to capture the performance of MANETs under diversified settings. Our rigorous empirical study reveals that we need to develop cross-layer mathematical models for throughput, delivery ratio, and drop ratio to represent the performance of MANETs, and that such mathematical models need to resolve higher-order polynomial equations. Consequently, our study uncovers a key finding that mathematical modeling of MANETs considering variation in all parameters is not feasible.
Keywords: mobile ad hoc networks; polynomial matrices; protocols; MANET throughput matrix; cross-layer mathematical model; empirical study based mathematical model; higher-order polynomial equation; ns-2 simulator; protocol stack; Ad hoc networks; Fluctuations; Market research; Mathematical model; Measurement; Mobile computing; Throughput; MANET; Mathematical modeling; ns-2 (ID#: 15-8897)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043524&isnumber=7042935
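Fitting a higher-order polynomial to simulated throughput data, as the paper's empirical modeling suggests, can be done directly with a least-squares polynomial fit. The sample values below are made up, not the paper's ns-2 results:

```python
import numpy as np

# Hypothetical (node count, throughput) samples from an ns-2 style sweep
nodes = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
thr   = np.array([480.0, 455.0, 400.0, 330.0, 240.0])  # kbps, illustrative

# Least-squares cubic model: the kind of higher-order polynomial equation
# the abstract says such cross-layer models must resolve
coeffs = np.polyfit(nodes, thr, deg=3)
model = np.poly1d(coeffs)
print(round(float(model(25.0)), 1))  # interpolated throughput at 25 nodes
```

The catch the paper identifies is that each new network parameter adds a dimension to such fits, which is why modeling all parameter variations jointly becomes infeasible.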
Satam, P., "Cross Layer Anomaly Based Intrusion Detection System," in Self-Adaptive and Self-Organizing Systems Workshops (SASOW), 2015 IEEE International Conference on, pp. 157-161, 21-25 Sept. 2015. doi: 10.1109/SASOW.2015.31
Abstract: Since the start of the 21st century, computer networks have been through exponential growth in terms of network capacity, the number of users and the types of tasks performed over the network. With the recent boom of mobile devices (e.g., tablet computers, smart phones, smart devices, and wearable computing), the number of network users is bound to increase exponentially. But most of the communication protocols that span the 7 layers of the OSI model were designed in the late 1980s or 1990s. Although most of these protocols have had subsequent updates over time, they remain largely insecure and open to attacks. Hence it is critically important to secure these protocols across the 7 layers of the OSI model. As a part of my PhD research, I am working on a cross layer anomaly behavior detection system for various protocols. This system will be comprised of intrusion detection systems (IDS) for each of the protocols present in each layer. The behavior analysis of each protocol will be carried out in two phases. In the first phase (training), the features that accurately characterize the normal operations of the protocol are identified using data mining and statistical techniques and then used to build a runtime model of protocol normal operations. In addition, some known attacks against the studied protocol are also studied to develop a partial attack model for the protocol. The anomaly behavior analysis modules of each layer are then fused to generate a highly accurate detection system with low false alarms. In the second phase, the cross-layer anomaly based IDS is used to detect attacks against any communication protocols. We have already developed anomaly behavior modules for the TCP, UDP, IP, DNS and Wi-Fi protocols. Our experimental results show that our approach can detect attacks accurately and with very low false alarms.
Keywords: data mining; protocols; security of data; statistical analysis; DNS protocols; IDS; IP protocols; OSI model; TCP protocols; UDP protocols; Wi-Fi protocols; anomaly behavior analysis modules; communications protocols; computer networks; cross layer anomaly based intrusion detection system; data mining; false alarms; mobile devices; network capacity; partial attack model; smart devices; smart phones; statistical techniques; tablet computers; wearable computing; Conferences; Cross layer design; Databases; IEEE 802.11 Standard; Intrusion detection; Open systems; Protocols; Cross layer anomaly based intrusion detection system; DNS; Wi-Fi; data mining; machine learning (ID#: 15-8898)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7306574&isnumber=7306518
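The two-phase approach the abstract outlines (learn a statistical model of normal protocol behavior, then flag runtime deviations and fuse per-protocol verdicts) can be sketched minimally as follows; the class name, the single-feature model, and the majority-vote fusion are illustrative assumptions, not the paper's implementation:

```python
import statistics

class ProtocolAnomalyModel:
    """Toy per-protocol anomaly model: learn the normal range of one
    feature (e.g., DNS queries per second) from training data, then
    flag runtime observations that deviate too far from it."""

    def __init__(self, threshold_sigmas=3.0):
        self.threshold_sigmas = threshold_sigmas
        self.mean = None
        self.stdev = None

    def train(self, normal_samples):
        # Phase 1 (training): characterize normal operation statistically.
        self.mean = statistics.mean(normal_samples)
        self.stdev = statistics.stdev(normal_samples)

    def is_anomalous(self, value):
        # Phase 2 (detection): test a runtime observation against the model.
        return abs(value - self.mean) > self.threshold_sigmas * self.stdev

def fuse(verdicts):
    # Cross-layer fusion of per-protocol verdicts; here a simple majority vote.
    return sum(verdicts) > len(verdicts) / 2
```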
Ning Sun; Guangjie Han; Tongtong Wu; Jinfang Jiang; Lei Shu, "A Reliable and Energy Efficient VBF-Improved Cross-Layer Protocol for Underwater Acoustic Sensor Network," in Heterogeneous Networking for Quality, Reliability, Security and Robustness (QSHINE), 2015 11th International Conference on, pp. 44-49, 19-20 Aug. 2015. doi: (not provided)
Abstract: Underwater sensor networks (USNs) have many characteristics different from terrestrial wireless sensor networks (WSNs), such as dynamic network topology and unreliable acoustic communication, which increase the difficulty of achieving energy-efficient and reliable data transmission; traditional WSN protocols are therefore not suitable for underwater acoustic sensor networks (UASNs). The vector-based forwarding (VBF) protocol is an energy-efficient routing protocol for UASNs that uses node location information to limit the scale of flooding, saving energy and handling node mobility. In this paper a cross-layer protocol is proposed which not only utilizes the VBF-based routing algorithm but also considers residual energy and the number of times a node has relayed data in a cycle, making a more informed decision about whether a node will forward data. According to the simulation results, more evenly distributed energy consumption and more reliable data transmission are achieved compared to previous VBF-based routing protocols for UASNs.
Keywords: marine communication; routing protocols; telecommunication network reliability; underwater acoustic communication; vectors; wireless sensor networks; UASN; WSN protocols; acoustic communication; data relay; data transmission; dynamic network topology; energy consumption; energy efficient VBF-improved cross-layer protocol; location information; network reliability; residual energy; routing protocols; terrestrial wireless sensor networks; underwater acoustic sensor network; vector based forwarding protocol; Data communication; Energy consumption; Reliability; Routing; Routing protocols; Wireless sensor networks; UASN; VBF; cross-layer protocol; energy efficiency; reliability (ID#: 15-8899)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332540&isnumber=7332527
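The forwarding decision the abstract describes (stay within the routing "pipe" around the source-to-sink vector, and additionally check residual energy and the per-cycle relay count) might look roughly like this; all parameter names and thresholds are invented for illustration, not taken from the paper:

```python
import math

def dist_to_vector(node, source, sink):
    """Perpendicular distance from a node to the source->sink routing vector,
    via the cross product |v x w| / |v| in 3-D."""
    sx, sy, sz = source
    tx, ty, tz = sink
    nx, ny, nz = node
    v = (tx - sx, ty - sy, tz - sz)
    w = (nx - sx, ny - sy, nz - sz)
    cross = (v[1] * w[2] - v[2] * w[1],
             v[2] * w[0] - v[0] * w[2],
             v[0] * w[1] - v[1] * w[0])
    return math.sqrt(sum(c * c for c in cross)) / math.sqrt(sum(c * c for c in v))

def should_forward(node, source, sink, pipe_radius,
                   residual_energy, energy_threshold,
                   relays_this_cycle, relay_limit):
    """VBF-style decision plus the cross-layer checks the paper adds:
    forward only if inside the routing pipe, with enough residual
    energy, and not already over the per-cycle relay budget."""
    return (dist_to_vector(node, source, sink) <= pipe_radius
            and residual_energy >= energy_threshold
            and relays_this_cycle < relay_limit)
```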
Douziech, P.-E.; Curtis, B., "Cross-Technology, Cross-Layer Defect Detection in IT Systems -- Challenges and Achievements," in Complex Faults and Failures in Large Software Systems (COUFLESS), 2015 IEEE/ACM 1st International Workshop on, pp. 21-26, 23-23 May 2015. doi: 10.1109/COUFLESS.2015.11
Abstract: Although critical for delivering resilient, secure, efficient, and easily changed IT systems, cross-technology, cross-layer quality defect detection in IT systems still faces hurdles. Two hurdles involve the absence of an absolute target architecture and the difficulty of apprehending multi-component anti-patterns. However, static analysis and measurement technologies are now able to both consume contextual input and detect system-level anti-patterns. This paper provides several examples of the information required to detect system-level anti-patterns, drawn from the Common Weakness Enumeration repository maintained by MITRE Corp.
Keywords: program diagnostics; program testing; software architecture; software quality; IT systems; MITRE Corp; common weakness enumeration repository; cross-layer quality defect detection; cross-technology defect detection; measurement technologies; multicomponent antipatterns; static analysis; system-level antipattern detection; Computer architecture; Java; Organizations; Reliability; Security; Software; Software measurement; CWE; IT systems; software anti-patterns; software architecture; software pattern detection; software quality measures; structural quality (ID#: 15-8900)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7181478&isnumber=7181467
Pohls, H.C., "JSON Sensor Signatures (JSS): End-to-End Integrity Protection from Constrained Device to IoT Application," in Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, pp. 306-312, 8-10 July 2015. doi: 10.1109/IMIS.2015.48
Abstract: Integrity of sensor readings or actuator commands is of paramount importance for a secure operation in the Internet-of-Things (IoT). Data from sensors might be stored, forwarded and processed by many different intermediate systems. In this paper we apply digital signatures to achieve end-to-end message level integrity for data in JSON. JSON has become very popular to represent data in the upper layers of the IoT domain. By signing JSON on the constrained device we extend the end-to-end integrity protection starting from the constrained device to any entity in the IoT data-processing chain. Just the JSON message's contents including the enveloped signature and the data must be preserved. We reached our design goal to keep the original data accessible by legacy parsers. Hence, signing does not break parsing. We implemented an elliptic curve based signature algorithm on a class 1 (following RFC 7228) constrained device (Zolertia Z1: 16-bit, MSP 430). Furthermore, we describe the challenges of end-to-end integrity when crossing from IoT to the Web and applications.
Keywords: Internet of Things; Java; data integrity; digital signatures; public key cryptography; Internet-of-Things; IoT data-processing chain; JSON sensor signatures; actuator commands; digital signatures; elliptic curve based signature algorithm; end-to-end integrity protection; end-to-end message level integrity; enveloped signature; legacy parsers; sensor readings integrity; Data structures; Digital signatures; Elliptic curve cryptography; NIST; Payloads; XML; ECDSA; IoT; JSON; digital signatures; integrity (ID#: 15-8901)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7284966&isnumber=7284886
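The enveloped-signature idea (the signature travels inside the JSON message itself, so legacy parsers still read the original fields) can be sketched as below. The paper signs with ECDSA on a constrained device; here HMAC-SHA256 from the Python standard library stands in for the signature primitive purely for illustration, and all field and key names are assumptions:

```python
import hashlib
import hmac
import json

KEY = b"demo-device-key"  # stand-in for the device's ECDSA signing key

def sign_json(payload: dict) -> dict:
    """Envelope a signature inside the JSON message: sign a canonical
    serialization of the payload, then add the signature as one more field."""
    body = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    sig = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    return {**payload, "sig": sig}

def verify_json(message: dict) -> bool:
    """Re-serialize everything except the signature field and compare."""
    message = dict(message)
    sig = message.pop("sig")
    body = json.dumps(message, sort_keys=True, separators=(",", ":"))
    expected = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

A legacy parser that ignores the extra "sig" field still sees the original sensor reading unchanged, which is the design goal the abstract states.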
Laizhong Cui; Genghui Li; Xianghua Fu; Nan Lu, "CPPStreaming: A Cloud-Assisted Peer-to-Peer Live Streaming System," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 7-13, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.25
Abstract: Although P2P has been the main solution for live streaming distribution, peer dynamics restrict its performance. Cloud computing is a promising new solution that can be introduced as a supplement to P2P, and combining the two is a good direction for improving live streaming system performance. However, no mature, complete solution has yet emerged for designing and deploying such a hybrid system architecture with good transmission performance. In this paper, we design a cloud-assisted P2P live streaming system called CPPStreaming by combining two state-of-the-art video distribution technologies: cloud computing and P2P. We introduce a two-layer framework for CPPStreaming, comprising a cloud layer and a P2P layer, and propose the corresponding formation and evolution method for each layer. For system deployment, we formulate the cloud server leasing strategy as an optimization problem and propose a greedy algorithm based on a heuristic solution to solve it. The experiment results show that our system can outperform two classical P2P live streaming systems in terms of transmission performance and the reduction of cross-region traffic.
Keywords: cloud computing; file servers; greedy algorithms; peer-to-peer computing; telecommunication traffic; video streaming;CPPStreaming;P2P layer; cloud computing; cloud layer; cloud server strategy; cloud-assisted peer-to-peer live streaming system; cross-region traffic reduction; evolution method; formation method; greedy algorithm; heuristic solution; hybrid system architecture; live streaming distribution; transmission performance; video distribution technologies; Bandwidth; Cloud computing; Computer architecture; Servers; Topology; Vegetation; P2P; cloud; live streaming (ID#: 15-8902)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336136&isnumber=7336120
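The deployment step the abstract mentions, leasing cloud servers via a greedy heuristic, could be sketched like this; the demand/capacity model and all names are assumptions for illustration, not the paper's formulation:

```python
def lease_servers(demand, capacity, cost_per_server):
    """Greedy heuristic in the spirit of the paper's deployment step:
    while some region's streaming demand is unmet, lease one more
    server in the region with the largest deficit. Returns servers
    leased per region and the total leasing cost."""
    leased = {region: 0 for region in demand}

    def deficit(region):
        return demand[region] - leased[region] * capacity

    while any(deficit(region) > 0 for region in demand):
        worst = max(demand, key=deficit)
        leased[worst] += 1

    total_cost = sum(n * cost_per_server[r] for r, n in leased.items())
    return leased, total_cost
```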
Iacobelli, L.; Panza, G.; Piri, E.; Vehkapera, J.; Mazzotti, M.; Moretti, S.; Cicalo, S.; Bokor, L.; Varga, N.; Martini, M.G., "An Architecture for M-Health Services: The CONCERTO Project Solution," in Networks and Communications (EuCNC), 2015 European Conference on, pp. 118-122, June 29 2015-July 2 2015. doi: 10.1109/EuCNC.2015.7194052
Abstract: The provisioning of e-health and specifically m-health services requires the use of advanced and reliable communication techniques to offer acceptable Quality of Experience (QoE) for doctors in the transfer of biomedical data between involved parties (i.e., flawless, or almost flawless, and prompt enough delivery) using wired or wireless access networks. To overcome the restrictions of conventional communication systems and to address the challenges imposed by wireless/mobile multimedia transfer and adaptation for healthcare applications, the CONCERTO project proposes a cross-layer optimized architecture with all the needed critical building blocks integrated for medical media content fusion, delivery and access, even on the move in emergency contexts. This paper describes the proposed reference system architecture, presenting the developed components and mechanisms in a comprehensive way, depicting and clarifying the overall picture and highlighting the impact of the CONCERTO approach in the healthcare domain. The evaluation of the proposed system is carried out both via simulation analysis and, more importantly, via validation involving real medical staff.
Keywords: biomedical communication; health care; quality of experience; radio access networks; sensor fusion; CONCERTO project; QoE; biomedical data; critical building blocks; cross-layer optimized architecture; e-health services; healthcare applications; healthcare domain; m-health services; medical media content fusion; quality of experience; simulation analysis; wired access networks; wireless access networks; wireless-mobile multimedia transfer; Hospitals; Medical diagnostic imaging; Multimedia communication; Streaming media; Wireless communication; Cross-layer signalling; QoE; cross-layer optimization ;end-user; first responder; m-health; network simulation (ID#: 15-8903)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194052&isnumber=7194024
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
DNA Cryptography 2015 |
DNA-based cryptography is a developing interdisciplinary area combining cryptography, mathematical modeling, biochemistry and molecular biology as the basis for encryption. Research includes authentication, steganography, and masking. This research was presented in 2015.
Sundaram, G.S.; Pavithra, S.; Arthi, A.; Bala, B.M.; Mahalakshmi, S., "Cellular Automata Based DNA Cryptography Algorithm," in Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, pp. 1-6, 9-10 Jan. 2015. doi: 10.1109/ISCO.2015.7282333
Abstract: DNA cryptography is a new area of research in the cryptographic field in which DNA is used as the information carrier. Most encryption techniques based on cellular automata have limitations. To overcome this lacuna, we propose a novel DNA cryptography algorithm with cellular automata that achieves randomness, parallelism, uniformity, reversibility, and stability. Finally, a comparison is made with existing research works based on cryptographic attack parameters.
Keywords: biocomputing; cellular automata; cryptography; DNA component; DNA cryptography algorithm; cellular automata; cryptographic attack parameter; cryptographic field; encryption technique; information carrier; lacuna; research work; Algorithm design and analysis; Automata; Ciphers; Conferences; DNA; Encryption; Cellular automata; DNA; Thymine; Uracil (ID#: 16-8950)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282333&isnumber=7282219
Shweta; Indora, Sanjeev, "Cascaded DNA Cryptography and Steganography," in Green Computing and Internet of Things (ICGCIoT), 2015 International Conference on, pp. 104-107, 8-10 Oct. 2015. doi: 10.1109/ICGCIoT.2015.7380438
Abstract: The redundancy in English words helps an unauthorized entity to guess the cipher text. DNA sequences do not have such properties, so converting a message to DNA sequences makes it robust against attacks. This paper performs DNA cryptography and then hides the DNA sequence in a random frame of a video. The result analysis shows that the frame is imperceptible; the video appears unchanged. The enhancement in PSNR value and reduction in MSE show the effectiveness of the technique.
Keywords: Cryptography; DNA; Indexes; MATLAB; Media; Observers; Robustness; Cryptography; DNA Cryptography; Frame; MSE; PSNR; video (ID#: 16-8951)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7380438&isnumber=7380415
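Hiding data in a video frame with minimal perceptual impact (the imperceptibility the abstract measures via PSNR and MSE) is commonly done via least-significant-bit embedding. The abstract does not specify the exact embedding method, so treat this as a generic illustrative sketch:

```python
def embed_lsb(pixels, bits):
    """Hide a bit string in the least significant bits of frame pixels;
    each pixel value changes by at most 1, keeping distortion low."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)
    return out

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits hidden bits from the stego pixels."""
    return "".join(str(p & 1) for p in pixels[:n_bits])
```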
Singh, A.; Singh, R., "Information Hiding Techniques Based on DNA Inconsistency: An Overview," in Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, pp. 2068-2072, 11-13 March 2015. doi: (not provided)
Abstract: Redundancy of words and characters in the English language helps cryptanalysts in guessing the cipher text. DNA sequences do not follow any linguistic properties. Hence, translating any language first to DNA sequences and then applying a cryptographic technique to them can prevent attacks based on frequency analysis. DNA cryptography is a branch of cryptography derived from DNA computing and based on difficult biological processes. In this paper, various operations that can be used in DNA computation, along with DNA-based cryptographic techniques, are discussed. Further, how advancement in DNA computation can pose a serious problem to traditional cryptographic systems is also explained. The work concludes with the challenges facing present DNA cryptographic systems and future directions in this field.
Keywords: biocomputing; cryptography; data encapsulation; DNA computation; DNA computing; DNA cryptography; DNA inconsistency; DNA sequences; English language; biological processes; character redundancy; cipher text; cryptography technique; frequency analysis; information hiding techniques; word redundancy; DNA; DNA computing; Encoding; Encryption; Entropy; Indexes; Cryptography; DNA coding sequence; DNA cryptography; Hybridization; Indexing; Microdots (ID#: 16-8952)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100604&isnumber=7100186
Gupta, S.; Jain, A., "Efficient Image Encryption Algorithm Using DNA Approach," in Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, pp. 726-731, 11-13 March 2015. doi: (not provided)
Abstract: DNA computing is a new computational field which harnesses the immense parallelism, high-density information storage, and low power dissipation that bring probable challenges and opportunities to conventional cryptography. In recent years, many image encryption algorithms using the DNA approach have been proposed, but many are not secure. In this regard, this paper proposes an improved and efficient algorithm to encrypt a grayscale image of any size based on the DNA sequence addition operation. The original image is encrypted in two phases. In the first phase, the intermediate cipher is obtained by addition of the DNA sequence matrix and a masking matrix. In the second phase, pixel values are scrambled to make the cipher more robust. The results of simulated experiments and security analysis of the proposed image encryption algorithm, evaluated via histogram analysis and key sensitivity analysis, show that the scheme not only attains good encryption but can also hinder exhaustive and statistical attacks. Thus, the results are acceptable.
Keywords: biocomputing; cryptography; image processing; sensitivity analysis; DNA computing; DNA masking matrix; DNA sequence addition operation; DNA sequence matrix; exhaustive attack; grayscale image; histogram analysis; image encryption algorithm; intermediate cipher; key sensitivity analysis; security analysis; statistical attack; Algorithm design and analysis; DNA; Encryption; Histograms; Image coding; Matrix converters; DNA encoding; DNA sequence addition and subtraction; chaotic maps; image encryption (ID#: 16-8953)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100345&isnumber=7100186
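The DNA sequence addition the abstract relies on is typically defined over a 2-bit nucleotide encoding; a minimal sketch follows. The specific encoding rule (A=00, C=01, G=10, T=11) is an assumption for illustration, as several rules are in common use:

```python
NUC = "ACGT"  # one common 2-bit encoding: A=00, C=01, G=10, T=11

def byte_to_dna(b):
    """Encode one byte as 4 nucleotides, most significant bit pair first."""
    return "".join(NUC[(b >> s) & 0b11] for s in (6, 4, 2, 0))

def dna_add(seq_a, seq_b):
    """'DNA addition': add the 2-bit values of paired nucleotides mod 4,
    as used to combine an image matrix with a masking matrix."""
    return "".join(NUC[(NUC.index(x) + NUC.index(y)) % 4]
                   for x, y in zip(seq_a, seq_b))

def dna_sub(seq_a, seq_b):
    """Inverse operation (DNA subtraction), used for decryption."""
    return "".join(NUC[(NUC.index(x) - NUC.index(y)) % 4]
                   for x, y in zip(seq_a, seq_b))
```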
Jana, Sunanda; Maji, Arnab Kumar; Pal, Rajat Kumar, "A Novel Sudoku Solving Technique Using Column Based Permutation," in Advanced Computing and Communication (ISACC), 2015 International Symposium on, pp. 71-77, 14-15 Sept. 2015. doi: 10.1109/ISACC.2015.7377318
Abstract: “Sudoku” is the Japanese abbreviation of “Suuji wa dokushin ni kagiru”, which means “the numbers must occur only once”. It is a challenging and interesting puzzle that trains our mind logically. In recent years, solving Sudoku puzzles has become a widespread phenomenon. The problem of solving a given Sudoku puzzle finds numerous applications in the domains of steganography, visual cryptography, DNA computing, watermarking, etc. Thus, solving the Sudoku puzzle in an efficient manner is very important. However, all existing Sudoku solving techniques are primarily either guess-based heuristics or computation-intensive soft computing methodologies, and they solve the puzzle by traversing each individual cell. In this paper, a novel Sudoku solving technique is proposed that solves Sudoku puzzles without guessing any cell, by generating only the desired permutations among columns, which consist of groups of cells.
Keywords: Algorithm; backtracking; cell; column; difficulty level; permutation; Sudoku (ID#: 16-8954)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7377318&isnumber=7377296
Saranya, M.R.; Mohan, A.K.; Anusudha, K., "Algorithm for Enhanced Image Security using DNA and Genetic Algorithm," in Signal Processing, Informatics, Communication and Energy Systems (SPICES), 2015 IEEE International Conference on, pp.1-5, 19-21 Feb. 2015. doi: 10.1109/SPICES.2015.7091462
Abstract: An efficient image encryption algorithm with improved image security has been developed by using chaotic function, deoxyribonucleic acid (DNA) sequencing and genetic algorithm (GA). A chaotic sequence of desired length is generated by using the logistic map function whose initial value is calculated using the secret key. A number of DNA masks are generated and these masks along with the chaotic sequences are used to encrypt the digital image. Finally genetic algorithm is employed to get the best mask for encryption. The proposed method can resist various types of attacks and produce high entropy and very low correlation between pixels.
Keywords: DNA; correlation methods; cryptography; entropy; genetic algorithms; image processing; DNA masks; chaotic sequence; correlation; deoxyribonucleic acid sequencing; digital image; entropy; genetic algorithm; image encryption algorithm; image security enhancement; logistic map function; DNA; Decision support systems; Encoding; Encryption; Entropy; Genetic algorithms; Logistics; Deoxyribonucleic acid (DNA);Entropy; Genetic algorithm (GA);Image encryption; Logistic map (ID#: 16-8955)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7091462&isnumber=7091354
Saranya, M.R.; Mohan, A.K.; Anusudha, K., "A Hybrid Algorithm for Enhanced Image Security Using Chaos and DNA Theory," in Computer Communication and Informatics (ICCCI), 2015 International Conference on, pp. 1-4, 8-10 Jan. 2015. doi: 10.1109/ICCCI.2015.7218102
Abstract: An image encryption algorithm based on chaotic theory and deoxyribonucleic acid (DNA) sequencing is proposed here. Initially, two chaotic sequences are generated from the logistic map function, one for image permutation and the other for image diffusion. The two internal secret keys derived from the 120 bit user defined secret key serve as the initial condition for the chaotic sequences. Both the image and the mask for diffusion are encoded into DNA sequences using the possible eight DNA complementary rules. After that, image permutation and diffusion operations are performed in the DNA domain. DNA XOR operation is used to carry out diffusion which significantly reduces the correlation between adjacent pixels of the plain image. Simulation results and performance analysis show that the proposed work has high security, large key space, and high key sensitivity and it is also able to resist all types of attacks.
Keywords: biocomputing; chaos ;image coding; private key cryptography; 120 bit user defined secret key; DNA XOR operation; DNA complementary rules; DNA theory; chaotic sequences; chaotic theory; deoxyribonucleic acid sequencing; enhanced image security; high security; hybrid algorithm; image diffusion; image encryption algorithm; internal secret keys; key sensitivity; key space; logistic map function; plain image; Chaotic communication; Correlation; DNA; Encryption; Entropy; chaotic map; deoxyribonucleic acid (DNA);image encryption; key image; key sensitivity; key space; logistic map function (ID#: 16-8956)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218102&isnumber=7218046
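Chaos-based image ciphers like this one derive their keystream from the logistic map seeded by the secret key. A minimal sketch of that building block follows, with byte-wise XOR diffusion standing in for the DNA-domain XOR (XOR on 2-bit DNA bases is bit-for-bit equivalent to XOR on the underlying bytes); parameter values are illustrative:

```python
def logistic_map(x0, r=3.99, n=16, skip=100):
    """Generate n chaotic keystream bytes from the logistic map
    x_{k+1} = r * x_k * (1 - x_k). The initial value x0 would be
    derived from the secret key; the first `skip` iterates are
    discarded to decorrelate the stream from the seed."""
    x = x0
    for _ in range(skip):
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_diffuse(pixels, keystream):
    """XOR diffusion; applying it twice with the same keystream
    recovers the plaintext."""
    return [p ^ k for p, k in zip(pixels, keystream)]
```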
Verma, I.; Jain, S.K., "Biometrics Security System: A Review of Multimodal Biometrics Based Techniques for Generating Crypto-Key," in Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, pp. 1189-1192, 11-13 March 2015. doi: (not provided)
Abstract: In today's fast-paced world we want to do everything quickly and easily, without much physical or mental effort. With the advancement of technology we are provided ever greater capability through the invention of different devices, yet each technological innovation has its pros and cons. One emerging platform for biometric security systems is the smartphone. Today we can hardly imagine living without smartphones, as they have become part of our lives: we depend on our phones for many day-to-day activities, such as paying bills, connecting with friends and the office, and making money transactions. Using traditional security features we cannot get adequate security, since anyone who knows the password to unlock a phone can gain access to it. Using biometric traits such as fingerprint, voice, face, and iris, an unauthorized person cannot get access to the device. In this paper we focus on how biometrics help make the device more secure and foolproof, and on the shortcomings in traditional security methods that gave rise to the implementation of biometric security systems.
Keywords: biometrics (access control); cryptography; biometric security system; biometrics traits; crypto-key; multimodal biometrics based techniques; security features; smartphone; technological innovation; Cryptography; Face; Fingerprint recognition; Iris recognition; Biometrics; DNA; Face; Fingerprint; Hand geometry; Iris; Retina; Vein geometry (ID#: 16-8957)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100436&isnumber=7100186
Strizhov, M.; Ray, I., "Substring Position Search over Encrypted Cloud Data Using Tree-Based Index," in Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 165-174, 9-13 March 2015. doi: 10.1109/IC2E.2015.33
Abstract: Existing Searchable Encryption (SE) solutions are able to handle simple Boolean search queries, such as single or multi-keyword queries, but cannot handle substring search queries over encrypted data that also involve identifying the position of the substring within the document. These types of queries are relevant in areas such as searching DNA data. In this paper, we propose a tree-based Substring Position Searchable Symmetric Encryption (SSP-SSE) to overcome the existing gap. Our solution efficiently finds occurrences of a substring over encrypted cloud data. We formally define the leakage functions and security properties of SSP-SSE. Then, we prove that the proposed scheme is secure against chosen-keyword attacks that involve an adaptive adversary. Our analysis demonstrates that SSP-SSE introduces very low overhead on computation and storage.
Keywords: cloud computing; cryptography; query processing; trees (mathematics); DNA data; SSP-SSE; adaptive adversary; boolean search queries; chosen-keyword attacks; cloud data; leakage functions; multikeyword queries; security properties; single keyword queries; substring position search; substring position searchable symmetric encryption; tree-based index; Cloud computing; Encryption; Indexes; Keyword search; Probabilistic logic; cloud computing; position heap tree; searchable symmetric encryption; substring position search (ID#: 16-8958)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092914&isnumber=7092808
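The functionality SSP-SSE provides, substring matches together with their positions, can be illustrated in the clear with a simple k-gram position index; the encrypted tree-based index in the paper is far more involved, so this sketch only shows what is being computed, using a DNA-flavored example:

```python
from collections import defaultdict

def build_index(document, k=3):
    """Map every k-gram to the list of positions where it starts;
    a plaintext stand-in for the paper's position index."""
    index = defaultdict(list)
    for i in range(len(document) - k + 1):
        index[document[i:i + k]].append(i)
    return index

def search(index, document, query, k=3):
    """Find all positions of `query`: look up candidates via the
    query's k-gram prefix, then verify the full match at each."""
    return [i for i in index.get(query[:k], [])
            if document.startswith(query, i)]
```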
Xin Jin; Yulu Tian; Chenggen Song; Guangzheng Wei; Xiaodong Li; Geng Zhao; Huaichao Wang, "An Invertible and Anti-Chosen Plaintext Attack Image Encryption Method Based on DNA Encoding and Chaotic Mapping," in Chinese Automation Congress (CAC), 2015, pp. 1159-1164, 27-29 Nov. 2015. doi: 10.1109/CAC.2015.7382673
Abstract: With the rapid development of networks, more and more digital images need to be stored and communicated. Due to this openness and network sharing, digital image security has become an important concern. In this paper, we propose a novel gray image encryption algorithm based on chaotic mapping and DNA (deoxyribonucleic acid) encoding. We correct the irreversibility of a previous work, which could only encrypt the plain image, could not decrypt the cipher image even with the correct secret key, and could be broken by a chosen-plaintext attack. To make the algorithm invertible, we encode the input gray image by DNA encoding and generate a random matrix based on the logistic chaotic mapping. The DNA addition operation is conducted on the random matrix, followed by the DNA complement operation guided by a random binary matrix generated by two logistic chaotic mapping sequences. We solve the problem of irreversibility successfully. In addition, the algorithm can now resist several attacks such as chosen-plaintext attack, brute-force attack, and statistical attack.
Keywords: Cryptography; DNA; ISO; Resists; Welding; DNA encoding; chaotic mapping; cloud security; image encryption (ID#: 16-8959)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7382673&isnumber=7382455
Aieh, A.; Sen, A.; Dash, S.R.; Dehuri, S., "Deoxyribonucleic Acid (DNA) for a Shared Secret Key Cryptosystem with Diffie Hellman Key Sharing Technique," in Computer, Communication, Control and Information Technology (C3IT), 2015 Third International Conference on, pp. 1-6, 7-8 Feb. 2015. doi: 10.1109/C3IT.2015.7060130
Abstract: A shared secret key based symmetric cryptographic algorithm using the Diffie-Hellman key sharing technique is proposed in this paper. The shared secret key is used for encryption as well as decryption of the intended plain text, but the original shared secret key is never transferred through the channel. Instead, the Diffie-Hellman key sharing technique generates the shared secret key on both sides by exchanging the sender's and receiver's public keys through the channel. A DNA hybridization technique is then used to produce the cipher text from the DNA sequence of the plain text and the shared secret key. A numerical study with basic parametric assumptions confirms that the proposed cryptosystem is very scalable, secure, and robust for use in real-time systems.
Keywords: biocomputing; public key cryptography; real-time systems; DNA sequence; Diffie Hellman key sharing technique; channel DNA hybridization technique; cipher text; decryption; deoxyribonucleic acid; encryption; public key; real time system; shared secret key based symmetric cryptographic algorithm; shared secret key cryptosystem; shared secret key generation; Ciphers; DNA; Encryption; Public key; Receivers; DNA Hybridization; Diffie Hellman Key sharing; Encryption-Decryption; Prime (ID#: 16-8960)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7060130&isnumber=7060104
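The Diffie-Hellman step, deriving the same shared secret on both sides without ever sending it over the channel, is standard; a textbook sketch with deliberately tiny group parameters (real deployments use large standardized safe-prime groups). The resulting shared value is what would then key the DNA hybridization cipher:

```python
import secrets

# Toy group parameters for illustration only; real systems use large
# standardized groups (e.g., RFC 3526 MODP groups).
P, G = 23, 5

def keypair():
    """Pick a private exponent and publish G^private mod P."""
    private = secrets.randbelow(P - 2) + 1   # in 1 .. P-2
    public = pow(G, private, P)
    return private, public

# Each side keeps its private key and sends only the public key.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# Both sides derive G^(a*b) mod P; the secret never crosses the channel.
a_shared = pow(b_pub, a_priv, P)
b_shared = pow(a_pub, b_priv, P)
```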
Mohammed Misbahuddin, C. S. Sreeja; “A Secure Image-Based Authentication Scheme Employing DNA Crypto and Steganography;” WCI '15 Proceedings of the Third International Symposium on Women in Computing and Informatics, August 2015, Pages 595-601. doi: 10.1145/2791405.2791503
Abstract: Authentication is considered one of the critical aspects of information security to ensure identity. Authentication is generally carried out using conventional methods such as text-based passwords, but considering the increased usage of electronic services, a user has to remember many id-password pairs, which often leads to memorability issues. This inspires users to reuse passwords across e-services, a practice vulnerable to security attacks. To improve security strength, various authentication techniques have been proposed, including two-factor schemes based on smart cards, tokens, etc., and advanced biometric techniques. Graphical image-based authentication systems have received considerable attention, as they provide better usability by way of memorable image passwords. But the tradeoff between usability and security is a major concern while strengthening authentication. This paper proposes a novel two-way secure authentication scheme using DNA cryptography and steganography, considering both security and usability. The protocol uses a text and an image password, of which the text password is converted into cipher text using DNA cryptography and embedded into the image password by applying steganography. The hash value of the generated stego image is calculated using SHA-256, and the same is used for verification to authenticate the legitimate user.
Keywords: Authentication, DNA, DNA Cryptography, DNA Steganography, Image password, Information security (ID#: 16-8961)
URL: http://doi.acm.org/10.1145/2791405.2791503
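The verification step described at the end, hashing the stego image with SHA-256 and later comparing digests, is straightforward to sketch (function names are illustrative):

```python
import hashlib

def register(stego_image_bytes):
    """At enrollment, store only the SHA-256 digest of the stego image."""
    return hashlib.sha256(stego_image_bytes).hexdigest()

def verify(candidate_bytes, stored_digest):
    """At login, recompute the digest and compare; any change to the
    stego image (even one bit) yields a different digest."""
    return hashlib.sha256(candidate_bytes).hexdigest() == stored_digest
```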
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Data Sanitization 2015 |
For security researchers, privacy protection during data mining is a major concern. Sharing information over the Internet or holding it in a database requires methods of sanitizing data so that personal information cannot be obtained. The methods described in the articles listed here include SQL injections, provenance workflows, itemset hiding, differential privacy, and a framework for a mathematical definition of privacy. The work cited here was presented in 2015.
Abdullah, Hadi; Siddiqi, Ahsan; Bajaber, Fuad, "A Novel Approach of Data Sanitization by Noise Addition and Knowledge Discovery by Clustering," in Computer Networks and Information Security (WSCNIS), 2015 World Symposium on, pp. 1-9, 19-21 Sept. 2015. doi: 10.1109/WSCNIS.2015.7368283
Abstract: The security of published data is no less important than that of unpublished data or data that has not been made public. Therefore, PII (Personally Identifiable Information) is removed and data sanitized when organizations recording large volumes of data publish that data. However, this approach to ensuring data privacy and security can result in a loss of utility of the published data for knowledge discovery. Therefore, a balance is required between privacy and the utility needs of published data. In this paper we study this delicate balance by evaluating four data mining clustering techniques for knowledge discovery and propose two privacy/utility quantification parameters. We subsequently perform a number of experiments to statistically identify which clustering technique is best suited to a desirable level of privacy/utility while noise is incrementally increased by simultaneously degrading data accuracy, completeness and consistency.
Keywords: Data privacy; Data security; Databases; Knowledge discovery; Privacy; data mining; data utility; noise; privacy; security (ID#: 15-8741)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7368283&isnumber=7368275
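The sanitization side of the study, noise addition, can be sketched in a few lines. The utility metric below is a crude stand-in for the paper's two quantification parameters, and all names are illustrative:

```python
import random

def sanitize(records, noise_level, seed=0):
    # Perturb every numeric attribute with zero-mean Gaussian noise;
    # a larger noise_level trades utility for privacy.
    rng = random.Random(seed)
    return [[v + rng.gauss(0.0, noise_level) for v in rec] for rec in records]

def mean_perturbation(original, noisy):
    # Average absolute change per attribute: one rough measure of utility loss.
    diffs = [abs(a - b) for o, n in zip(original, noisy) for a, b in zip(o, n)]
    return sum(diffs) / len(diffs)
```

In the paper's setting, the interesting question is how clustering quality degrades as `noise_level` grows, which this metric only approximates.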
Li, Bo; Vorobeychik, Yevgeniy; Li, Muqun; Malin, Bradley, "Iterative Classification for Sanitizing Large-Scale Datasets," in Data Mining (ICDM), 2015 IEEE International Conference on, pp. 841-846, 14-17 Nov. 2015. doi: 10.1109/ICDM.2015.11
Abstract: Cheap ubiquitous computing enables the collection of massive amounts of personal data in a wide variety of domains. Many organizations aim to share such data while obscuring features that could disclose entities or other sensitive information. Much of the data now collected exhibits weak structure (e.g., natural language text), and machine learning approaches have been developed to identify and remove sensitive entities in such data. Learning-based approaches are never perfect, and relying upon them to sanitize data can leak sensitive information as a consequence. However, a small amount of risk is permissible in practice, and, thus, our goal is to balance the value of data published and the risk of an adversary discovering leaked sensitive information. We model data sanitization as a game between 1) a publisher who chooses a set of classifiers to apply to data and publishes only instances predicted to be non-sensitive and 2) an attacker who combines machine learning and manual inspection to uncover leaked sensitive entities (e.g., personal names). We introduce an iterative greedy algorithm for the publisher that provably executes no more than a linear number of iterations and ensures a low utility for a resource-limited adversary. Moreover, using several real-world natural language corpora, we illustrate that our greedy algorithm leaves virtually no automatically identifiable sensitive instances for a state-of-the-art learning algorithm, while sharing over 93% of the original data, and completes after at most five iterations.
Keywords: Data models; Inspection; Manuals; Natural languages; Predictive models; Publishing; Yttrium; Privacy preserving; game theory; weak structured data sanitization (ID#: 15-8742)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7373399&isnumber=7373293
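The publisher's side of the game can be sketched as a simple fixed-point loop. Here `detect_sensitive` stands in for the classifier retrained at each round; it is an assumption of this sketch, not the paper's interface:

```python
def iterative_sanitize(instances, detect_sensitive, max_iters=5):
    # Repeatedly fit a detector on the surviving data and drop everything it
    # flags, until a full pass flags nothing (or the iteration cap is hit).
    remaining = list(instances)
    for _ in range(max_iters):
        flagged = detect_sensitive(remaining)   # set of instances judged sensitive
        if not flagged:
            break
        remaining = [x for x in remaining if x not in flagged]
    return remaining
```

The paper proves the analogous greedy algorithm needs only a linear number of iterations; in its experiments it converged within five, which motivates the default cap above.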
Shanmugasundaram, G.; Ravivarman, S.; Thangavellu, P., "A Study on Removal Techniques of Cross-Site Scripting From Web Applications," in Computation of Power, Energy Information and Communication (ICCPEIC), 2015 International Conference on, pp. 0436-0442, 22-23 April 2015. doi: 10.1109/ICCPEIC.2015.7259498
Abstract: Cross site scripting (XSS) vulnerability is among the top 10 web application vulnerabilities based on survey by Open Web Applications Security Project of 2013 [9]. The XSS attack occurs when web based application takes input from users through web pages without validating them. An attacker or hacker uses this to insert malicious scripts in web pages through such inputs. So, the scripts can perform malicious actions when a client visits the vulnerable web pages. This study concentrates on various security measures for removal of XSS from web applications (say defensive coding technique) and their issues of defensive technique based on that measures is reported in this paper.
Keywords: Internet; security of data; Web application vulnerability; XSS attack; cross-site scripting; removal technique; Encoding; HTML; Java; Uniform resource locators; cross site scripting; data sanitization; data validation; defensive coding technique; output escaping; scripting languages; vulnerabilities (ID#: 15-8743)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7259498&isnumber=7259434
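One of the defensive coding techniques such surveys cover, output escaping, is essentially a one-liner in most languages. A Python sketch using only the standard library; the wrapper name is illustrative:

```python
import html

def render_comment(user_input: str) -> str:
    # Encode HTML metacharacters so user input is displayed as text,
    # never parsed as markup or executed as script.
    return "<p>" + html.escape(user_input, quote=True) + "</p>"
```

Escaping at output time neutralizes injected markup regardless of how the value entered the application, which is why it is usually paired with, not replaced by, input validation.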
Adebayo, J.; Kagal, L., "A Privacy Protection Procedure for Large Scale Individual Level Data," in Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, pp. 120-125, 27-29 May 2015. doi: 10.1109/ISI.2015.7165950
Abstract: We present a transformation procedure for large scale individual level data that produces output data in which no linear combinations of the resulting attributes can yield the original sensitive attributes from the transformed data. In doing this, our procedure eliminates all linear information regarding a sensitive attribute from the input data. The algorithm combines principal components analysis of the data set with orthogonal projection onto the subspace containing the sensitive attribute(s). The algorithm presented is motivated by applications where there is a need to drastically `sanitize' a data set of all information relating to sensitive attribute(s) before analysis of the data using a data mining algorithm. Sensitive attribute removal (sanitization) is often needed to prevent disparate impact and discrimination on the basis of race, gender, and sexual orientation in high stakes contexts such as determination of access to loans, credit, employment, and insurance. We show through experiments that our proposed algorithm outperforms other privacy preserving techniques by more than 20 percent in lowering the ability to reconstruct sensitive attributes from large scale data.
Keywords: data analysis; data mining; data privacy; principal component analysis; data mining algorithm; large scale individual level data; orthogonal projection; principal component analysis; privacy protection procedure; sanitization; sensitive attribute removal; Data privacy; Loans and mortgages; Noise; Prediction algorithms; Principal component analysis; Privacy; PCA; data mining; orthogonal projection; privacy preserving (ID#: 15-8744)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165950&isnumber=7165923
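The core of the procedure, removing every linear trace of a sensitive attribute, is an orthogonal projection. A sketch of just that projection step; the paper combines it with PCA, so this is not the full algorithm:

```python
import numpy as np

def strip_linear_info(X, s):
    # Project each (centered) column of X onto the orthogonal complement of
    # the centered sensitive attribute s, so that no linear combination of
    # the output columns can reconstruct s.
    s = s.astype(float) - s.mean()
    P = np.eye(len(s)) - np.outer(s, s) / (s @ s)
    return P @ (X - X.mean(axis=0))
```

After the projection, the inner product of the centered sensitive attribute with every output column is zero, which is exactly the "no linear combination yields the sensitive attribute" property the abstract describes.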
Yuan Hong; Vaidya, J.; Haibing Lu; Karras, P.; Goel, S., "Collaborative Search Log Sanitization: Toward Differential Privacy and Boosted Utility," in Dependable and Secure Computing, IEEE Transactions on, vol. 12, no. 5, pp. 504-518, Sept.-Oct. 1 2015. doi: 10.1109/TDSC.2014.2369034
Abstract: The severe privacy leakage in the AOL search log incident has attracted considerable worldwide attention. However, all web users' daily search intents and behavior are collected in such data, which can be invaluable for researchers, data analysts and law enforcement personnel conducting social behavior studies [14], criminal investigations [5] and epidemics detection [10]. Thus, an important and challenging research problem is how to sanitize search logs with a strong privacy guarantee and sufficiently retained utility. Existing approaches to search log sanitization are capable only of protecting privacy under a rigorous standard [24] or of maintaining good output utility [25]. To the best of our knowledge, little work has fully resolved this tradeoff in the context of search logs, meeting a high standard of both requirements. In this paper, we propose a sanitization framework to tackle the above issue in a distributed manner. More specifically, our framework enables different parties to collaboratively generate search logs with boosted utility while satisfying differential privacy. In this scenario, two privacy-preserving objectives arise: first, the collaborative sanitization should satisfy differential privacy; second, the collaborating parties cannot learn any private information from each other. We present an efficient protocol, Collaborative sEarch Log Sanitization (CELS), to meet both privacy requirements. Besides security/privacy and cost analysis, we demonstrate the utility and efficiency of our approach with real data sets.
Keywords: Internet; collaborative filtering; data privacy; protocols; security of data; AOL search log incident; CELS protocol; Collaborative sEarch Log Sanitization; Web user behavior; Web user daily search intent; boosted utility; collaborative search log generation; cost analysis; criminal investigation; data analysts; differential privacy; epidemics detection; law enforcement personnel; privacy guarantee; privacy leakage; privacy protection; privacy requirements; privacy-preserving objectives; private information; security; social behavior study; Collaboration; Data privacy; Diabetes; Equations; Google; Histograms; Privacy; Differential Privacy; Optimization; Sampling; Search Log; Search log; Secure Multiparty Computation; differential privacy; optimization; sampling; secure multiparty computation (ID#: 15-8745)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6951353&isnumber=7240136
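CELS itself is a multiparty protocol, but the differential privacy guarantee it targets can be illustrated with the standard single-party building block: the Laplace mechanism applied to per-query counts. The function name is illustrative:

```python
import numpy as np

def release_query_counts(counts, epsilon, seed=0):
    # One user changes any single query count by at most 1 (sensitivity 1),
    # so adding Laplace(1/epsilon) noise to each count gives epsilon-DP.
    rng = np.random.default_rng(seed)
    return {q: c + rng.laplace(0.0, 1.0 / epsilon) for q, c in counts.items()}
```

Smaller `epsilon` means stronger privacy but noisier released counts, which is the privacy/utility tradeoff the paper's framework tries to improve via collaboration.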
Lwin Khin Shar; Briand, L.C.; Hee Beng Kuan Tan, "Web Application Vulnerability Prediction Using Hybrid Program Analysis and Machine Learning," in Dependable and Secure Computing, IEEE Transactions on, vol. 12, no. 6, pp. 688-707, Nov.-Dec. 1 2015. doi: 10.1109/TDSC.2014.2373377
Abstract: Due to limited time and resources, web software engineers need support in identifying vulnerable code. A practical approach to predicting vulnerable code would enable them to prioritize security auditing efforts. In this paper, we propose using a set of hybrid (static+dynamic) code attributes that characterize input validation and input sanitization code patterns and are expected to be significant indicators of web application vulnerabilities. Because static and dynamic program analyses complement each other, both techniques are used to extract the proposed attributes in an accurate and scalable way. Current vulnerability prediction techniques rely on the availability of data labeled with vulnerability information for training. For many real world applications, past vulnerability data is often not available or at least not complete. Hence, to address both situations where labeled past data is fully available or not, we apply both supervised and semi-supervised learning when building vulnerability predictors based on hybrid code attributes. Given that semi-supervised learning is entirely unexplored in this domain, we describe how to use this learning scheme effectively for vulnerability prediction. We performed empirical case studies on seven open source projects where we built and evaluated supervised and semi-supervised models. When cross validated with fully available labeled data, the supervised models achieve an average of 77 percent recall and 5 percent probability of false alarm for predicting SQL injection, cross site scripting, remote code execution and file inclusion vulnerabilities. With a low amount of labeled data, when compared to the supervised model, the semi-supervised model showed an average improvement of 24 percent higher recall and 3 percent lower probability of false alarm, thus suggesting semi-supervised learning may be a preferable solution for many real world applications where vulnerability data is missing.
Keywords: Internet; learning (artificial intelligence); program diagnostics; security of data; SQL injection; Web application vulnerability prediction; cross site scripting; dynamic program analyses; false alarm probability; file inclusion vulnerabilities; hybrid program analysis; hybrid static+dynamic code attributes; input sanitization code patterns; input validation code patterns; machine learning; open source projects; remote code execution; security auditing; semisupervised learning; static program analyses; vulnerability prediction techniques; vulnerability predictors; vulnerable code identification; vulnerable code prediction; Computer security; Data models; HTML; Predictive models; Semisupervised learning; Servers; Software protection; Vulnerability prediction; empirical study; input validation and sanitization; program analysis; security measures (ID#: 15-8746)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6963442&isnumber=7322332
Yamaguchi, F.; Maier, A.; Gascon, H.; Rieck, K., "Automatic Inference of Search Patterns for Taint-Style Vulnerabilities," in Security and Privacy (SP), 2015 IEEE Symposium on, pp. 797-812, 17-21 May 2015. doi: 10.1109/SP.2015.54
Abstract: Taint-style vulnerabilities are a persistent problem in software development, as the recently discovered "Heartbleed" vulnerability strikingly illustrates. In this class of vulnerabilities, attacker-controlled data is passed unsanitized from an input source to a sensitive sink. While simple instances of this vulnerability class can be detected automatically, more subtle defects involving data flow across several functions or project-specific APIs are mainly discovered by manual auditing. Different techniques have been proposed to accelerate this process by searching for typical patterns of vulnerable code. However, all of these approaches require a security expert to manually model and specify appropriate patterns in practice. In this paper, we propose a method for automatically inferring search patterns for taint-style vulnerabilities in C code. Given a security-sensitive sink, such as a memory function, our method automatically identifies corresponding source-sink systems and constructs patterns that model the data flow and sanitization in these systems. The inferred patterns are expressed as traversals in a code property graph and enable efficiently searching for unsanitized data flows -- across several functions as well as with project-specific APIs. We demonstrate the efficacy of this approach in experiments with 5 open-source projects. The inferred search patterns reduce the amount of code to inspect for finding known vulnerabilities by 94.9% and also enable us to uncover 8 previously unknown vulnerabilities.
Keywords: application program interfaces; data flow analysis; public domain software; security of data; software engineering; C code; attacker-controlled data; automatic inference; code property graph; data flow; data security; inferred search pattern; memory function; open-source project; project- specific API; search pattern security-sensitive sink; sensitive sink; software development; source-sink system; taint-style vulnerability; Databases; Libraries; Payloads; Programming; security; Software; Syntactics; Clustering; Graph Databases; Vulnerabilities (ID#: 15-8747)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163061&isnumber=7163005
Jinkun Pan; Xiaoguang Mao; Weishi Li, "Analyst-Oriented Taint Analysis by Taint Path Slicing and Aggregation," in Software Engineering and Service Science (ICSESS), 2015 6th IEEE International Conference on, pp. 145-148, 23-25 Sept. 2015. doi: 10.1109/ICSESS.2015.7339024
Abstract: Taint analysis determines whether values from untrusted or private sources may flow into security-sensitive or public sinks, and can discover many common security vulnerabilities in both web and mobile applications. Static taint analysis detects suspicious data flows without running the application and achieves good coverage. However, most existing static taint analysis tools only focus on discovering taint paths from sources to sinks and do not address the analysts' requirements for sanitization checking and exploration. Sanitization can render a taint path harmless, but in many cases it must be checked or explored by analysts manually, and the process is very costly. During our preliminary study, we found that many statements along taint paths are not relevant to sanitization and that there is a lot of redundancy among taint paths with the same source or sink. Based on these two observations, we have designed and implemented taint path slicing and aggregation algorithms, aiming at mitigating the workload of analysts and helping them gain a better comprehension of the taint behaviors of target applications. Experimental evaluations on real-world applications show that our proposed algorithms can reduce the taint paths effectively and efficiently.
Keywords: program slicing; security of data; Web application; aggregation algorithms; analyst-oriented taint analysis; exploration; mobile application; public sink; sanitization check; security vulnerabilities; security-sensitive sink; static taint analysis tools; taint path slicing; Algorithm design and analysis; Androids; Filtering; Humanoid robots; Mobile applications; Redundancy; Security; analyst; taint analysis; taint path (ID#: 15-8748)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7339024&isnumber=7338993
Jingyu Hua; Yue Gao; Sheng Zhong, "Differentially Private Publication of General Time-Serial Trajectory Data," in Computer Communications (INFOCOM), 2015 IEEE Conference on, pp. 549-557, April 26 2015-May 1 2015. doi: 10.1109/INFOCOM.2015.7218422
Abstract: Trajectory data, i.e., human mobility traces, is extremely valuable for a wide range of mobile applications. However, publishing raw trajectories without special sanitization poses serious threats to individual privacy. Recently, researchers have begun to leverage differential privacy to solve this challenge. Nevertheless, existing mechanisms make an implicit assumption that the trajectories contain a lot of identical prefixes or n-grams, which is not true in many applications. This paper aims to remove this assumption and proposes a differentially private publishing mechanism for more general time-series trajectories. One natural solution is to generalize the trajectories, i.e., merge the locations at the same time. However, trivial merging schemes may breach differential privacy. We thus propose the first differentially private generalization algorithm for trajectories, which leverages a carefully designed exponential mechanism to probabilistically merge nodes based on trajectory distances. Afterwards, we propose another efficient algorithm to release trajectories after generalization in a differentially private manner. Our experiments with real-life trajectory data show that the proposed mechanism maintains high data utility and is scalable to large trajectory datasets.
Keywords: data privacy; time series; differential privacy; differentially private publication; differentially-private generalization algorithm; human mobility traces; time-serial trajectory data; time-series trajectories; Computers; Conferences; Data Publishing; Differential Privacy; Trajectory (ID#: 15-8749)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218422&isnumber=7218353
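The generalization step above relies on the exponential mechanism to choose which trajectory nodes to merge. A generic sketch of that mechanism; the paper's actual utility function is based on trajectory distances, whereas here it is an arbitrary callable:

```python
import math
import random

def exponential_mechanism(candidates, utility, epsilon, sensitivity, rng):
    # Sample a candidate with probability proportional to
    # exp(epsilon * utility / (2 * sensitivity)): higher-utility merges are
    # favored, but every candidate keeps a nonzero selection probability.
    weights = [math.exp(epsilon * utility(c) / (2.0 * sensitivity)) for c in candidates]
    r = rng.random() * sum(weights)
    acc = 0.0
    for cand, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return cand
    return candidates[-1]
```

Because the selection is randomized rather than a deterministic argmax, observing the chosen merge reveals only a bounded amount about any one individual's trajectory.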
Farea, A.; Karci, A., "Applications of Association Rules Hiding Heuristic Approaches," in Signal Processing and Communications Applications Conference (SIU), 2015 23rd, pp. 2650-2653, 16-19 May 2015. doi: 10.1109/SIU.2015.7130434
Abstract: Data mining allows large database owners to extract useful knowledge that could not be deduced with traditional approaches like statistics. However, the mined results sometimes reveal sensitive knowledge or breach individual privacy. The term sanitization is given to the process of changing an original database into another one from which we can mine without exposing sensitive knowledge. In this paper, we give a detailed explanation of some heuristic approaches for this purpose. We applied them to a number of publicly available datasets and examined the results.
Keywords: data mining; data privacy; association rules hiding heuristic; data mining; database sanitization; Data mining; Itemsets; Data Mining; association rule; confidence; frequent pattern; itemset; sanitization; support; transaction (ID#: 15-8750)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7130434&isnumber=7129794
Jung-Woo Sohn; Jungwoo Ryoo, "Securing Web Applications with Better "Patches": An Architectural Approach for Systematic Input Validation with Security Patterns," in Availability, Reliability and Security (ARES), 2015 10th International Conference on, pp. 486-492, 24-27 Aug. 2015. doi: 10.1109/ARES.2015.106
Abstract: Some of the most rampant problems in software security originate from improper input validation. This is partly due to ad hoc approaches taken by software developers when dealing with user inputs. Therefore, it is a crucial research question in software security to ask how to effectively apply well-known input validation and sanitization techniques against security attacks exploiting the user input-related weaknesses found in software. This paper examines the current ways of how input validation is conducted in major open-source projects and attempts to confirm the main source of the problem as these ad hoc responses to the input validation-related attacks such as SQL injection and cross-site scripting (XSS) attacks through a case study. In addition, we propose a more systematic software security approach by promoting the adoption of proactive, architectural design-based solutions to move away from the current practice of chronic vulnerability-centric and reactive approaches.
Keywords: Internet; security of data; software architecture; SQL injection attack; Web application security; XSS attack; ad hoc approaches; architectural approach; architectural design-based solution; chronic vulnerability-centric approach; cross-site scripting attack; input validation-related attacks; proactive-based solution; reactive approach; sanitization techniques; security patterns; systematic input validation; systematic software security approach; user input-related weaknesses; architectural patterns; improper input validation; intercepting validator; software security (ID#: 15-8751)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299956&isnumber=7299862
Riboni, D.; Villani, A.; Vitali, D.; Bettini, C.; Mancini, L.V., "Obfuscation of Sensitive Data for Incremental Release of Network Flows," in Networking, IEEE/ACM Transactions on, vol. 23, no. 2, pp. 672-686, April 2015. doi: 10.1109/TNET.2014.2309011
Abstract: Large datasets of real network flows acquired from the Internet are an invaluable resource for the research community. Applications include network modeling and simulation, identification of security attacks, and validation of research results. Unfortunately, network flows carry extremely sensitive information, and this discourages the publication of those datasets. Indeed, existing techniques for network flow sanitization are vulnerable to different kinds of attacks, and solutions proposed for microdata anonymity cannot be directly applied to network traces. In our previous research, we proposed an obfuscation technique for network flows, providing formal confidentiality guarantees under realistic assumptions about the adversary's knowledge. In this paper, we identify the threats posed by the incremental release of network flows, we propose a novel defense algorithm, and we formally prove the achieved confidentiality guarantees. An extensive experimental evaluation of the algorithm for incremental obfuscation, carried out with billions of real Internet flows, shows that our obfuscation technique preserves the utility of flows for network traffic analysis.
Keywords: Internet; security of data; Internet; adversary knowledge; datasets; microdata anonymity; network flows incremental release; network traces; network traffic analysis; obfuscation technique; real network flows; research community; security attacks; sensitive data obfuscation; Data privacy; Encryption; IP networks; Knowledge engineering; Privacy; Uncertainty; Data sharing; network flow analysis; privacy; security (ID#: 15-8752)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6774971&isnumber=7086110
Panja, B.; Gennarelli, T.; Meharia, P., "Handling Cross Site Scripting Attacks using Cache Check to Reduce Webpage Rendering Time with Elimination of Sanitization and Filtering in Light Weight Mobile Web Browser," in Mobile and Secure Services (MOBISECSERV), 2015 First Conference on, pp. 1-7, 20-21 Feb. 2015. doi: 10.1109/MOBISECSERV.2015.7072878
Abstract: In this paper we propose a new approach to prevent and detect potential cross-site scripting attacks. Our method, called Buffer Based Cache Check, utilizes both the server side and the client side to detect and prevent XSS attacks, and requires modification of both in order to function correctly. With Cache Check, instead of the server supplying a complete whitelist of all the known trusted scripts to the mobile browser every time a page is requested, the server stores a cache that contains a validated "trusted" instance of the last time the page was rendered, which can be checked against the requested page for inconsistencies. We believe that with our proposed method, rendering times in mobile browsers will be significantly reduced, as part of the checking is done by the server, and less checking is done within the mobile browser, which is slower than the server. With our method the entire checking process is not placed on the mobile browser, and as a result the mobile browser should be able to render pages faster, as it only checks for "untrusted" content; with other approaches, every single line of code is checked by the mobile browser, which increases rendering times.
Keywords: cache storage; client-server systems; mobile computing; online front-ends; security of data; trusted computing; Web page rendering time; XSS attacks; buffer based cache check; client-side; cross-site scripting attacks; filtering; light weight mobile Web browser; sanitization; server-side; trusted instance; untrusted content; Browsers; Filtering; Mobile communication; Radio access networks; Rendering (computer graphics); Security; Servers; Cross site scripting; cache check; mobile browser; webpage rendering (ID#: 15-8753)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7072878&isnumber=7072857
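The consistency test at the heart of the Cache Check idea can be sketched as a digest comparison against the last trusted rendering. The class and method names are illustrative, not the authors' implementation:

```python
import hashlib

class CacheCheck:
    def __init__(self):
        self._trusted = {}   # url -> digest of the last trusted rendering

    def store_trusted(self, url: str, page: bytes) -> None:
        # Server side: record a digest of the validated page rendering.
        self._trusted[url] = hashlib.sha256(page).hexdigest()

    def is_consistent(self, url: str, page: bytes) -> bool:
        # A requested page is trusted only if it matches the cached digest;
        # any divergence (e.g., an injected script) is flagged for checking.
        return self._trusted.get(url) == hashlib.sha256(page).hexdigest()
```

Comparing one digest per page is far cheaper for the mobile browser than re-validating every script, which is the rendering-time saving the abstract claims.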
Reffett, C.; Fleck, D., "Securing Applications with Dyninst," in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, pp. 1-6, 14-16 April 2015. doi: 10.1109/THS.2015.7225297
Abstract: While significant bodies of work exist for sandboxing potentially malicious software and for sanitizing input, there has been little investigation into using binary editing software to perform either of these tasks. However, because binary editors do not require source code and can modify the software, they can generate secure versions of arbitrary binaries and provide better control over the software than existing approaches. In this paper, we explore the application of the binary editing library Dyninst to both the sandboxing and sanitization problems. We also create a prototype of a more advanced graphical tool to perform these tasks. Finally, we lay the groundwork for more complex and functional tools to solve these problems.
Keywords: program diagnostics; security of data; software libraries; Dyninst; arbitrary binaries; binary editing library; binary editing software; binary editors; graphical tool; input sanitization; malicious software; sandboxing; sanitization problems; secure versions; securing applications; Graphical user interfaces; Instruments; Libraries; Memory management; Monitoring; Runtime; Software; binary instrumentation; dyninst; input sanitization; sandboxing (ID#: 15-8754)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225297&isnumber=7190491
Data in Motion and Data at Rest 2015 |
Data protection has distinguished between data in motion and data at rest for more than a decade. Research into these areas continues with the proliferation of cloud and mobile technologies. For the Science of Security community, the topic relates to resilience and composability. The articles cited here, organized by data in motion and data at rest, were presented in 2015.
Sidorov, V.; Wee Keong Ng, "Transparent Data Encryption for Data-in-Use and Data-at-Rest in a Cloud-Based Database-as-a-Service Solution," in Services (SERVICES), 2015 IEEE World Congress on, pp. 221-228, June 27 2015-July 2 2015. doi: 10.1109/SERVICES.2015.40
Abstract: Despite the high and growing supply of Database-as-a-Service solutions from cloud platform vendors, many enterprises still show moderate to low demand for them. Even though migration to a DaaS solution might result in a significantly reduced bill for IT maintenance, data security and privacy issues are among the reasons for the low popularity of these services. Such a migration is also often only justified if it can be done seamlessly, with as few changes to the system as possible. Transparent Data Encryption could help, but TDE solutions shipped with major database systems are limited to securing only data-at-rest, and appear to be useless if the machine can be physically accessed by the adversary, which is a probable risk when hosting in the cloud. This paper proposes a different approach to TDE, which takes into account cloud-specific risks, extends encryption to cover data-in-use and partly data-in-motion, and is capable of executing large subsets of SQL including heavy relational operations, complex operations over attributes, and transactions.
Keywords: SQL; cloud computing; cryptography; data privacy; database management systems; DaaS solution; IT maintenance; SQL; TDE; attributes; cloud platform vendors; cloud-specific risks; complex operations; data security; data-at-rest; data-in-motion; data-in-use; database-as-a-service solution; privacy issues; relational operations; transactions; transparent data encryption; Data models; Databases; Encryption; Protocols; Transforms; data privacy; data security; query processing; relational databases (ID#: 15-8755)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7196528&isnumber=7196486
Althobaiti, A.; Calyam, P.; Akella, R.; Vallabhaneni, P., "Data Integrity Protection through Security Monitoring for Just-in-Time News Feeds," in Cloud Networking (CloudNet), 2015 IEEE 4th International Conference on, pp. 184-190, 5-7 Oct. 2015. doi: 10.1109/CloudNet.2015.7335303
Abstract: There has recently been a major shift in news related media consumption trends and readers are increasingly relying on just-in-time news feeds versus the traditional newspaper print medium. Cloud-networked infrastructures are being setup by the media companies to aggregate news feeds from affiliates, and to meet the elastic demands of Internet-scale users accessing news feeds. However, cyber attacks could compromise these just-in-time news feed services and hackers could particularly launch data integrity as well as denial-of-service attacks that: (a) tarnish the reputation of media companies and (b) impact the service availability for users. In this paper, we describe data integrity and availability checking techniques to protect just-in-time news feed services against cyber attacks in use cases such as: (a) “Data-in-Motion” - when obtaining just-in-time news feeds (e.g., RSS feeds) from affiliates and (b) “Data-at-Rest” - when compiled news feeds reside within cloud-networked infrastructure for real-time premium subscriber access. Using concepts of distributed trust and anomaly detection and a realistic testbed environment in the DeterLab infrastructure, we show the impact of the different cyber attacks and propose solutions to defend against them.
Keywords: cloud computing; data integrity; data protection; electronic publishing; trusted computing; DeterLab infrastructure; Internet-scale users; RSS feeds; anomaly detection; cloud-networked infrastructures; compiled news feeds; cyber attacks; data integrity protection; data-at-rest; data-in-motion; denial-of-service attacks; distributed trust; elastic demands; just-in-time news feed service protection; news feed aggregation; real-time premium subscriber access; security monitoring; service availability checking techniques; Cloud computing; Companies; Feeds; Media; Monitoring; Servers; Uniform resource locators (ID#: 15-8756)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7335303&isnumber=7335267
Zerfos, Petros; Yeo, Hangu; Paulovicks, Brent D.; Sheinin, Vadim, "SDFS: Secure Distributed File System for Data-at-Rest Security for Hadoop-as-a-service," in Big Data (Big Data), 2015 IEEE International Conference on, pp. 1262-1271, Oct. 29 2015-Nov. 1 2015. doi: 10.1109/BigData.2015.7363881
Abstract: Cloud service providers are offering the popular Hadoop analytics platform following an "as-a-service" model, i.e. clusters of machines in their cloud infrastructures pre-configured with Hadoop software. Such offerings lower the cost and complexity of deploying a comparable system on-premises, however security considerations and in particular data confidentiality hamper wider adoption of such services by enterprises that handle data of sensitive nature. In this paper, we describe our efforts in providing security for data-at-rest (i.e. data that is stored) when Hadoop is offered as a cloud service. We analyze the requirements and architecture for such service and further describe a new distributed file system that we developed for Hadoop called SDFS, towards supporting this premise. We analyze parameter tuning for SDFS and through experiments on a real test-bed we evaluate its performance. We further present simulation results that explore the parameter space and can guide tuning.
Keywords: Cloud computing; Encryption; File systems; Redundancy; Servers; Data-at-rest security; Shamir's secret sharing; hadoop-as-a-service; information dispersal; secure distributed file system (ID#: 15-8757)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363881&isnumber=7363706
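The keywords above point to Shamir's secret sharing as SDFS's information-dispersal primitive. As an illustration of that primitive only (not of SDFS itself), the sketch below splits a secret into n shares, any k of which reconstruct it; the prime modulus and parameter choices are arbitrary for the example:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; all arithmetic is modulo this

def make_shares(secret, k, n):
    """Split `secret` into n shares such that any k of them reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner's rule evaluation at x
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, PRIME-2, PRIME) is the modular inverse of den
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Fewer than k shares reveal nothing about the secret, which is what makes the primitive suitable for dispersing data-at-rest across storage nodes.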
Vivek, S.Sree; Ramasamy, Rajkumar, "Forward Secure On-device Encryption Scheme Withstanding Cold Boot Attack," in Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, pp. 488-493, 3-5 Nov. 2015. doi: 10.1109/CSCloud.2015.43
Abstract: Encryption of data residing on the permanent memory of a device, also known as On-Device Encryption (ODE), is a well-studied problem with many popular software tools available today. We consider an adversary who is capable of taking one RAM snapshot (e.g., a cold boot attack) when the device is in the locked state. Writing data securely when the device is locked can be handled in the presence of this strong adversary by employing public key encryption techniques. Whether data can be read securely from a locked device was, until now, unknown. In this paper, we state the impossibility of performing the read operation securely when the device is in the locked state. Moreover, we propose a new forward secure ODE scheme which supports secure writing in the locked state and is more efficient than the public key based solution. We propose a security model for forward secure ODE and prove the security of our scheme in that model.
Keywords: Encryption; Hardware; Performance evaluation; Public key; Random access memory; Data at Rest; Forward Secure Symmetric Key Encryption; On-Device Encryption (ODE); Provable Security; Random Oracle (ID#: 15-8758)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371527&isnumber=7371418
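The paper's concrete scheme is not reproduced here, but the core idea of forward security can be sketched with a generic hash-chain key ratchet: each epoch's key is derived one-way from the previous one, so erasing old keys protects old records even if the current key leaks. The toy SHA-256 counter-mode cipher below is illustrative only, not the authors' construction:

```python
import hashlib

def ratchet(key: bytes) -> bytes:
    """Advance the key one epoch; SHA-256 is one-way, so the old key
    cannot be recovered from the new one once it is erased."""
    return hashlib.sha256(b"ratchet" + key).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode, XORed with the data.
    Encryption and decryption are the same operation."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)
```

A record written under epoch key k0 stays readable with k0, while an adversary who snapshots RAM after the device has ratcheted to k1 (and erased k0) learns nothing about it.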
Hein, D.; Winter, J.; Fitzek, A., "Secure Block Device -- Secure, Flexible, and Efficient Data Storage for ARM TrustZone Systems," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 222-229, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.378
Abstract: Recent years have seen a flurry of activity in the area of efficient and secure file systems for cloud storage, and also in the area of memory protection for secure processors. Both scenarios use cryptographic methods for data protection. Here, we consider the middle ground: the problem of using cryptographic methods to protect data integrity and confidentiality on a system with two strongly isolated execution environments, specifically an ARM TrustZone system with a Trusted Execution Environment. We introduce the Secure Block Device, a secure, easy to use, flexible, efficient, and widely applicable minimal Trusted Computing Base solution to provide data confidentiality and integrity for Data at Rest. The Secure Block Device is an open source C software library that uses a Merkle-Tree in conjunction with a selectable Authenticated Encryption scheme to provide an easy to integrate solution for applications that require fast and secure random access to data, but not a fully fledged file system. It was designed for Trusted Applications running in a Trusted Execution Environment that already have secure storage for cryptographic keys, but no secure general purpose data store. Beyond that, the Secure Block Device is applicable in all similar scenarios. We evaluate the Secure Block Device by using it as the core component in a secure storage Trusted Application that uses the ARM TrustZone Trusted Execution Environment to provide a confidential and integrity protected block device to the normal world OS.
Keywords: cloud computing; data integrity; data protection; private key cryptography; public key cryptography; storage management; trusted computing; ARM TrustZone systems; ARM TrustZone trusted execution environment; Data at Rest integrity; Merkle-Tree; authenticated encryption scheme; cloud storage; confidential block device; cryptographic keys; cryptographic methods; data confidentiality; data integrity protection; integrity protected block device; memory protection; open source C software library; secure block device; secure file systems; secure storage trusted application; secure-flexible-efficient-data storage; trusted computing base solution; Cryptography; Hardware; Kernel; Memory; Program processors; Secure storage; ARM TrustZone; Authenicated Encryption; Merkle-Tree; Secure storage; Trusted Applications (ID#: 15-8759)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345286&isnumber=7345233
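The Secure Block Device combines a Merkle tree with authenticated encryption; the Merkle-tree half of that design can be sketched as follows (a minimal root computation, not the library's actual API):

```python
import hashlib

def merkle_root(blocks):
    """Compute the Merkle root over a list of data blocks.
    Any change to any block changes the root, so storing the root in
    trusted memory lets a reader detect tampering with data-at-rest."""
    level = [hashlib.sha256(b).digest() for b in blocks]
    if not level:
        return hashlib.sha256(b"").digest()
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

In a real block device, only the O(log n) hashes along one path are recomputed per block update rather than the whole tree.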
Rettig, Laura; Khayati, Mourad; Cudre-Mauroux, Philippe; Piorkowski, Michal, "Online Anomaly Detection over Big Data streams," in Big Data (Big Data), 2015 IEEE International Conference on, pp. 1113-1122, Oct. 29 2015-Nov. 1 2015. doi: 10.1109/BigData.2015.7363865
Abstract: Data quality is a challenging problem in many real world application domains. While a lot of attention has been given to detect anomalies for data at rest, detecting anomalies for streaming applications still largely remains an open problem. For applications involving several data streams, the challenge of detecting anomalies has become harder over time, as data can dynamically evolve in subtle ways following changes in the underlying infrastructure. In this paper, we describe and empirically evaluate an online anomaly detection pipeline that satisfies two key conditions: generality and scalability. Our technique works on numerical data as well as on categorical data and makes no assumption on the underlying data distributions. We implement two metrics, relative entropy and Pearson correlation, to dynamically detect anomalies. The two metrics we use provide an efficient and effective detection of anomalies over high velocity streams of events. In the following, we describe the design and implementation of our approach in a Big Data scenario using state-of-the-art streaming components. Specifically, we build on Kafka queues and Spark Streaming for realizing our approach while satisfying the generality and scalability requirements given above. We show how a combination of the two metrics we put forward can be applied to detect several types of anomalies, such as infrastructure failures, hardware misconfiguration or user-driven anomalies, in large-scale telecommunication networks. We also discuss the merits and limitations of the resulting architecture and empirically evaluate its scalability on a real deployment over live streams capturing events from millions of mobile devices.
Keywords: Big data; Correlation; Data structures; Entropy; Measurement; Sparks; Yttrium (ID#: 15-8761)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363865&isnumber=7363706
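The two metrics the pipeline uses, relative entropy and Pearson correlation, can be computed in standalone form; the sketch below (window handling and thresholds are simplifications, not the authors' Spark implementation) shows how each flags a deviation from a baseline:

```python
import math
from collections import Counter

def relative_entropy(window, baseline):
    """KL divergence of the window's categorical distribution vs. a baseline;
    large values indicate the event mix has drifted (a candidate anomaly)."""
    wc, bc = Counter(window), Counter(baseline)
    wn, bn = len(window), len(baseline)
    d = 0.0
    for sym, cnt in wc.items():
        p = cnt / wn
        q = bc.get(sym, 0) / bn
        if q == 0:
            return float("inf")   # symbol never seen in baseline: flag it
        d += p * math.log(p / q)
    return d

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric series; a sudden
    drop between normally correlated streams suggests an anomaly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Sliding both metrics over fixed-size windows of the stream, as the paper does at scale, turns them into online detectors.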
Baughman, A.K.; Bogdany, R.J.; McAvoy, C.; Locke, R.; O'Connell, B.; Upton, C., "Predictive Cloud Computing with Big Data: Professional Golf and Tennis Forecasting [Application Notes]," in Computational Intelligence Magazine, IEEE, vol. 10, no. 3, pp. 62-76, Aug. 2015. doi: 10.1109/MCI.2015.2437551
Abstract: Major Golf and Grand Slam Tennis tournaments such as Australian Open, The Masters, Roland Garros, United States Golf Association (USGA), Wimbledon, and United States Tennis Association (USTA) United States (US) Open provide real-time and historical sporting information to immerse a global fan base in the action. Each tournament provides real-time content, including streaming video, game statistics, scores, images, schedule of play, and text. Due to the game popularities, some of the web servers are heavily visited and some are not; therefore, we need a method to autonomously provision servers to provide a smooth user experience. Predictive Cloud Computing (PCC) has been developed to provide a smart allocation/deallocation of servers by combining ensembles of forecasts and predictive modeling to determine the future origin demand for web site content. PCC distributes processing through analytical pipelines that correlate streaming data, such as scores, media schedules, and player brackets with a future-simulated tournament state to measure predicted demand spikes for content. Social data streamed from Twitter provides social sentiment and popularity features used within predictive modeling. Data at rest, such as machine logs and web content, provide additional features for forecasting. While the duration of each tournament varies, the number of origin website requests range from 29,000 to 110,000 hits per minute. The PCC technology was developed and deployed to all Grand Slam tennis events and several major golf tournaments that took place in 2013 and to the present, which has decreased wasted computing consumption by over 50%. We propose a novel forecasting ensemble that includes residual, vector, historical, partial, adjusted, cubic and quadratic forecasters. In addition, we present several predictive models based on Multiple Regression as inputs into several of these forecasters. We conclude by empirically demonstrating that the predictive cloud technology is able to forecast the computing load on origin web servers for professional golf and tennis tournaments.
Keywords: Big Data; Internet; cloud computing; file servers; regression analysis; social networking (online); sport; Big Data; Grand Slam Tennis tournaments; PCC technology; Twitter; Web servers; Web site content; forecasting ensemble; major golf; multiple regression; predictive cloud computing; predictive modeling; professional golf; smart allocation-deallocation; social data; streaming data; tennis forecasting; Cloud computing; Entertainment; Forecasting; Games; Predictive models; Real-time systems; Servers (ID#: 15-8762)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160840&isnumber=7160805
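The paper's specific residual, vector, historical, partial, adjusted, cubic, and quadratic forecasters are not detailed in the abstract; the general ensemble idea can be sketched with three naive forecasters whose predictions are averaged (an assumption-laden simplification of PCC, for illustration only):

```python
def historical(series):
    """Mean of all observed demand."""
    return sum(series) / len(series)

def persistence(series):
    """Last observed value carries forward."""
    return series[-1]

def trend(series):
    """Linear extrapolation from the last two points."""
    return series[-1] + (series[-1] - series[-2])

def ensemble_forecast(series, forecasters=(historical, persistence, trend)):
    """Combine the individual forecasts with equal weights; PCC-style systems
    would instead weight members by their recent accuracy."""
    return sum(f(series) for f in forecasters) / len(forecasters)
```

Given hits-per-minute [10, 20, 30], the members predict 20, 30, and 40, so the ensemble forecasts 30 for the next interval, and servers are provisioned against that figure.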
Jiangping Li; Hongbin Ma; Chenguang Yang; Mengyin Fu, "Discrete-Time Adaptive Control of Robot Manipulator with Payload Uncertainties," in Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2015 IEEE International Conference on, pp. 1971-1976, 8-12 June 2015. doi: 10.1109/CYBER.2015.7288249
Abstract: In this paper, a new discrete-time adaptive control scheme for controlling robot manipulators is proposed. The objective is to control position of a robot manipulator end effector in the presence of uncertainties caused by unknown fixed or time-varying payload. For simplicity, the unknown payload is considered as the only unknown factor and the data in use is sampled from the true continuous-time plant with constant fixed sampling interval. We estimate the payload according to the available history information and design a discrete-time adaptive controller based on the estimation of the external payload. The adaptive estimator adopted in the adaptive controller only uses one step history and is capable of fast adaptation. The simulation results demonstrated that the new controller can yield a satisfactory tracking performance in the presence of payload uncertainties.
Keywords: adaptive control; continuous time systems; control system synthesis; discrete time systems; end effectors; position control; uncertain systems; adaptive estimator; constant fixed sampling interval; continuous-time plant; discrete-time adaptive controller design; external payload estimation; payload uncertainties; position control; robot manipulator end effector; step history; time-varying payload; tracking performance; unknown fixed payload; End effectors; Estimation; Mathematical model; Payloads; Uncertainty; Discrete-time Adaptive Control; One-step Guess; Payload Estimation; Payload Uncertainty; Robot Manipulator (ID#: 15-8763)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7288249&isnumber=7287893
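The "one-step guess" payload estimation named in the keywords can be illustrated on a scalar toy plant (not the paper's manipulator dynamics): the unknown payload term is estimated from the previous sampling step and cancelled in the next control input:

```python
def simulate(a=0.9, b=0.5, d=2.0, x_ref=1.0, steps=20):
    """Scalar discrete-time plant x[k+1] = a*x[k] + b*u[k] + d with
    unknown constant payload d. The controller forms a one-step estimate
    d_hat from the previous step's data and compensates for it."""
    x, x_prev, u_prev = 0.0, 0.0, 0.0
    for k in range(steps):
        # One-step guess: invert the model over the last transition.
        d_hat = x - a * x_prev - b * u_prev if k > 0 else 0.0
        u = (x_ref - a * x - d_hat) / b   # control with payload compensation
        x_prev, u_prev = x, u
        x = a * x + b * u + d             # true plant evolves with real d
    return x
```

Because d is constant, the estimate becomes exact after one transition and the state settles on the reference; a time-varying payload would lag the estimator by one step, which is the fast-adaptation trade-off the abstract describes.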
“IEEE Standard Cybersecurity Requirements for Substation Automation, Protection, and Control Systems," in IEEE Std C37.240-2014, pp. 1-38, Jan. 30 2015. doi: 10.1109/IEEESTD.2015.7024885
Abstract: Cybersecurity measures require that a balance be achieved between technical feasibility and economic feasibility and that this balance addresses the risks expected to be present at a substation. Further, cybersecurity measures must be designed and implemented in such a manner that access and operation to legitimate activities is not impeded, particularly during times of emergency or restoration activity. This standard presents a balance of the above factors.
Keywords: IEEE standards; power engineering computing; security of data; substation automation; substation protection; IEEE Std C37.240-2014; IEEE standard cybersecurity requirements; emergency; restoration activity; substation automation; substation control systems; substation protection; Access controls; Authentication; Computer crime; Computer security; Encryption; IEEE Standards; Passwords; Remote access; IEEE C37.240; critical infrastructure protection; cybersecurity; electronic access; encryption; password management; remote access; substations (ID#: 15-8764)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7024885&isnumber=7024884
Elliptic Curve Cryptography 2015 |
Elliptic curve cryptography is a major research area globally. The work cited here focuses on areas of specific interest to the Science of Security community, including cyber physical systems security. The work was presented in 2015.
Borges, F.; Volk, F.; Muhlhauser, M., "Efficient, Verifiable, Secure, and Privacy-Friendly Computations for the Smart Grid," in Innovative Smart Grid Technologies Conference (ISGT), 2015 IEEE Power & Energy Society, pp. 1-5, 18-20 Feb. 2015. doi: 10.1109/ISGT.2015.7131862
Abstract: In this paper, we present a privacy-preserving protocol between an energy provider and smart meters. Many details about the life of customers can be inferred from fine-grained information on their energy consumption. Different from other state-of-the-art protocols, the presented protocol addresses this issue as well as the integrity of electricity bills. Therefore, our protocol provides secure aggregation of measured consumption per round of measurement and verifiable billing after any period. Aggregation of measured consumption ensures that energy suppliers know the consolidated consumption of their customers. Verifiable billing ensures fairness for customers and their energy supplier. We adapt a homomorphic encryption scheme based on elliptic curve cryptography to efficiently protect the data series of measurements that are collected by smart meters. Moreover, energy suppliers can detect and locate energy loss or fraud in the power grid while retaining the privacy of all consumers.
Keywords: energy consumption; public key cryptography; smart meters; smart power grids; elliptic curve cryptography; energy consumption; homomorphic encryption scheme; privacy-friendly computations; privacy-preserving protocol; smart grid; smart meters; verifiable billing; Elliptic curve cryptography; Energy measurement; Phasor measurement units; Protocols; Smart grids; Smart meters; Data Series; Elliptic Curve Cryptography; Homomorphic Encryption; Performance; Privacy; Security; Smart Grid (ID#: 15-8719)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7131862&isnumber=7131775
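The protocol's secure aggregation rests on an additively homomorphic encryption over elliptic curves. Its homomorphic core, encoding a measurement m as the point m·G so that encodings add when measurements add, can be sketched on a toy curve; every parameter below is illustrative, not the paper's:

```python
# Toy curve y^2 = x^3 + 2x + 3 over F_97 -- parameters chosen only so the
# example is self-contained; real deployments use standardized curves.
P, A, B = 97, 2, 3
INF = None  # point at infinity (group identity)

def add(p1, p2):
    """Elliptic curve point addition (affine coordinates)."""
    if p1 is INF: return p2
    if p2 is INF: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return INF
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, pt):
    """Double-and-add scalar multiplication: k*pt."""
    acc = INF
    while k:
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        k >>= 1
    return acc

def find_point():
    """Find some point on the curve by brute force (fine at this size)."""
    for x in range(P):
        rhs = (x * x * x + A * x + B) % P
        for y in range(P):
            if y * y % P == rhs:
                return (x, y)
```

The aggregation property is that mul(m1, G) + mul(m2, G) equals mul(m1 + m2, G): an aggregator can sum encoded meter readings without decoding any individual one, which is the privacy mechanism the abstract describes.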
Vijayalakshmi, V.; Sharmila, R.; Shalini, R., "Hierarchical Key Management Scheme using Hyper Elliptic Curve Cryptography in Wireless Sensor Networks," in Signal Processing, Communication and Networking (ICSCN), 2015 3rd International Conference on, pp. 1-5, 26-28 March 2015. doi: 10.1109/ICSCN.2015.7219840
Abstract: A Wireless Sensor Network (WSN) is a large-scale network of thousands of tiny sensors and is of utmost importance, as it is used in real-time applications. Currently, WSNs are required for up-to-the-minute applications, including the Internet of Things (IoT), smart cards, the smart grid, smart phones, and smart cities. However, the greatest issue in a sensor network is secure communication, for which key management is the primary objective. Existing key management techniques have many limitations, such as prior deployment knowledge, limited transmission range, insecure communication, and node capture by the adversary. The proposed novel Track-Sector Clustering (TSC) and Hyper Elliptic Curve Cryptography (HECC) provide better transmission range and secure communication. In TSC, the overall network is separated into circular tracks and triangular sectors. A Power Aware Routing Protocol (PARP) is used for routing data in TSC, which reduces delay with an increased packet delivery ratio. Further, for secure routing, HECC is implemented with an 80-bit key size, which reduces memory space and computational overhead compared to the existing Elliptic Curve Cryptography (ECC) key management scheme.
Keywords: pattern clustering; public key cryptography; routing protocols; telecommunication power management; telecommunication security; wireless sensor networks; ECC; IOT; Internet of Things; PARP; TSC; WSN; computational overhead reduction; data routing; hierarchical key management scheme; hyper elliptic curve cryptography; memory space reduction; packet delivery ratio; power aware routing protocol; secure communication; smart card; smart city; smart grid; smart phone; track-sector clustering; up-to-the-minute application; wireless sensor network; Convergence; Delays; Elliptic curve cryptography; Real-time systems; Throughput; Wireless sensor networks; Hyper Elliptic Curve Cryptography; Key Management Scheme; Power Aware Routing; Track-Sector Clustering; Wireless Sensor network (ID#: 15-8720)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219840&isnumber=7219823
Raso, O.; Mlynek, P.; Fujdiak, R.; Pospichal, L.; Kubicek, P., "Implementation of Elliptic Curve Diffie Hellman in Ultra-Low Power Microcontroller," in Telecommunications and Signal Processing (TSP), 2015 38th International Conference on, pp. 662-666, 9-11 July 2015. doi: 10.1109/TSP.2015.7296346
Abstract: In this article, the ECDH crypto library is introduced. This library is designed for the ultra-low-power MSP430 microcontroller and allows time- and memory-consuming cryptographic operations to be implemented on this resource-limited microcontroller. The main part of the article focuses on how ECDH is implemented on the MSP430 microcontroller, and several implementation problems are discussed. The practical part of the article focuses on measuring computing times and memory size requirements. Our ECDH crypto library allows public key cryptography to be used for key establishment on a microcontroller with limited resources, without adding any specialized equipment.
Keywords: low-power electronics; microcontrollers; public key cryptography; ECDH crypto library; Elliptic Curve Diffie Hellman; computing times measurement; key establishment; memory consuming cryptographic operations; memory size requirements; public key cryptography; time consuming cryptographic operations; ultra-low power MSP430 microcontroller; Elliptic curve cryptography; Libraries; Memory management; Microcontrollers; Size measurement; Diffie Hellman; Elliptic Curve Cryptography; Public Key Cryptography; Smart Grid (ID#: 15-8721)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7296346&isnumber=7296206
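The key-agreement flow that such a library performs on the MSP430 can be sketched on a toy curve (parameters far too small for real security, chosen only to keep the example self-contained; this illustrates the ECDH protocol, not the library's implementation):

```python
# Toy curve y^2 = x^3 + 2x + 3 over F_97 with base point G = (0, 10).
P, A = 97, 2
G = (0, 10)
INF = None  # point at infinity

def ec_add(p1, p2):
    """Elliptic curve point addition (affine coordinates)."""
    if p1 is INF: return p2
    if p2 is INF: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return INF
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication: k*pt."""
    acc = INF
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

def ecdh_shared_secrets(priv_a, priv_b):
    """Each party publishes only its public point; both then derive the
    same shared point priv_a*priv_b*G without revealing their scalars."""
    pub_a, pub_b = ec_mul(priv_a, G), ec_mul(priv_b, G)
    return ec_mul(priv_a, pub_b), ec_mul(priv_b, pub_a)
```

On a constrained device the cost is dominated by the two scalar multiplications, which is exactly what the article's timing measurements quantify.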
Soykan, Elif Ustundag; Demirag Ersoz, Seda; Soykan, Gurkan, "Identity Based Signcryption for Advanced Metering Infrastructure," in Smart Grid Congress and Fair (ICSG), 2015 3rd International Istanbul, pp. 1-5, 29-30 April 2015. doi: 10.1109/SGCF.2015.7354933
Abstract: In smart grid, the Advanced Metering Infrastructure (AMI) system provides measuring, storing, analyzing, and utilizing energy consumption data. It enables a link between customers and electric power utilities. The AMI is also responsible for transmitting requests, commands, pricing information and software updates from the authorized parties to the smart meters. As AMI security threats from inside and outside grow exponentially, confidentiality, authentication, integrity and non-repudiation security services should be deployed to overcome possible threats. In this paper we give an overview of the main components and security requirements of the AMI and present possible security solutions. Then we propose an identity based security architecture, namely a signcryption scheme for smart metering infrastructure, to provide the necessary security services by taking advantage of identity based cryptography, ensuring efficiency in addition to security by eliminating the cost of generating and managing certificates.
Keywords: Authentication; Elliptic curves; Public key; Smart grids; Smart meters; Advanced Metering Infrastructure; Identity Based Cryptography; Security; Signcryption; Smart Grid (ID#: 15-8722)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7354933&isnumber=7354913
Vaidya, Binod; Makrakis, Dimitrios; Mouftah, Hussein T., "Multi-domain Public Key Infrastructure for Vehicle-to-Grid Network," in Military Communications Conference, MILCOM 2015 - 2015 IEEE, pp. 1572-1577, 26-28 Oct. 2015. doi: 10.1109/MILCOM.2015.7357669
Abstract: Smart grid is a modern electrical grid that utilizes Information and Communication Technologies (ICT) and information networks. The growing attraction of Electric Vehicles (EVs) will likely bring a fundamental shift not only in the transportation sector but also in the existing electrical grid infrastructure. In a Vehicle-to-Grid (V2G) network, participating EVs can be used to store energy and supply this energy back to the power grid when required. To realize proper deployment of a V2G network, a charging infrastructure comprising various entities such as the charging facility, clearinghouse, and energy provider has to be constructed. The use of Public Key Infrastructure (PKI) is thus indispensable for provisioning security solutions in V2G networks. The ISO/IEC 15118 standard incorporates an X.509 PKI solution for the V2G network. However, as a traditional X.509-based PKI for a V2G network has several shortcomings, we have proposed a multi-domain PKI model for the V2G network that is built on elliptic curve cryptography and a self-certified public key technique having an implicit certificate. We illustrate that the proposed solutions outperform the existing ones.
Keywords: Electric vehicles; Elliptic curve cryptography; IEC Standards; ISO Standards; Smart grids; ISO/IEC 15118; PKI; Smart Grid; Vehicle-to-Grid network (ID#: 15-8723)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357669&isnumber=7357245
Jacobsen, R.H.; Mikkelsen, S.A.; Rasmussen, N.H., "Towards the Use of Pairing-Based Cryptography for Resource-Constrained Home Area Networks," in Digital System Design (DSD), 2015 Euromicro Conference on, pp. 233-240, 26-28 Aug. 2015. doi: 10.1109/DSD.2015.73
Abstract: In the prevailing smart grid, the Home Area Network (HAN) will become a critical infrastructure component at the consumer premises. The HAN provides the electricity infrastructure with a bi-directional communication infrastructure that allows monitoring and control of electrical appliances. HANs are typically equipped with wireless sensors and actuators, built from resource-constrained hardware devices, that communicate by using open standard protocols. This raises concerns on the security of these networked systems. Because of this, securing a HAN to a proper degree becomes an increasingly important task. In this paper, a security model, where an adversary may exploit the system both during HAN setup as well as during operations of the network, is considered. We propose a scheme for secure bootstrapping of wireless HAN devices based on Identity-Based Cryptography (IBC). The scheme minimizes the number of exchanged messages needed to establish a session key between HAN devices. The feasibility of the approach is demonstrated from a series of prototype experiments.
Keywords: computer network security; cryptography; domestic appliances; home automation; home networks; personal area networks; protocols; smart power grids; IBC; actuators; bidirectional communication infrastructure; critical infrastructure component; electrical appliance control; electrical appliance monitoring; electricity infrastructure; identity-based cryptography; message exchange; network operations; networked system security; open standard protocols; pairing-based cryptography; resource-constrained hardware devices; resource-constrained home area networks; secure bootstrapping; security model; session key; smart grid; wireless HAN devices; wireless personal area network; wireless sensors; Authentication; Elliptic curve cryptography; Logic gates; Prototypes; constrained devices; home area network; identity-based cryptography; network bootstrap; pairing-based cryptography; security (ID#: 15-8724)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7302275&isnumber=7302233
Selma, H.A.; M'hamed, H., "Elliptic Curve Cryptographic Processor Design using FPGAs," in Control, Engineering & Information Technology (CEIT), 2015 3rd International Conference on, pp. 1-6, 25-27 May 2015. doi: 10.1109/CEIT.2015.7233123
Abstract: Elliptic Curve Cryptography (ECC) has been the focus of much recent attention since it offers the highest security per bit of any known public key cryptosystem. This benefit of smaller key sizes makes ECC particularly attractive for constrained devices, since its implementation requires less memory and processing power. The present work gives a description of a hardware implementation of an F2m elliptic curve cryptographic processor using field programmable gate array circuit technology. We provide simulation and implementation results related to ECC processor using the National Institute of Standards and Technology (NIST) recommended curve.
Keywords: field programmable gate arrays; microprocessor chips; public key cryptography; F2m elliptic curve cryptographic processor; FPGA; NIST; National Institute of Standards and Technology; circuit technology; field programmable gate array; hardware implementation; public key cryptosystem; Elliptic curve cryptography; Elliptic curves; Galois fields; Hardware; Polynomials; Protocols; ECC; ECC processor; FPGA; NIST; binary finite fields F2m; public key cryptosystem (ID#: 15-8725)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7233123&isnumber=7232976
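Arithmetic in the binary field F_2^m underlying such an F2m processor reduces to carry-less polynomial multiplication followed by reduction modulo an irreducible polynomial. The sketch below uses the small F_2^8 field of AES (x^8 + x^4 + x^3 + x + 1) for brevity; the NIST-recommended binary curve fields (e.g., m = 163) differ only in size, and the hardware version replaces these loops with shift/XOR logic:

```python
def gf2_mul(a, b, mod_poly=0x11B, m=8):
    """Multiply two elements of F_{2^m}, with integer bits standing for
    polynomial coefficients. Default field: F_{2^8} as used in AES."""
    # Carry-less multiplication: shift-and-XOR instead of shift-and-add.
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    # Reduce modulo the irreducible polynomial, high bits first.
    for bit in range(result.bit_length() - 1, m - 1, -1):
        if result >> bit & 1:
            result ^= mod_poly << (bit - m)
    return result
```

A well-known check in this field is that 0x53 and 0xCA are multiplicative inverses, so their product reduces to 1.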
Rathi, A.; Rathi, D.; Astya, R.; Nand, P., "Improvement of Existing Security System by Using Elliptic Curve and Biometric Cryptography," in Computing, Communication & Automation (ICCCA), 2015 International Conference on, pp. 994-998, 15-16 May 2015. doi: 10.1109/CCAA.2015.7148520
Abstract: Biometric systems are systems which have an automated method to measure or analyze biological data, extracting features from the acquired data and comparing it against the templates set in the database. Many authentication schemes involving different biometric systems initialize several identification- and verification-based security methods, and this paper discusses a system which provides secure verification to incorporate the method with elliptical curve cryptography that works on following two points, viz. preventing an elliptic curve and a key using elliptical curve cryptography and the blending of biometric modality. The paper also discusses different approaches to multi-modal biometric systems, the levels of fusion that are plausible and the integration of strategies that can be adopted to consolidate information. The unimodal biometric system faces many difficulties, like spoofing, attacks, noisy data, etc., but the combination of two or more biometric modalities recognizes anything in a single identification.
Keywords: message authentication; public key cryptography; authentication schemes; biometric cryptography; biometric modality; elliptic curve cryptography; multi model biometric system; security system; verification based security methods; Authentication; Biometrics (access control); Databases; Elliptic curve cryptography; Elliptic curves; Feature extraction; Templates; elliptical curve cryptography; genetic algorithm; identification; one time password; verification (ID#: 15-8726)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148520&isnumber=7148334
Benssalah, M.; Djeddou, M.; Drouiche, K., "Pseudo-Random Sequence Generator Based on Random Selection of an Elliptic Curve," in Computer, Information and Telecommunication Systems (CITS), 2015 International Conference on, pp. 1-5, 15-17 July 2015. doi: 10.1109/CITS.2015.7297719
Abstract: Pseudo-random number generators (PRNGs) are one of the main security tools in Radio Frequency IDentification (RFID) technology. A weak internal embedded generator can directly render the entire application insecure, making it pointless to employ robust protocols for security. In this paper, we propose a new PRNG constructed by randomly selecting points from two elliptic curves, suitable for ECC-based applications. The main contribution of this work is increasing the generator's internal state space by extending the set of its output realizations to two randomly selected curves. The main advantages of this PRNG over previous works are its large periodicity, a better distribution of the generated sequences, and a high security level based on the elliptic curve discrete logarithm problem (ECDLP). Further, the proposed PRNG has passed the NIST Special Publication 800-22 statistical test suite. Moreover, the proposed PRNG presents an architecture that is scalable in terms of security level and periodicity, at the expense of increased computational complexity. Thus, it can be adapted for ECC-based cryptosystems such as RFID tags and sensor networks, and for other applications like computer physics simulations and control coding.
Keywords: computational complexity; cryptographic protocols; public key cryptography; radiofrequency identification; random number generation; statistical analysis; ECC based cryptosystem; ECDLP; PRNG; RFID technology; computation complexity; elliptic curve discrete logarithm problem; embedded generator; pseudo-random sequence generator; radio frequency identification technology; random selection; robust protocols; security tools; sensors networks; special publication 800-22 NIST statistical test; Complexity theory; Elliptic curve cryptography; Elliptic curves; Generators; Space exploration; Cryptosystem; ECC; PRNG; RFID (ID#: 15-8727)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7297719&isnumber=7297712
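The abstract does not give the construction's details, but the general shape of an elliptic-curve-point PRNG can be sketched. The snippet below is a much-simplified, hypothetical illustration: instead of two randomly selected curves, a state bit chooses between two base points `P1` and `P2` on a single well-known curve (secp256k1). The point choices, the 32-bit output truncation, and the state-update rule are all assumptions for illustration, not the paper's design.

```python
# secp256k1 field prime and base point (a stand-in curve; the paper's
# two randomly selected curves are not specified in the abstract)
P_MOD = 2**256 - 2**32 - 977
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(p1, p2):
    """Affine point addition on y^2 = x^3 + 7 over GF(P_MOD); None = infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                   # P + (-P) = infinity
    if p1 == p2:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P_MOD) % P_MOD   # tangent (a = 0)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD    # chord
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt, k = ec_add(pt, pt), k >> 1
    return acc

# Two fixed public points; a state bit picks which one to use each round.
P1 = G
P2 = ec_mul(3, G)        # hypothetical second point

def ec_prng(seed, nwords):
    """State s -> x(s * P_i); emit the low 32 bits of the y-coordinate.
    seed must be a positive integer."""
    s, out = seed, []
    for _ in range(nwords):
        x, y = ec_mul(s, P1 if s & 1 else P2)
        s = x
        out.append(y & 0xFFFFFFFF)
    return out
```

Note that Dual_EC-style designs of this shape have well-known pitfalls when the points are not generated verifiably at random, which is part of why the paper's random curve selection and NIST SP 800-22 testing matter.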
Mathe, S.E.; Boppana, L.; Kodali, R.K., "Implementation of Elliptic Curve Digital Signature Algorithm on an IRIS Mote using SHA-512," in Industrial Instrumentation and Control (ICIC), 2015 International Conference on, pp. 445-449, 28-30 May 2015. doi: 10.1109/IIC.2015.7150783
Abstract: Wireless Sensor Networks (WSNs) are spatially distributed nodes monitoring physical or environmental conditions such as temperature, pressure, sound, and light using sensors. The sensed data is cooperatively passed through a series of nodes in the network to a main base station (BS), where it is analysed by the user. The data is communicated over a wireless channel between the nodes, and since the wireless channel offers minimal security, the data has to be communicated in a secure manner. Different encryption techniques can be applied to transmit the data securely. This work provides an efficient implementation of the Elliptic Curve Digital Signature Algorithm (ECDSA) using the SHA-512 algorithm on an IRIS mote. The ECDSA does not actually encrypt the data but provides a means to check the integrity of the received data. If the received data has been modified by an attacker, the ECDSA detects it and signals the transmitter for retransmission. The SHA-512 algorithm is the hash algorithm used in the ECDSA and is implemented for an 8-bit architecture. SHA-512 is chosen because it provides better security than its predecessors.
Keywords: digital signatures; public key cryptography; radio transmitters; telecommunication security; wireless channels; wireless sensor networks; IRIS mote; SHA-512 algorithm; WSN; elliptic curve digital signature algorithm; encryption techniques; main base station; minimum security; received data; retransmission transmitter; wireless channel; wireless sensor networks; word length 8 bit; Algorithm design and analysis; Elliptic curve cryptography; Elliptic curves; Wireless sensor networks; ECDSA; IRIS mote; SHA-512; WSN (ID#: 15-8728)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7150783&isnumber=7150576
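As background for the entry above, here is a pure-Python sketch of textbook ECDSA with SHA-512 hashing. The curve (secp256k1) is a stand-in, since the abstract does not name the curve used on the IRIS mote, and a real deployment should use a vetted cryptographic library rather than this illustration.

```python
import hashlib, secrets

# secp256k1 domain parameters (an illustrative choice, not the paper's)
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(p1, p2):
    """Affine point addition on y^2 = x^3 + 7 (None = point at infinity)."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P   # tangent slope (a = 0)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P    # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt, k = ec_add(pt, pt), k >> 1
    return acc

def hash_z(msg):
    """Leftmost 256 bits of SHA-512(msg), per the FIPS 186 truncation rule."""
    return int.from_bytes(hashlib.sha512(msg).digest(), "big") >> 256

def sign(d, msg):
    z = hash_z(msg)
    while True:
        k = secrets.randbelow(N - 1) + 1          # fresh nonce per signature
        r = ec_mul(k, G)[0] % N
        if r == 0:
            continue
        s = pow(k, -1, N) * (z + r * d) % N
        if s:
            return (r, s)

def verify(Q, msg, sig):
    r, s = sig
    if not (0 < r < N and 0 < s < N):
        return False
    z, w = hash_z(msg), pow(s, -1, N)
    pt = ec_add(ec_mul(z * w % N, G), ec_mul(r * w % N, Q))
    return pt is not None and pt[0] % N == r
```

As the abstract notes, the signature does not encrypt the reading; a tampered message simply fails `verify`, which is the cue for retransmission.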
Kasra-Kermanshahi, S.; Salleh, M., "A novel Authentication Scheme for Mobile Environments in the Context of Elliptic Curve Cryptography," in Computer, Communications, and Control Technology (I4CT), 2015 International Conference on, pp. 506-510, 21-23 April 2015. doi: 10.1109/I4CT.2015.7219630
Abstract: The challenge of providing security for Mobile Ad-hoc Networks (MANETs), given the inherent problems of mobile devices and the absence of fixed infrastructure, has made them one of the significant topics in the security and cryptography research area. To this end, several works have proposed lightweight, less energy-consuming protocols. However, the use of an expensive cryptographic operation named Bilinear Pairing makes the mentioned schemes heavy for such resource-constrained environments. In this paper, we propose an efficient public key authentication scheme over an elliptic curve based algebraic group rather than Bilinear Pairings. The results show that our proposed scheme requires less complex operations in comparison with other related ones.
Keywords: cryptographic protocols; mobile ad hoc networks; public key cryptography; telecommunication security; MANET security; bilinear pairing; elliptic curve based algebraic group; elliptic curve cryptography; energy consuming protocols; mobile ad-hoc networks; mobile devices; mobile environments; public key authentication scheme; resource constrained environments; Ad hoc networks; Mobile computing; Protocols; Public key cryptography; Authentication; Certificateless; Elliptic Curves; Lightweight; MANETs (ID#: 15-8729)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219630&isnumber=7219513
Bobade, S.D.; Mankar, V.R., "VLSI Architecture for an Area Efficient Elliptic Curve Cryptographic Processor for Embedded Systems," in Industrial Instrumentation and Control (ICIC), 2015 International Conference on, pp. 1038-1043, 28-30 May 2015. doi: 10.1109/IIC.2015.7150899
Abstract: Elliptic curve cryptography has established itself as a perfect cryptographic tool in embedded environments because of its compact key sizes and security strength on par with that of any other standard public key algorithm. Several FPGA implementations of ECC processors suited for embedded systems have been consistently proposed, with a prime focus on space and time complexities. In this paper, we have modified the double point multiplication algorithm and replaced the traditional Karatsuba multiplier in the ECC processor with a novel modular multiplier. The designed modular multiplier follows a systolic approach to processing the words. Instead of processing the vector polynomial bit by bit or in parallel, the proposed multiplier recursively processes data as 16-bit words. This multiplier, when employed in the ECC processor, drastically reduces total area utilization. The complete modular multiplier and ECC processor module is synthesized and simulated using Xilinx 14.4 software. Experimental findings show a remarkable improvement in area efficiency compared with other such architectures.
Keywords: VLSI; computational complexity; embedded systems; field programmable gate arrays; multiplying circuits; public key cryptography; ECC processor; FPGA implementations; VLSI architecture; Xilinx 14.4 software; area efficient elliptic curve cryptographic processor; cryptographic tool; double point multiplication algorithm; embedded environment; embedded system; field programmable gate array; modular multiplier; public key algorithms; security strength; space complexities; systolic approach; time complexities; total area utilization; vector polynomial bit; words processing; Encryption; Integrated circuits; Latches; Elliptic Curve Cryptography; double point multiplication; finite field multiplier; public key Cryptography; security (ID#: 15-8730)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7150899&isnumber=7150576
Ghoreishi, S.-M.; Isnin, I.F.; Abd Razak, S.; Chizari, H., "Secure And Authenticated Key Agreement Protocol with Minimal Complexity Of Operations in the Context of Identity-Based Cryptosystems," in Computer, Communications, and Control Technology (I4CT), 2015 International Conference on, pp. 299-303, 21-23 April 2015. doi: 10.1109/I4CT.2015.7219585
Abstract: Recently, a large variety of Identity-Based Key Agreement protocols have tried to eliminate the use of Bilinear Pairings in order to decrease the complexity of computations by performing group operations over Elliptic Curves. In this paper we propose a novel pairing-free Key Agreement protocol over elliptic curve based algebraic groups. The results show that our proposed protocol is significantly less complex than related works from a computational-complexity perspective.
Keywords: cryptographic protocols; public key cryptography; authenticated key agreement protocol; bilinear pairings; elliptic curve based algebraic groups; identity-based cryptosystems; identity-based key agreement protocols; pairing-free key agreement protocol; secure protocol; Complexity theory; Computational efficiency; Context; Cryptography; Elliptic curves; Protocols; Elliptic Curve; Identity-Based; Key Agreement; efficiency (ID#: 15-8731)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219585&isnumber=7219513
Infanta Princy, S.; Revathi, G., "Enhanced Security Algorithm with Key Exchange Protocol in Wireless Network," in Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, pp.1-4, 9-10 Jan. 2015. doi: 10.1109/ISCO.2015.7282243
Abstract: This paper proposes an attack detection scheme in a QoS security architecture with the elliptic curve Diffie-Hellman (ECDH) protocol to overcome attacks on the IMS (IP Multimedia Subsystem) and femtocell access points (H(e)NBs). It first reviews the current security threats and security standards in WiMAX and LTE networks. Both WiMAX and LTE are Fourth Generation (4G) wireless technologies with well-defined Quality of Service (QoS) and security architectures. For the LTE network, the theoretical analysis of the proposed scheme shows good performance, and experimental results in terms of throughput, latency and frame loss are compared and analyzed against testbed implementations and simulation approaches for LTE.
Keywords: 4G mobile communication; IP networks; Long Term Evolution; WiMax; computer network security; cryptographic protocols; femtocellular radio; multimedia communication; public key cryptography; quality of service; 4G wireless technology; ECDH protocol; IMS wireless network attack; IP multimedia subsystem attack; LTE network; QoS security architecture; WiMax network; attack detection scheme; elliptic curve Diffie-Hellman protocol; enhanced security algorithm; femto cell access points; fourth generation wireless technology; key exchange protocol; Authentication; Communication system security; Long Term Evolution; Quality of service; WiMAX; Long-Term Evolution (LTE); Multihop; Worldwide Interoperable For Microwave Access (WiMAX) and Elliptic Curve Diffie Hellman (ECDH) (ID#: 15-8732)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282243&isnumber=7282219
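For readers unfamiliar with the ECDH primitive the paper builds on, here is a toy key agreement over the tiny textbook curve y² = x³ + 2x + 2 (mod 17) with base point G = (5, 1) of order 19 (parameters from the Paar & Pelzl textbook). These parameters are illustrative only; real deployments use curves of 256 bits or more, and nothing here reflects the paper's LTE-specific design.

```python
# Toy curve: y^2 = x^3 + 2x + 2 over GF(17), base point (5, 1), order 19.
P, A = 17, 2
G, ORDER = (5, 1), 19

def ec_add(p1, p2):
    """Affine point addition; None represents the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                  # P + (-P) = infinity
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt, k = ec_add(pt, pt), k >> 1
    return acc

# Each side picks a secret scalar and publishes scalar*G.
a, b = 7, 11                      # Alice's and Bob's private keys
A_pub, B_pub = ec_mul(a, G), ec_mul(b, G)
shared_a = ec_mul(a, B_pub)       # Alice computes a * (b*G)
shared_b = ec_mul(b, A_pub)       # Bob computes b * (a*G)
assert shared_a == shared_b       # both derive the same point
```

Security rests on the elliptic curve discrete logarithm problem: an eavesdropper sees `A_pub` and `B_pub` but cannot recover `a` or `b` when the curve is large enough.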
Tan Heng Chuan; Jun Zhang; Ma Maode; Chong, P.H.J.; Labiod, H., "Secure Public Key Regime (SPKR) in Vehicular Networks," in Cyber Security of Smart Cities, Industrial Control System and Communications (SSIC), 2015 International Conference on, pp. 1-7, 5-7 Aug. 2015. doi: 10.1109/SSIC.2015.7245678
Abstract: Public Key Regime (PKR) was proposed as an alternative to certificate based PKI in securing Vehicular Networks (VNs). It eliminates the need for vehicles to append their certificate for verification because the Road Side Units (RSUs) serve as Delegated Trusted Authorities (DTAs) to issue up-to-date public keys to vehicles for communications. If a vehicle's private/public key needs to be revoked, the root TA performs real time updates and disseminates the changes to these RSUs in the network. Therefore, PKR does not need to maintain a huge Certificate Revocation List (CRL), avoids complex certificate verification process and minimizes the high latency. However, the PKR scheme is vulnerable to Denial of Service (DoS) and collusion attacks. In this paper, we study these attacks and propose a pre-authentication mechanism to secure the PKR scheme. Our new scheme is called the Secure Public Key Regime (SPKR). It is based on the Schnorr signature scheme that requires vehicles to expend some amount of CPU resources before RSUs issue the requested public keys to them. This helps to alleviate the risk of DoS attacks. Furthermore, our scheme is secure against collusion attacks. Through numerical analysis, we show that SPKR has a lower authentication delay compared with the Elliptic Curve Digital Signature (ECDSA) scheme and other ECDSA based counterparts.
Keywords: mobile radio; public key cryptography; certificate revocation list; collusion attack; complex certificate verification process; delegated trusted authorities; denial of service attack; lower authentication delay; preauthentication mechanism; road side units; secure public key regime; vehicular networks; Authentication; Computer crime; Digital signatures; Public key; Vehicles; Collusion Attacks; Denial of Service Attacks; Schnorr signature; certificate-less PKI (ID#: 15-8733)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7245678&isnumber=7245317
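The SPKR pre-authentication details are not given in the abstract, but the Schnorr signature it relies on is standard. Below is a minimal sketch over a toy 11-bit Schnorr group; the group parameters and the hash choice are assumptions for illustration, not the paper's, and are far too small to be secure.

```python
import hashlib, secrets

# Toy Schnorr group: Q prime, P = 2Q + 1 a safe prime, G of order Q.
Q = 1019                 # prime subgroup order
P = 2 * Q + 1            # 2039, also prime
G = 4                    # 2^2 generates the order-Q subgroup of squares

def _h(r, msg):
    """Challenge hash e = H(r || m) reduced mod Q."""
    return int.from_bytes(
        hashlib.sha256(str(r).encode() + msg).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1     # private key
    return x, pow(G, x, P)               # (x, y = g^x mod p)

def sign(x, msg):
    k = secrets.randbelow(Q - 1) + 1     # fresh nonce
    r = pow(G, k, P)                     # commitment
    e = _h(r, msg)
    return e, (k - x * e) % Q            # signature (e, s)

def verify(y, msg, sig):
    e, s = sig
    r_v = pow(G, s, P) * pow(y, e, P) % P   # g^s * y^e = g^(k-xe) * g^(xe) = g^k
    return _h(r_v, msg) == e
```

In the SPKR setting, the RSU would verify such a signature (after the vehicle's proof-of-work step) before issuing the requested public key; that workflow is sketched here only in outline.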
Mahajan, R.K.; Patil, S.M., "Protection Against Data Drop, an Enhanced Security Model of Authentication Protocol for Ad-Hoc N/W," in Electrical, Electronics, Signals, Communication and Optimization (EESCO), 2015 International Conference on, pp. 1-4, 24-25 Jan. 2015. doi: 10.1109/EESCO.2015.7253867
Abstract: Effective network security targets a variety of threats and prevents them from entering or spreading on a network. Attacks in Mobile Ad hoc NETworks (MANETs) arise from unreliability, unfixed topology, limited battery power and lack of centralized control. The first line of defense, encryption and authentication, is not adequate to protect MANETs from packet dropping attacks. Existing IDSs for MANETs depend on the Watchdog technique, and researchers have mainly focused on designing new prevention, detection and response mechanisms for MANETs. The proposed scheme identifies, supervises and observes malicious nodes without degrading performance in the network. The motivation is to overcome issues of the Watchdog system such as limited transmission power, packet dropping, receiver collision and false misbehavior reports. In this paper, we propose a new modified version of the EAACK-based IDS to overcome MANET attacks. The Elliptic Curve Digital Signature Algorithm (ECDSA) is used to authenticate the acknowledgment packets in the proposed work, overcoming drawbacks in the security level.
Keywords: cryptographic protocols; digital signatures; mobile ad hoc networks; public key cryptography; EAACK based IDS;ECDSA; MANET; authentication protocol; data drop; elliptic curve digital signature algorithm; enhanced security model; mobile ad hoc network; network security; packet dropping attacks; watchdog technique; Ad hoc networks; Atmospheric modeling; Authentication; Cryptography; Logic gates; Mobile computing; Programmable logic arrays; Elliptic Curve Digital Signature Algorithm (ECDSA);Enhanced Adaptive ACKnowledgment (EAACK); Mobile Adhoc NETwork (MANET) (ID#: 15-8734)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7253867&isnumber=7253613
Tiwari, D.; Gangadharan, G.R.; Ma, M., "Provable Secure Protected Designated Proxy Signature with Revocation," in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, pp. 2033-2041, 10-13 Aug. 2015. doi: 10.1109/ICACCI.2015.7275916
Abstract: In this paper, we present a novel concept in proxy signatures by introducing a trusted proxy agent called a mediator together with a proxy signer, which enables efficient revocation of signing capability within the delegation period by controlling the signing capability of the proxy signer even before/after the designated proxy signer generates the signature. We describe a secure designated proxy signature scheme with revocation based on the elliptic curve discrete logarithm problem. Further, we define a random oracle based security model to prove the security of the proposed scheme under an adaptive-chosen-message attack and an adaptive-chosen-warrant attack.
Keywords: digital signatures; public key cryptography; trusted computing; adaptive-chosen-message attack; adaptive-chosen-warrant attack; elliptic curve discrete logarithmic problem; mediator; provable secure protected designated proxy signature; proxy signer; random oracle based security model; signing capability revocation; trusted proxy agent; Electronic mail; Elliptic curves; Forgery; Games; Informatics; Public key (ID#: 15-8735)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275916&isnumber=7275573
Cheng-Rung Tsai; Ming-Chun Hsiao; Wen-Chung Shen; Wu, A.-Y.A.; Chen-Mou Cheng, "A 1.96mm2 Low-Latency Multi-Mode Crypto-coprocessor for PKC-based IoT Security Protocols," in Circuits and Systems (ISCAS), 2015 IEEE International Symposium on, pp. 834-837, 24-27 May 2015. doi: 10.1109/ISCAS.2015.7168763
Abstract: In this paper, we present the implementation of a multi-mode crypto-coprocessor, which can support three different public-key cryptography (PKC) engines (NTRU, TTS, Pairing) used in post-quantum and identity-based cryptosystems. The PKC-based security protocols are more energy-efficient because they usually require less communication overhead than symmetric-key-based counterparts. In this work, we propose the first-of-its-kind tri-mode PKC coprocessor for secured data transmission in Internet-of-Things (IoT) systems. For the purpose of low energy consumption, the crypto-coprocessor incorporates three design features, including 1) specialized instruction set for the multi-mode cryptosystems, 2) a highly parallel arithmetic unit for cryptographic kernel operations, and 3) a smart scheduling unit with intelligent control mechanism. By utilizing the parallel arithmetic unit, the proposed crypto-coprocessor can achieve about 50% speed up. Meanwhile, the smart scheduling unit can save up to 18% of the total latency. The crypto-coprocessor was implemented with AHB interface in TSMC 90nm CMOS technology, and the die size is only 1.96 mm2. Furthermore, our chip is integrated with an ARM-based system-on-chip (SoC) platform for functional verification.
Keywords: CMOS integrated circuits; Internet of Things; coprocessors; cryptographic protocols; CMOS technology; Internet-of-Things systems; IoT security protocols; IoT systems; PKC based security protocols; PKC coprocessor; PKC engines; SoC platform; cryptographic kernel operations; functional verification; highly parallel arithmetic unit; identity based cryptosystems; intelligent control mechanism; multimode cryptocoprocessor; parallel arithmetic unit; post quantum cryptosystems; public key cryptography; secured data transmission; smart scheduling unit; symmetric key based counterparts; system-on-chip; Computer architecture; Elliptic curve cryptography; Engines; Polynomials; System-on-chip; IoT; Public-key cryptography; SoC; crypto-coprocessor (ID#: 15-8736)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168763&isnumber=7168553
Bhave, A.; Jajoo, S.R., "Secure Communication in Wireless Sensor Networks using Hybrid Encryption Scheme and Cooperative Diversity Technique," in Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, pp. 1-6, 9-10 Jan. 2015. doi: 10.1109/ISCO.2015.7282235
Abstract: A Wireless Sensor Network (WSN) is a versatile sensing system suitable for a wide variety of applications. Power efficiency, security and reliability are the major areas of concern in designing WSNs [3][7]. Moreover, one of the most important issues in WSN design is assuring the reliability of the collected data, which often involves security issues in the wireless communications. This project mainly focused on the development of a hybrid encryption scheme which combines symmetric and asymmetric encryption algorithms for secure key exchange and enhanced ciphertext security. This paper compares the performance, in terms of bit error rate, of symmetric, asymmetric and hybrid encryption schemes implemented in wireless sensor networks. Test results show a decrease in bit error rate when using the hybrid encryption scheme as compared to the symmetric and asymmetric schemes alone. Increasing the number of sensors further minimizes bit error rate and improves performance. Alamouti codes with space-time block codes are the most widely used transmission mechanism in WSNs. Extended space-time block codes (ECBSTBC) achieve a better signal-to-noise ratio improvement when compared with the sensor selection scheme. The proposed system uses ECBSTBC codes for transmission [8].
Keywords: block codes; cryptography; telecommunication security; wireless sensor networks; Alamouti codes; ECBSTBC; WSN; cipher text security; cooperative diversity technique; extended space time block codes; hybrid encryption scheme; secure communication; security issues; signal to noise ratio; wireless communications; wireless sensor networks; Elliptic curve cryptography; Indexes; Reliability; Resource management; Wireless sensor networks; AES; ECBSTBC; ECC; Hybrid Encryption; WSN (ID#: 15-8737)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282235&isnumber=7282219
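A hedged sketch of the general hybrid pattern the abstract describes: an asymmetric exchange establishes a session key, which then drives a symmetric cipher for the bulk data. The paper's actual algorithms are not stated, so this illustration substitutes a toy Diffie-Hellman group and a SHA-256 counter-mode keystream in place of a real symmetric cipher such as AES; the parameter sizes are far too small to be secure.

```python
import hashlib, secrets

P, G = 2039, 2                       # toy safe-prime DH group; not secure

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Counter-mode keystream from SHA-256 (a stand-in for AES)."""
    out, ctr = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(b ^ k for b, k in zip(data, out))

def encrypt(receiver_pub: int, plaintext: bytes):
    """Asymmetric step: ephemeral DH agreement; symmetric step: XOR stream."""
    eph = secrets.randbelow(P - 2) + 1           # ephemeral private key
    shared = pow(receiver_pub, eph, P)           # DH shared secret
    key = hashlib.sha256(str(shared).encode()).digest()
    return pow(G, eph, P), keystream_xor(key, plaintext)

def decrypt(priv: int, msg):
    eph_pub, ct = msg
    key = hashlib.sha256(str(pow(eph_pub, priv, P)).encode()).digest()
    return keystream_xor(key, ct)                # XOR is its own inverse

# A sensor node encrypts a reading for the base station's public key.
priv = secrets.randbelow(P - 2) + 1
pub = pow(G, priv, P)
ct = encrypt(pub, b"sensor reading 42")
assert decrypt(priv, ct) == b"sensor reading 42"
```

The design choice mirrors the abstract's claim: the asymmetric half solves key distribution, while the symmetric half keeps per-message cost low on the constrained sensor nodes.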
Santoso, F.K.; Vun, N.C.H., "Securing IoT for Smart Home System," in Consumer Electronics (ISCE), 2015 IEEE International Symposium on, pp. 1-2, 24-26 June 2015. doi: 10.1109/ISCE.2015.7177843
Abstract: This paper presents an approach to incorporating strong security in deploying the Internet of Things (IoT) for a smart home system, together with due consideration given to user convenience in operating the system. The IoT smart home system runs on a conventional Wi-Fi network implemented based on the AllJoyn framework, using asymmetric Elliptic Curve Cryptography to perform the authentications during system operation. A Wi-Fi gateway is used as the center node of the system to perform the system's initial configuration. It is then responsible for authenticating the communication between the IoT devices as well as providing a means for the user to set up, access and control the system through an Android based mobile device running an appropriate application program.
Keywords: Internet of Things; authorisation; home automation; internetworking; public key cryptography; smart phones; wireless LAN; AllJoyn framework; Android based mobile device; Internet of Things; IoT smart home system security; Wi-Fi gateway; Wi-Fi network; application program; asymmetric elliptic curve cryptography; center node; communication authentication; system access; system control; system initial configuration; system operation; system setup; user convenience; Authentication; IEEE 802.11 Standard; Internet of things; Logic gates; Mobile handsets; Smart homes; IoT; authentication; smart home (ID#: 15-8738)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7177843&isnumber=7177764
Mallissery, S.; Manohara Pai, M.M.; Ajam, N.; Pai, R.M.; Mouzna, J., "Transport and Traffic Rule Violation Monitoring Service in ITS: A secured VANET cloud application," in Consumer Communications and Networking Conference (CCNC), 2015 12th Annual IEEE, pp. 213-218, 9-12 Jan. 2015. doi: 10.1109/CCNC.2015.7157979
Abstract: Vehicular Ad-hoc Network (VANET) cloud, a hybrid technology, provides several computational services to minimize traffic congestion, travelling time, accidents, and environmental pollution. In the proposed work, the concept of the VANET cloud is used to help regulatory authorities identify vehicles violating traffic rules through sensors included as part of the On Board Unit (OBU). While the vehicle is on the move, the sensor values are periodically transferred to the cloud, controlled by the traffic police. A novel concept called the Transient Ticket (TT) has been used to minimize the time and cost of distributing the Certificate Revocation List (CRL) to vehicles. The proposed scheme also ensures utmost verification of the identity, authenticity, confidentiality and integrity of the communicating parties and the messages exchanged. The work has been simulated using the NS3 network simulator and Google App Engine (GAE). All the generated keys, TTs and exchanged messages have been securely stored in the GAE for ease of accessibility and processing. The results show that the proposed approach consumes very little time with respect to the generation of keys, exchange of messages, verification of authenticity and generation of TTs without compromising security.
Keywords: cloud computing; computerised monitoring; message authentication; public key cryptography; road accidents; road safety; vehicular ad hoc networks; Google App Engine; ITS; NS3 network simulator; accident minimization; authenticity verification; communication parties integrity; distributing certificate revocation list; elliptic curve integrated encryption scheme; environmental pollution minimization; identity verification; message exchange integrity; on board unit; secured VANET cloud application; traffic congestion minimization; traffic rule violation monitoring service; transient ticket; transport rule violation monitoring service; travelling time minimization; vehicular ad hoc network; Gas detectors; Public key; Vehicles; Vehicular ad hoc networks; ITS; Traffic Police Controlled Vehicular Cloud; Transient Ticket; Trust Value; VANET Cloud (ID#: 15-8739)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7157979&isnumber=7157933
Subha, S.; Sankar, U.G., "Message Authentication and Wormhole Detection Mechanism in Wireless Sensor Network," in Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, pp. 1-4, 9-10 Jan. 2015. doi: 10.1109/ISCO.2015.7282382
Abstract: Message authentication is one of the most effective ways to prevent unauthorized and corrupted messages from being forwarded in a wireless sensor network. However, many schemes have high computational and communication overhead in addition to a lack of scalability and resilience to node compromise attacks. To address these issues, a polynomial-based scheme was recently introduced. However, this scheme and its extensions all have the weakness of a built-in threshold determined by the degree of the polynomial: when the number of messages transmitted is larger than this threshold, the adversary can fully recover the polynomial. In the existing system, an unconditionally secure and efficient source anonymous message authentication (SAMA) scheme is presented, which is based on the optimal modified ElGamal signature (MES) scheme on elliptic curves. This MES scheme is secure against adaptive chosen-message attacks in the random oracle model. The scheme enables intermediate nodes to authenticate messages so that all corrupted messages can be detected and dropped to conserve sensor power. While achieving compromise resiliency, flexible-time authentication and source identity protection, this scheme does not have the threshold problem. While enabling intermediate-node authentication, the scheme allows any node to transmit an unlimited number of messages without suffering the threshold problem. However, this method detects black hole and gray hole attacks but not the wormhole attack, which is one of the harmful attacks that degrade network performance. So, in the proposed system, an innovative technique is introduced: an efficient wormhole detection mechanism for wireless sensor networks. This method considers the RTT between two successive nodes and those nodes' neighbor counts, which are compared against the corresponding values of other successive nodes. The identification of wormhole attacks is based on two facts.
The first is that the transmission time between two wormhole-attack-affected nodes is considerably higher than that between two normal neighbor nodes. The second detection mechanism is based on the fact that by introducing new links into the network, the adversary increases the number of neighbors of the nodes within its radius. Experimental results show that the proposed method achieves high network performance.
Keywords: polynomials; telecommunication security; wireless sensor networks; MES scheme; SAMA; adaptive chosen message attacks; black hole attacks; corrupted message; elliptic curves; gray hole attacks; message authentication; modified Elgamal signature; node compromise attacks; polynomial based scheme; random oracle model; source anonymous message authentication; unauthorized message; unlimited number; wireless sensor network; wormhole detection mechanism; Computational modeling; Cryptography; Scalability; Terminology; Hop-by-hop authentication; public-key cryptosystem; source privacy (ID#: 15-8740)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282382&isnumber=7282219
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Identity Management 2015 |
The term identity management refers to the management of individual identities and their roles, authentication, authorizations, and privileges within or across systems. Examples include passwords, active directories, digital identities, tokens, and workflows. Identity management is one of the core competencies for cybersecurity, and the increasingly complex IT world demands smarter identity management solutions. The research cited here was presented in 2015.
Singh, A.; Chatterjee, K., “Identity Management in Cloud Computing Through Claim-Based Solution,” in Advanced Computing & Communication Technologies (ACCT), 2015 Fifth International Conference on, vol., no., pp. 524-529, 21-22 Feb. 2015. doi:10.1109/ACCT.2015.89
Abstract: In the last few years, many organizations and users have adopted cloud storage systems. These storage systems provide large virtual storage. When people move from web applications to a cloud computing platform, their main concern is how to raise the privacy of users' sensitive data in the cloud infrastructure. The traditional form of accessing cloud services is to use a username and password as a security token. During login/access time, new security risks may arise, like virtualization attacks, account/password sniffing, or phishing attacks. Hence, the cloud service provider (CSP) does not provide complete security. Even though existing authentication schemes have addressed various security properties, there is still need of a secure authentication mechanism. This paper describes the need for a claim-based identity management system, the basic terminology used in the claim-based approach, and the advantages of using this approach. The paper proposes a model to extend the claim-based identity management scheme for cloud applications and provide a more secure way to access cloud services. In this scheme, a new form of Security Assertion Markup Language (SAML) security token is created for identity, supported by Windows Communication Foundation (WCF), and hence can prove more reliable as a single interoperable approach that works more securely in every situation in the cloud computing environment.
Keywords: cloud computing; virtual storage; CSP; SAML security tokens; Security Assertion Markup Language; WCF; Web applications; Windows Communication Foundation; account/password sniffing; claim-based identity management scheme; claim-based identity management system; claim-based solution; cloud computing environment; cloud computing platform; cloud infrastructure; cloud service provider; cloud services; cloud storage systems; phishing attack; secure authentication mechanism; user sensitive data; username; virtual storage; virtualization attack; Authentication; Browsers; Cloud computing; Electronic mail; Organizations; Protocols; Claim; Cloud Computing; Federation Provide; Identity Providers; Security Token Service (ID#: 15-8698)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7079139&isnumber=7079031
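The claim-based flow described in the abstract above can be illustrated with a minimal sketch: a security token service (STS) signs a set of claims, and a relying party verifies the token before honoring the claims inside it. This is only an illustrative Python sketch, not the paper’s SAML/WCF implementation; the shared key, claim names, and helper functions are all hypothetical.

```python
import hashlib
import hmac
import json

SECRET = b"shared-sts-key"  # hypothetical key shared with the Security Token Service

def issue_token(claims: dict) -> str:
    """Hypothetical STS: serialize the claims and sign them with an HMAC."""
    body = json.dumps(claims, sort_keys=True)
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_and_extract(token: str) -> dict:
    """Relying party: verify the signature, then trust the claims inside."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid token signature")
    return json.loads(body)

def authorize(token: str, required_claim: str) -> bool:
    """Grant access only if the verified token carries the required claim."""
    claims = verify_and_extract(token)
    return required_claim in claims.get("roles", [])

token = issue_token({"sub": "alice", "roles": ["storage.read"]})
print(authorize(token, "storage.read"))   # True
print(authorize(token, "storage.write"))  # False
```

A real deployment would use SAML assertions signed with the STS’s private key; the HMAC here merely stands in for that signature step so the verify-then-trust pattern is visible.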
Hörbe, R.; Hötzendorfer, W., “Privacy by Design in Federated Identity Management,” in Security and Privacy Workshops (SPW), 2015 IEEE, vol., no., pp. 167-174, 21-22 May 2015. doi:10.1109/SPW.2015.24
Abstract: Federated Identity Management (FIM), while solving important scalability, security and privacy problems of remote entity authentication, introduces new privacy risks. By virtue of sharing identities with many systems, the improved data quality of subjects may increase the possibilities of linking private data sets; moreover, new opportunities for user profiling are introduced. However, FIM models to mitigate these risks have been proposed. In this paper we elaborate privacy-by-design requirements for this class of systems, transpose them into specific architectural requirements, and evaluate a number of FIM models with respect to these requirements. The contributions of this paper are a catalog of privacy-related architectural requirements, joining up legal, business and system architecture viewpoints, and the demonstration of concrete FIM models showing how the requirements can be implemented in practice.
Keywords: data protection; security of data; FIM models; federated identity management; identity sharing; improved data quality; privacy problems; privacy risks; privacy-related architectural requirements; private data sets; remote entity authentication; security problems; Art; Business; Data privacy; Guidelines; IEC standards; ISO standards; Privacy; data protection law; identity management; limited linkability; limited observability; privacy; privacy by design; security (ID#: 15-8699)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163221&isnumber=7163193
Macedo, R.; Ghamri-Doudane, Y.; Nogueira, M., “Mitigating DoS Attacks in Identity Management Systems Through Reorganizations,” in Network Operations and Management Symposium (LANOMS), 2015 Latin American, vol., no., pp. 27-34, 1-3 Oct. 2015. doi:10.1109/LANOMS.2015.7332666
Abstract: Ensuring the availability of identity management (IdM) systems plays a key role in supporting networked systems. Denial-of-Service (DoS) attacks can make IdM operations unavailable, preventing the use of computational resources by legitimate users. In the literature, the main countermeasures against DoS on IdM systems are based either on applying external resources to extend the system lifetime (replication) or on DoS attack detection. The first approach increases the solution’s cost, and the second is in general still prone to high rates of false negatives and/or false positives. Hence, this work presents SAMOS, a novel and paradigm-shifting Scheme for DoS Attacks Mitigation by the reOrganization and optimization of the IdM System. SAMOS reorganizes the IdM system components using optimization techniques, minimizing DoS effects and improving the system lifetime. SAMOS relies on the effects of unavailability, such as the exhaustion of processing and memory resources, eliminating the dependence on attack detection. Furthermore, unlike replication approaches, SAMOS employs operational IdPs already in the IdM system to support its demand. Results considering data from two real IdM systems indicate the scheme’s viability and improvements. As future work, SAMOS will be prototyped to allow performance evaluation in a real testbed.
Keywords: computer network security; telecommunication network management; Denial-of-Service; DoS attacks detection; DoS attacks mitigation; IDM systems; computational resources; external resources; identity management systems; memory resources; mitigating DoS attacks; networked systems; Authentication; Cloud computing; Computer crime; IP networks; Optimization; Proposals (ID#: 15-8700)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332666&isnumber=7332658
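The reorganization idea behind the abstract above can be sketched as a simple greedy reassignment: when one IdP’s load exceeds its capacity, shift the excess to operational IdPs with spare capacity instead of recruiting external replicas. This Python sketch only illustrates the idea under assumed load and capacity tables; SAMOS itself formulates the reorganization as an optimization problem.

```python
def reorganize(idp_load: dict, capacity: dict, overloaded: str) -> dict:
    """Greedy sketch: shift excess demand from an exhausted IdP to the
    remaining operational IdPs, largest headroom first, without adding
    any external (replicated) resources."""
    moves = {}
    excess = idp_load[overloaded] - capacity[overloaded]
    # Rank the other IdPs by spare capacity, most headroom first.
    spare = sorted(
        ((capacity[i] - idp_load[i], i) for i in idp_load if i != overloaded),
        reverse=True,
    )
    for headroom, idp in spare:
        if excess <= 0:
            break
        shifted = min(headroom, excess)
        if shifted > 0:
            moves[idp] = shifted   # amount of load redirected to this IdP
            excess -= shifted
    return moves

idp_load = {"A": 150, "B": 40, "C": 70}     # hypothetical request load per IdP
capacity = {"A": 100, "B": 100, "C": 100}   # hypothetical capacity per IdP
print(reorganize(idp_load, capacity, "A"))  # → {'B': 50}
```

The real scheme also accounts for the cost of the reorganization itself and for memory/processing exhaustion signals; this sketch keeps only the load-balancing core.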
Soni, D.; Patel, H., “Privacy Preservation Using Novel Identity Management Scheme in Cloud Computing,” in Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on, vol., no., pp. 714-719, 4-6 April 2015. doi:10.1109/CSNT.2015.284
Abstract: Cloud computing is known for its high availability and low cost of implementation and maintenance. Cloud users often provide their secret credentials to access applications and/or data hosted in a cloud environment. Authentication is one of today’s most challenging issues in the domain of security and privacy for applications running in the cloud. As users access more than one service, sometimes using common credentials, they feel unsafe about disclosing their identity in the cloud environment, because their information may be combined with other applications/users to generate knowledge about their activities. In this paper, we propose a model that allows users to authenticate to a service securely and control the disclosure of their attributes. The proposed model offers users the flexibility to generate an instant identity, along with the credential required to authenticate to a service provider. The instant identity for each service provider makes it tough for providers to track users’ access patterns. Due to the rapid change in identity, a service provider may not be able to locate users. The proposed model aims to assist users in preserving the privacy of their data.
Keywords: cloud computing; data privacy; message authentication; software management; identity management scheme; privacy preservation; service provider authentication; user access patterns; Authentication; Data privacy; Privacy; Public key; Relays; Servers; Cloud computing; Zero Knowledge Proof; One Time Password; Identity Management; Identity Provider; Privacy (ID#: 15-8701)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7280012&isnumber=7279856
Chen, Ju; Liu, Yi; Chai, Yueting, “An Identity Management Framework for Internet of Things,” in e-Business Engineering (ICEBE), 2015 IEEE 12th International Conference on, vol., no., pp. 360-364, 23-25 Oct. 2015. doi:10.1109/ICEBE.2015.67
Abstract: The Internet of Things (IoT) has been developing rapidly in the past few years. In IoT, an enormous number of smart devices are connected to the network, where communication and interaction occur extensively among end users, smart devices and Internet services. Due to the great diversity of devices, the broader scope of interactions and other characteristics of IoT, the current IdM model for the Internet needs to be extended and improved. The objective of this article is to analyze the main features of IoT and the key issues of IdM for IoT, and then present an IdM framework for IoT which consists of three parts: a standard information model, a user-centric architecture and multi-channel authentication.
Keywords: Authentication; Authorization; Internet of things; Servers; Service-oriented architecture; Unified modeling language; Identity management (IdM); Internet of Things (IoT); User-centric architecture (ID#: 15-8702)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7349993&isnumber=7349845
Malchow, Jan-Ole; Roth, Volker, “CryptID — Distributed Identity Management Infrastructure,” in Communications and Network Security (CNS), 2015 IEEE Conference on, vol., no., pp. 735-736, 28-30 Sept. 2015. doi:10.1109/CNS.2015.7346910
Abstract: Many of the services on which we depend on the Internet were designed when communications security was not a major concern. The toll for retrofitted security was increased complexity. When search engines emerged, users began to type only significant parts of a domain name into the search field and click on the appropriate link. In this poster we argue that this paradigm shift ultimately allows us to disentangle, replace and simplify the existing stack of Internet services related to name services and security.
Keywords: Dictionaries; Indexes; Public key cryptography; Routing (ID#: 15-8703)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346910&isnumber=7346791
Barreto, Luciano; Celesti, Antonio; Villari, Massimo; Fazio, Maria; Puliafito, Antonio, “Identity Management in IoT Clouds: A FIWARE Case Study,” in Communications and Network Security (CNS), 2015 IEEE Conference on, vol., no., pp. 680-684, 28-30 Sept. 2015. doi:10.1109/CNS.2015.7346887
Abstract: Nowadays, the combination of Cloud computing and the Internet of Things (IoT) is pursuing new levels of efficiency in delivering services, representing a tempting business opportunity for ICT operators to increase their revenues. However, security is seen as one of the major factors that slows down the rapid and large-scale adoption and deployment of both the IoT and Cloud computing. In this paper, considering such an IoT Cloud scenario, we present an authentication model that allows IoT devices to join IoT Clouds and users to access the system. Moreover, we discuss the issues involved in applying our authentication model in a real IoT Cloud based on the FIWARE technology.
Keywords: Authentication; Cloud computing; Computational modeling; Performance evaluation; Sensors; Cloud computing; FIWARE; authentication; internet of things; security (ID#: 15-8704)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346887&isnumber=7346791
Werner, Jorge; Westphall, Carla Merkle; Weingärtner, Rafael; Geronimo, Guilherme Arthur; Westphall, Carlos Becker, “An Approach to IdM with Privacy in the Cloud,” in Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, vol., no., pp. 168-175, 26-28 Oct. 2015. doi:10.1109/CIT/IUCC/DASC/PICOM.2015.26
Abstract: Cloud computing allows the use of resources and systems in thousands of providers. This paradigm can use federated identity management to control users’ identification data, but it is essential to preserve privacy while performing authentication and access control. This article discusses the characteristics necessary to improve privacy in the dissemination of users’ sensitive data in the federated cloud computing paradigm. We plan to identify and use privacy techniques in identity management systems used in the cloud. Users’ attributes should have associated policies to minimize the release of data exchanged in the process. It is also necessary to deal with privacy in interactions between authentication and authorization processes. This paper presents an approach to address the issues involving privacy around personally identifiable information. The proposed model allows control of users’ PII, provides some choices to assist users in data dissemination during the interaction, and provides guarantees using user preferences on the SP side.
Keywords: Authorization; Cloud computing; Data privacy; Identity management systems; Privacy; Proposals; cloud; identity management; idm; privacy (ID#: 15-8705)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363067&isnumber=7362962
Xiaoqi Ma, “Managing Identities in Cloud Computing Environments,” in Information Science and Control Engineering (ICISCE), 2015 2nd International Conference on, vol., no., pp. 290-292, 24-26 April 2015. doi:10.1109/ICISCE.2015.71
Abstract: As cloud computing becomes a hot spot of research, the security issues of clouds raise concerns and attention from the academic research community. A key area of cloud security is managing users’ identities, which is fundamental and important to other aspects of cloud computing. A number of identity management frameworks and systems are introduced and analysed. Issues remaining in them are discussed, and potential solutions and countermeasures are proposed.
Keywords: cloud computing; security of data; academic research community; cloud computing environments; cloud security; Authentication; Cloud computing; Computational modeling; Computer architecture; Identity management systems; Servers; identity management; security (ID#: 15-8706)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7120611&isnumber=7120439
Hummer, M.; Kunz, M.; Netter, M.; Fuchs, L.; Pernul, G., “Advanced Identity and Access Policy Management Using Contextual Data,” in Availability, Reliability and Security (ARES), 2015 10th International Conference on, vol., no., pp. 40-49, 24-27 Aug. 2015. doi:10.1109/ARES.2015.40
Abstract: Due to compliance and IT security requirements, company-wide Identity and Access Management within organizations has gained significant importance in research and practice over the last years. Companies aim at standardizing user management policies in order to reduce administrative overhead and strengthen IT security. Despite its relevance, hardly any supportive means for the automated detection, refinement, and management of policies are available. As a result, policies become outdated over time, leading to security vulnerabilities and inefficiencies. Existing research mainly focuses on policy detection without providing the required guidance for policy management. This paper closes the existing gap by proposing a Dynamic Policy Management Process which structures the activities required for policy management in Identity and Access Management environments. In contrast to current approaches it fosters the consideration of contextual user management data for policy detection and refinement and offers result visualization techniques that foster human understanding. In order to underline its applicability, this paper provides a naturalistic evaluation based on real-life data from a large industrial company.
Keywords: authorisation; data visualisation; feature extraction; standardisation; IT security requirement; access policy management; contextual data; dynamic policy management process; identity management; policy detection; result visualization technique; user management policy standardization; Access control; Companies; Context; Data mining; Access Control; Identity Management; Policy Management; Policy Mining; RBAC (ID#: 15-8707)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299897&isnumber=7299862
Fongen, Anders, “Trust Management in Cross Domain Operations,” in Military Communications Conference, MILCOM 2015 - 2015 IEEE, vol., no., pp. 935-940, 26-28 Oct. 2015. doi:10.1109/MILCOM.2015.7357565
Abstract: Protocols for communication across security domains need to be evaluated against their architectural properties, not only their security properties. The protocols have connectivity and capacity requirements, they have implications on system coupling, scalability and management. This paper investigates several trust management mechanisms from the perspective of a list of non-functional requirements. The conclusions have consequences for the organization of Identity Management Systems used in cross-domain applications.
Keywords: Authentication; Authorization; Protocols; Public key; Scalability (ID#: 15-8708)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357565&isnumber=7357245
Kurniawan, A.; Kyas, M., “A Trust Model-Based Bayesian Decision Theory in Large Scale Internet of Things,” in Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, vol., no., pp. 1-5, 7-9 April 2015. doi:10.1109/ISSNIP.2015.7106964
Abstract: In addressing the growing problem of security in the Internet of Things, we present, from a statistical decision point of view, a novel approach for trust-based access control using Bayesian decision theory. We build a trust model, TrustBayes, which represents a trust level for identity management in IoT. The TrustBayes model is applied to access control in uncertain environments where identities are not known in advance. The model consists of EX (Experience), KN (Knowledge) and RC (Recommendation) values, which are obtained by measurement while an IoT device requests access to a resource. A decision is taken based on the model parameters and computed using Bayesian decision rules. To evaluate the trust model, we perform a statistical analysis and simulate it using OMNeT++ to investigate battery usage. The simulation results show that the Bayesian decision theory approach for trust-based access control guarantees scalability and is energy efficient as the number of devices increases, without affecting functionality or performance.
Keywords: Bayes methods; Internet of Things; authorisation; decision theory; statistical analysis; Bayesian decision rules; EX value; KN value; OMNeT++; RC value; TrustBayes model; battery usage; experience value; identity management; knowledge value; large scale Internet-of-things; recommendation value; statistical decision point; trust model-based Bayesian decision theory; trust-based access control; uncertainty environment; Batteries; Communication system security; Scalability; Wireless communication; Wireless sensor networks; Access Control; Decision making; Decision theory; Trust Management (ID#: 15-8709)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106964&isnumber=7106892
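The Bayesian decision rule the abstract describes can be illustrated with a small sketch: combine evidence factors (EX, KN, RC) into a posterior probability that the requester is trustworthy, then grant access only if the expected loss of granting is lower than that of denying. The likelihood values and loss weights below are invented for illustration; the paper’s actual TrustBayes parameters are not reproduced here.

```python
def posterior_trust(prior: float, evidence: dict, likelihoods: dict) -> float:
    """Bayes update: P(trustworthy | evidence), treating the factors
    (e.g. 'EX', 'KN', 'RC') as conditionally independent observations.
    `likelihoods[f]` is a pair (P(f positive | trustworthy),
    P(f positive | untrustworthy))."""
    p_t, p_u = prior, 1.0 - prior
    for factor, observed in evidence.items():
        lt, lu = likelihoods[factor]
        p_t *= lt if observed else (1 - lt)
        p_u *= lu if observed else (1 - lu)
    return p_t / (p_t + p_u)

def decide(posterior: float, loss_grant_bad=5.0, loss_deny_good=1.0) -> str:
    """Bayesian decision rule: choose the action with lower expected loss."""
    exp_loss_grant = (1 - posterior) * loss_grant_bad  # cost of trusting a bad device
    exp_loss_deny = posterior * loss_deny_good         # cost of rejecting a good one
    return "grant" if exp_loss_grant <= exp_loss_deny else "deny"

# Hypothetical likelihoods for positive experience, knowledge, recommendation.
likelihoods = {"EX": (0.9, 0.3), "KN": (0.8, 0.4), "RC": (0.7, 0.2)}
p = posterior_trust(0.5, {"EX": True, "KN": True, "RC": True}, likelihoods)
print(decide(p))  # → grant
```

Weighting the two loss terms asymmetrically (here 5:1) encodes that admitting an untrustworthy device is costlier than turning away a trustworthy one.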
Fei Liu; Jing Wang; Hongtao Bai; Huiping Sun, “Access Control Model Based on Trust and Risk Evaluation in IDMaaS,” in Information Technology – New Generations (ITNG), 2015 12th International Conference on, vol., no., pp. 179-184, 13-15 April 2015. doi:10.1109/ITNG.2015.34
Abstract: As cloud computing technology develops rapidly, more convenience has been brought to users by various cloud providers and their services. However, difficulty of management, especially when different access control protocols and personal information are involved, has become one of the barriers that inhibit the development of cloud technology. In this paper, a user-centered IDMaaS (Identity Management as a Service) is proposed, combined with a novel access control model based on trust and risk evaluation. In addition, a format-preserving encryption (FPE) method is proposed as an auxiliary scheme guaranteeing the effectiveness of access control. IDMaaS offers a solution that effectively alleviates the difficulty of realizing unified management of users’ identities and information among diverse cloud service providers.
Keywords: authorisation; cloud computing; risk analysis; trusted computing; FPE method; IDMaaS; access control protocols; cloud computing technology; cloud service providers; cloud technology; format preserving encryption; identity management as a service; personal information; risk evaluation; trust evaluation; unified management; Access control; Cloud computing; Computational modeling; Data models; Encryption; Servers; access control; format-preserving encryption; (ID#: 15-8710)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113469&isnumber=7113432
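The defining property of the format-preserving encryption mentioned above is that ciphertexts keep the length and alphabet of the plaintext (a 4-digit number encrypts to another 4-digit number). A toy way to see this for a small domain is a keyed pseudorandom permutation of all n-digit strings; practical FPE schemes such as NIST's FF1 achieve the same contract without materializing the domain. This sketch is not the paper's FPE method, and the key is hypothetical.

```python
import hashlib
import random

def digit_permutation(n_digits: int, key: bytes):
    """Toy FPE for small domains: a keyed pseudorandom permutation over all
    n-digit strings. Returns encrypt and decrypt lookup tables."""
    domain = [f"{i:0{n_digits}d}" for i in range(10 ** n_digits)]
    # Seed a deterministic shuffle from the key; same key -> same permutation.
    rng = random.Random(hashlib.sha256(key).digest())
    shuffled = domain[:]
    rng.shuffle(shuffled)
    enc = dict(zip(domain, shuffled))
    dec = dict(zip(shuffled, domain))
    return enc, dec

enc, dec = digit_permutation(4, b"demo-key")
ct = enc["1234"]
assert len(ct) == 4 and ct.isdigit()  # ciphertext keeps the 4-digit format
assert dec[ct] == "1234"              # and decrypts back to the plaintext
```

The point of FPE in an identity-management setting is that encrypted attributes still pass downstream format validation (e.g. a card-number or ID-number field), so access-control plumbing need not change.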
Marsico, A.; Broglio, A.; Vecchio, M.; Facca, F.M., “Learn by Examples How to Link the Internet of Things and the Cloud Computing Paradigms: A Fully Working Proof of Concept,” in Future Internet of Things and Cloud (FiCloud), 2015 3rd International Conference on, vol., no., pp. 806-810, 24-26 Aug. 2015. doi:10.1109/FiCloud.2015.27
Abstract: This paper describes a fully working proof of concept centered around a smart enterprise scenario, able to shed light on the power offered by linking the Internet of Things (IoT) and Cloud Computing (CC) paradigms together. More specifically, in this showcase all the sensing and actuation capabilities are implemented in the tiny micro-controllers on board the “things” and exposed, through a short-range radio module, as interfaces and commands, while all the smart capabilities (from identity management to complex event processing, from data contextualization to persistent storage) are implemented as cloud services. In this way, one can keep the computational and memory requirements of the devices extremely low by off-loading the smartness of the application to the cloud services, where computational and memory resources are not an issue. Finally, to connect the two worlds, a small embedded Linux micro-PC is used as a controller, playing the role of a smart IoT gateway.
Keywords: Internet of Things; Linux; cloud computing; embedded systems; internetworking; microcontrollers; CC paradigm; IoT gateway; Linux embedded micro-pc; actuation capability; cloud computing paradigm; cloud service; complex event processing; computational requirement; data contextualization; identity management; memory requirement; microcontroller; sensing capability; short-range radio module; smart enterprise scenario; Actuators; Cloud computing; Clouds; Context; Graphical user interfaces; Logic gates; Sensors; Cloud Computing; FIWARE Techologies; Smart Environments (ID#: 15-8711)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7300909&isnumber=7300539
Breaux, T.D.; Smullen, D.; Hibshi, H., “Detecting Repurposing and Over-Collection in Multi-Party Privacy Requirements Specifications,” in Requirements Engineering Conference (RE), 2015 IEEE 23rd International, vol., no., pp. 166-175, 24-28 Aug. 2015. doi:10.1109/RE.2015.7320419
Abstract: Mobile and web applications increasingly leverage service-oriented architectures in which developers integrate third-party services into end user applications. This includes identity management, mapping and navigation, cloud storage, and advertising services, among others. While service reuse reduces development time, it introduces new privacy and security risks due to data repurposing and over-collection as data is shared among multiple parties who lack transparency into third-party data practices. To address this challenge, we propose new techniques based on Description Logic (DL) for modeling multiparty data flow requirements and verifying the purpose specification and collection and use limitation principles, which are prominent privacy properties found in international standards and guidelines. We evaluate our techniques in an empirical case study that examines the data practices of the Waze mobile application and three of their service providers: Facebook Login, Amazon Web Services (a cloud storage provider), and Flurry.com (a popular mobile analytics and advertising platform). The study results include detected conflicts and violations of the principles as well as two patterns for balancing privacy and data use flexibility in requirements specifications. Analysis of automated reasoning over the DL models shows that reasoning over complex compositions of multi-party systems is feasible within exponential asymptotic timeframes proportional to the policy size and the number of expressed data, and orthogonal to the number of conflicts found.
Keywords: Web services; data privacy; description logic; mobile computing; security of data; Amazon Web Services; DL models; Facebook login; Flurry.com; Waze mobile application; data use flexibility; description logic; exponential asymptotic timeframes; guidelines; international standards; multiparty data flow requirements; multiparty privacy requirements specifications; over-collection detection; repurposing detection; use limitation principles; Advertising; Data privacy; Facebook; Limiting; Privacy; Terminology; Data flow analysis; privacy principles; requirements validation (ID#: 15-8712)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7320419&isnumber=7320393
Nida; Teli, B.K., “An Efficient and Secure Means for Identity and Trust Management in Cloud,” in Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, vol., no., pp. 677-682, 19-20 March 2015. doi:10.1109/ICACEA.2015.7164777
Abstract: Cloud users are inevitably confronted with the potential risk of storing their crucial data in the remote data centers of cloud service providers (CSPs), which raises concern among cloud users about their identities and their trust in CSPs. There thus arises the need for an efficient identity and trust management system that can serve both the CSP and the cloud consumer, and hence strengthen the service level agreements (SLAs) between them. This paper proposes a strong heterogeneous online and offline signcrypt model for a cloud network to address identity and trust management. The model has certain merits: first, it sets up a secure, trustworthy connection between the cloud user and the cloud data center while maintaining the user’s identities, and achieves confidentiality, authentication, and non-repudiation of services in a coherent single step. Second, it allows a cloud user in identity-based cryptography (IBC) to send a request message to an Internet host in a public key infrastructure (PKI). Third, it splits the generated signcrypt into two phases, (a) an offline phase and (b) an online phase, which are then analyzed against several types of attacks. The model is well suited to providing a high level of identity and trust management in the cloud computing paradigm.
Keywords: authorisation; cloud computing; computer centres; public key cryptography; CSP; Internet host; PKI; SLA; cloud computing paradigm; cloud consumer; cloud network; cloud service providers; cloud users; crucial data storage; identity management system; offline phase; online phase; public key infrastructure; remote data center; request message; service authentication; service confidentiality; service level agreements; service nonrepudiation; strong heterogeneous offline sign crypt model; strong heterogeneous online sign crypt model; trust management system; Authentication; Cloud computing; Computers; Encryption; Public key; AES; Cloud Computing; IBC; OffSigncrypt; OnSigncrypt; trust and Identity management (ID#: 15-8713)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7164777&isnumber=7164643
Guenane, F.A.; Serhrouchni, A., “Secure Access & Authentication for Collaborative Intercloud Exchange Service,” in Cyber Security of Smart Cities, Industrial Control System and Communications (SSIC), 2015 International Conference on, vol., no., pp. 1-5, 5-7 Aug. 2015. doi:10.1109/SSIC.2015.7245331
Abstract: Recent advances in information technology make remote collaboration and resource sharing easier for the next generation of distributed systems. The Intercloud is an interconnection system of several cloud provider infrastructures that allows the dynamic coordination of application deployment and the distribution of load across multiple data centers. In this paper, we propose a new design for a new generation of secure collaborative cloud services in which several companies partially or fully pool their resources to optimize their operating costs and increase the availability of their services in a secure way, by performing secure access and authentication for collaborative Intercloud exchange services.
Keywords: authorisation; cloud computing; computer centres; groupware; authentication; cloud provider infrastructures; collaborative intercloud exchange service; data centers; information technology; operating costs; remote collaboration; resource sharing; secure access; secure collaborative cloud services; Authentication; Cloud computing; Collaboration; Computational modeling; Computer architecture; Servers; Access Control; Collaborative Internet; Identity Management; Intercloud; Security As A Service (ID#: 15-8714)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7245331&isnumber=7245317
Sung Choi; Zage, D.; Yung Ryn Choe; Brent Wasilow, “Physically Unclonable Digital ID,” in Mobile Services (MS), 2015 IEEE International Conference on, vol., no., pp. 105-111, June 27 2015-July 2 2015. doi:10.1109/MobServ.2015.24
Abstract: The Center for Strategic and International Studies estimates the annual cost from cyber crime to be more than $400 billion. Most notable are the recent digital identity thefts that compromised millions of accounts. These attacks emphasize the security problems of using clonable static information. One possible solution is the use of a physical device known as a Physically Unclonable Function (PUF). PUFs can be used to create encryption keys, generate random numbers, or authenticate devices. While the concept shows promise, current PUF implementations are inherently problematic: they behave inconsistently, are expensive, are susceptible to modeling attacks, and are permanent. Therefore, we propose a new solution by which an unclonable, dynamic digital identity is created between two communication endpoints such as mobile devices. This Physically Unclonable Digital ID (PUDID) is created by injecting a data scrambling PUF device at the data origin point that corresponds to a unique and matching descrambler/hardware authentication at the receiving end. The device is designed using macroscopic, intentional anomalies, making it inexpensive to produce. PUDID is resistant to cryptanalysis due to the separation of the challenge-response pair and a series of hash functions. PUDID is also unique in that by combining the PUF device identity with a dynamic human identity, we can create true two-factor authentication. We also propose an alternative solution that eliminates the need for a PUF mechanism altogether by combining tamper-resistant capabilities with a series of hash functions. This tamper-resistant device, referred to as a Quasi-PUDID (Q-PUDID), modifies input data, using a black-box mechanism, in an unpredictable way. By mimicking PUF attributes, Q-PUDID is able to avoid traditional PUF challenges, thereby providing high-performing physical identity assurance with or without a low-performing PUF mechanism.
Three different application scenarios with mobile devices for PUDID and Q-PUDID have been analyzed to show their unique advantages over traditional PUFs and outline the potential for placement in a host of applications.
Keywords: authorisation; cryptography; random number generation; PUF; Q-PUDID; center for strategic and international studies; clonable static information; cryptanalysis; descrambler-hardware authentication; device authentication; digital identity thefts; dynamic human identity; encryption keys; hash functions; physically unclonable digital ID; physically unclonable function; quasi-PUDID; random number generation; two-factor authentication; Authentication; Cryptography; Immune system; Optical imaging; Optical sensors; Servers; access control; authentication; biometrics; cloning; computer security; cyber security; digital signatures; identification of persons; identity management systems; mobile hardware security (ID#: 15-8715)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7226678&isnumber=7226653
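The two-factor idea in the abstract above, chaining a device-bound challenge response with a user-held secret through hash functions, can be sketched in a few lines. The keyed hash below is only a software stand-in for a physical PUF (whose response comes from unclonable hardware variation), and the fingerprint and secret values are hypothetical.

```python
import hashlib

def puf_response(challenge: bytes, device_fingerprint: bytes) -> bytes:
    """Software stand-in for a PUF: in hardware the response would come from
    unclonable physical variation; a keyed hash merely illustrates the
    challenge-response shape."""
    return hashlib.sha256(device_fingerprint + challenge).digest()

def pudid_proof(challenge: bytes, device_fingerprint: bytes,
                user_secret: bytes) -> bytes:
    """Two-factor proof: chain the device response with a user-held secret,
    so neither factor alone reproduces the identity."""
    r = puf_response(challenge, device_fingerprint)
    return hashlib.sha256(r + user_secret).digest()

# The verifier holds enrolled copies of both factors and replays the chain.
challenge = b"nonce-001"
device = b"intrinsic-device-anomaly"   # hypothetical enrolled fingerprint
user = b"user-pin-derived-key"         # hypothetical user factor

# Same factors reproduce the proof; a cloned guess at the device does not.
assert pudid_proof(challenge, device, user) == pudid_proof(challenge, device, user)
assert pudid_proof(challenge, b"cloned-guess", user) != pudid_proof(challenge, device, user)
```

Because the verifier issues a fresh challenge per session, a captured proof is useless for replay, which is the "dynamic" part of the dynamic digital identity.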
Becot, S.; Bertin, E.; Crom, J.-M.; Frey, V.; Tuffin, S., “Communication Services in the Web Era: How Can Telco Join the OTT Hangout?,” in Intelligence in Next Generation Networks (ICIN), 2015 18th International Conference on, vol., no., pp. 208-215, 17-19 Feb. 2015. doi:10.1109/ICIN.2015.7073833
Abstract: The evolution of communications and the advent of Web real-time technologies are further challenging the Telco ecosystem. New architectures are emerging to enable new services in a context where assets such as identity, signaling and network management are decoupled and virtually delaminated, so to speak. This paper tackles three challenges that must be faced to enable Telcos to embrace these evolutions. First, we need a secure, trustful and privacy-friendly way of using services provided by various identity and communication providers. Second, we need a versatile framework to develop and deploy communication services. The third challenge is to overcome the limitations of best-effort networking by enabling specialized network services for de-perimeterized service delivery.
Keywords: Internet; data privacy; quality of service; Telco ecosystem; Web real time technologies; communication providers; communication services; deperimeterized service delivery; identity providers; network management; privacy-friendly services; signaling; specialized network services; trustful services; Biological system modeling; Browsers; IP networks; Mobile communication; Quality of service; Real-time systems; Telephony; Identity Management; IoT; QoS Management; Web communications; WebRTC; Webification of Networks; post-IMS (ID#: 15-8716)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7073833&isnumber=7073795
Niemiec, M.; Kolucka-Szypula, W., “Federated Identity in Real-Life Applications,” in Networks and Communications (EuCNC), 2015 European Conference on, pp. 492-496, June 29-July 2, 2015. doi:10.1109/EuCNC.2015.7194124
Abstract: This paper describes scenarios and services based on Federated Identity technology. The authors emphasize the Single Sign On mechanism, in which a user’s single authentication credential is used to log in once without the user being prompted to authenticate again to other systems. An overview of Federated Identity and a Federated Identity Management System is presented first. Next, federation approaches used in different fields of human activity are discussed, with examples drawn from several domains: e-health, e-government, e-learning, and e-business. Two use cases are also proposed: a federated approach for tourism, which provides better service for customers, and one for the health care sector, which improves medical service quality and reduces treatment costs. The last section describes the prototype, which was implemented and tested in a network environment.
Keywords: authorisation; health care; quality of service; federated identity management system; medical service quality; Single Sign On mechanism; tourism; treatment cost reduction; user single authentication credential; Authentication; Companies; Electronic government; Identity management systems; Medical services; Federated Identity; Single Sign On; authentication; security
(ID#: 15-8717)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194124&isnumber=7194024
Raghavendra, K.; Ramesh, B., “Managing the Digital Identity in the Cloud: The Current Scenario,” in Electrical, Computer and Communication Technologies (ICECCT), 2015 IEEE International Conference on, pp. 1-4, 5-7 March 2015. doi:10.1109/ICECCT.2015.7226076
Abstract: Cloud computing in today’s globalized world has made major contributions to application development and deployment. Many enterprises see cloud computing as a platform for organizational and economic benefit, and it offers many businesses a new way of accessing computing services. Nevertheless, it has also exposed organizations to a range of risks of which they are unaware. In this paper, we present identity management issues in the cloud and review the existing approaches to providing a secure identity management (IdM) system.
Keywords: cloud computing; commerce; economics; globalisation; organisational aspects; IdM system; business; digital identity; economic benefit; enterprise; globalization; identity management system; organizational benefit; Authorization; Face; Protocols; Servers; Diameter; authentication; cloud; identity; security (ID#: 15-8718)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7226076&isnumber=7225915
Immersive Systems 2015 |
Immersion systems, commonly known as “virtual reality”, are used for a variety of functions such as gaming, rehabilitation, and training. These systems mix the virtual with the actual, and have implications for cybersecurity because attackers may make the jump from virtual to actual systems. For the Science of Security community, this work is relevant to resilience, human factors, cyber physical systems, privacy, and composability. Work cited here was presented in 2015.
Ntokas, I.; Maratou, V.; Xenos, M., "Usability and Presence Evaluation of a 3D Virtual World Learning Environment Simulating Information Security Threats," in Computer Science and Electronic Engineering Conference (CEEC), 2015 7th, pp. 71-76, 24-25 Sept. 2015. doi: 10.1109/CEEC.2015.7332702
Abstract: The use of 3D immersive Virtual World Learning Environments (VWLE) for educational purposes has increased rapidly over the last few decades. Recent studies focusing on the evaluation of such environments have shown the great potential of virtual worlds in e-learning, providing improvement in factors such as satisfaction, enjoyment, concentration, and presence compared to traditional educational practices. In this paper we present the 3D VWLE that has been developed within the framework of the V-ALERT project; the system's main goal is to contribute to improved awareness of Information Security (IS) issues. In particular, we present the methodology followed to evaluate critical aspects of the implemented platform, such as usability, presence, and educational value. The data analysis has shown that the implemented system is usable, offered users a high perception of presence, and increased their knowledge regarding IS. In addition, the evaluation results have shown that interface improvements should be considered and the training session should be enhanced in order to strengthen the system's functionality and educational scope.
Keywords: computer aided instruction; human factors; security of data; 3D VWLE; 3D immersive virtual world learning environments; 3D virtual world learning environment; IS issues awareness; V-ALERT project; critical aspect evaluation; data analysis; e-learning; information security issue awareness; information security threat simulation; interface improvements; presence evaluation; training session; usability evaluation; user concentration; user enjoyment; user satisfaction; Computer science; Electronic mail; Europe; Information security; Three-dimensional displays; Usability; Evaluation of VWLE; Information Security (IS); Virtual World Learning Environments (VWLE); educational value; presence; usability (ID#: 15-8784)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7332702&isnumber=7332684
Sharma, S.; Rajeev, S.P.; Devearux, P., "An Immersive Collaborative Virtual Environment of a University Campus for Performing Virtual Campus Evacuation Drills and Tours for Campus Safety," in Collaboration Technologies and Systems (CTS), 2015 International Conference on, pp. 84-89, 1-5 June 2015. doi: 10.1109/CTS.2015.7210404
Abstract: The use of a collaborative virtual reality environment for training and virtual tours has been increasingly recognized as an alternative to traditional real-life tours of university campuses. Our proposed application shows an immersive collaborative virtual reality environment for performing virtual online campus tours and evacuation drills using Oculus Rift head-mounted displays. The immersive collaborative virtual reality environment also offers a unique way to train for emergencies for campus safety. The participant can enter the collaborative virtual reality environment set up on the cloud and participate in the evacuation drill or a tour, which leads to considerable cost advantages over large-scale real-life exercises. This paper presents an experimental design approach to gather data on human behavior and emergency response in a university campus environment among a set of players in an immersive virtual reality environment. We present three ways of controlling crowd behavior: by defining rules for computer-simulated agents, by providing controls for users to navigate in the VR environment as autonomous agents, and by providing users with a keyboard/joystick along with an immersive VR headset in real time. Our contribution lies in our approach to combining these three methods of behavior in order to perform virtual evacuation drills and virtual tours in a multi-user virtual reality environment for a university campus. Results from this study can be used to measure the effectiveness of current safety, security, and evacuation procedures for campus safety.
Keywords: educational institutions; groupware; helmet mounted displays; multi-agent systems; safety; virtual reality; Oculus Rift head mounted displays; VR environment; autonomous agents; campus safety; computer simulated agents; crowd behavior control; emergency response; experimental design approach; human behavior; immersive VR head set; immersive collaborative virtual reality environment; multiuser virtual reality environment; university campus; virtual campus evacuation drills; virtual campus evacuation tours; virtual online campus tours; Buildings; Computational modeling; Computers; Servers; Solid modeling; Three-dimensional displays; Virtual reality; behavior simulation; collaborative virtual environment; evacuation; virtual reality (ID#: 15-8785)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7210404&isnumber=7210375
Jansen Dos Reis, P.R.; Falcao Matos, C.E.; Sousa Diniz, P.; Mota Silva, D.; Dantas, W.; Braz, G.; Cardoso De Paiva, A.; Araujo, A.S., "An Immersive Virtual Reality Application for Collaborative Training of Power Systems Operators," in Virtual and Augmented Reality (SVR), 2015 XVII Symposium on, pp. 121-126, 25-28 May 2015. doi: 10.1109/SVR.2015.24
Abstract: The use of immersive Virtual Reality applications for training in industrial areas has been increasing due to the benefits of that technology. This paper presents an application for training power system operators in a collaborative and immersive environment. The application aims to enhance user immersion and increase collaborative training in Virtual Reality using a Collaborative Virtual Environment and a Problem Based Learning approach. It was built with the Unity engine and presents a fully integrated scenario of power system visualization with a supervisor module that improves training through the simulation of real events.
Keywords: computer based training; data visualisation; power engineering computing; virtual reality; Unity engine; collaborative training; collaborative virtual environment; immersive environment; immersive virtual reality application; industrial areas; power system operator training; power system visualization; power systems operators; problem based learning approach; supervisor module; user immersion; Collaboration; Mice; Power systems; Three-dimensional displays; Training; Virtual reality; Visualization; Collaborative Virtual Environment; Power System; Virtual Reality (ID#: 15-8786)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7300736&isnumber=7300710
Tredinnick, R.; Broecker, M.; Ponto, K., "Experiencing Interior Environments: New Approaches for the Immersive Display of Large-Scale Point Cloud Data," in Virtual Reality (VR), 2015 IEEE, pp. 297-298, 23-27 March 2015. doi: 10.1109/VR.2015.7223413
Abstract: This document introduces a new application for rendering massive LiDAR point cloud data sets of interior environments within high-resolution immersive VR display systems. The overall contributions are: to create an application that is able to visualize large-scale point clouds at interactive rates in immersive display environments; to develop a flexible pipeline for processing LiDAR data sets that allows display of both minimally processed and more rigorously processed point clouds; and to provide visualization mechanisms that produce accurate rendering of interior environments to better understand physical aspects of interior spaces. The work introduces three problems with producing accurate immersive rendering of LiDAR point cloud data sets of interiors and presents solutions to these problems. Rendering performance is compared between the developed application and a previous immersive LiDAR viewer.
Keywords: computer displays; data visualisation; optical radar; pipeline processing; rendering (computer graphics); virtual reality; LiDAR point cloud data sets; flexible pipeline processing; immersive VR display systems; interior environments; rendering; visualization mechanisms; Graphics processing units; Laser radar; Loading; Mirrors; Rendering (computer graphics); Three-dimensional displays; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism — Virtual reality; I.3.8 [Computer Graphics]: Applications (ID#: 15-8787)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7223413&isnumber=7223305
Kyriakou, M.; Xueni Pan; Chrysanthou, Y., "Interaction with Virtual Agents — Comparison of the Participants' Experience Between an IVR and a Semi-IVR System," in Virtual Reality (VR), 2015 IEEE, pp. 217-218, 23-27 March 2015. doi: 10.1109/VR.2015.7223373
Abstract: In this paper we compare participants' behavior and experience when navigating through a virtual environment populated with virtual agents in an IVR (Immersive Virtual Reality) system and in a semi-IVR system. We measured the impact of collision and basic interaction between participants and virtual agents in both systems. Our findings show that it is more important for the semi-IVR system to facilitate collision avoidance between the user and the virtual agents, accompanied by basic interaction between them. This can increase the sense of presence and make the virtual agents and the environment appear more realistic and lifelike.
Keywords: human computer interaction; multi-agent systems; virtual reality; collision avoidance; collision impact; immersive virtual reality system; participants experience; participants interaction; semi-IVR system; virtual agents; virtual environment; Collision avoidance; Computer science; Navigation; Teleoperators; Tracking; Virtual environments (ID#: 15-8788)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7223373&isnumber=7223305
Wang Dawei; Yang Hongyan, "An Intention Based Manipulation Method of Large Models in Virtual Environment," in Control, Automation and Robotics (ICCAR), 2015 International Conference on, pp. 209-213, 20-22 May 2015. doi: 10.1109/ICCAR.2015.7166033
Abstract: This paper proposes a big-span position tracking algorithm based on pre-position time series to determine the user's operation intention in a virtual environment. Problems such as a lost subject and inconvenient operation, which arise when a tracker is working on big-scale models like an airplane or a ship, cannot be solved well in an immersive virtual reality system. Based on the moving trajectory and speed of a tracker in a unit of time span, the tracking algorithm complements the mainstream relative position algorithm. The algorithm can tell the magnitude and intensity of a user's operation, making it especially convenient to display large-scale models in the CAVE system. Cases are used in this paper to prove the effectiveness and convenience of the algorithm.
Keywords: manipulators; object tracking; position control; virtual reality; CAVE system; big-span position tracking algorithm; cave automatic virtual environment system; immersive virtual reality system; intention based manipulation method; mainstream relative position algorithm; pre-position time series; Cameras; Charge coupled devices; Computational modeling; Solid modeling; Time series analysis; Tracking; Trajectory; CAVE; relative position; tracking algorithm; virtual reality (ID#: 15-8789)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7166033&isnumber=7165846
Papadopoulos, C.; Mirhosseini, S.; Gutenko, I.; Petkov, K.; Kaufman, A.E.; Laha, B., "Scalability Limits of Large Immersive High-Resolution Displays," in Virtual Reality (VR), 2015 IEEE, pp. 11-18, 23-27 March 2015. doi: 10.1109/VR.2015.7223318
Abstract: We present the results of a variable information space experiment, targeted at exploring the scalability limits of immersive high-resolution, tiled-display walls under physical navigation. Our work is motivated by a lack of evidence supporting the extension of previously established benefits to substantially large, room-shaped displays. Using the Reality Deck, a gigapixel-resolution immersive display, as its apparatus, our study spans four display form-factors, starting at 100 megapixels arranged planarly and going up to one gigapixel in a horizontally immersive setting. We focus on four core tasks: visual search, attribute search, comparisons, and pattern finding. We present a quantitative analysis of per-task user performance across the various display conditions. Our results demonstrate improvements in user performance as the display form-factor changes up to 600 megapixels. At the 600 megapixel to 1 gigapixel transition, we observe no tangible performance improvements, and the visual search task regressed substantially. Additionally, our analysis of subjective mental effort questionnaire responses indicates that subjective user effort grows as the display size increases, validating previous studies on smaller displays. Our analysis of the participants' physical navigation during the study sessions shows an increase in user movement as the display grew. Finally, by visualizing the participants' movement within the display apparatus space, we discover two main approaches (termed “overview” and “detail”) through which users chose to tackle the various data exploration tasks. The results of our study can inform the design of immersive high-resolution display systems and provide insight into how users navigate within these room-sized visualization spaces.
Keywords: computer displays; data visualisation; user interfaces; Reality Deck; attribute search task; comparisons task; data exploration task; display apparatus space; display form-factors; gigapixel resolution immersive display; horizontally immersive display setting; immersive high-resolution displays; pattern finding task; per-task user performance; quantitative analysis; room-shaped displays; room-sized visualization space; scalability limit; tiled-display walls; user navigation; variable information space experiment; visual search task; Data visualization; Navigation; Rendering (computer graphics); Scalability; Timing; Visualization; Wall displays; display scalability; high resolution display; immersion; navigation; user studies; visualization (ID#: 15-8790)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7223318&isnumber=7223305
Owens, B.D.; Crocker, A.R., "SimSup's Loop: A Control Theory Approach to Spacecraft Operator Training," in Aerospace Conference, 2015 IEEE, pp. 1-17, 7-14 March 2015. doi: 10.1109/AERO.2015.7118921
Abstract: Immersive simulation is a staple of training for many complex system operators, including astronauts and ground operators of spacecraft. However, while much has been written about simulators, simulation facilities, and operator certification programs, the topic of how one develops simulation scenarios to train a spacecraft operator is relatively understated in the literature. In this paper, an approach is presented for using control theory as the basis for developing the immersive simulation scenarios for a spacecraft operator training program. The operator is effectively modeled as a high level controller of lower level hardware and software control loops that affect a select set of system state variables. Simulation scenarios are derived from a STAMP-based hazard analysis of the operator's high and low level control loops. The immersive simulation aspect of the overall training program is characterized by selecting a set of scenarios that expose the operator to the various inadequate control actions that stem from control flaws and inadequate control executions in the different sections of the typical control loop. Results from the application of this approach to the Lunar Atmosphere and Dust Environment Explorer (LADEE) mission are provided through an analysis of the simulation scenarios used for operator training and the actual anomalies that occurred during the mission. The simulation scenarios and inflight anomalies are mapped to specific control flaws and inadequate control executions in the different sections of the typical control loop to illustrate the characteristics of anomalies arising from the different sections of the typical control loop (and why it is important for operators to have exposure to these characteristics). Additionally, similarities between the simulation scenarios and inflight anomalies are highlighted to make the case that the simulation scenarios prepared the operators for the mission.
Keywords: aerospace computing; aerospace simulation; certification; control engineering computing; industrial training; space vehicles; LADEE; STAMP-based hazard analysis; SimSup loop; astronauts; complex system operators; control executions; control flaws; control theory approach; ground operators; hardware control loops; immersive simulation scenarios; inflight anomalies; lunar atmosphere and dust environment explorer mission; operator certification programs; simulation facilities; software control loops; spacecraft operator training program; Biographies; Control systems; NASA; Training (ID#: 15-8791)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7118921&isnumber=7118873
Reich, D.; Stark, R., "The Influence of Immersive Driving Environments on Human-Cockpit Evaluations," in System Sciences (HICSS), 2015 48th Hawaii International Conference on, pp. 523-532, 5-8 Jan. 2015. doi: 10.1109/HICSS.2015.69
Abstract: To ensure safety and usability of advanced in-car cockpit solutions, prospective evaluation during early prototyping stages is important, especially when developing innovative human-cockpit-interactions. In this context, highly realistic test environments will help to provide reliable and valid findings. Nevertheless, real car driving studies are difficult to control, manipulate, replicate and standardize. They are also more time consuming and expensive. One economizing suggestion is the implementation of immersive driving environments within simulator studies to provide users with a more realistic awareness of the situation. This paper discusses research investigating the influence of immersive driving environments. We evaluated three interaction modalities (touch, spin controller, free-hand gestures), and two levels of immersivity (low, high) to examine this methodology. Twenty participants took part in the driving simulator study. Objective and subjective data show advantages regarding situational awareness and perception for high immersive driving environments when interacting with a navigation system.
Keywords: digital simulation; human computer interaction; road safety; road vehicles; traffic engineering computing; advanced in-car cockpit solution safety; driving simulator; free-hand gesture interaction; human-cockpit evaluations; immersive driving environments; innovative human-cockpit-interactions; navigation system; spin controller interaction; touch interaction; Glass; Keyboards; Navigation; Reliability; Software; Three-dimensional displays; Vehicles; Human-Cockpit-Interactions; Immersive environments (ID#: 15-8792)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069718&isnumber=7069647
Lin, M.C., "Perceptually-inspired Computing," in Intelligent Technologies for Interactive Entertainment (INTETAIN), 2015 7th International Conference on, pp. 1-1, 10-12 June 2015. Doi: (not provided)
Abstract: Human sensory systems allow individuals to see, hear, touch, and interact with the surrounding physical environment. Understanding human perception and its limits enables us to better exploit the psychophysics of human perceptual systems to design more efficient, adaptive algorithms and develop perceptually-inspired computational models. In this talk, I will survey some recent efforts on perceptually-inspired computing with applications to crowd simulation and multimodal interaction. In particular, I will present data-driven personality modeling based on the results of user studies, example-guided physics-based sound synthesis using auditory perception, as well as perceptually-inspired simplification for multimodal interaction. These perceptually guided principles can be used to accelerate multi-modal interaction and visual computing, thereby creating more natural human-computer interaction and providing more immersive experiences. I will also present their use in interactive applications for entertainment, such as video games, computer animation, and shared social experience. I will conclude by discussing possible future research directions.
Keywords: computer animation; computer games; hearing; human computer interaction; human factors; interactive systems; adaptive algorithms; auditory perception; computer animation; crowd simulation; data-driven personality modeling; entertainment; example-guided physics-based sound synthesis; human perceptual systems; human sensory systems; human-computer interaction; immersive experiences; interactive applications; multimodal interaction; perceptually guided principles; perceptually-inspired computational models; physical environment; psychophysics; shared social experience; video games; visual computing; Adaptation models; Animation; Computational modeling; Computer science; Entertainment industry; Games; Solid modeling; computer animation; crowd simulation; entertainment; human perceptual systems; human-computer interaction; multimodal interaction; perceptually-inspired computing; video games (ID#: 15-8793)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7325476&isnumber=7325470
Drouhard, Margaret; Steed, Chad A.; Hahn, Steven; Proffen, Thomas; Daniel, Jamison; Matheson, Michael, "Immersive Visualization for Materials Science Data Analysis using the Oculus Rift," in Big Data (Big Data), 2015 IEEE International Conference on, pp. 2453-2461, Oct. 29 2015-Nov. 1 2015. doi: 10.1109/BigData.2015.7364040
Abstract: In this paper, we propose strategies and objectives for immersive data visualization with applications in materials science using the Oculus Rift virtual reality headset. We provide background on currently available analysis tools for neutron scattering data and other large-scale materials science projects. In the context of the current challenges facing scientists, we discuss immersive virtual reality visualization as a potentially powerful solution. We introduce a prototype immersive visualization system, developed in conjunction with materials scientists at the Spallation Neutron Source, which we have used to explore large crystal structures and neutron scattering data. Finally, we offer our perspective on the greatest challenges that must be addressed to build effective and intuitive virtual reality analysis tools that will be useful for scientists in a wide range of fields.
Keywords: Crystals; Data visualization; Instruments; Neutrons; Solid modeling; Three-dimensional displays; Virtual reality (ID#: 15-8794)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7364040&isnumber=7363706
Garcia, A.S.; Roberts, D.J.; Fernando, T.; Bar, C.; Wolff, R.; Dodiya, J.; Engelke, W.; Gerndt, A., "A Collaborative Workspace Architecture for Strengthening Collaboration among Space Scientists," in Aerospace Conference, 2015 IEEE, pp. 1-12, 7-14 March 2015. doi: 10.1109/AERO.2015.7118994
Abstract: Space exploration missions have produced large volumes of data of immense value to both research and the planning and operation of future missions. However, current datasets and simulation tools fragment teamwork, especially across disciplines and geographical locations. The aerospace community already exploits virtual reality for purposes including space tele-robotics, interactive 3D visualization, simulation, and training. However, collaborative virtual environments are yet to be widely deployed or routinely used in space projects. Advanced immersive and collaborative visualization systems have the potential to enhance the efficiency and efficacy of data analysis, simplifying visual benchmarking, presentations, and discussions. We present preliminary results of the EU funded international project CROSS DRIVE, which develops an infrastructure for collaborative workspaces for space science and missions. The aim is to allow remote scientific and engineering experts to collectively analyze and interpret combined datasets using shared simulation tools. The approach is to combine advanced 3D visualization techniques and interactive tools in conjunction with immersive virtuality telepresence. This will give scientists and engineers the impression of teleportation from their respective buildings across Europe to stand together on a planetary surface, surrounded by the information and tools that they need. The conceptual architecture and proposed realization of the collaborative workspace are described. ESA's planned ExoMars mission provides the use-case for deriving user requirements and evaluating our implementation.
Keywords: aerospace computing; data visualisation; interactive systems; EU funded international project; ExoMars mission; aerospace community; collaborative visualization systems; collaborative workspace architecture; data analysis; immersive virtuality telepresence; interactive 3D visualization; interactive tools; space exploration missions; space tele-robotics; Collaboration; Computer architecture; Data visualization; Mars; Solid modeling; Space missions; Space vehicles (ID#: 15-8795)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7118994&isnumber=7118873
Panzoli, D.; Pons-Lelardeux, C.; Lagarrigue, P., "Communication and Knowledge Sharing in an Immersive Learning Game," in Games and Virtual Worlds for Serious Applications (VS-Games), 2015 7th International Conference on, pp. 1-8, 16-18 Sept. 2015. doi: 10.1109/VS-GAMES.2015.7295768
Abstract: Learning games are becoming a serious contender to real-life simulations for professional training, particularly in highly technical jobs where their cost-effectiveness is a sizeable asset. The most appreciated feature of a learning game is that it automatically provides each learner with integrated feedback in real time during the game and, ideally, a personally meaningful debriefing at the end of each session. Immersive learning games use virtual reality and 3D environments to allow several learners at once to collaborate in the most natural way. Managing the communication, on the other hand, has so far proven a more difficult problem to overcome. In this article, we present a communication system designed to be used in immersive learning games. This innovative system is based neither on voice-chat nor on branching dialogues but on the idea that pieces of information can be manipulated as tangible objects in a virtual environment. The system endeavours to offer the simplest and most intuitive way for several learners to acquire and share knowledge in an immersive virtual environment while complying with the requirements of a reliable assessment of their performance. A first experiment with nurse anaesthetist students gives evidence that this simple communication system is apt to support lifelike behaviours such as consultation, debate, conflict, or irritation.
Keywords: computer based training; computer games; virtual reality; 3D environments; branching dialogues; communication system; highly technical jobs; immersive learning games; immersive virtual environment; knowledge sharing; professional training; real-life simulations; virtual reality; voice-chat; Collaboration; Communication systems; Context; Games; Real-time systems; Surgery; Virtual environments (ID#: 15-8796)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7295768&isnumber=7295691
Mentzelopoulos, Markos; Ferguson, Jeffrey; Protopsaltis, Aristidis, "Perceptually Captured Gesture Interaction with Immersive Information Retrieval Environments: An Experimental Framework for Testing and Rapid Iteration," in Interactive Mobile Communication Technologies and Learning (IMCL), 2015 International Conference on, pp. 307-311, 19-20 Nov. 2015. doi: 10.1109/IMCTL.2015.7359608
Abstract: The use of perceptual inputs is an emerging area within HCI that suggests a developing Perceptual User Interface (PUI) that may prove advantageous for those involved in mobile serious games and immersive social network environments. Since there are a large variety of input devices, software platforms, possible interactions, and myriad ways to combine all of the above elements in pursuit of a PUI, we propose in this paper a basic experimental framework that will be able to standardize study of the wide range of interactive applications for testing efficacy in learning or information retrieval and also suggest improvements to emerging PUI by enabling quick iteration. This rapid iteration will start to define a targeted range of interactions that will be intuitive and comfortable as perceptual inputs, and enhance learning and information retention in comparison to traditional GUI systems. The work focuses on the planning of the technical development of two scenarios.
Keywords: Engines; Graphical user interfaces; Grasping; Hardware; Navigation; Software; Three-dimensional displays; Graphical User Interface (GUI); HCI; PUI; perceptual; serious games (ID#: 15-8797)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7359608&isnumber=7359535
Covarrubias, M.; Bordegoni, M., "Immersive VR for Natural Interaction with a Haptic Interface for Shape Rendering," in Research and Technologies for Society and Industry Leveraging a better tomorrow (RTSI), 2015 IEEE 1st International Forum on, pp. 82-89, 16-18 Sept. 2015. doi: 10.1109/RTSI.2015.7325075
Abstract: This paper presents an immersive virtual reality system that includes a natural interaction approach based on free hand gestures that is used to drive a Desktop Haptic Strip for Shape Rendering (DHSSR). The DHSSR is a mechatronic display of virtual curves intersecting 3D virtual objects, and aims at allowing designers to evaluate the quality of shapes during the conceptual design phase of new products. The DHSSR consists of a 6DOF servo-actuated developable metallic strip, which reproduces cross-sectional curves of 3D virtual objects. Virtual curves can be interactively generated on the 3D surface of the virtual object, and coherently the DHSSR haptic interface renders them. An intuitive and natural modality for interacting with the 3D virtual objects and 3D curves is offered to users, who are mainly industrial designers. This consists of an immersive virtual reality system for the visualization of the 3D virtual models and a hand gestural interaction approach used by the user for handling the models. The system has been implemented by using low cost and open technologies, and combines a software engine for interactive 3D content generation (Unity 3D), the Oculus Rift Head Mounted Display for 3D stereo visualization, a motion capture sensor (LeapMotion) for tracking the user's hands, and the Arduino Leonardo board for controlling the components. Results reported in the paper are positive for what concerns the quality of the rendering of the surface, and of the interaction modality proposed.
Keywords: haptic interfaces; interactive systems; rendering (computer graphics); shape recognition; virtual reality; 3D stereo visualization; 3D virtual objects; 6DOF servo actuated developable metallic strip; Arduino Leonardo board; DHSSR; LeapMotion; cross sectional curves; desktop haptic strip for shape rendering; hand gestural interaction; haptic interface; immersive VR; industrial designers; interactive 3D content generation; mechatronic display; motion capture sensor; natural interaction; oculus rift head mounted display; software engine; virtual curves; virtual reality system; Haptic interfaces; Interpolation; Rendering (computer graphics); Shape; Solid modeling; Strips; Three-dimensional displays (ID#: 15-8798)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7325075&isnumber=7325058
Sidorakis, N.; Koulieris, G.A.; Mania, K., "Binocular Eye-Tracking for the Control of a 3D Immersive Multimedia User Interface," in Everyday Virtual Reality (WEVR), 2015 IEEE 1st Workshop on, pp. 15-18, 23-23 March 2015. doi: 10.1109/WEVR.2015.7151689
Abstract: In this paper, we present an innovative approach to designing a gaze-controlled Multimedia User Interface for modern, immersive headsets. The widespread availability of consumer-grade Virtual Reality Head Mounted Displays such as the Oculus Rift™ has transformed VR into a commodity available for everyday use. However, Virtual Environments require new paradigms of User Interfaces, since standard 2D interfaces are designed to be viewed from a static vantage point only, e.g. the computer screen. Additionally, traditional input methods such as the keyboard and mouse are hard to manipulate when the user wears a Head Mounted Display. We present a 3D Multimedia User Interface based on eye-tracking and develop six applications which cover commonly performed actions of everyday computing such as mail composing and multimedia viewing. We perform a user study to evaluate our system by acquiring both quantitative and qualitative data. The study indicated that users make fewer typing errors while operating the eye-controlled interface compared to using the standard keyboard during immersive viewing. Subjects stated that they enjoyed the eye-tracking 3D interface more than the keyboard/mouse combination.
Keywords: gaze tracking; helmet mounted displays; multimedia computing; user interfaces; virtual reality; 3D immersive multimedia user interface; Oculus Rift; binocular eye-tracking; gaze-controlled multimedia user interface; immersive headsets; immersive viewing; virtual environments; virtual reality head mounted displays; Electronic mail; Games; Keyboards; Mice; Multimedia communication; Three-dimensional displays; User interfaces; H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems-Artificial augmented and virtual realities; I.3.6 [Computer Graphics]: Methodology and Techniques-Interaction techniques; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism-Virtual Reality (ID#: 15-8799)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7151689&isnumber=7151684
Katz, B.F.G.; Felinto, D.Q.; Touraine, D.; Poirier-Quinot, D.; Bourdot, P., "BlenderVR: Open-Source Framework for Interactive and Immersive VR," in Virtual Reality (VR), 2015 IEEE, pp. 203-204, 23-27 March 2015. doi: 10.1109/VR.2015.7223366
Abstract: BlenderVR is an open-source project framework for interactive and immersive applications based on an extension of the Blender Game Engine to Virtual Reality applications. BlenderVR is a generalization of the BlenderCAVE project, accounting for alternate platforms (e.g., HMD, video-walls). The goal is to provide a flexible and easy to use framework for the creation of VR applications for various platforms, making use of the existing power of the BGE's graphics rendering and physics engine. Compatible with 3 major Operating Systems, BlenderVR has been developed by VR researchers with support from the Blender Community. BlenderVR currently handles multi-screen/multi-user tracked stereoscopic rendering through efficient low-level master/slave synchronization process with multimodal interactions via OSC and VRPN protocols.
Keywords: protocols; public domain software; rendering (computer graphics); synchronisation; virtual reality; Blender game engine; BlenderVR; OSC protocol; VRPN protocol; graphics rendering; immersive application; interactive application; open-source framework; physics engine; synchronization process; virtual reality; Engines; Games; Navigation; Rendering (computer graphics); Synchronization; Virtual reality; H.5.1 [Multimedia Information Systems]: Artificial, augmented, and virtual realities; I.3.2 [Graphics Systems]: Distributed/network graphics (ID#: 15-8800)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7223366&isnumber=7223305
Jooyoung Lee; Hasup Lee; BoYu Gao; HyungSeok Kim; Jee-In Kim, "Multiple Devices as Windows for Virtual Environment," in Virtual Reality (VR), 2015 IEEE, pp. 219-220, 23-27 March 2015. doi: 10.1109/VR.2015.7223374
Abstract: We introduce a method for using multiple devices as windows for interacting with a 3-D virtual environment. The motivation for our work comes from creating a collaborative workspace with the multiple devices found in our daily lives, such as desktop PCs and mobile devices. Given a life-size virtual environment, each device shows a scene of the 3-D virtual space according to its position and direction, allowing users to perceive the virtual space in a more immersive way. By adopting mobile devices in our system, users not only see beyond the stationary screen by relocating their mobile device, but also gain a personalized view of the working space. To acquire each device's pose and orientation, we adopt vision-based approaches. Finally, we introduce an implementation of a system for managing multiple devices and keeping them synchronized.
Keywords: computer vision; groupware; mobile computing; virtual reality; 3D virtual environment; 3D virtual space; collaborative workspace; desktop PC; mobile devices; vision-based approaches; Electronic mail; Mobile communication; Mobile handsets; Servers; Virtual environments; AR; Multiple device; Shared virtual space; immersive VR (ID#: 15-8801)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7223374&isnumber=7223305
Rodehutskors, Tobias; Schwarz, Max; Behnke, Sven, "Intuitive Bimanual Telemanipulation Under Communication Restrictions by Immersive 3D Visualization and Motion Tracking," in Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International Conference on, pp. 276-283, 3-5 Nov. 2015
doi: 10.1109/HUMANOIDS.2015.7363547
Abstract: Robots which solve complex tasks in environments too dangerous for humans to enter are desperately needed, e.g. for search and rescue applications. As fully autonomous robots are not yet capable of operating in highly unstructured real-world scenarios, teleoperation is often used to embed the cognitive capabilities of human operators into the robotic system. The many degrees of freedom of anthropomorphic robots and communication restrictions pose challenges to the design of teleoperation interfaces, though. In this work, we propose to combine immersive 3D visualization and tracking of operator head and hand motions to an intuitive interface for bimanual teleoperation. 3D point clouds acquired from the robot are visualized together with a 3D robot model and camera images using a tracked 3D head-mounted display. 6D magnetic trackers capture the operator hand motions which are mapped to the grippers of our two-armed robot Momaro. The proposed user interface allows for solving complex manipulation tasks over degraded communication links, as demonstrated at the DARPA Robotics Challenge Finals and in lab experiments.
Keywords: Cameras; Mobile robots; Robot vision systems; Three-dimensional displays; Tracking (ID#: 15-8802)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363547&isnumber=7362951
Lu, Ching-Hu, "IoT-enhanced and Bidirectionally Interactive Information Visualization for Context-Aware Home Energy Savings," in Mixed and Augmented Reality - Media, Art, Social Science, Humanities and Design (ISMAR-MASH'D), 2015 IEEE International Symposium on, pp. 15-20, Sept. 29 2015-Oct. 3 2015. doi: 10.1109/ISMAR-MASHD.2015.20
Abstract: In recent years, owing to worsening global warming, increasing attention has been paid to home energy savings, which is often a serious and not very engaging task. In this regard, we propose a playful and bidirectionally interactive eco-feedback system with three kinds of information visualization integrated into a 3D pet-raising game, which synchronously visualizes information from the physical environment in the virtual environment by leveraging IoT (Internet of Things) enabled technologies, in hopes of enhancing the user experience and prolonging users' engagement in energy savings. In addition to the mere physical-to-virtual mapping of traditional game-based energy savings, this study also uses the other direction to form a bidirectional mapping that empowers users with direct and flexible remote control anywhere and anytime in a more natural and playful way. Furthermore, integrating context-awareness with the bidirectional mapping in an energy-saving system also enhances the users' immersive experience.
Keywords: Avatars; Games; Home appliances; Positron emission tomography; Sensors; Visualization; Context-awareness; Game-based eco-feedback; IoT; Mixed Reality; physical-cyber system (ID#: 15-8803)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7350729&isnumber=7350713
Davoudpour, M.; Sadeghian, A.; Rahnama, H., "Synthesizing Social Context for Making Internet of Things Environments More Immersive," in Network of the Future (NOF), 2015 6th International Conference on the, pp. 1-5, Sept. 30 2015-Oct. 2 2015. doi: 10.1109/NOF.2015.7333282
Abstract: The growth in context-aware systems and smart devices elevates another technology in ubiquitous computing: the Internet of Things (IoT), where all objects are connected. The integration of smart objects and social networking plays an important role in today's life. This paper mainly promotes a management approach and architecture for adaptive social services within the IoT, where objects interact based on social behavior. We propose social context-awareness and ontology as the major keys of this study. Our main goal is to make the presented framework, CANthings, a standard social framework that can be used in both research and industry projects.
Keywords: Internet of Things; ontologies (artificial intelligence); social aspects of automation; social networking (online); CANthings framework; Internet of Things environments; adaptive social services; context-aware systems; smart devices; smart objects; social behavior; social context synthesis; social networking; ubiquitous computing; Computer architecture; Context; Internet of things; Interoperability; Ontologies; Social network services; Context-aware; Internet of Things (IoT); Interoperability; Ontology; Social IoT (ID#: 15-8804)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7333282&isnumber=7333276
Information Theoretic Security 2015 |
A cryptosystem is said to be information-theoretically secure if its security derives purely from information theory and cannot be broken even when the adversary has unlimited computing power. For example, the one-time pad is an information-theoretically secure cryptosystem, proven secure by Claude Shannon, the inventor of information theory. Information-theoretically secure cryptosystems are often used for the most sensitive communications, such as diplomatic cables and high-level military communications, because of the great efforts enemy governments expend toward breaking them. Because of this importance, activity in the methods, theory, and practice of information-theoretic security also remains high. The works cited here were presented in 2015.
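The information-theoretic security of the one-time pad is easy to see in code: a truly random key as long as the message makes every plaintext equally consistent with a given ciphertext. A minimal Python sketch (an illustration of the principle, not a production cipher):

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each byte with a same-length random key byte.
    The same function both encrypts and decrypts."""
    assert len(key) == len(data), "key must be exactly as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # fresh uniform key, used only once
ciphertext = otp(message, key)

# Without the key, the ciphertext is uniformly distributed whatever the
# message is, so an adversary with unlimited computing power learns nothing.
assert otp(ciphertext, key) == message
```

Security fails completely if any key byte is reused, which is why the pad must be as long as all of the traffic it protects.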
Yener, A., "New Directions in Information Theoretic Security: Benefits of Bidirectional Signaling," in Information Theory Workshop (ITW), 2015 IEEE, pp. 1-5, April 26 2015-May 1 2015. doi: 10.1109/ITW.2015.7133165
Abstract: The past decade has witnessed significant effort towards establishing reliable and information theoretically secure rates in communication networks, taking advantage of the properties of the communication medium. Such efforts include those in the wireless medium where simultaneous transmissions and the ensuing interference can prove advantageous from an information theoretic secrecy point of view. With the goal of obtaining a secrecy rate that scales with transmit power, structured signaling with simultaneous favorable signal alignment at the legitimate receiver(s) and unfavorable signal alignment at the eavesdropper(s) has proven particularly useful in multi-terminal Gaussian channels. Many challenges remain however in realizing the vision of absolute security provided by the wireless physical layer including handling more realistic models. In this paper, we provide a brief overview of the state of the art, the forward look and argue for an additional asset that could be utilized for secrecy, i.e., bidirectional signaling. Taking the bidirectional wiretap channel as an example, Gaussian signaling is demonstrated to be as good as structured signaling from the degrees of freedom point of view, while observed to be performing better with finite transmit power. Moreover, taking bidirectional signals explicitly into account for encoding performs even better and provides a way forward to synergistically combine physical layer based secrecy and encryption.
Keywords: Gaussian channels; cryptography; Gaussian signaling; bidirectional signaling; encryption; information theoretic security; multi-terminal Gaussian channels; secrecy; wireless physical layer; Interference; Jamming; Receivers; Security; Signal to noise ratio; Transmitters; Wireless communication (ID#: 15-8849)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133165&isnumber=7133075
Ligong Wang; Wornell, G.W.; Lizhong Zheng, "Limits of Low-Probability-of-Detection Communication over a Discrete Memoryless Channel," in Information Theory (ISIT), 2015 IEEE International Symposium on, pp. 2525-2529, 14-19 June 2015. doi: 10.1109/ISIT.2015.7282911
Abstract: This paper considers the problem of communication over a discrete memoryless channel subject to the constraint that the probability that an adversary who observes the channel outputs can detect the communication is low. Specifically, the relative entropy between the output distributions when a codeword is transmitted and when no input is provided to the channel must be sufficiently small. For a channel whose output distribution induced by the zero input symbol is not a mixture of the output distributions induced by other input symbols, it is shown that the maximum number of bits that can be transmitted under this criterion scales like the square root of the blocklength. Exact expressions for the scaling constant are also derived.
Keywords: channel coding; entropy codes; signal detection; steganography; codeword transmission; discrete memoryless channel; entropy; low-probability-of-detection communication limits; scaling constant; steganography; zero input symbol; AWGN channels; Channel capacity; Memoryless systems; Receivers; Reliability theory; Transmitters; Fisher information; Low probability of detection; covert communication; information-theoretic security (ID#: 15-8850)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282911&isnumber=7282397
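The square-root scaling described in the abstract above can be stated compactly. Writing $Q_0$ for the channel output distribution when no input is sent and $\hat{Q}^n$ for the output distribution induced by the code over $n$ channel uses, the covertness constraint and the resulting scaling are (notation reconstructed for illustration, not quoted from the paper):

```latex
D\bigl(\hat{Q}^n \,\big\|\, Q_0^{\times n}\bigr) \le \delta
\quad\Longrightarrow\quad
\log M^*(n,\delta) = L\sqrt{n\delta} + o(\sqrt{n}),
```

where $M^*$ is the maximum number of distinguishable messages and the constant $L$ is channel-dependent (the paper derives exact expressions for it via Fisher information).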
Bitar, R.; El Rouayheb, S., "Securing Data Against Limited-Knowledge Adversaries in Distributed Storage Systems," in Information Theory (ISIT), 2015 IEEE International Symposium on, pp. 2847-2851, 14-19 June 2015. doi: 10.1109/ISIT.2015.7282976
Abstract: We study the problem of constructing secure regenerating codes that protect data integrity in distributed storage systems (DSS) in which some nodes may be compromised by a malicious adversary. The adversary can corrupt the data stored on and transmitted by the nodes under its control. The “damage” incurred by the actions of the adversary depends on how much information it knows about the data in the whole DSS. We focus on the limited-knowledge model in which the adversary knows only the data on the nodes under its control. The only secure capacity-achieving codes known in the literature for this model are for the bandwidth-limited regime and repair degree d = n-1, i.e., when a node fails in a DSS with n nodes all the remaining n - 1 nodes are contacted for repair. We extend these results to the more general case of d ≤ n - 1 in the bandwidth-limited regime. Our capacity-achieving scheme is based on the use of product-matrix codes with special hashing functions and allow the identification of the compromised nodes and their elimination from the DSS while preserving the data integrity.
Keywords: codes; cryptography; data integrity; file organisation; DSS; capacity-achieving scheme; data integrity; distributed storage systems; hashing functions; limited-knowledge adversaries; limited-knowledge model; product-matrix codes; secure capacity-achieving codes; secure regenerating codes; Bandwidth; Correlation; Decision support systems; Maintenance engineering; Security; Servers; Upper bound; Distributed storage; information theoretic security; malicious adversary; regenerating codes (ID#: 15-8851)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282976&isnumber=7282397
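The paper's capacity-achieving scheme embeds special hashing functions into product-matrix codes; the details are specific to regenerating codes, but the generic idea of identifying tampered nodes by hash comparison can be sketched as follows (an illustration only, not the paper's construction):

```python
import hashlib

def fingerprint(fragment: bytes) -> bytes:
    """Hash of a stored fragment, recorded at encoding time."""
    return hashlib.sha256(fragment).digest()

# Fragments stored on n = 3 nodes, with fingerprints kept for verification.
fragments = [b"fragment-0", b"fragment-1", b"fragment-2"]
recorded = [fingerprint(f) for f in fragments]

# A malicious adversary silently corrupts the node it controls.
fragments[1] = b"tampered data"

# Repair can then identify the compromised node and eliminate it from the
# DSS while preserving data integrity.
compromised = [i for i, f in enumerate(fragments)
               if fingerprint(f) != recorded[i]]
assert compromised == [1]
```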
Kitajima, N.; Yanai, N.; Nishide, T.; Hanaoka, G.; Okamoto, E., "Constructions of Fail-Stop Signatures for Multi-signer Setting," in Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, pp. 112-123, 24-26 May 2015. doi: 10.1109/AsiaJCIS.2015.26
Abstract: Fail-stop signatures (FSS) provide security for a signer against a computationally unbounded adversary by enabling the signer to provide a proof of forgery. Conventional FSS schemes are for a single-signer setting, but in the real world there are cases where a countersignature of multiple signers (e.g., a signature among a bank, a user, and a consumer) is required. In this work, we propose a framework of FSS capturing a multi-signer setting and call the primitive fail-stop multisignatures (FSMS). We propose a generic construction of FSMS via the bundling homomorphisms proposed by Pfitzmann, and then propose a provably secure instantiation of the FSMS scheme from the factoring assumption. Our proposed schemes can also be extended to fail-stop aggregate signatures (FSAS).
Keywords: digital signatures; FSAS; FSMS scheme; bundling homomorphisms; fail-stop aggregate signatures; generic construction; multisigner setting; primitive fail-stop multisignatures; proof of forgery; single-signer setting; Adaptation models; Computational modeling; Forgery; Frequency selective surfaces; Games; Public key; Fail-stop multisignatures; Fail-stop signatures; Family of bundling homomorphisms; Information-theoretic security (ID#: 15-8852)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7153945&isnumber=7153836
Weidong Yang; Le Xiao; Limin Sun; Qing Li, "Cooperative Transmission Against Impersonation Attack Using Inter-Session Interference in Two-Hop Wireless Networks," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 104-110, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.363
Abstract: Authentication error in two-hop wireless networks is considered without knowledge of the eavesdropper channels and locations. This paper presents an eavesdropper model with authentication error and two eavesdropping modes. In the model, the authentication error is expressed as a pair (p_1, p_2). Based on the authentication error, a cooperative transmission protocol against impersonation attack is presented. The number of eavesdroppers that can be tolerated is then analyzed while the desired secrecy is achieved with high probability in the limit of a large number of relay nodes. Finally, we draw two conclusions about authentication error: 1) choosing impersonating nodes as relays is the dominant factor in transmitted-message leakage, and the impersonation attack seriously decreases the number of eavesdroppers that can be tolerated; 2) erroneous authentication of legitimate nodes has almost no effect on the number of eavesdroppers that can be tolerated.
Keywords: cooperative communication; cryptographic protocols; radio networks; telecommunication security; authentication error; cooperative transmission protocol; eavesdropping ways; impersonate nodes; impersonation attack; inter session interference; legitimate nodes; two-hop wireless networks; Authentication; Interference; Protocols; Relays; Signal to noise ratio; Transmitters; Wireless networks; Cooperative Transmission; Impersonation Attack; Information-Theoretic Security; Wireless Networks (ID#: 15-8853)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345271&isnumber=7345233
Forutan, V.; Fischer, R.F.H., "Security-Enhanced Network Coding Through Public-Key Cryptography," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 717-718, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346901
Abstract: Information-theoretic security through linear network coding (LNC) is achievable only when a limited number of network links with linearly-independent global coding vectors are attacked, while security is not guaranteed otherwise. We incorporate LNC-based security and asymmetric-key cryptography to provide data protection in more realistic cases where the wiretapper attacks an arbitrary number of links. Therefore, LNC-based security protects network irrespective of the computing power of the adversary when the number of attacked links falls below a certain amount r, whereas computational security enters into the scene to protect data against computationally-bounded attackers capable of tapping any number of links.
Keywords: network coding; public key cryptography; telecommunication security; LNC-based security; asymmetric-key cryptography; computational security; information-theoretic security; linear network coding; linearly-independent global coding vector; public-key cryptography; security-enhanced network coding; wiretapper attack; Data protection; Encoding; Encryption; Network coding; Public key (ID#: 15-8854)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346901&isnumber=7346791
Ming Li; Lingyun Li; Yanqing Guo; Bo Wang; Xiangwei Kong, "Security Analysis of Optimal Multi-Carrier Spread-Spectrum Embedding," in Signal and Information Processing (ChinaSIP), 2015 IEEE China Summit and International Conference on, pp. 851-855, 12-15 July 2015. doi: 10.1109/ChinaSIP.2015.7230525
Abstract: This paper considers optimal multi-carrier (multiple messages) spread-spectrum (SS) data embedding on linearly-transformed host. We present information-theoretic security analysis for the optimal SS embedding. The security is quantified by both the Kullback-Leibler distance and Bhattacharyya distance between the cover and stego probability distributions. The main results of this paper permit to establish fundamental security limits for the optimal SS embedding. Theoretical analysis and experimental results show the impact of the number of embedding messages, the embedding distortion, and the host transformation in the security level.
Keywords: information theory; spread spectrum communication; steganography; telecommunication security; Bhattacharyya distance; Kullback-Leibler distance; embedding distortion; embedding messages; host transformation; information-theoretic security analysis; linearly-transformed host; optimal multi-carrier spread-spectrum embedding; Correlation; Distortion; Interference; Receivers; Security; Signal to noise ratio; Transforms; Bhattacharyya distance; Kullback-Leibler distance; covert communications; data hiding; steganography (ID#: 15-8855)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7230525&isnumber=7230339
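Both security measures used in the paper are straightforward to compute for discrete distributions. The toy cover and stego distributions below are illustrative only, not the paper's host model:

```python
import math

def kl_distance(p, q):
    """Kullback-Leibler distance D(p||q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def bhattacharyya_distance(p, q):
    """Negative log of the Bhattacharyya coefficient between p and q."""
    return -math.log(sum(math.sqrt(pi * qi) for pi, qi in zip(p, q)))

cover = [0.25, 0.25, 0.25, 0.25]
stego = [0.30, 0.20, 0.30, 0.20]  # embedding perturbs the cover distribution

# Both distances are zero iff cover == stego (perfect security) and grow as
# embedding distortion makes the stego signal easier to detect.
assert kl_distance(cover, cover) == 0
assert kl_distance(cover, stego) > 0
```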
Yi-Peng Wei; Ulukus, S., "Polar Coding for the General Wiretap Channel," in Information Theory Workshop (ITW), 2015 IEEE, pp. 1-5, April 26 2015-May 1 2015. doi: 10.1109/ITW.2015.7133080
Abstract: Information-theoretic work for wiretap channels is mostly based on random coding schemes. Designing practical coding schemes to achieve information-theoretic security is an important problem. By applying two recently developed techniques for polar codes, namely, universal polar coding and polar coding for asymmetric channels, we propose a polar coding scheme to achieve the secrecy capacity of the general wiretap channel.
Keywords: channel capacity; channel coding; codes; radio receivers; radio transmitters; telecommunication security; asymmetric channels; general wiretap channel; information-theoretic security; legitimate receiver; legitimate transmitter; random coding schemes; secrecy capacity; universal polar coding; Decoding; Error probability; Indexes; Manganese; Reliability; Source coding (ID#: 15-8856)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133080&isnumber=7133075
Gazi, P.; Tessaro, S., "Secret-Key Cryptography From Ideal Primitives: A Systematic Overview," in Information Theory Workshop (ITW), 2015 IEEE, pp. 1-5, April 26 2015-May 1 2015. doi: 10.1109/ITW.2015.7133163
Abstract: Secret-key constructions are often proved secure in a model where one or more underlying components are replaced by an idealized oracle accessible to the attacker. This model gives rise to information-theoretic security analyses, and several advances have been made in this area over the last few years. This paper provides a systematic overview of what is achievable in this model, and how existing works fit into this view.
Keywords: information theory; private key cryptography; ideal primitive; idealized oracle; information-theoretic security analysis; secret-key cryptography; Ciphers; Computational modeling; Computer science; Encryption; Standards; Cryptography; ideal-primitive model; provable security (ID#: 15-8857)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133163&isnumber=7133075
Zhili Chen; Liusheng Huang; Lin Chen, "ITSEC: An Information-Theoretically Secure Framework for Truthful Spectrum Auctions," in Computer Communications (INFOCOM), 2015 IEEE Conference on, pp. 2065-2073, April 26 2015-May 1 2015. doi: 10.1109/INFOCOM.2015.7218591
Abstract: Truthful auctions make bidders reveal their true valuations for goods to maximize their utilities. Currently, almost all spectrum auction designs are required to be truthful. However, disclosure of one's true value causes numerous security vulnerabilities. Secure spectrum auctions are thus called for to address such information leakage. Previous secure auctions either did not achieve enough security, or were very slow due to heavy computation and communication overhead. In this paper, inspired by the idea of secret sharing, we design an information-theoretically secure framework (ITSEC) for truthful spectrum auctions. As a distinguishing feature, ITSEC not only achieves information-theoretic security for spectrum auction protocols in the sense of cryptography, but also greatly reduces both computation and communication overhead by ensuring security without using any encryption/decryption algorithm. To our knowledge, ITSEC is the first information-theoretically secure framework for truthful spectrum auctions in the presence of semi-honest adversaries. We also design and implement circuits for both single-sided and double spectrum auctions under the ITSEC framework. Extensive experimental results demonstrate that ITSEC achieves comparable performance in terms of computation with respect to spectrum auction mechanisms without any security measure, and incurs only limited communication overhead.
Keywords: cryptography; radio spectrum management; telecommunication security; ITSEC; cryptography; encryption/decryption algorithm; information leakage; information theoretically secure framework; radio spectrum; secret sharing; secure spectrum auctions; spectrum auction designs; spectrum auction protocols; truthful spectrum auctions; Conferences; Cryptography; Logic gates; Privacy; Protocols; Random variables (ID#: 15-8858)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218591&isnumber=7218353
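The secret-sharing idea that ITSEC builds on can be sketched with plain additive sharing over a prime field. This is a generic illustration, not the authors' auction protocol; the modulus and share counts below are arbitrary choices. A bid is split into random shares that individually reveal nothing, yet sum back to the bid:

```python
import random

P = 2**61 - 1  # Mersenne prime modulus; an arbitrary illustrative choice

def share(secret, n):
    """Split `secret` into n additive shares that sum to secret mod P.
    Any n-1 shares are jointly uniform, so they carry no information
    about the secret -- the information-theoretic property ITSEC relies on."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine all shares to recover the secret."""
    return sum(shares) % P

bid = 1337
pieces = share(bid, 3)
assert reconstruct(pieces) == bid
```

Because no encryption or decryption is involved, sharing and reconstruction cost only modular additions, which is the source of the low computation overhead the abstract highlights.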
Zhukova, M.; Stefarov, A., "Development of the Protected Telecommunication Systems," in Control and Communications (SIBCON), 2015 International Siberian Conference on, pp. 1-4, 21-23 May 2015. doi: 10.1109/SIBCON.2015.7147061
Abstract: New methodologies and algorithms are actively being developed and tested for the complex problem of building protected telecommunication systems, performing security assessment, and managing information security. Many works describe mathematical and theoretical security models, but there is no universal security model of a telecommunication system that effectively combines the requirements of normative-methodical documents, the threat model, and the attacker profile at the planning stage, generates the list of protective measures, and finds the optimal set of information security tools. Analysis of normative-methodical documents shows that there is no universal approach to developing protected telecommunication systems or to creating threat models and attacker profiles. The threat-model creation methodology described in this article effectively solves the problem of creating a particular threat model. The attacker-profile creation algorithm uniquely classifies an attacker and produces a list of actual threats according to the attacker's level of impact on the telecommunication system. Both models form the basis of the telecommunication system's security model. The article describes an algorithm for creating a telecommunication system security model, the basic requirements for protected telecommunication systems, and the main stages in their development.
Keywords: telecommunication network planning; telecommunication security; attacker profile; information security management; normatively-methodical document analysis; protected telecommunication system; security assessment; theoretic security model; threat model; Algorithm design and analysis; Biological system modeling; Computational modeling; Information security; Planning; Telecommunications; attacker's profile; information security; security model; telecommunication system; threat's models; violator's model (ID#: 15-8859)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7147061&isnumber=7146959
Tom, L., "Game-Theoretic Approach Towards Network Security: A Review," in Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, pp. 1-4, 19-20 March 2015. doi: 10.1109/ICCPCT.2015.7159364
Abstract: Advancements in information technology have increased the use of the internet. With the pervasiveness of the internet, network security has become a critical issue in every organization. Network attacks result in massive losses in terms of money, reputation, and data confidentiality. Reducing or eliminating the negative effects of any intrusion is a fundamental issue of network security. The network security problem can be represented as a game between the attacker or intruder and the network administrator, where both players try to attain the maximum outcome: the network administrator tries to defend against the attack, and the attacker tries to overcome the defenses and compromise the system. Thus network security can be enforced using a game-theoretic approach. This paper presents a review of game-theoretic solutions developed for network security.
Keywords: Internet; game theory; information technology; security of data; ubiquitous computing; Internet; game-theoretic approach; information technology; network administration; network security; pervasiveness; Communication networks; Computational modeling; Games; Intrusion detection; Nash equilibrium; Nash equilibrium; attack defence; game theory; network security (ID#: 15-8860)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159364&isnumber=7159156
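The attacker-defender formulation surveyed here can be made concrete with a 2x2 zero-sum game solved for its mixed-strategy equilibrium in closed form. The payoff numbers below are invented for illustration and come from no cited paper:

```python
from fractions import Fraction

# Payoff to the attacker (row player); rows = attacker's target,
# columns = resource the defender monitors. Values are illustrative.
A = [[Fraction(0), Fraction(4)],
     [Fraction(3), Fraction(1)]]

a11, a12 = A[0]
a21, a22 = A[1]
den = a11 - a12 - a21 + a22  # nonzero when there is no saddle point

# Closed-form mixed-strategy Nash equilibrium of a 2x2 zero-sum game:
p = (a22 - a21) / den  # probability the attacker plays row 1
q = (a22 - a12) / den  # probability the defender plays column 1
value = (a11 * a22 - a12 * a21) / den  # expected payoff at equilibrium

# At equilibrium each player is indifferent between its pure strategies.
assert p * a11 + (1 - p) * a21 == p * a12 + (1 - p) * a22
print(p, q, value)  # 1/3 1/2 2
```

The value 2 is the loss the defender cannot avoid whatever it does, which is exactly the kind of guarantee game-theoretic network-security models are after.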
Yuan Shen; Win, M.Z., "On Secret-Key Generation Using Wideband Channels in Mobile Networks," in Communications (ICC), 2015 IEEE International Conference on, pp. 4151-4156, 8-12 June 2015. doi: 10.1109/ICC.2015.7248974
Abstract: Wireless networks are subject to security vulnerability due to the broadcasting nature of radio transmission. Information-theoretic approaches for secure communication propose to generate secret keys from a common source, such as reciprocal channels, available to the transmitter and receiver. However, such approaches assume the probability distribution of the sources, which may not be available in many realistic scenarios. In this paper, we establish an information-theoretic framework for secret-key generation (SKG) using noisy observations of unknown deterministic parameters (UDPs). Based on an axiomatic definition of UDPs, we derive a new metric called intrinsic information between the UDP and its observation, characterizing the rate of the secret key that can be generated from the observation. This metric is then applied to quantify the use of wideband channels in mobile networks for SKG. Our results provide a non-Bayesian perspective for SKG as well as its practical implications.
Keywords: cryptography; mobile radio; information theory; mobile networks; noisy observations; secret key generation; unknown deterministic parameter; wideband channel; Cryptography; Delays; Information rates; Mobile communication; Uncertainty; Wideband (ID#: 15-8861)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7248974&isnumber=7248285
Biondi, F.; Given-Wilson, T.; Legay, A., "Attainable Unconditional Security for Shared-Key Cryptosystems," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 159-166, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.370
Abstract: Preserving the privacy of private communication is a fundamental concern of computing addressed by encryption. Information-theoretic reasoning models unconditional security, where the strength of the results is not moderated by computational hardness or unproven results. Perfect secrecy is often considered the ideal result for a cryptosystem, where knowledge of the ciphertext reveals no information about the key or message; however, this is often impossible to achieve in practice. An alternative measure is the equivocation, intuitively the average number of message/key pairs that could have produced a given ciphertext. We show a theoretical bound on equivocation called max-equivocation and show that this generalizes perfect secrecy when achievable, and provides an alternative measure when perfect secrecy is not. We derive bounds for max-equivocation, and show that, counterintuitively, max-equivocation is achieved when the entropy of the ciphertext is minimized. We consider encryption functions under this new information, and show that in general the theoretical best is unachievable, and that some popular approaches such as Latin squares or quasigroups are also not optimal. We present some algorithms for generating encryption functions that are practical and achieve 90-95% of the theoretical best, improving with larger message spaces.
Keywords: data privacy; entropy; private key cryptography; public key cryptography; attainable unconditional security; ciphertext entropy; encryption functions; max-equivocation; private communication privacy; shared-key cryptosystems; Encryption; Entropy; Mutual information; Random variables; Yttrium; Cryptography; Encryption; Entropy; Equivocation; Latin Squares; Perfect Secrecy; Quasigroups; Shared Key Encryption; Symmetric Encryption; Unconditional Security; Unicity (ID#: 15-8862)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345278&isnumber=7345233
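Equivocation, the paper's central measure, can be computed exhaustively for a toy cipher. The sketch below uses a shift cipher on Z_4 with a uniform key (i.e., a tiny one-time pad; the message distribution is invented for illustration) and evaluates H(K|C) directly from the joint distribution:

```python
from collections import defaultdict
from itertools import product
from math import log2

n = 4
p_m = {0: 0.5, 1: 0.25, 2: 0.125, 3: 0.125}  # illustrative message distribution
p_k = {k: 1.0 / n for k in range(n)}          # uniform key

# Joint distribution over (key, ciphertext) for c = (m + k) mod n.
p_kc = defaultdict(float)
p_c = defaultdict(float)
for m, k in product(range(n), repeat=2):
    c = (m + k) % n
    pr = p_m[m] * p_k[k]
    p_kc[(k, c)] += pr
    p_c[c] += pr

# Equivocation H(K|C) = -sum_{k,c} p(k,c) log2 p(k|c).
H_K_given_C = -sum(pr * log2(pr / p_c[c])
                   for (k, c), pr in p_kc.items() if pr > 0)

# With a uniform key this cipher is a one-time pad on Z_4, so the
# equivocation equals the message entropy H(M) = 1.75 bits.
print(round(H_K_given_C, 6))  # 1.75
```

Replacing the uniform key with a biased one drops H(K|C) below H(M), which is the gap between perfect secrecy and the attainable bounds the paper studies.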
Muthukkumar, R.; Manimegalai, D.; Siva Santhiya, A., "Game-Theoretic Approach to Detect Selfish Attacker in Cognitive Radio Ad-Hoc Networks," in Signal Processing, Communication and Networking (ICSCN), 2015 3rd International Conference on, pp. 1-5, 26-28 March 2015. doi: 10.1109/ICSCN.2015.7219888
Abstract: In wireless communication, spectrum resources are licensed to authorities in particular fields, yet much of the spectrum sits idle. Cognitive radio is a promising technique for allocating idle spectrum to unlicensed users. A security shortfall is a major challenge in cognitive radio ad-hoc networks (CRAHNs), degrading the performance of spectrum sensing and sharing. A selfish user pre-occupies the accessible bandwidth for its prospective usage and blocks the progress of secondary users that require the spectrum. A game-theoretic model is proposed to detect the selfish attacker in CRAHNs. Channel state information (CSI) is used to convey each user's channel conditions. Two strategies of the Nash equilibrium game model, pure and mixed, for secondary users (SUs) and selfish secondary users (SSUs) are investigated, and the selfish attacker is detected. Moreover, a novel belief-updating system is proposed so that secondary users can learn the CSI of the primary user. Simulation results show that the game-theoretic model increases the detection rate of selfish attackers.
Keywords: cognitive radio; game theory; radio spectrum management; Nash Equilibrium game model; channel state information; cognitive radio ad-hoc networks; game-theoretic approach; security shortage; selfish attacker; selfish secondary users; spectrum resources; spectrum sensing; spectrum sharing; Ad hoc networks; Cognitive radio; Games; Nash equilibrium; Security; Sensors; Channel state information; Cognitive Radio; Game theoretical model; Security (ID#: 15-8863)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7219888&isnumber=7219823
Mazloum, T.; Mani, F.; Sibille, A., "Analysis of Secret Key Robustness in Indoor Radio Channel Measurements," in Vehicular Technology Conference (VTC Spring), 2015 IEEE 81st, pp. 1-5, 11-14 May 2015. doi: 10.1109/VTCSpring.2015.7145702
Abstract: In recent years, a series of works has addressed inherent characteristics of the fading propagation channel in an information-theoretic framework oriented towards security. In particular, for cryptographic purposes, secure key bits can be extracted from reciprocal radio channels, seen as a shared source of randomness between any two entities. In this paper we assess the impact of true radio channel features on security performance by quantizing the complex channel coefficients into discrete key bits. To this end, indoor channel measurements were performed and investigated in different scenarios, e.g. LOS/NLOS conditions, narrowband (NB) as well as wideband (WB) targeting OFDM systems. Secret key robustness is studied first for the legitimate terminals, in terms of key bit disagreement ratio and key randomness, and second by considering an eavesdropper in a wide set of positions around these terminals and evaluating the difference between the generated keys. A simplified channel model is also analyzed in comparison with the measured channels as regards the characteristics of the generated key bits.
Keywords: OFDM modulation; fading channels; private key cryptography; quantisation (signal); telecommunication security; complex channel coefficients quantization; discrete key bits; eavesdropper; fading propagation channel; indoor radio channel measurement; key bit disagreement ratio; key randomness; legitimate terminals; radio channel features; secret key robustness; security performance; wide band targeting OFDM systems; Antennas; Bit error rate; Fading; Niobium; Quantization (signal); Security; Signal to noise ratio (ID#: 15-8864)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7145702&isnumber=7145573
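The key-bit disagreement ratio studied in the paper can be reproduced in simulation with the crudest possible extractor, a one-bit sign quantizer over a synthetic reciprocal channel. The Gaussian channel and noise model below are assumptions for illustration, not the paper's indoor measurements:

```python
import random

random.seed(1)  # deterministic for reproducibility

def observe_channel(n, noise_std):
    """Both terminals observe the same reciprocal fading coefficients,
    each corrupted by independent measurement noise."""
    h = [random.gauss(0, 1) for _ in range(n)]
    alice = [x + random.gauss(0, noise_std) for x in h]
    bob = [x + random.gauss(0, noise_std) for x in h]
    return alice, bob

def quantize(samples):
    """One-bit sign quantizer: the simplest key-bit extractor."""
    return [1 if s >= 0 else 0 for s in samples]

alice, bob = observe_channel(10_000, noise_std=0.1)
key_a, key_b = quantize(alice), quantize(bob)
kdr = sum(a != b for a, b in zip(key_a, key_b)) / len(key_a)
print(f"key-bit disagreement ratio: {kdr:.3f}")
```

Disagreements concentrate on coefficients near the quantizer threshold; practical schemes add guard bands and information reconciliation to drive the ratio toward zero.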
Kavcic, Aleksandar; Mihaljevic, Miodrag J.; Matsuura, Kanta, "Light-Weight Secrecy System Using Channels with Insertion Errors: Cryptographic Implications," in Information Theory Workshop - Fall (ITW), 2015 IEEE, pp. 257-261, 11-15 Oct. 2015. doi: 10.1109/ITWF.2015.7360775
Abstract: A model of an encryption approach is analyzed from an information-theoretic point of view. In the model, an attacker faces the problem of observing messages through a concatenation of a binary symmetric channel and a channel with randomly inserted bits. The paper points out a number of security-related implications of employing an insertion channel. It is shown that deliberate, secret-key-controlled insertions of random bits into the basic ciphertext provide a security enhancement of the resulting encryption scheme.
Keywords: Cryptography; Information rates; Random variables; Receivers; Transmitters; Yttrium (ID#: 15-8865)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7360775&isnumber=7360717
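The secret-key-controlled insertion idea can be sketched with a seeded RNG that both legitimate parties share. This is a toy illustration of the insertion-channel concept, not the paper's construction:

```python
import random

def insert_bits(cipher_bits, key_seed, rate=0.25):
    """Insert random bits at secret positions driven by a shared seed."""
    rng = random.Random(key_seed)
    out = []
    for bit in cipher_bits:
        if rng.random() < rate:
            out.append(rng.randrange(2))  # deliberately inserted random bit
        out.append(bit)
    return out

def remove_bits(received, key_seed, rate=0.25):
    """The legitimate receiver replays the seeded RNG to locate and drop
    the inserted bits; an attacker without the key cannot tell them apart."""
    rng = random.Random(key_seed)
    out = []
    i = 0
    while i < len(received):
        if rng.random() < rate:
            rng.randrange(2)  # replay the insertion draw to stay in sync
            i += 1            # skip the inserted bit
        out.append(received[i])
        i += 1
    return out

msg = [random.randrange(2) for _ in range(64)]
tx = insert_bits(msg, key_seed=42)
assert remove_bits(tx, key_seed=42) == msg
```

To the attacker the inserted bits are indistinguishable from ciphertext, so the observation behaves like an insertion channel, which is the security enhancement the abstract argues for.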
Forutan, Vahid; Fischer, Robert F.H., "On the Security of Lattice-based Physical-layer Network Coding Against Wiretap Attacks," in SCC 2015; 10th International ITG Conference on Systems, Communications and Coding; Proceedings of, pp. 1-6, 2-5 Feb. 2015. Doi: (not provided)
Abstract: We consider Gaussian-channel networks that employ lattice-based physical-layer network coding (PNC) as their routing strategy. Under the assumption that the communication is subject to adversarial attacks in the form of wiretapping, we address data security from an information-theoretic viewpoint. To this end, we first examine how data transfer in PNC-based networking is vulnerable to wiretapping attacks, and then we show that, due to the structured codebook employed in PNC, it is possible to apply the already available lattice coset coding to prevent attackers from obtaining any information from the data communicated over the network. Several wiretap attack scenarios targeted at such networks are considered and possible solutions are discussed.
Keywords: (not provided) (ID#: 15-8866)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7052110&isnumber=7052080
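The coset-coding idea invoked here can be shown at its smallest scale with a binary wiretap toy, a stand-in for the lattice setting rather than the paper's construction: the secret bit selects a coset of the repetition code {00, 11}, and a random member of that coset is transmitted.

```python
import random

def coset_encode(secret_bit):
    """Transmit a random member of the coset selected by the secret:
    coset {00, 11} for secret 0, coset {01, 10} for secret 1."""
    r = random.randrange(2)
    return (r, r ^ secret_bit)  # the pair's parity equals the secret

def coset_decode(word):
    """The legitimate receiver, seeing both symbols, reads off the parity."""
    return word[0] ^ word[1]

# An eavesdropper who wiretaps only ONE of the two symbols observes a
# uniformly random bit whichever secret was sent -- zero leaked information.
for s in (0, 1):
    assert coset_decode(coset_encode(s)) == s
```

Lattice coset coding generalizes this parity trick: the secret selects a coset of a fine lattice inside a coarse one, and the transmitted point is randomized within the coset.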
Cheng-Zong Bai; Pasqualetti, F.; Gupta, V., "Security in Stochastic Control Systems: Fundamental Limitations and Performance Bounds," in American Control Conference (ACC), 2015, pp. 195-200, 1-3 July 2015. doi: 10.1109/ACC.2015.7170734
Abstract: This work proposes a novel metric to characterize the resilience of stochastic cyber-physical systems to attacks and faults. We consider a single-input single-output plant regulated by a control law based on the estimate of a Kalman filter. We allow for the presence of an attacker able to hijack and replace the control signal. The objective of the attacker is to maximize the estimation error of the Kalman filter - which in turn quantifies the degradation of the control performance - by tampering with the control input, while remaining undetected. We introduce a notion of ε-stealthiness to quantify the difficulty to detect an attack when an arbitrary detection algorithm is implemented by the controller. For a desired value of ε-stealthiness, we quantify the largest estimation error that an attacker can induce, and we analytically characterize an optimal attack strategy. Because our bounds are independent of the detection mechanism implemented by the controller, our information-theoretic analysis characterizes fundamental security limitations of stochastic cyber-physical systems.
Keywords: Kalman filters; stochastic systems; ε-stealthiness notion; Kalman filter estimation; arbitrary detection algorithm; control law; control performance; estimation error; optimal attack strategy; single-input single-output plant; stochastic control systems; stochastic cyber-physical systems; Cyber-physical systems; Degradation; Detectors; Estimation error; Kalman filters; Random sequences; Upper bound (ID#: 15-8867)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7170734&isnumber=7170700
Miyaguchi, K.; Yamanishi, K., "On-Line Detection of Continuous Changes in Stochastic Processes," in Data Science and Advanced Analytics (DSAA), 2015 IEEE International Conference on, pp. 1-9, 19-21 Oct. 2015. doi: 10.1109/DSAA.2015.7344783
Abstract: This paper addresses the issue of detecting changes in stochastic processes. Conventional studies of change detection have explored how to detect discrete changes, in which the statistical model of the data suddenly changes. We are instead concerned with how to detect continuous changes, which occur incrementally over successive periods. This paper gives a novel methodology for detecting continuous changes. We first define an information-theoretic measure of continuous change and prove that it is invariant with respect to the parametrization of the statistical model. We then propose an efficient algorithm to detect continuous changes according to the proposed measure. We demonstrate the effectiveness of our method through experiments using synthetic data and applications to security and economic event detection.
Keywords: information theory; stochastic processes; continuous changes detection; discrete changes detection; economic event detection; information-theoretic measure; online change detection; parametrization; security; statistical models; stochastic processes; synthetic data; Algorithm design and analysis; Approximation methods; Biological system modeling; Linear regression; Maximum likelihood estimation; Stochastic processes (ID#: 15-8868)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7344783&isnumber=7344769
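One simple stand-in for an information-theoretic measure of gradual change (not the paper's parametrization-invariant measure) is the KL divergence between Gaussian fits of adjacent sliding windows; a slow drift shows up as a sustained run of moderate scores rather than a single spike:

```python
import random
from math import log
from statistics import mean, pvariance

random.seed(0)

def gaussian_kl(m0, v0, m1, v1):
    """KL divergence D(N(m0,v0) || N(m1,v1)) in nats."""
    return 0.5 * (log(v1 / v0) + (v0 + (m0 - m1) ** 2) / v1 - 1)

def change_scores(xs, window):
    """Score each time step by the divergence between Gaussian fits
    of the windows just before and just after it."""
    scores = []
    for t in range(window, len(xs) - window):
        a, b = xs[t - window:t], xs[t:t + window]
        scores.append(gaussian_kl(mean(a), pvariance(a),
                                  mean(b), pvariance(b)))
    return scores

# Synthetic series: stationary noise, then a slow linear drift in the mean.
xs = [random.gauss(0, 1) for _ in range(100)]
xs += [random.gauss(t * 0.1, 1) for t in range(50)]
scores = change_scores(xs, window=25)
assert all(s > -1e-9 for s in scores)  # KL divergence is nonnegative
```

Thresholding a single score detects an abrupt jump; detecting continuous change means looking at the accumulated score over a stretch of time, which is the distinction the paper formalizes.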
Shlezinger, N.; Zahavi, D.; Murin, Y.; Dabora, R., "The Secrecy Capacity of MIMO Gaussian Channels with Finite Memory," in Information Theory (ISIT), 2015 IEEE International Symposium on, pp. 101-105, 14-19 June 2015. doi: 10.1109/ISIT.2015.7282425
Abstract: Privacy is a critical issue when communicating over shared mediums. A fundamental model for the information-theoretic analysis of secure communications is the wiretap channel (WTC), which consists of a communicating pair and an eavesdropper. In this work we study the secrecy capacity of Gaussian multiple-input multiple-output (MIMO) WTCs with finite memory. These channels are very common in wireless communications as well as in wireline communications (e.g., in power line communications). We derive a closed-form expression for the secrecy capacity of the MIMO Gaussian WTC with finite memory via the analysis of an equivalent block-memoryless model, which is transformed into a set of parallel independent memoryless MIMO WTCs. The secrecy capacity is expressed as the maximization over the input covariance matrices in the frequency domain. Finally, we show that for the Gaussian scalar WTC with finite memory, the secrecy capacity can be obtained by waterfilling.
Keywords: Gaussian channels; MIMO communication; covariance matrices; frequency-domain analysis; telecommunication security; wireless channels; Gaussian multiple-input multiple-output WTC; Gaussian scalar WTC; MIMO Gaussian WTC; closed-form expression; covariance matrices; equivalent block-memoryless model; finite memory; frequency domain; power line communications; secrecy capacity; wireless communications; wireline communications; wiretap channel; Covariance matrices; Encoding; MIMO; Receivers; Signal to noise ratio; Wireless communication (ID#: 15-8869)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282425&isnumber=7282397
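The waterfilling step quoted for the scalar case is the classic parallel-channel power allocation. The sketch below solves it by bisection on the water level; the gains and power budget are arbitrary examples, and this is plain capacity waterfilling rather than the paper's secrecy-specific optimization:

```python
def waterfill(gains, total_power):
    """Allocate power over parallel Gaussian subchannels:
    p_i = max(mu - 1/g_i, 0), with the water level mu chosen
    so the allocations sum to the budget."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(200):  # bisection on mu
        mu = (lo + hi) / 2
        used = sum(max(mu - 1.0 / g, 0.0) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return [max(mu - 1.0 / g, 0.0) for g in gains]

# Three subchannels; the weakest (gain 0.25) falls below the water level
# and receives no power at all.
powers = waterfill([2.0, 1.0, 0.25], total_power=3.0)
print([round(p, 3) for p in powers])  # [1.75, 1.25, 0.0]
```

The "pour water over the inverse gains" picture also explains why the secrecy-capacity maximization over input covariances in the frequency domain decomposes per subchannel.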
Bopardikar, S.D.; Speranzon, A.; Langbort, C., "Trusted Computation with an Adversarial Cloud," in American Control Conference (ACC), 2015, pp. 2445-2452, 1-3 July 2015. doi: 10.1109/ACC.2015.7171099
Abstract: We consider the problem of computation in a cloud environment where either the data or the computation may be corrupted by an adversary. We assume that a small fraction of the data is stored locally at a client during the upload process to the cloud and that this data is trustworthy. We formulate the problem within a game-theoretic framework where the client must decide an optimal fusion strategy using both non-trusted information from the cloud and local trusted data, given that the adversary on the cloud is trying to deceive the client by biasing the output toward a different value or set of values. We adopt an Iterated Best Response (IBR) scheme in which each player updates its action based on the opponent's announced computation. At each iteration, the cloud reveals its output to the client, who then computes the best response as a linear combination of its private local estimate and the untrusted cloud output. We characterize equilibrium conditions for both the scalar and vector cases of the computed value of interest. Necessary and sufficient conditions for convergence of the IBR are derived, and insightful geometric interpretations of these conditions are discussed for the vector case. Numerical results are presented showing that the convergence conditions are relatively tight.
Keywords: cloud computing; game theory; geometry; iterative methods; optimisation; security of data; trusted computing; vectors; IBR scheme; adversarial cloud computing; game theoretic framework; geometric interpretation; iterated best response; optimal fusion strategy; trusted computation; vector case; Algorithm design and analysis; Convergence; Cost function; Games; Protocols; Random variables; Security; Adversarial Machine Learning; Equilibrium; Game theory; Trusted Computation (ID#: 15-8870)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7171099&isnumber=7170700
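The iterated-best-response loop is generic enough to sketch with a toy quadratic game whose best responses are linear. The cost functions below are invented for illustration, not the paper's client-cloud model; each player alternately best-responds to the other's announced action until a fixed point:

```python
def iterated_best_response(br_client, br_adversary, x0, y0, iters=100):
    """Alternate best responses; with contractive linear responses the
    pair converges to the game's Nash equilibrium."""
    x, y = x0, y0
    for _ in range(iters):
        x = br_client(y)     # client fuses given the announced cloud output
        y = br_adversary(x)  # adversary re-biases given the client's action
    return x, y

# Toy quadratic costs (illustrative):
#   client minimizes    (x - 1)^2 + 0.5*(x - y)^2  ->  x = (1 + 0.5*y) / 1.5
#   adversary minimizes (y - 4)^2 + 0.5*(y - x)^2  ->  y = (4 + 0.5*x) / 1.5
x, y = iterated_best_response(lambda y: (1 + 0.5 * y) / 1.5,
                              lambda x: (4 + 0.5 * x) / 1.5,
                              x0=0.0, y0=0.0)
print(round(x, 6), round(y, 6))  # 1.75 3.25
```

Here the composed update is a contraction (slope 1/9), so IBR converges regardless of the starting point; the paper's convergence conditions play exactly this role in the vector client-cloud setting.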
Aghdam, S.R.; Duman, T.M.; Di Renzo, M., "On Secrecy Rate Analysis of Spatial Modulation and Space Shift Keying," in Communications and Networking (BlackSeaCom), 2015 IEEE International Black Sea Conference on, pp. 63-67, 18-21 May 2015. doi: 10.1109/BlackSeaCom.2015.7185087
Abstract: Spatial modulation (SM) and space shift keying (SSK) represent transmission methods for low-complexity implementation of multiple-input multiple-output (MIMO) wireless systems in which antenna indices are employed for data transmission. In this paper, we focus our attention on the secrecy behavior of SSK and SM. Using an information-theoretic framework, we derive expressions for the mutual information and consequently compute achievable secrecy rates for SSK and SM via numerical evaluations. We also characterize the secrecy behavior of SSK by showing the effects of increasing the number of antennas at the transmitter as well as the number of antennas at the legitimate receiver and the eavesdropper. We further evaluate the secrecy rates achieved by SM with different sizes of the underlying signal constellation and compare the secrecy performance of this scheme with those of general MIMO and SIMO systems. The proposed framework unveils that SM is capable of achieving higher secrecy rates than conventional single-antenna transmission schemes. However, it underperforms a general MIMO system in terms of the achievable secrecy rates.
Keywords: MIMO communication; antenna arrays; information theory; modulation; receiving antennas; transmitting antennas; MIMO wireless system; SIMO system; SM; SSK; antenna index; data transmission; eavesdropper; information-theoretic framework; multiple-input multiple-output wireless system; mutual information; receiving antenna; secrecy behavior; secrecy rate analysis; signal constellation; single-antenna transmission scheme; space shift keying; spatial modulation; transmitting antenna; MIMO; Modulation; Mutual information; Receiving antennas; Signal to noise ratio; Transmitting antennas; MIMO wiretap channel; Physical layer security; secrecy capacity; space shift keying; spatial modulation (ID#: 15-8871)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7185087&isnumber=7185069
Kamhoua, Charles; Martin, Andrew; Tosh, Deepak K.; Kwiat, Kevin A.; Heitzenrater, Chad; Sengupta, Shamik, "Cyber-Threats Information Sharing in Cloud Computing: A Game Theoretic Approach," in Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, pp. 382-389, 3-5 Nov. 2015. doi: 10.1109/CSCloud.2015.80
Abstract: Cybersecurity is among the highest priorities in industries, academia and governments. Cyber-threats information sharing among different organizations has the potential to maximize vulnerabilities discovery at a minimum cost. Cyber-threats information sharing has several advantages. First, it diminishes the chance that an attacker exploits the same vulnerability to launch multiple attacks in different organizations. Second, it reduces the likelihood an attacker can compromise an organization and collect data that will help him launch an attack on other organizations. Cyberspace has numerous interconnections and critical infrastructure owners are dependent on each other's service. This well-known problem of cyber interdependency is aggravated in a public cloud computing platform. The collaborative effort of organizations in developing a countermeasure for a cyber-breach reduces each firm's cost of investment in cyber defense. Despite its multiple advantages, there are costs and risks associated with cyber-threats information sharing. When a firm shares its vulnerabilities with others there is a risk that these vulnerabilities are leaked to the public (or to attackers) resulting in loss of reputation, market share and revenue. Therefore, in this strategic environment the firms committed to share cyber-threats information might not truthfully share information due to their own self-interests. Moreover, some firms acting selfishly may rationally limit their cybersecurity investment and rely on information shared by others to protect themselves. This can result in under investment in cybersecurity if all participants adopt the same strategy. This paper will use game theory to investigate when multiple self-interested firms can invest in vulnerability discovery and share their cyber-threat information. We will apply our algorithm to a public cloud computing platform as one of the fastest growing segments of the cyberspace.
Keywords: Cloud computing; Computer security; Games; Information management; Organizations; Virtual machine monitors; cloud computing; cybersecurity; game theory; information sharing (ID#: 15-8872)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371511&isnumber=7371418
Qiaosheng Zhang; Kadhe, S.; Bakshi, M.; Jaggi, S.; Sprintson, A., "Talking Reliably, Secretly, and Efficiently: A “Complete” Characterization," in Information Theory Workshop (ITW), 2015 IEEE, pp. 1-5, April 26 2015-May 1 2015. doi: 10.1109/ITW.2015.7133143
Abstract: We consider reliable and secure communication of information over a multipath network. A transmitter Alice sends messages to the receiver Bob in the presence of a hidden adversary Calvin. The adversary Calvin can both eavesdrop and jam on (possibly non-identical) subsets of transmission links. The goal is to communicate reliably (intended receiver can understand the messages) and secretly (adversary cannot understand the messages). Two kinds of jamming, additive and overwrite, are considered. Additive jamming corresponds to wireless network model while overwrite jamming corresponds to wired network model and storage systems. The multipath network consists of C parallel links. Calvin can both jam and eavesdrop any zio number of links, can eavesdrop (but not jam) any zi/o number of links, and can jam (but not eavesdrop) any zo/i number of links. We present the first “complete” information-theoretic characterization of maximum achievable rate as a function of the number of links that can be jammed and/or eavesdropped for equal and unequal link capacity multipath networks under additive and overwrite jamming in the large alphabet regime. Our achievability and converse proofs require non-trivial combination of information theoretic and coding theoretic ideas and our achievability schemes are computationally efficient. The PHaSE-Saving techniques are used for achievability while a “stochastic” singleton bound is obtained for converse.
Keywords: jamming; network coding; radio networks; telecommunication security; C parallel links; PHaSE-Saving techniques; additive jamming; coding theory; communication security; first complete information-theoretic characterization; hidden adversary Calvin; overwrite jamming; stochastic singleton bound; storage systems; transmission links; unequal link capacity multipath networks; wired network model; wireless network model; Additives; Computer hacking; Computers; Decoding; Jamming; Reliability theory (ID#: 15-8873)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133143&isnumber=7133075
Rontidis, G.; Panaousis, E.; Laszka, A.; Dagiuklas, T.; Malacaria, P.; Alpcan, T., "A Game-Theoretic Approach for Minimizing Security Risks in the Internet-of-Things," in Communication Workshop (ICCW), 2015 IEEE International Conference on, pp. 2639-2644, 8-12 June 2015. doi: 10.1109/ICCW.2015.7247577
Abstract: In the Internet-of-Things (IoT), users might share part of their data with different IoT prosumers, which offer applications or services. Within this open environment, the existence of an adversary introduces security risks. These can be related, for instance, to the theft of user data, and they vary depending on the security controls that each IoT prosumer has put in place. To minimize such risks, users might seek an “optimal” set of prosumers. However, assuming the adversary has the same information as the users about the existing security measures, he can then deduce which prosumers will be preferable (e.g., with the highest security levels) and attack them more intensively. This paper proposes a decision-support approach that minimizes security risks in the above scenario. We propose a non-cooperative, two-player game entitled the Prosumers Selection Game (PSG). The Nash equilibria of PSG determine subsets of prosumers that optimize users' payoffs. We refer to any game solution as the Nash Prosumers Selection (NPS), which is a vector of probabilities over subsets of prosumers. We show that when using NPS, a user faces the least expected damage. Additionally, we show that under NPS every prosumer, even the least secure one, is selected with some non-zero probability. We have also performed simulations comparing NPS against two different heuristic selection algorithms; NPS proves to be approximately 38% more effective in terms of security-risk mitigation.
Keywords: Internet of Things; game theory; security of data; Internet of Things; Nash equilibrium; Nash prosumers selection; decision support; game theory; noncooperative game; optimal prosumer set; prosumers selection Game; security risk minimization; two player game; user data theft; Cascading style sheets; Conferences; Game theory; Games; Internet of things; Security; Silicon (ID#: 15-8874)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7247577&isnumber=7247062
Narayan, P.; Tyagi, H.; Watanabe, S., "Common Randomness for Secure Computing," in Information Theory (ISIT), 2015 IEEE International Symposium on, pp. 949-953, 14-19 June 2015. doi: 10.1109/ISIT.2015.7282595
Abstract: We revisit A.C. Yao's classic problem of secure function computation by interactive communication, in an information theoretic setting. Our approach, based on examining the underlying common randomness, provides a new proof of the characterization of a securely computable function by deterministic protocols. This approach also yields a characterization of the minimum communication needed for secure computability.
Keywords: cryptographic protocols; common randomness; deterministic protocols; information theory; interactive communication; minimum communication characterization; secure computing; secure function computation; Complexity theory; Cryptography; Entropy; Information theory; Joints; Protocols; Common randomness; maximum common function; recoverability; secure computing; security (ID#: 15-8875)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282595&isnumber=7282397
Chenguang Zhang; Zeqing Yao, "A Game Theoretic Model of Targeting in Cyberspace," in Estimation, Detection and Information Fusion (ICEDIF), 2015 International Conference on, pp. 335-339, 10-11 Jan. 2015. doi: 10.1109/ICEDIF.2015.7280218
Abstract: Targeting is fundamental work in cyberspace operational planning. This paper investigates the basic tradeoffs and decision processes involved in cyber targeting and proposes a simple game theoretic model for cyberspace targeting to support operational planning. An optimal targeting strategy decision algorithm applying the game theoretic model is then developed. The key capability of this game theoretic model is its ability to predict equilibrium. The paper ends with an example showing how the game theoretic model supports targeting decision-making, demonstrating the simplicity and effectiveness of this decision-making model.
Keywords: Internet; decision making; game theory; security of data; cyber targeting; cyberspace operational plan; cyberspace targeting; decision process; decision-making; game theoretic model; optimal targeting strategy decision algorithm; Analytical models; Biology; Cyberspace; Decision making; Games; Lead; Terrorism; cyberspace; targeting; zero-sum games (ID#: 15-8876)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7280218&isnumber=7280146
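The entry above models targeting as a zero-sum game whose key output is an equilibrium prediction. As a minimal illustration of equilibrium computation (not the paper's actual model, and with hypothetical payoff values), the mixed-strategy Nash equilibrium of a 2x2 zero-sum game has a closed form:

```python
from fractions import Fraction

def mixed_equilibrium_2x2(a, b, c, d):
    """Row player's equilibrium probability of playing the first strategy
    in a 2x2 zero-sum game with row-player payoff matrix [[a, b], [c, d]].
    Assumes no saddle point (no pure-strategy equilibrium exists)."""
    denom = Fraction(a - b - c + d)
    p = Fraction(d - c) / denom              # P(row plays strategy 1)
    value = Fraction(a * d - b * c) / denom  # game value to the row player
    return p, value

# Matching-pennies-style example: attacker vs. defender over two targets.
p, v = mixed_equilibrium_2x2(1, -1, -1, 1)  # p = 1/2, value = 0
```

The indifference condition (the opponent gets the same payoff against either pure strategy) is what pins down `p`; the same idea generalizes to larger games via linear programming.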
Marina, Ninoslav; Velkoska, Aneta; Paunkoska, Natasha; Baleski, Ljupcho, "Security in Twin-Code Framework," in Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2015 7th International Congress on, pp. 247-252, 6-8 Oct. 2015. doi: 10.1109/ICUMT.2015.7382437
Abstract: Achieving reliability, availability, efficient node repair and security of the data stored in a Distributed Storage System (DSS) is of great importance for improving the functioning of these systems. In this work, we apply a data distribution concept, the Twin-code framework, and compare it with a DSS that uses minimum bandwidth regenerating (MBR) and minimum storage regenerating (MSR) codes. We demonstrate that the Twin-code framework gives better performance in the distribution process. Moreover, we construct a new secure Twin MDS code and investigate its security performance compared to that of the MBR and MSR codes. The newly constructed code is resistant against a threat model in which a passive eavesdropper can access the stored data and the data downloaded during the repair process of a failed node. We demonstrate that the Twin MDS code framework achieves better results than the MBR and MSR codes regarding security in the system.
Keywords: Bandwidth; Decision support systems; Linear codes; Maintenance engineering; Peer-to-peer computing; Reliability; Security; distributed storage system (DSS); eavesdropper; information-theoretic secrecy; security; twin-code framework (ID#: 15-8877)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7382437&isnumber=7382391
Cheuk Ting Li; El Gamal, A., "Maximal Correlation Secrecy," in Information Theory (ISIT), 2015 IEEE International Symposium on, pp. 2939-2943, 14-19 June 2015. doi: 10.1109/ISIT.2015.7282995
Abstract: This paper shows that the maximal correlation between the message and the ciphertext provides good secrecy guarantees for ciphers with short keys. We show that a small maximal correlation ρ can be achieved via a randomly generated cipher with key length ≈ 2 log(1/ρ), independent of the message length, and by a stream cipher with key length ≈ 2 log(1/ρ)+log n for a message of length n. We provide a converse result showing that these ciphers are close to optimal. We then show that any cipher with a small maximal correlation achieves a variant of semantic security with computationally unbounded adversary, similar to entropic security proposed by Russell and Wang. Finally, we show that a small maximal correlation implies secrecy with respect to several mutual information based criteria but is not necessarily implied by them.
Keywords: cryptography; graph theory; ciphertext; information theoretic secrecy; maximal correlation secrecy; semantic security; stream ciphers; Ciphers; Correlation; Encryption; Graph theory; Mutual information; Hirschfeld-Gebelein-Rényi maximal correlation; Information-theoretic secrecy; expander graph; stream cipher (ID#: 15-8878)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282995&isnumber=7282397
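The key-length bounds quoted in the abstract above are easy to evaluate numerically. A small sketch, assuming base-2 logarithms and rounding up to whole bits (the constant factors follow the abstract's stated approximations):

```python
import math

def random_cipher_key_bits(rho):
    """Approximate key length achieving maximal correlation rho with a
    randomly generated cipher: about 2*log(1/rho), independent of n."""
    return math.ceil(2 * math.log2(1 / rho))

def stream_cipher_key_bits(rho, n):
    """Stream-cipher variant: about 2*log(1/rho) + log(n) for a
    message of length n."""
    return math.ceil(2 * math.log2(1 / rho) + math.log2(n))

k1 = random_cipher_key_bits(2 ** -10)        # rho = 2^-10 -> 20 bits
k2 = stream_cipher_key_bits(2 ** -10, 2 ** 20)  # adds log2(n) = 20 bits
```

Note how weak the dependence on the message length is: halving rho costs two extra key bits, while doubling n costs one.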
Shintre, S.; Gligor, V.; Barros, J., "Optimal Strategies for Side-Channel Leakage in FCFS Packet Schedulers," in Information Theory (ISIT), 2015 IEEE International Symposium on, pp. 2515-2519, 14-19 June 2015. doi: 10.1109/ISIT.2015.7282909
Abstract: We examine the side-channel information leakage in first-come-first-serve (FCFS) packet schedulers. In this setup, an attacker aims to learn the packet arrival pattern of a private user that shares a FCFS packet scheduler with him, using the queuing delay information of his own packets. Under an information-theoretic metric for information leakage, we identify the optimal non-adaptive strategy for a given average probe rate of the attacker and report up to a 1000% increase in information leakage compared to the attack strategy analyzed in the literature with the same average probe rate. The search for optimal strategies is reduced to linear programming, implying that the discovery of such strategies is in the domain of a real-world attacker.
Keywords: Internet; computer network security; telecommunication network management; telecommunication network routing; telecommunication traffic; FCFS packet schedulers; first-come-first-serve; information theoretic metric; linear programming; optimal nonadaptive strategy; optimal strategies; packet arrival pattern; queuing delay information; side channel information leakage; Delays; Entropy; Privacy; Probes; Scheduling algorithms; Security (ID#: 15-8879)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282909&isnumber=7282397
Rui Zhang; Quanyan Zhu, "Secure and Resilient Distributed Machine Learning under Adversarial Environments," in Information Fusion (Fusion), 2015 18th International Conference on, pp. 644-651, 6-9 July 2015. doi: (not provided)
Abstract: With a large number of sensors and control units in networked systems, decentralized computing algorithms play a key role in scalable and efficient data processing for detection and estimation. The well-known algorithms are vulnerable to adversaries who can modify and generate data to deceive the system into misclassifying or misestimating the information from the distributed data processing. This work aims to develop secure, resilient and distributed machine learning algorithms for adversarial environments. We establish a game-theoretic framework to capture the conflicting interests between the adversary and a set of distributed data processing units. The Nash equilibrium of the game allows predicting the outcome of learning algorithms in adversarial environments, and enhancing the resilience of machine learning through dynamic distributed learning algorithms. We use the Spambase dataset to illustrate and corroborate our results.
Keywords: distributed processing; game theory; learning (artificial intelligence); sensors; Nash equilibrium; Spambase Dataset; adversarial environments; decentralized computing algorithms; distributed data processing units; distributed machine learning algorithms; dynamic distributed learning algorithm; game-theoretic framework; information misclassification; information misestimation; networked systems; sensors; Games; Heuristic algorithms; Machine learning algorithms; Security; Training; Training data (ID#: 15-8880)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7266621&isnumber=7266535
Integrity of Outsourced Databases 2015 |
The growth of distributed storage systems such as the Cloud has produced novel security problems. The works cited here address untrusted servers, generic trusted data, trust extension on commodity computers, defense against frequency-based attacks in wireless networks, and other topics. For the Science of Security community, these topics relate to composability, metrics, and resilience. The work cited here was presented in 2015.
Azraoui, M.; Elkhiyaoui, K.; Onen, M.; Molva, R., "Publicly Verifiable Conjunctive Keyword Search in Outsourced Databases," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 619-627, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346876
Abstract: Recent technological developments in cloud computing and the ensuing commercial appeal have encouraged companies and individuals to outsource their storage and computations to powerful cloud servers. However, the challenge when outsourcing data and computation is to ensure that the cloud servers comply with their advertised policies. In this paper, we focus in particular on the scenario where a data owner wishes to (i) outsource its public database to a cloud server; (ii) enable anyone to submit multi-keyword search queries to the outsourced database; and (iii) ensure that anyone can verify the correctness of the server's responses. To meet these requirements, we propose a solution that builds upon the well-established techniques of Cuckoo hashing, polynomial-based accumulators and Merkle trees. The key idea is to (i) build an efficient index for the keywords in the database using Cuckoo hashing; (ii) authenticate the resulting index using polynomial-based accumulators and Merkle tree; (iii) and finally, use the root of the Merkle tree to verify the correctness of the server's responses. Thus, the proposed solution yields efficient search and verification and incurs a constant storage at the data owner. Furthermore, we show that it is sound under the strong bilinear Diffie-Hellman assumption and the security of Merkle trees.
Keywords: authorisation; cloud computing; cryptography; database management systems; formal verification; polynomials; query processing; tree data structures; trees (mathematics); Merkle trees security; bilinear Diffie-Hellman assumption; cloud computing; cloud servers; cuckoo hashing; multikeyword search queries; outsourced databases; outsourcing computation; outsourcing data; polynomial-based accumulators; public conjunctive keyword search verifiability; resulting index authentication; server response correctness verification; Cloud computing; Databases; Erbium; Keyword search; Public key; Servers (ID#: 15-8765)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346876&isnumber=7346791
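The verification step described in the abstract above hinges on a Merkle tree: the data owner keeps only the root, and any returned result can be checked against an inclusion proof. A minimal sketch of that building block (SHA-256 here; the paper's full construction also uses Cuckoo hashing and polynomial-based accumulators, omitted):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a Merkle tree over leaf byte strings (last node is
    duplicated when a level has odd length)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes needed to recompute the root from leaves[index]."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from a leaf and its sibling path."""
    acc = h(leaf)
    for sibling, acc_is_left in proof:
        acc = h(acc + sibling) if acc_is_left else h(sibling + acc)
    return acc == root

docs = [b"alpha", b"bravo", b"charlie", b"delta"]
root = merkle_root(docs)
proof = inclusion_proof(docs, 2)  # proof that b"charlie" is in the set
```

The owner's storage is constant (one root hash) regardless of database size, which is the efficiency property the abstract claims.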
Hyunjo Lee; MunChol Choi; Jae-Woo Chang, "A Group Order-Preserving Encryption Scheme Based on Periodic Functions for Efficient Query Processing on Encrypted Data," in High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, pp. 923-923, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.275
Abstract: To preserve the private information in an outsourced database, it is important to encrypt the database. It is also necessary to provide a query processing scheme that works without decrypting the encrypted data. For this, we propose a group order-preserving data encryption scheme based on periodic functions (GOPES). Our GOPES generates encryption signatures based on data groups and periodic functions. With this, we can guarantee data privacy.
Keywords: cryptography; query processing; GOPES; data privacy; encrypted data; encryption signatures; group order-preserving encryption scheme; outsourced database; periodic functions; query processing; Conferences; Data privacy; Encryption; Query processing; data privacy protection; database outsourcing; encrypted query processing; group order-preserving; order-preserving encryption scheme (ID#: 15-8766)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336287&isnumber=7336120
Talha, Ayesha M.; Kamel, Ibrahim; Aghbari, Zaher Al, "Enhancing Confidentiality and Privacy of Outsourced Spatial Data," in Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, pp. 13-18, 3-5 Nov. 2015. doi: 10.1109/CSCloud.2015.39
Abstract: The increase of spatial data has led organizations to upload their data onto third-party service providers. Cloud computing allows data owners to outsource their databases, eliminating the need for costly storage and computational resources. The main challenge is maintaining data confidentiality with respect to untrusted parties as well as providing efficient and accurate query results to the authenticated users. We propose a dual transformation scheme on the spatial database to overcome this problem, while the service provider executes queries and returns results to the users. First, our approach utilizes the space-filling Hilbert curve to map each spatial point in the multidimensional space to a one-dimensional space. This space transformation method is easy to compute and preserves the spatial proximity. Next, the order-preserving encryption algorithm is applied to the clustered data. The user issues spatial range queries to the service provider on the encrypted Hilbert index and then uses a secret key to decrypt the query response returned. This allows data protection and reduces the query communication cost between the user and service provider.
Keywords: Encryption; Indexes; Servers; Spatial databases; Database Security; Order-Preserving Encryption; Outsourced Database; Space-Filling Curves; Spatial Data (ID#: 15-8767)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371432&isnumber=7371418
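The space transformation step in the entry above maps each 2-D point to its position along a Hilbert curve; order-preserving encryption is then applied to those 1-D indices. A sketch of the standard iterative xy-to-distance mapping (the classic textbook algorithm; the paper's exact curve order and parameters are not specified):

```python
def hilbert_xy2d(n, x, y):
    """Map point (x, y) on an n x n grid (n a power of two) to its
    distance along the Hilbert curve. Nearby points tend to get
    nearby indices, which is why the curve preserves spatial proximity."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect the quadrant so the recursion stays consistent.
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d

# First-order curve on a 2x2 grid visits (0,0), (0,1), (1,1), (1,0).
order = [hilbert_xy2d(2, x, y) for x, y in [(0, 0), (0, 1), (1, 1), (1, 0)]]
```

A range query on the original space becomes a set of interval queries on the 1-D indices, which the encrypted index can answer.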
Sepehri, M.; Cimato, S.; Damiani, E.; Yeuny, C.Y., "Data Sharing on the Cloud: A Scalable Proxy-Based Protocol for Privacy-Preserving Queries," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 1357-1362, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.530
Abstract: Outsourcing data on the cloud poses many challenges related to data owners' and users' privacy, especially when some data processing capabilities are delegated to the cloud infrastructure. In this paper we address the problem of executing privacy-preserving equality queries in a scenario where multiple data owners outsource their databases to an untrusted cloud service provider, which accepts encrypted queries coming from authorized users. We propose a highly scalable proxy re-encryption scheme so that (i) the cloud service provider can return only the encrypted data that satisfies the user's query, without decrypting it, and (ii) the encrypted results can be decrypted using the user's key. We analyze the computational efficiency and the security of the scheme against the proxy under the standard Diffie-Hellman assumption, also reporting some experimental results, which show an encouraging speed-up in comparison with previously proposed similar schemes.
Keywords: cloud computing; cryptographic protocols; outsourcing; private key cryptography; public key cryptography; query processing; trusted computing; authorized users; cloud infrastructure; computation efficiency analysis; data decryption; data owners; data processing capabilities; data sharing; database outsourcing; encrypted data; encrypted queries; privacy-preserving equality queries; proxy re-encryption scheme; scalable proxy-based protocol; security analysis; standard Diffie-Hellman analysis; untrusted cloud service provider; user key; user privacy; Cloud computing; Data models; Data privacy; Encryption; Protocols; Servers (ID#: 15-8768)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345438&isnumber=7345233
Baghel, S.V.; Theng, D.P., "A Survey for Secure Communication of Cloud Third Party Authenticator," in Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, pp. 51-54, 26-27 Feb. 2015. doi: 10.1109/ECS.2015.7124959
Abstract: Cloud computing is an information technology in which users can remotely store their outsourced data and enjoy on-demand, high-quality applications and services from configurable resources. By exchanging data with the cloud, users can be relieved of the burden of local data storage and protection. Thus, enabling publicly available auditability for cloud data storage is important so that users get the chance to check data integrity through an external audit party. To securely establish an efficient third party auditor (TPA), two primary requirements must be met: 1) the TPA should be able to audit outsourced data without demanding a local copy of the user's outsourced data; 2) the TPA process should not introduce new threats to user data privacy. To achieve these goals, this system provides a solution that uses Kerberos as a third party auditor/authenticator, the RSA algorithm for secure communication, and the MD5 algorithm to verify data integrity; data centers are used for storing data on the cloud in an effective manner within a secured environment, and multilevel security is provided for the database.
Keywords: authorisation; cloud computing; computer centres; data integrity; data protection; outsourcing; public key cryptography; MD5 algorithm; RSA algorithm; TPA; cloud third party authenticator; data centers; data integrity; data outsourcing; external audit party; information data exchange; information technology; local data protection; local data storage; multilevel security; on demand high quality application; on demand services; secure communication; third party auditor; user data privacy; user outsourced data; Algorithm design and analysis; Authentication; Cloud computing; Heuristic algorithms; Memory; Servers; Cloud Computing; Data center; Multilevel database; Public Auditing; Third Party Auditor (ID#: 15-8769)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7124959&isnumber=7124722
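The MD5 integrity check in the design above can be pictured with a toy round: the owner records a digest before outsourcing, and the auditor later compares it against a digest recomputed over what the server returns. This sketch glosses over the remote-audit and Kerberos/RSA aspects of the actual system:

```python
import hashlib

def digest(data: bytes) -> str:
    """MD5 digest the owner records before outsourcing a data block."""
    return hashlib.md5(data).hexdigest()

def audit(stored_digest: str, server_data: bytes) -> bool:
    """Audit step: recompute the digest over the server's copy and
    compare it with the owner's recorded value."""
    return hashlib.md5(server_data).hexdigest() == stored_digest

block = b"outsourced-record-2015"
owner_digest = digest(block)

ok = audit(owner_digest, block)               # intact block passes
tampered = audit(owner_digest, block + b"!")  # modified block fails
```

In a real deployment the auditor would challenge the server to compute digests over randomly selected blocks rather than fetch full data, which is what makes auditing without a local copy feasible.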
Mohammed, N.; Barouti, S.; Alhadidi, D.; Rui Chen, "Secure and Private Management of Healthcare Databases for Data Mining," in Computer-Based Medical Systems (CBMS), 2015 IEEE 28th International Symposium on, pp. 191-196, 22-25 June 2015. doi: 10.1109/CBMS.2015.54
Abstract: There has been a tremendous growth in health data collection since the development of Electronic Medical Record (EMR) systems. Such collected data is further shared and analyzed for diverse purposes. Despite many benefits, data collection and sharing have become a big concern as it threatens individual privacy. In this paper, we propose a secure and private data management framework that addresses both the security and privacy issues in the management of medical data in outsourced databases. The proposed framework ensures the security of data by using semantically-secure encryption schemes to keep data encrypted in outsourced databases. The framework also provides a differentially-private query interface that can support a number of SQL queries and complex data mining tasks. We experimentally evaluate the performance of the proposed framework, and the results show that the proposed framework is practical and has low overhead.
Keywords: data mining; electronic health records; health care; records management; security of data; EMR system; SQL query; data mining; differentially-private query interface; electronic medical record; health data collection; healthcare database private management; healthcare database security; medical data management; semantically-secure encryption scheme; Algorithm design and analysis; Cryptography; Databases; Medical services; Privacy; Protocols; Servers; Data sharing; Differential privacy; Electronic medical record; Privacy (ID#: 15-8770)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167484&isnumber=7167433
Dingxing Xie; Yanchao Lu; Congjin Du; Jie Li; Li Li, "Secure Range Query Based on Spatial Index," in Industrial Networks and Intelligent Systems (INISCom), 2015 1st International Conference on, pp. 1-6, 2-4 March 2015. doi: 10.4108/icst.iniscom.2015.258364
Abstract: Sensor networks have recently become an increasingly attractive and advantageous subject. More and more demands for data storage and data query have been raised in soft-defined sensor networks. Bonnet et al. [1] investigated the problem of databases in sensor networks. In most such scenarios, data is stored on a server instead of locally. For this reason, data security [2] is very important. While encryption of outsourced data protects against many privacy threats, it cannot hide the access patterns of the users. Protecting user information from leakage or attackers while guaranteeing high query efficiency is an important problem of concern. In this paper, we discuss secure range queries based on a spatial index. We build the spatial index on the client instead of the server to keep the information away from potential threats. While keeping query efficiency high, we not only encrypt the data but also hide the access patterns. That greatly reduces the risk of data leakage. Simulations show our design to be practicable and effective.
Keywords: cryptography; data privacy; query processing; software defined networking; storage management; data query; data security; data storage; encryption; outsourced data; privacy threat; secure range query; soft-defined sensor network; spatial index; user information protection; Cryptography; Random access memory; Servers; Spatial databases; Spatial indexes; Data Security; Database; Range Query; Sensor network; Spatial Data (ID#: 15-8771)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7157814&isnumber=7157808
Nguyen-Vu, Long; Park, Minho; Park, Jungsoo; Jung, Souhwan, "Privacy Enhancement for Data Outsourcing," in Information and Communication Technology Convergence (ICTC), 2015 International Conference on, pp. 335-338, 28-30 Oct. 2015. doi: 10.1109/ICTC.2015.7354558
Abstract: The demand for storing and processing data online grows quickly to adapt to the rapid change of business. This could lead to a crisis if the cloud service provider is compromised and user data is exposed to attackers in plaintext. In this paper, we introduce a practical scheme that dynamically protects and outsources data on demand, and propose a corresponding architecture to securely process data at the Database Service Provider. After studying over 1300 database models, we believe this scheme can be applied in production with justifiable results.
Keywords: Databases; Digital signal processing; Encryption; Outsourcing; Servers; Yttrium; Cloud Privacy; Data Outsourcing; Database as a Service; Information Security (ID#: 15-8772)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7354558&isnumber=7354472
Sarada, G.; Abitha, N.; Manikandan, G.; Sairam, N., "A Few New Approaches for Data Masking," in Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, pp. 1-4, 19-20 March 2015. doi: 10.1109/ICCPCT.2015.7159301
Abstract: In today's information era, data is a key asset for any organization. Every organization has a privacy policy for hiding the data in its database, but when it outsources that data to a third party for analysis purposes, no security measures are taken to prevent the data from being misused. Data security plays a vital role in industry, and one way to achieve security is to use data masking. The main objective of data masking is to hide sensitive data from the outside world. In this paper we propose a few approaches to hide sensitive data from being accessed by unauthorized users.
Keywords: authorisation; data encapsulation; data privacy; fuzzy logic; minimax techniques; data masking; data outsourcing; data privacy policy; data security; sensitive data hiding; Algorithm design and analysis; Computers; Data privacy; Databases; Encryption; Organizations; Fuzzy; Map range; Masking; Min-Max normalization; Rail-fence (ID#: 15-8773)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7159301&isnumber=7159156
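One of the keywords in the entry above, Min-Max normalization, is a standard masking transform: each value in a column is rescaled into [0, 1], so a third-party analyst sees the relative structure of the data but not the raw magnitudes. A minimal sketch of this one technique (illustrative only; the paper also proposes fuzzy and rail-fence based approaches):

```python
def min_max_mask(values):
    """Mask a numeric column by min-max normalization into [0, 1]:
    masked = (v - min) / (max - min)."""
    lo, hi = min(values), max(values)
    if lo == hi:
        return [0.0] * len(values)  # constant column carries no spread
    return [(v - lo) / (hi - lo) for v in values]

salaries = [30000, 45000, 60000]
masked = min_max_mask(salaries)  # [0.0, 0.5, 1.0]; raw values hidden
```

Rank order and relative spacing survive the transform, which is why masked data can still support analysis tasks such as clustering.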
Rahulamathavan, Y.; Rajarajan, M., "Hide-and-Seek: Face Recognition in Private," in Communications (ICC), 2015 IEEE International Conference on, pp. 7102-7107, 8-12 June 2015. doi: 10.1109/ICC.2015.7249459
Abstract: The recent trend towards cloud computing and outsourcing has led to the requirement for face recognition (FR) to be performed remotely by third-party servers. When outsourcing the FR, the client's test image and classification result will be revealed to the servers. Within this context, we propose a novel privacy-preserving (PP) FR algorithm based on randomization. Existing PP FR algorithms are based on homomorphic encryption (HE), which requires higher computational power and communication bandwidth. Since we use randomization, the proposed algorithm outperforms the HE-based algorithm in terms of computational and communication complexity. We validated our algorithm using the popular ORL database. Experimental results demonstrate that the accuracy of the proposed algorithm is the same as that of existing algorithms, while improving the computational efficiency by 120 times and the communication complexity by 2.5 times compared with the existing HE-based approach.
Keywords: communication complexity; cryptography; data privacy; face recognition; image classification; ORL database; cloud computing; communication complexity; computational complexity; face recognition; hide-and-seek; homomorphic encryption; image classification; outsourcing; privacy-preserving FR algorithm; third-party servers; Accuracy; Algorithm design and analysis; Complexity theory; Noise; Security; Servers; Training (ID#: 15-8774)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7249459&isnumber=7248285
Hoang Giang Do; Wee Keong Ng, "Privacy-Preserving Approach for Sharing and Processing Intrusion Alert Data," in Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1-6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106911
Abstract: Amplified and disruptive cyber-attacks might lead to severe security incidents with drastic consequences such as large property damage, sensitive information breach, or even disruption of the national economy. While traditional intrusion detection and prevention systems might successfully detect low or moderate levels of attack, cooperation among different organizations is necessary to defend against multi-stage and large-scale cyber-attacks. Correlating intrusion alerts from a shared database of multiple sources provides security analysts with succinct and high-level patterns of cyber-attacks - a powerful tool for combating sophisticated attacks. However, sharing intrusion alert data raises a significant privacy concern among data holders, since publishing this information carries a risk of exposing other sensitive information such as intranet topology, network services, and the security infrastructure. This paper discusses possible cryptographic approaches to tackle this issue. Organizations can encrypt their intrusion alert data to protect data confidentiality and outsource it to a shared server to reduce the cost of storage and maintenance, while at the same time benefiting from a larger source of information for the alert correlation process. Two privacy-preserving alert correlation techniques are proposed under the semi-honest model. These methods are based on attribute similarity and prerequisite/consequence conditions of cyber-attacks.
Keywords: cryptography; data privacy; intranets; cryptographic approach; cyber-attacks; intranet topology; intrusion alert data processing; intrusion alert data sharing; large-scale cyber-attacks; network services; privacy-preserving approach; security infrastructure; Encryption; Sensors (ID#: 15-8775)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106911&isnumber=7106892
Andreoli, Andrea; Ferretti, Luca; Marchetti, Mirco; Colajanni, Michele, "Enforcing Correct Behavior without Trust in Cloud Key-Value Databases," in Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, pp. 157-164, 3-5 Nov. 2015. doi: 10.1109/CSCloud.2015.51
Abstract: Traditional computation outsourcing and modern cloud computing are affected by a common risk of distrust between service requestor and service provider. We propose a novel protocol, named Probus, that offers guarantees of correct behavior to both parties without assuming any trust relationship between them in the context of cloud-based key-value databases. Probus allows a service requestor to have evidence of cloud provider misbehavior on its data, and a cloud provider to defend itself from false accusations by demonstrating the correctness of its operations. Accusation and defense proofs are based on cryptographic mechanisms that can be verified by a third party. Probus improves the state-of-the-art by introducing novel solutions that allow for efficient verification of data security properties and by limiting the overhead required to provide its security guarantees. Thanks to Probus it is possible to check the correctness of all the results generated by a cloud service, thus improving on the weaker integrity assurance based on probabilistic verifications that is adopted by related work.
Keywords: Cloud computing; Cryptography; Databases; Metadata; Protocols; cloud services; integrity; key-value database ;trust (ID#: 15-8776)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7371475&isnumber=7371418
Refaie, Rasha; Abd El-Aziz, A.A.; Hamza, Nermin; Mahmood, Mahmood A.; Hefny, Hesham, "A New Efficient Algorithm for Executing Queries over Encrypted Data," in Computing, Communication and Security (ICCCS), 2015 International Conference on, pp. 1-4, 4-5 Dec. 2015. doi: 10.1109/CCCS.2015.7374182
Abstract: Outsourcing databases into the cloud increases the need for data security. Cloud users must be sure that their data will be safe and will not be stolen or reused even if the datacenters are attacked. The service provider is not trusted, so the data must be invisible to it. Executing queries over encrypted data preserves a certain degree of confidentiality. In this paper, we propose an efficient algorithm to run computations on data encrypted for different principals. The proposed algorithm allows users to run queries over encrypted columns directly without decrypting all records.
Keywords: CryptDB; Database security; Homomorphic encryption; MONOMI; Secure indexes; query processing (ID#: 15-8777)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7374182&isnumber=7374113
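Running equality queries directly over encrypted columns, as the entry above describes, is commonly done with deterministic tokens: the same plaintext under the same key always yields the same token, so the server can match rows without ever decrypting them. A hedged sketch using keyed HMAC as an illustrative stand-in (not the paper's scheme; the key name is hypothetical):

```python
import hmac
import hashlib

KEY = b"column-key"  # hypothetical per-column secret held by the client

def eq_token(value: str) -> str:
    """Deterministic token for an equality-searchable encrypted column."""
    return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()

# Client tokenizes the column before outsourcing it.
encrypted_column = [eq_token(v) for v in ["alice", "bob", "alice"]]

# Server answers "WHERE name = 'alice'" by matching tokens, not plaintexts.
query = eq_token("alice")
matches = [i for i, t in enumerate(encrypted_column) if t == query]
```

The trade-off is that determinism leaks equality patterns (the server learns which rows share a value), which is exactly the kind of leakage the CryptDB/MONOMI line of work quantifies.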
Chang Liu; Liehuang Zhu; Jinjun Chen, "Efficient Searchable Symmetric Encryption for Storing Multiple Source Data on Cloud," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 451-458, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.406
Abstract: Cloud computing has greatly facilitated large-scale data outsourcing due to its cost efficiency, scalability and many other advantages. Subsequent privacy risks force data owners to encrypt sensitive data, hence making the outsourced data no longer searchable. Searchable Symmetric Encryption (SSE) is an advanced cryptographic primitive addressing the above issue, which maintains efficient keyword search over encrypted data without disclosing much information to the storage provider. Existing SSE schemes implicitly assume that original user data is centralized, so that a searchable index can be built at once. Nevertheless, especially in cloud computing applications, user-side data centralization is not reasonable, e.g. an enterprise distributes its data in several data centers. In this paper, we propose the notion of Multi-Data-Source SSE (MDS-SSE), which allows each data source to build a local index individually and enables the storage provider to merge all local indexes into a global index afterwards. We propose a novel MDS-SSE scheme, in which an adversary only learns the number of data sources, the number of entire data files, the access pattern and the search pattern, but not any other distribution information such as how data files or search results are distributed over data sources. We offer rigorous security proof of our scheme, and report experimental results to demonstrate the efficiency of our scheme.
Keywords: cloud computing; cryptography; storage management; MDS-SSE scheme; cloud computing; large-scale data outsourcing; multiple source data storage; searchable symmetric encryption; Cloud computing; Distributed databases; Encryption; Indexes; Servers; Cloud Computing; Data Outsourcing; Multiple Data Sources; Searchable Symmetric Encryption (ID#: 15-8778)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345314&isnumber=7345233
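The MDS-SSE workflow described above, where each data source builds a local index and the storage provider merges them into a global one, can be sketched with keyword tokens and inverted indexes. The keys, documents, and helper names below are invented for illustration:

```python
import hmac, hashlib
from collections import defaultdict

KEY = b"shared search key"  # hypothetical key shared by the data sources

def kw_token(key: bytes, word: str) -> str:
    # Keyword trapdoor: the provider sees only opaque tokens
    return hmac.new(key, word.encode(), hashlib.sha256).hexdigest()

def build_local_index(docs: dict) -> dict:
    # Each data source indexes its own files independently
    idx = defaultdict(set)
    for doc_id, words in docs.items():
        for w in words:
            idx[kw_token(KEY, w)].add(doc_id)
    return idx

def merge(indexes) -> dict:
    # The storage provider merges local indexes into a global one
    g = defaultdict(set)
    for idx in indexes:
        for token, ids in idx.items():
            g[token] |= ids
    return g

src_a = build_local_index({"a1": {"cloud", "audit"}})
src_b = build_local_index({"b1": {"cloud"}})
global_idx = merge([src_a, src_b])
print(sorted(global_idx[kw_token(KEY, "cloud")]))  # ['a1', 'b1']
```

The merged index exposes only token-level structure (how many sources, files, and matches exist), mirroring the leakage profile the abstract claims for the scheme.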
Talpur, S.R.; Abdalla, S.; Kechadi, T., "Towards Middleware Security Framework for Next Generation Data Centers Connectivity," in Science and Information Conference (SAI), 2015, pp. 1277-1283, 28-30 July 2015. doi: 10.1109/SAI.2015.7237308
Abstract: Data Center as a Service (DCaaS) offers clients an alternative to operating their own physical data center, and the business community expects these outsourced data centers to run smoothly with full automation. Geographically distributed data centers and their connectivity have a major role in next generation data centers. In order to deploy reliable connections between distributed data centers, SDN-based security and logical firewalls are attractive and desirable. We present a middleware security framework for software defined data center interconnectivity; the proposed security framework is based on learning processes that reduce complexity and manage very large numbers of secure connections in real-world data centers. In this paper we focus on two main objectives: (1) proposing simple yet scalable techniques for security and analysis, and (2) implementing and evaluating these techniques on real-world data centers.
Keywords: cloud computing; computer centres; firewalls; middleware; security of data; software defined networking; Data Center as a Service; SDN based security; geographically distributed data centers; logical firewalls; middleware security framework; next generation data centers connectivity; outsourced physical data center; real-world data centers; software defined data centers interconnectivity; Distributed databases; Optical switches; Routing; Security; Servers; Software; DCI (Data Center Inter-connectivity); DCaaS; Distributed Firewall; OpenFlow; SDDC; SDN; Virtual Networking (ID#: 15-8779)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7237308&isnumber=7237120
Helei Cui; Xingliang Yuan; Cong Wang, "Harnessing Encrypted Data in Cloud for Secure and Efficient Image Sharing from Mobile Devices," in Computer Communications (INFOCOM), 2015 IEEE Conference on, pp. 2659-2667, April 26 2015-May 1 2015. doi: 10.1109/INFOCOM.2015.7218657
Abstract: In storage outsourcing, highly correlated datasets can occur commonly, where the rich information buried in correlated data can be useful for many cloud data generation/dissemination services. In light of this, we propose to enable a secure and efficient cloud-assisted image sharing architecture for mobile devices, by leveraging outsourced encrypted image datasets with privacy assurance. Different from traditional image sharing, the proposed design aims to save the transmission cost from mobile clients, by directly utilizing outsourced correlated images to reproduce the image of interest inside the cloud for immediate dissemination. While the benefits are obvious, how to leverage the encrypted image datasets makes the problem particular challenging. To tackle the problem, we first propose a secure and efficient index design that allows the mobile client to securely find from the encrypted image datasets the candidate selection pertaining to the image of interest for sharing. We then design two specialized encryption mechanisms that support the secure image reproduction inside the cloud directly from the encrypted candidate selection. We formally analyze the security strength of the design. Our experiments show that up to 90% of the transmission cost at the mobile client can be saved, while achieving all service requirements and security guarantees.
Keywords: cloud computing; correlation methods; cryptography; data privacy; image processing; mobile computing; outsourcing; visual databases; cloud data dissemination services; cloud data generation services; cloud-assisted image sharing architecture; correlated datasets; encrypted candidate selection; encrypted data; index design; mobile clients; mobile devices; outsourced encrypted image datasets; privacy assurance; secure image reproduction; security guarantees; security strength analysis; service requirements; specialized encryption mechanisms; storage outsourcing; transmission cost saving; Encryption; Feature extraction; Indexes; Mobile communication; Servers (ID#: 15-8780)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218657&isnumber=7218353
Shuaishuai Zhu; Yiliang Han; Yuechuan Wei, "Controlling Outsourcing Data in Cloud Computing with Attribute-Based Encryption," in Intelligent Networking and Collaborative Systems (INCOS), 2015 International Conference on, pp. 257-261, 2-4 Sept. 2015. doi: 10.1109/INCoS.2015.29
Abstract: In our IT society, cloud computing is clearly becoming one of the dominating infrastructures for enterprises as well as end users. As more cloud-based services become available to end users, their oceans of data are outsourced in the cloud as well. Without any special mechanisms, the data may be leaked to a third party for unauthorized use. Most presented works on cloud computing put their emphasis on computing utility or new types of applications. But in the view of cloud users, such as traditional big companies, data in cloud computing systems tends to be out of their control and its privacy fragile, so most of the data they outsource is of lower sensitivity. A mechanism to guarantee the ownership of data is required. In this paper, we analyze a couple of recently presented scalable data management models to describe the storage patterns of data in cloud computing systems. We then define a new tree-based dataset management model to solve the storage and sharing problems in cloud computing. A couple of operation strategies, including data encryption, data boundary maintenance, and data proof, are extracted from the view of different entities in the cloud. The behaviors of different users are controlled by view management on the tree. Based on these strategies, a flexible data management mechanism is designed in the model to guarantee entity privacy, data availability and secure data sharing.
Keywords: cloud computing; cryptography; data privacy; outsourcing; trees (mathematics); attribute-based encryption; cloud computing system; data availability; data management model; data outsourcing; data sharing security; entity privacy; tree-based dataset management model; Access control; Cloud computing; Computational modeling; Data models; Data privacy; Encryption; Cloud Computing; Data Privacy; Database Management; Outsourcing Data (ID#: 15-8781)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7312081&isnumber=7312007
Shanmugakani, N.; Chinnaa, R., "An Explicit Integrity Verification Scheme for Cloud Distributed Systems," in Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, pp. 1-4, 9-10 Jan. 2015. doi: 10.1109/ISCO.2015.7282293
Abstract: Cloud computing encourages the paradigm of data service outsourcing, where data owners can avoid storage costs by keeping their data in cloud storage centers. The ultimate problem of cloud computing technology is that the service providers have to protect the user data and services. Secure systems should consider confidentiality, availability and integrity as their primary requirements. The user encrypts their information to achieve the first; the second is achieved through a convenient deployment scheme. Last but not least is integrity. To provide integrity, many techniques have been proposed, but the goal is still not fully achieved. In this paper, we propose a novel scheme to achieve integrity goals and we explore how to ensure the integrity and correctness of data storage in cloud computing. The unique feature of this scheme is finding out which data portion has been modified or attacked by a malicious user. In our scheme, there is no need for third-party authority (TPA) or cloud service provider communication in verification. Compared with existing schemes, it takes the advantage of large data support and high performance with a simple and easily approachable technique.
Keywords: cloud computing; cryptography; outsourcing; program verification; software reliability; storage management; TPA; cloud computing technology; cloud distributed systems; cloud service provider communication; cloud storage centers; data service outsourcing; data storage; explicit integrity verification scheme; third-party authority; Cloud computing; Computers; Cryptography; Distributed databases; Instruments; Servers; Availability; Cloud Storage; Confidentiality; Integrity (ID#: 15-8782)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282293&isnumber=7282219
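The entry's distinguishing claim, locating which data portion was modified, can be sketched with per-block digests kept by the data owner and recomputed over the cloud copy. Block size and data here are hypothetical, and a production scheme would use authenticated structures rather than plain hashes:

```python
import hashlib

def block_digests(data: bytes, block_size: int = 4) -> list:
    # Split the file into fixed-size blocks and hash each one,
    # so a mismatch pinpoints the tampered block rather than the whole file
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    return [hashlib.sha256(b).hexdigest() for b in blocks]

original = b"AAAABBBBCCCCDDDD"
reference = block_digests(original)   # kept by the data owner

tampered = b"AAAABBBBXXXXDDDD"        # cloud copy with block 2 modified
current = block_digests(tampered)

# Compare digest lists to localize the modification
modified = [i for i, (r, c) in enumerate(zip(reference, current)) if r != c]
print(modified)  # [2]
```

Because the owner verifies against its own digests, no third-party auditor is needed, which is the property the abstract emphasizes.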
Jiang, T.; Chen, X.; Ma, J., "Public Integrity Auditing for Shared Dynamic Cloud Data with Group User Revocation," in Computers, IEEE Transactions on, vol. PP, no. 99, pp.1-1, January 2015. doi: 10.1109/TC.2015.2389955
Abstract: The advent of cloud computing makes storage outsourcing a rising trend, which has made secure remote data auditing a hot topic in the research literature. Recently, some research has considered the problem of secure and efficient public data integrity auditing for shared dynamic data. However, these schemes are still not secure against collusion between the cloud storage server and revoked group users during user revocation in practical cloud storage systems. In this paper, we figure out the collusion attack in the existing scheme and provide an efficient public integrity auditing scheme with secure group user revocation based on vector commitment and verifier-local revocation group signature. We design a concrete scheme based on our scheme definition. Our scheme supports public checking and efficient user revocation, as well as some nice properties such as confidentiality, efficiency, countability and traceability of secure group user revocation. Finally, the security and experimental analysis show that, compared with its relevant schemes, our scheme is both secure and efficient.
Keywords: Cloud computing; Cryptography; Databases; Generators; Servers; Vectors; Public integrity auditing; cloud computing; dynamic data; group signature; vector commitment (ID#: 15-8783)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004787&isnumber=4358213
Intellectual Property Protection 2015 |
Intellectual Property protection continues to be a matter of major research interest. The articles cited here look at hardware security, provenance and piracy prevention. The topic is related to the Science of Security regarding resilience, policy-based governance, and composability. Articles cited were presented in 2015.
Rajendran, J.; Huan Zhang; Chi Zhang; Rose, G.S.; Youngok Pino; Sinanoglu, O.; Karri, R., "Fault Analysis-Based Logic Encryption," in Computers, IEEE Transactions on, vol. 64, no. 2, pp. 410-424, Feb. 2015. doi: 10.1109/TC.2013.193
Abstract: Globalization of the integrated circuit (IC) design industry is making it easy for rogue elements in the supply chain to pirate ICs, overbuild ICs, and insert hardware Trojans. Due to supply chain attacks, the IC industry is losing approximately $4 billion annually. One way to protect ICs from these attacks is to encrypt the design by inserting additional gates such that correct outputs are produced only when specific inputs are applied to these gates. The state-of-the-art logic encryption technique inserts gates randomly into the design, but does not necessarily ensure that wrong keys corrupt the outputs. Our technique ensures that wrong keys corrupt the outputs. We relate logic encryption to fault propagation analysis in IC testing and develop a fault analysis-based logic encryption technique. This technique enables a designer to controllably corrupt the outputs. Specifically, to maximize the ambiguity for an attacker, this technique targets 50% Hamming distance between the correct and wrong outputs (ideal case) when a wrong key is applied. Furthermore, this 50% Hamming distance target is achieved using a smaller number of additional gates when compared to random logic encryption.
Keywords: cryptography; fault diagnosis; integrated circuit design; integrated circuit testing; invasive software; logic gates; Hamming distance; IC design industry; IC testing; fault analysis-based logic encryption; fault propagation analysis; gates; hardware Trojans; integrated circuit design industry; random logic encryption; supply chain attacks; Circuit faults; Encryption; Foundries; Integrated circuits; Logic gates; Testing; Automatic test pattern generation; IC piracy; IP piracy; combinational logic circuit; hardware security; integrated circuit testing; logic obfuscation (ID#: 15-8674)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6616532&isnumber=7006872
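The paper's core observation, that wrong keys on randomly inserted XOR key gates do not necessarily corrupt outputs, can be reproduced on a toy netlist (invented here, not taken from the paper). Depending on where the key gates sit, a wrong key corrupts anywhere from 25% to 75% of the output bits in this example, which is why the authors target a 50% Hamming distance explicitly rather than inserting gates at random:

```python
from itertools import product

def circuit(a, b, c, k0, k1):
    # Toy netlist with two XOR key gates on internal nets;
    # the correct key (k0 = k1 = 0) makes the XORs transparent
    n1 = (a & b) ^ k0
    n2 = (b | c) ^ k1
    return (n1 ^ n2, n1 & c)   # two primary outputs

def avg_hamming(key):
    # Fraction of output bits that differ from the correct-key outputs,
    # averaged over the full 3-bit input space
    total = 0
    for a, b, c in product((0, 1), repeat=3):
        good = circuit(a, b, c, 0, 0)
        bad = circuit(a, b, c, *key)
        total += sum(g != w for g, w in zip(good, bad))
    return total / (8 * 2)     # 8 input patterns x 2 output bits

for key in [(0, 1), (1, 0), (1, 1)]:
    print(key, avg_hamming(key))
```

Note that the key (1, 1) partly cancels itself out through reconvergence (both flipped nets feed the same XOR), a small-scale instance of the fault masking the paper's fault-propagation analysis is designed to avoid.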
Yasin, M.; Rajendran, J.; Sinanoglu, O.; Karri, R., "On Improving the Security of Logic Locking," in Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, 22 December 2015. doi: 10.1109/TCAD.2015.2511144
Abstract: Due to globalization of the Integrated Circuit (IC) design flow, rogue elements in the supply chain can pirate ICs, overbuild ICs, and insert hardware trojans. EPIC [1] locks the design by randomly inserting additional gates; only a correct key makes the design produce correct outputs. We demonstrate that an attacker can decipher the locked netlist, in time linear in the number of keys, by sensitizing the key-bits to the output. We then develop techniques to fix this vulnerability and make an attacker's effort truly exponential in the number of inserted keys. We introduce a new security metric and a method to deliver strong logic locking.
Keywords: Foundries; Hardware; Integrated circuits; Logic gates; Reverse engineering; Trojan horses; Design for trust; Hardware security; IP piracy; IP protection; Logic encryption (ID#: 15-8675)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7362173&isnumber=6917053
Yu-Wei Lee; Touba, N.A., "Improving Logic Obfuscation via Logic Cone Analysis," in Test Symposium (LATS), 2015 16th Latin-American, pp.1-6, 25-27 March 2015. doi: 10.1109/LATW.2015.7102410
Abstract: Logic obfuscation can protect designs from reverse engineering and IP piracy. In this paper, a new attack strategy based on applying brute force iteratively to each logic cone is described and shown to significantly reduce the number of brute force key combinations that need to be tried by an attacker. It is shown that inserting key gates based on MUXes is an effective approach to increase security against this type of attack. Experimental results are presented quantifying the threat posed by this type of attack along with the relative effectiveness of MUX key gates in countering it.
Keywords: logic design; logic gates; IP piracy; MUX key gates; attacker; brute force key combinations; logic cone analysis; logic obfuscation; reverse engineering; Force; IP networks; Integrated circuits; Interference; Logic gates; Reverse engineering; Security (ID#: 15-8676)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7102410&isnumber=7102396
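The per-cone brute-force attack described above is effective because key bits partition across logic cones, so the attacker's effort drops from the size of the whole key space to the sum of the per-cone key spaces. A back-of-the-envelope sketch, assuming a hypothetical 12-bit key split across three cones:

```python
# Key bits per logic cone (a hypothetical partition of a 12-bit key)
cones = [5, 4, 3]

# Attacking the whole key at once: try every combination of all bits
whole_key_attack = 2 ** sum(cones)

# Attacking each cone in isolation: resolve its bits independently
per_cone_attack = sum(2 ** k for k in cones)

print(whole_key_attack, per_cone_attack)  # 4096 56
```

The gap widens exponentially with key size, which is why the paper counters the attack with MUX key gates that entangle cones rather than leaving them independently attackable.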
Kan Xiao; Forte, D.; Tehranipoor, M.M., "Efficient and Secure Split Manufacturing via Obfuscated Built-in Self-Authentication," in Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, pp. 14-19, 5-7 May 2015. doi: 10.1109/HST.2015.7140229
Abstract: The threats of reverse-engineering, IP piracy, and hardware Trojan insertion in the semiconductor supply chain are greater today than ever before. Split manufacturing has emerged as a viable approach to protect integrated circuits (ICs) fabricated in untrusted foundries, but has high cost and/or high performance overhead. Furthermore, split manufacturing cannot fully prevent untargeted hardware Trojan insertions. In this paper, we propose to insert additional functional circuitry called obfuscated built-in self-authentication (OBISA) in the chip layout with split manufacturing process, in order to prevent reverse-engineering and further prevent hardware Trojan insertion. Self-tests are performed to authenticate the trustworthiness of the OBISA circuitry. The OBISA circuit is connected to original design in order to increase the strength of obfuscation, thereby allowing a higher layer split and lower overall cost. Additional fan-outs are created in OBISA circuitry to improve obfuscation without losing testability. Our proposed gating mechanism and net selection method can ensure negligible overhead in terms of area, timing, and dynamic power. Experimental results demonstrate the effectiveness of the proposed technique in several benchmark circuits.
Keywords: foundries; integrated circuit manufacture; integrated circuit reliability; invasive software; reverse engineering; supply chains; IP piracy; OBISA circuit; chip layout; hardware Trojan insertion; integrated circuits; obfuscated built-in self-authentication; reverse engineering; semiconductor supply chain; split manufacturing; trustworthiness; untrusted foundries; Delays; Fabrication; Foundries; Layout; Logic gates (ID#: 15-8677)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140229&isnumber=7140225
Yasin, M.; Mazumdar, B.; Ali, S.S.; Sinanoglu, O., "Security Analysis of Logic Encryption Against the Most Effective Side-Channel Attack: DPA," in Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFTS), 2015 IEEE International Symposium on, pp. 97-102, 12-14 Oct. 2015. doi: 10.1109/DFT.2015.7315143
Abstract: Logic encryption has recently gained interest as a countermeasure against IP piracy and reverse engineering attacks. A secret key is used to lock/encrypt an IC such that the IC will not be functional without being activated with the correct key. Existing attacks against logic encryption are of theoretical and/or algorithmic nature. In this paper, we evaluate for the first time the security of logic encryption against side-channel attacks. We present a differential power analysis attack against random and strong logic encryption techniques. The proposed attack is highly effective against random logic encryption, revealing more than 70% of the key bits correctly in 50% of the circuits. However, in the case of strong logic encryption, which exhibits an inherent DPA-resistance, the attack could reveal more than 50% of the key bits in only 25% of the circuits.
Keywords: integrated logic circuits; private key cryptography; DPA-resistance; IC encryption; IC lock; IP piracy; differential power analysis attack; key bits; logic encryption security analysis; random-logic encryption technique; reverse engineering attacks; secret key; side-channel attack; strong logic encryption technique; Algorithm design and analysis; Benchmark testing; Encryption; IP networks; Integrated circuits; Reverse engineering (ID#: 15-8678)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7315143&isnumber=7315124
Dunbar, C.; Gang Qu, "A Practical Circuit Fingerprinting Method Utilizing Observability Don't Care Conditions," in Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, pp. 1-6, 8-12 June 2015. doi: 10.1145/2744769.2744780
Abstract: Circuit fingerprinting is a method that adds unique features into each copy of a circuit such that they can be identified for the purpose of tracing intellectual property (IP) piracy. It is challenging to develop effective fingerprinting techniques because each copy of the IP must be made different, which increases the design and manufacturing cost. In this paper, we explore the Observability Don't Care (ODC) conditions to create multiple fingerprinting copies of a design IP (e.g. in the form of gate level layout) with minute changes. More specifically, we find locations in the given circuit layout where we can replace a gate with another gate and some wires without changing the functionality of the circuit. However, as expected, this could introduce design overhead. Our experimental results show that, although we can embed fingerprints of up to 1438 bits, there is an average of 10.9% area increase, 50.5% delay increase, and 9.4% power increase on circuits in the MCNC and ISCAS 85 benchmark suites. We further propose a fingerprinting heuristics under delay constraints to help us reduce area and power overhead.
Keywords: circuit layout; embedded systems; industrial property; system-on-chip; IP; ISCAS 85 benchmark; circuit fingerprinting; circuit layout; fingerprinting copies; fingerprinting techniques; intellectual property piracy; Fingerprint recognition; Integrated circuits; Inverters; Layout; Logic gates (ID#: 15-8679)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167298&isnumber=7167177
Mishra, P.; Bhunia, S.; Ravi, S., "Tutorial T2: Validation and Debug of Security and Trust Issues in Embedded Systems," in VLSI Design (VLSID), 2015 28th International Conference on, pp. 3-5, 3-7 Jan. 2015. doi: 10.1109/VLSID.2015.110
Abstract: Summary form only given. Reusable hardware intellectual property (IP) based System-on-Chip (SoC) design has emerged as a pervasive design practice in the industry to dramatically reduce design/verification cost while meeting aggressive time-to-market constraints. However, growing reliance on reusable pre-verified hardware IPs and wide array of CAD tools during SoC design - often gathered from untrusted 3rd party vendors - severely affects the security and trustworthiness of SoC computing platforms. Major security issues in the hardware IPs at different stages of SoC life cycle include piracy during IP evaluation, reverse engineering, cloning, counterfeiting, as well as malicious hardware modifications. The global electronic piracy market is growing rapidly and is now estimated to be $1B/day, of which a significant part is related to hardware IPs. Furthermore, use of untrusted foundry in a fabless business model greatly aggravates the SoC security threats by introducing vulnerability of malicious modifications or piracy during SoC fabrication. Due to ever-growing computing demands, modern SoCs tend to include many heterogeneous processing cores, scalable communication network, together with reconfigurable cores e.g. embedded FPGA in order to incorporate logic that is likely to change as standards and requirements evolve. Such design practices greatly increase the number of untrusted components in the SoC design flow and make the overall system security a pressing concern. There is a critical need to analyze the SoC security issues and attack models due to involvement of multiple untrusted entities in SoC design cycle - IP vendors, CAD tool developers, and foundries - and develop low-cost effective countermeasures. These countermeasures would encompass encryption, obfuscation, watermarking and fingerprinting, and certain analytic methods derived from the behavioral aspects of SoC to enable trusted operation with untrusted components. 
In this tutorial, we plan to provide a comprehensive coverage of both fundamental concepts and recent advances in validation of security and trust of hardware IPs. The tutorial also covers the security and debug trade-offs in modern SoCs, e.g., more observability is beneficial for debug whereas limited observability is better for security. It examines the state-of-the-art in research in this challenging area as well as industrial practice, and points to important gaps that need to be filled in order to develop a validation and debug flow for secure SoC systems. The tutorial presenters (one industry expert and two faculty members) will be able to provide unique perspectives on both academic research and industrial practices. The selection of topics covers a broad spectrum and will be of interest to a wide audience including design, validation, security, and debug engineers. The proposed tutorial consists of six parts. The first part introduces security vulnerabilities and various challenges associated with trust validation for hardware IPs. Part II covers various security attacks and countermeasures. Part III covers both formal methods and simulation-based approaches for security and trust validation. Part IV presents the conflicting requirements between security and debug during SoC development and ways to address them. Part V covers real-life examples of security failures and successful countermeasures in industry. Finally, Part VI concludes this tutorial with discussion on emerging issues and future directions.
Keywords: computer debugging; embedded systems; industrial property; security of data; system-on-chip; SoC computing platforms; debug flow; embedded systems; formal methods; hardware IP; reusable hardware intellectual property; security attacks; security failures; security validation; security vulnerabilities; system-on-chip; trust validation; Awards activities; Design automation; Hardware; Security; System-on-chip; Tutorials; Very large scale integration (ID#: 15-8680)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7031691&isnumber=7031671
Zhang, J., "A Practical Logic Obfuscation Technique for Hardware Security," in Very Large Scale Integration (VLSI) Systems, IEEE Transactions on, vol. PP, no. 99, pp.1-1, June 2015. doi: 10.1109/TVLSI.2015.2437996
Abstract: A number of studies of hardware security aim to thwart piracy, overbuilding, and reverse engineering (RE) by obfuscating and/or camouflaging. However, these techniques incur high overheads, and integrated circuit (IC) camouflaging cannot provide any protection for the gate-level netlist of the third party intellectual property (IP) core or the single large monolithic IC. In order to circumvent these weaknesses, this brief elaborately analyzes these hardware security techniques and proposes a practical logic obfuscation method with low overheads to prevent an adversary from RE both the gate-level netlist and the layout-level geometry of IP/IC and protect IP/IC from piracy and overbuilding. Experimental evaluations demonstrate the low area, power, and zero performance overhead of the proposed obfuscation technique.
Keywords: Benchmark testing; Hardware; Integrated circuits; Inverters; Licenses; Logic gates; Security; Hardware security; intellectual property (IP) protection; logic obfuscation; overbuilding; physical unclonable function (PUF); piracy; reverse engineering (RE) (ID#: 15-8681)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7128395&isnumber=4359553
Chari, K.S.; Sharma, M., "Performance of IC Layout Design Diagnostic Tool," in Communication Technologies (GCCT), 2015 Global Conference on, pp. 332-337, 23-24 April 2015. doi: 10.1109/GCCT.2015.7342678
Abstract: The staggering evolution of electronic products in all spheres of human life has been possible because of advancements in Integrated Circuit (IC) design and technologies. The complexities of ICs have grown tremendously over the years, as have the intellectual capital and man-years of effort invested in their creation. With increasing emphasis on protection of ICs, the issues of Intellectual Property Rights (IPR) pertaining to IC layout, the backbone of implemented functionality, have become crucial. The matters of IPR also touch on several legal and ethical issues associated with layout creation. In this context, analyzing the distinctness of IC Layout Designs (LDs) and identifying the unique and common parts in different IC LDs of concern has become important, both from an IP protection standpoint and from the design perspective of tagging layouts for their content. A robust analytical evaluation of designs could immensely help in catching piracy of IC designs, in addition to assisting the screening processes followed before granting IPR to claimants. The authors have previously presented some results on a custom designed IC Layout Design Diagnostic Tool (ICLDDT) for this purpose. In the present paper, the abilities of a more advanced version, Version 2, of this tool are investigated further and performance results are reported.
Keywords: industrial property; integrated circuit layout; IC layout design diagnostic tool; IP protection; IPR; catching piracy; electronic products; human life; intellectual property rights; robust analytical evaluation; screening process; staggering evolution; Complexity theory; Geometry; Integrated circuit layout; Intellectual property; Layout; Shape; ICLDDT; IPR; Integrated Circuit Layout Design(ICLD); Layout and geometry comparison; geometry equivalence check (ID#: 15-8682)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7342678&isnumber=7342608
Plaza, S.M.; Markov, I.L., "Solving the Third-Shift Problem in IC Piracy With Test-Aware Logic Locking," in Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, vol. 34, no. 6, pp. 961-971, June 2015. doi: 10.1109/TCAD.2015.2404876
Abstract: The increasing IC manufacturing cost encourages a business model where design houses outsource IC fabrication to remote foundries. Despite cost savings, this model exposes design houses to IC piracy as remote foundries can manufacture in excess to sell on the black market. Recent efforts in digital hardware security aim to thwart piracy by using XOR-based chip locking, cryptography, and active metering. To counter direct attacks and lower the exposure of unlocked circuits to the foundry, we introduce a multiplexor-based locking strategy that preserves test response allowing IC testing by an untrusted party before activation. We demonstrate a simple yet effective attack against a locked circuit that does not preserve test response, and validate the effectiveness of our locking strategy on IWLS 2005 benchmarks.
Keywords: integrated circuit manufacture; integrated circuit modelling; logic design; logic testing; IC manufacturing cost; IC piracy; IWLS 2005 benchmarks; XOR-based chip locking; active metering; business model; cost savings; counter direct attacks; cryptography; design houses outsource IC fabrication; digital hardware security; remote foundries; test-aware logic locking; third-shift problem; thwart piracy; Cryptography; Fabrication; Integrated circuit modeling; Logic gates; Tin; Vectors; Chip locking; EPIC; IP protection; chip locking; design for testability; ending piracy of integrated circuits (EPIC); secure hardware; third-shift problem (ID#: 15-8683)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7045595&isnumber=7110649
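The multiplexor-based locking the paper proposes can be sketched as a 2:1 MUX whose select input is a key bit choosing between the true net and a decoy net; the decoy wiring below is invented for illustration:

```python
def mux_key_gate(true_net: int, decoy_net: int, key_bit: int) -> int:
    # 2:1 MUX key gate: the key bit selects the real signal or a decoy
    return true_net if key_bit == 1 else decoy_net

def locked_circuit(a: int, b: int, key_bit: int) -> int:
    true_net = a & b
    decoy_net = a ^ b     # hypothetical decoy driven from existing nets
    return mux_key_gate(true_net, decoy_net, key_bit)

# Correct key (1) restores AND behaviour; wrong key (0) yields XOR
print([locked_circuit(a, b, 1) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
print([locked_circuit(a, b, 0) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

Because both MUX inputs are plausible existing nets, a wrong key yields a syntactically valid but functionally wrong circuit, and the authors choose decoys so that test responses are preserved, allowing an untrusted party to test before activation.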
Bossuet, L.; Fischer, V.; Bayon, P., "Contactless Transmission of Intellectual Property Data to Protect FPGA Designs," in Very Large Scale Integration (VLSI-SoC), 2015 IFIP/IEEE International Conference on, pp. 19-24, 5-7 Oct. 2015. doi: 10.1109/VLSI-SoC.2015.7314385
Abstract: Over the past 10 years, the designers of intellectual properties (IP) have faced increasing threats, including illegal copying, cloning, counterfeiting, and reverse-engineering. This is now a critical issue for the microelectronics industry, mainly for fabless designers and FPGA designers. The design of a secure, efficient, lightweight protection scheme for design data is a serious challenge for the hardware security community. In this context, this paper presents the first ultra-lightweight transmitter using side channel leakage based on electromagnetic emanation to send an embedded IP identity discreetly and quickly. In addition, we present our electromagnetic test bench and a coherent demodulation method using slippery-window spectral analysis to recover the data outside the device. The hardware resources occupied by the transmitter represent less than 0.022% of a 130 nm Microsemi Fusion FPGA. Experimental results show that the demodulation method succeeds in recovering IP data at a bit rate of 500 Kbps.
Keywords: copy protection; field programmable gate arrays; industrial property; logic design; radio transmitters; FPGA design protection; Microsemi Fusion FPGA; bit rate 500 kbit/s; contactless transmission; electromagnetic emanation; embedded IP identity; fabless design; intellectual property data; microelectronics industry; side channel leakage; ultralightweight transmitter; Bit rate; Demodulation; Electromagnetics; Field programmable gate arrays; Hardware; Spectral analysis; Transmitters; IP protection; electromagnetic emanation analysis; side channel (ID#: 15-8684)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7314385&isnumber=7314373
Colombier, B.; Bossuet, L.; Hely, D., "Reversible Denial-of-Service by Locking Gates Insertion for IP Cores Design Protection," in VLSI (ISVLSI), 2015 IEEE Computer Society Annual Symposium on, pp. 210-215, 8-10 July 2015. doi: 10.1109/ISVLSI.2015.54
Abstract: Nowadays, electronic systems design is a complex process. A design-and-reuse model has been adopted, and the vast majority of designers integrate third-party intellectual property (IP) cores in their designs in order to reduce time to market. Due to their immaterial form and high market value, IP cores are exposed to threats such as cloning and illegal copying. In order to fight these threats, we propose to achieve functional locking, equivalent to a triggerable and reversible denial-of-service. This is done by inserting locking gates at specific locations in the netlist, allowing outputs to be forced to a fixed value. We developed a new method based on graph exploration techniques for locking gates insertion. It selects candidate nodes ten thousand times faster than state-of-the-art fault-analysis-based logic masking techniques. Methods are then compared on ISCAS'85 combinational benchmarks.
Keywords: copy protection; copyright; graph theory; logic circuits; logic design; microprocessor chips; IP cores design protection; cloning threats; design-and-reuse model; electronics systems design; fault analysis-based logic masking techniques; functional locking; graph exploration techniques; illegal copying; locking gates insertion; reversible denial-of-service; third party intellectual property; Algorithm design and analysis; Benchmark testing; Circuit faults; Correlation; Force; Logic gates; Security; Intellectual property protection; functional locking; graph analysis; logic masking (ID#: 15-8685)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7309567&isnumber=7308659
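To make the locking-gate idea in the Colombier et al. abstract concrete, here is a minimal sketch: an XOR "lock gate" is inserted on an internal net of a toy combinational function, so the design behaves correctly only under the right key bit. The circuit, the chosen net, and the one-bit key are illustrative assumptions, not the paper's benchmarks.

```python
# Toy functional locking: an XOR "lock gate" is inserted on an internal
# net; only the correct key bit (assumed 0 here) restores the original
# function. Circuit, net choice, and key width are illustrative.

def original_circuit(a: int, b: int, c: int) -> int:
    net = a & b           # internal net selected for locking
    return net | c        # primary output

def locked_circuit(a: int, b: int, c: int, key_bit: int) -> int:
    net = (a & b) ^ key_bit   # XOR lock gate on the internal net
    return net | c

# With the correct key, behaviour matches the original on every input.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert locked_circuit(a, b, c, key_bit=0) == original_circuit(a, b, c)
```

A wrong key flips the locked net and corrupts the output for some input patterns, which is the triggerable, reversible denial-of-service the abstract describes.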
Yao, Song; Chen, Xiaoming; Zhang, Jie; Liu, Qiaoyi; Wang, Jia; Xu, Qiang; Wang, Yu; Yang, Huazhong, "FASTrust: Feature Analysis for Third-party IP Trust Verification," in Test Conference (ITC), 2015 IEEE International, pp. 1-10, 6-8 Oct. 2015. doi: 10.1109/TEST.2015.7342417
Abstract: Third-party intellectual property (3PIP) cores are widely used in integrated circuit designs, and it is essential to ensure their trustworthiness. Existing hardware trust verification techniques suffer from high computational complexity, low extensibility, and an inability to detect implicitly-triggered hardware trojans (HTs). To tackle these problems, in this paper we present a novel 3PIP trust verification framework, named FASTrust, which conducts HT feature analysis on the flip-flop-level control-data flow graph (CDFG) of the circuit. FASTrust is not only able to identify existing explicitly-triggered and implicitly-triggered HTs that have appeared in the literature in an efficient and effective manner, but, more importantly, it also has the unique advantage of being scalable to defend against future and more stealthy HTs by adding new features to the system.
Keywords: Combinational circuits; Feature extraction; Hardware; Integrated circuit modeling; Trojan horses; Wires; Hardware Trojan; feature analysis; hardware security; third-party intellectual property (ID#: 15-8686)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7342417&isnumber=7342364
Machado, R.C.S.; Boccardo, D.R.; Pereira De Sa, V.G.; Szwarcfiter, J.L., "Fair Fingerprinting Protocol for Attesting Software Misuses," in Availability, Reliability and Security (ARES), 2015 10th International Conference on, pp. 110-119, 24-27 Aug. 2015. doi: 10.1109/ARES.2015.29
Abstract: Digital watermarks embed information into a host artifact in such a way that the functionalities of the artifact remain unchanged. Allowing for the timely retrieval of authorship/ownership information, and ideally being hard to remove, watermarks discourage piracy and have thus been regarded as important tools to protect intellectual property. A watermark aimed at uniquely identifying an artifact is referred to as a fingerprint. After presenting a formal definition of digital watermarks, we introduce an unbiased fingerprinting protocol -- based on oblivious transfer -- that lends no advantage to the prosecuting party in a dispute around an intellectual property breach.
Keywords: computer crime; industrial property; software engineering; watermarking; authorship-ownership information; digital watermarks; fair fingerprinting protocol; intellectual property breach; oblivious transfer; piracy; prosecuting party; software misuses; unbiased fingerprinting protocol; Intellectual property; Protocols; Public key; Semantics; Software; Watermarking; oblivious transfer; software fingerprinting (ID#: 15-8687)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299904&isnumber=7299862
Elmrabit, N.; Shuang-Hua Yang; Lili Yang, "Insider Threats in Information Security Categories and Approaches," in Automation and Computing (ICAC), 2015 21st International Conference on, pp. 1-6, 11-12 Sept. 2015. doi: 10.1109/IConAC.2015.7313979
Abstract: The main concern of most security experts in recent years has been the need to mitigate insider threats. However, leaking and selling data is now easier than before; with the use of the invisible web, insiders can leak confidential data while remaining anonymous. In this paper, we give an overview of the various basic characteristics of insider threats. We also consider current approaches and controls for mitigating the level of such threats by broadly classifying them into two categories.
Keywords: Internet; data privacy; security of data; confidential data; information security; insider threats; invisible Web; security experts; Authorization; Cloud computing; Companies; Databases; Information security; Intellectual property; Insider threats; data leaking; insider attacks; insider predictions; privileged user abuse (ID#: 15-8688)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7313979&isnumber=7313638
Bidmeshki, M.-M.; Makris, Y., "Toward Automatic Proof Generation for Information Flow Policies in Third-Party Hardware IP," in Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, pp. 163-168, 5-7 May 2015. doi: 10.1109/HST.2015.7140256
Abstract: The proof carrying hardware intellectual property (PCHIP) framework ensures trustworthiness by developing proofs for security properties designed to prevent introduction of malicious behaviors via third-party hardware IP. However, converting a design to a formal representation and developing proofs for the desired security properties is a cumbersome task for IP developers and requires extra knowledge of formal reasoning methods, proof development and proof checking. While security properties are generally specific to each design, information flow policies are a set of policies which ensure that no secret information is leaked through untrusted channels, and are mainly applicable to the designs which manipulate secret and sensitive data. In this work, we introduce the VeriCoq-IFT framework which aims to (i) automate the process of converting designs from HDL to the Coq formal language, (ii) generate security property theorems ensuring information flow policies, (iii) construct proofs for such theorems, and (iv) check their validity for the design, with minimal user intervention. We take advantage of Coq proof automation facilities in proving the generated theorems for enforcing these policies and we demonstrate the applicability of our automated framework on two DES encryption circuits. By providing essential information, the trustworthiness of these circuits in terms of information flow policies is verified automatically. Any alteration of the circuit description against information flow policies causes proofs to fail. Our methodology is the first but essential step in the adoption of PCHIP as a valuable method to authenticate the trustworthiness of third party hardware IP with minimal extra effort.
Keywords: formal languages; industrial property; theorem proving; trusted computing; Coq formal language; DES encryption circuits; HDL; PCHIP framework; VeriCoq-IFT framework; automatic proof generation; formal reasoning methods; information flow policies; malicious behaviors; proof carrying hardware intellectual property framework; proof checking; proof development; third-party hardware; Hardware; Hardware design languages; IP networks; Sensitivity; Trojan horses; Wires (ID#: 15-8689)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140256&isnumber=7140225
Rass, Stefan; Schartner, Peter, "Licensed Processing of Encrypted Information," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 703-704, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346894
Abstract: We report on work in progress concerning a computational model for data processing in privacy. As a core design goal here, we will focus on how the data owner can authorize another party to process data on his behalf. In that scenario, the algorithm or software for the processing can even be provided by a third party. The goal is here to protect the intellectual property rights of all relevant players, while retaining an efficient system that allows data processing in distrusted environments, such as clouds.
Keywords: Cloud computing; Data processing; Encoding; Encryption; Licenses; cloud computing; cryptography; licensing; private function evaluation; security (ID#: 15-8690)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346894&isnumber=7346791
Djouadi, S.M.; Melin, A.M.; Ferragut, E.M.; Laska, J.A.; Jin Dong; Drira, A., "Finite Energy and Bounded Actuator Attacks on Cyber-Physical Systems," in Control Conference (ECC), 2015 European, pp. 3659-3664, 15-17 July 2015. doi: 10.1109/ECC.2015.7331099
Abstract: As control system networks are being connected to enterprise level networks for remote monitoring, operation, and system-wide performance optimization, these same connections are providing vulnerabilities that can be exploited by malicious actors for attack, financial gain, and theft of intellectual property. Much effort in cyber-physical system (CPS) protection has focused on protecting the borders of the system through traditional information security techniques. Less effort has been applied to the protection of cyber-physical systems from intelligent attacks launched after an attacker has defeated the information security protections to gain access to the control system. In this paper, attacks on actuator signals are analyzed from a system theoretic context. The threat surface is classified into finite energy and bounded attacks. These two broad classes encompass a large range of potential attacks. The effects of these attacks on linear quadratic (LQ) control are analyzed, and the optimal actuator attacks for both finite- and infinite-horizon LQ control are derived, thereby obtaining the worst-case attack signals. The closed-loop system under the optimal attack signals is given, and a numerical example illustrating the effect of an optimal bounded attack is provided.
Keywords: actuators; closed loop systems; infinite horizon; linear quadratic control; networked control systems; security of data; signal processing; CPS protection; actuator signals; bounded actuator attacks; closed-loop system; control system networks; cyber-physical system protection; enterprise level networks; finite energy actuator attacks; infinite horizon LQ control; information security protections; information security techniques; intelligent attacks; linear quadratic control; optimal actuator attacks; optimal attack signals; remote monitoring; system theoretic context; system-wide performance optimization; Actuators; Closed loop systems; Computer science; Cyber-physical systems; Information security; Sensors (ID#: 15-8691)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7331099&isnumber=7330515
Vani, K.; Gupta, D., "Investigating the Impact of Combined Similarity Metrics and POS Tagging in Extrinsic Text Plagiarism Detection System," in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, pp. 1578-1584, 10-13 Aug. 2015. doi: 10.1109/ICACCI.2015.7275838
Abstract: Plagiarism is an illicit act which has become a prime concern mainly in educational and research domains. This deceitful act, usually referred to as intellectual theft, has swiftly increased with rapid technological developments and information accessibility. Thus an efficient plagiarism detection mechanism is urgently needed. In this paper, different combined similarity metrics for extrinsic plagiarism detection are investigated, with a focus on the advantage of combined similarity metrics over the commonly used single-metric approach. Further, the impact of utilizing part-of-speech (POS) tagging in the plagiarism detection model is analyzed. Different combinations of the four single metrics, Cosine similarity, Dice coefficient, Match coefficient and Fuzzy-Semantic measure, are used with and without POS tag information. These systems are evaluated using the PAN-2014 training and test data sets, and results are analyzed and compared using the standard PAN measures, viz., recall, precision, granularity and plagdet_score.
Keywords: fuzzy set theory; industrial property; security of data; text analysis; POS tag information; POS tagging; combined similarity metrics; commonly used single metric usage; cosine similarity; dice coefficient; educational domain; extrinsic plagiarism detection; extrinsic text plagiarism detection system; fuzzy-semantic measure; information accessibility; intellectual theft; match coefficient; part of speech tagging; plagiarism detection model; plagiarism detection task; research domain; standard PAN measure; technological development; Feature extraction; Measurement; Plagiarism; Semantics; Speech; Tagging; Training; Combined Metrics; Extrinsic Plagiarism; POS tagging; Single Metrics; Vector Space Model (ID#: 15-8692)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275838&isnumber=7275573
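As a rough illustration of two of the four single metrics the Vani and Gupta study combines, here are toy word-level implementations of cosine similarity and the Dice coefficient. Naive whitespace tokenization is an illustrative simplification; the paper's exact preprocessing and weighting are not reproduced.

```python
# Toy word-level cosine similarity and Dice coefficient, two of the
# single metrics the abstract names. Tokenization is naive whitespace
# splitting, an illustrative simplification.
from collections import Counter
from math import sqrt

def cosine(a: str, b: str) -> float:
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb)

def dice(a: str, b: str) -> float:
    sa, sb = set(a.split()), set(b.split())
    return 2 * len(sa & sb) / (len(sa) + len(sb))
```

A combined detector of the kind the paper evaluates would mix such scores (with or without POS-tag filtering) before thresholding.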
Backer, J.; Hely, D.; Karri, R., "On Enhancing the Debug Architecture of a System-on-Chip (SoC) to Detect Software Attacks," in Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFTS), 2015 IEEE International Symposium on, pp. 29-34, 12-14 Oct. 2015. doi: 10.1109/DFT.2015.7315131
Abstract: The prevalent use of systems-on-chip (SoCs) makes them prime targets for software attacks. Proposed security countermeasures monitor software execution in real-time, but require impractical changes to the internal logic of intellectual property (IP) cores. We leverage the software observability provided by the readily available SoC debug architecture to detect attacks without modifying IP cores. We add hardware components to configure the debug architecture for security monitoring, to store a golden software execution model, and to notify a trusted kernel process when an attack is detected. Our evaluations show that the additions do not impact runtime software execution, and incur 9% area and power overheads on a low-cost processor core.
Keywords: computer debugging; logic circuits; system-on-chip; IP cores; SoC debug architecture enhancement; area overheads; attack detection; hardware components; intellectual property cores; internal logic; low-cost processor core; power overheads; runtime software execution; security monitoring; software attack detection; software execution model; software execution monitoring; software observability; system-on-chip; trusted kernel process; IP networks; Instruments; Monitoring; Registers; Software; Table lookup (ID#: 15-8693)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7315131&isnumber=7315124
Xiaolong Guo; Dutta, R.G.; Jin, Y.; Farahmandi, F.; Mishra, P., "Pre-Silicon Security Verification and Validation: A Formal Perspective," in Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, pp. 1-6, 8-12 June 2015. doi: 10.1145/2744769.2747939
Abstract: Reusable hardware Intellectual Property (IP) based System-on-Chip (SoC) design has emerged as a pervasive design practice in the industry today. The possibility of hardware Trojans and/or design backdoors hiding in the IP cores has raised security concerns. As existing functional testing methods fall short in detecting unspecified (often malicious) logic, formal methods have emerged as an alternative for validation of trustworthiness of IP cores. Toward this direction, we discuss two main categories of formal methods used in hardware trust evaluation: theorem proving and equivalence checking. Specifically, proof-carrying hardware (PCH) and its applications are introduced in detail, in which we demonstrate the use of theorem proving methods for providing high-level protection of IP cores. We also outline the use of symbolic algebra in equivalence checking, to ensure that the hardware implementation is equivalent to its design specification, thus leaving little space for malicious logic insertion.
Keywords: electronic engineering computing; industrial property; integrated circuit design; integrated circuit testing; security of data; system-on-chip; theorem proving; IP cores protection; PCH; SoC design; equivalence checking; formal methods; functional testing methods; hardware Trojans; hardware trust evaluation; logic insertion; pervasive design; presilicon security validation; presilicon security verification; proof-carrying hardware; reusable hardware intellectual property; system-on-chip design; theorem proving methods; Hardware; IP networks; Logic gates; Polynomials; Sensitivity; Trojan horses (ID#: 15-8694)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167331&isnumber=7167177
Li Zhang; Chip-Hong Chang, "Public Key Protocol for Usage-Based Licensing of FPGA IP Cores," in Circuits and Systems (ISCAS), 2015 IEEE International Symposium on, pp. 25-28, 24-27 May 2015. doi: 10.1109/ISCAS.2015.7168561
Abstract: Application developers are now turning to field-programmable gate array (FPGA) devices for small- to medium-volume solutions due to their post-fabrication flexibility. Unfortunately, the existing upfront intellectual property (IP) licensing model for FPGA-based third-party IP cores is economically unattractive. The IP bitstreams in transaction are also vulnerable to cloning, misappropriation and reverse engineering. This paper proposes a secure pay-per-use licensing protocol that avoids complicated communication flows and high implementation cost, while preventing the IP rights from being compromised or abused by any of the parties involved. The protocol guarantees the confidentiality and integrity of the security-critical components and forbids the implementation of licensed IP cores on gray-market or counterfeit chips. The public-crypto based core installation module used to self-configure the licensed IP cores occupies only limited FPGA fabrics temporarily.
Keywords: field programmable gate arrays; logic circuits; microprocessor chips; modules; protocols; public key cryptography; reverse engineering; FPGA IP core; IP bitstream; IP right; counterfeit chip; field programmable gate array; gray market; intellectual property; pay-per-use licensing protocol; post-fabrication flexibility; public key protocol; public-crypto based core installation module; reverse engineering; security critical component; usage-based licensing; Computer integrated manufacturing; Cryptography; Fabrics; Field programmable gate arrays; IP networks; Licenses; Protocols (ID#: 15-8695)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7168561&isnumber=7168553
Konstantinou, C.; Keliris, A.; Maniatakos, M., "Privacy-Preserving Functional IP Verification Utilizing Fully Homomorphic Encryption," in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015, pp. 333-338, 9-13 March 2015. Doi: (not provided)
Abstract: Intellectual Property (IP) verification is a crucial component of System-on-Chip (SoC) design in the modern IC design business model. Given a globalized supply chain and an increasing demand for IP reuse, IP theft has become a major concern for the IC industry. In this paper, we address the trust issues that arise between IP owners and IP users during the functional verification of an IP core. Our proposed scheme ensures the privacy of IP owners and users, by a) generating a privacy-preserving version of the IP, which is functionally equivalent to the original design, and b) employing homomorphically encrypted input vectors. This allows the functional verification to be securely outsourced to a third-party, or to be executed by either parties, while revealing the least possible information regarding the test vectors and the IP core. Experiments on both combinational and sequential benchmark circuits demonstrate up to three orders of magnitude IP verification slowdown, due to the computationally intensive fully homomorphic operations, for different security parameter sizes.
Keywords: cryptography; data privacy; industrial property; IC design business model; IC industry; IP core; IP reuse; IP theft; IP users; SoC design; fully homomorphic encryption; functional verification; globalized supply chain; intellectual property verification; magnitude IP verification; privacy-preserving functional IP verification; privacy-preserving version; security parameter sizes; sequential benchmark circuits; system-on-chip; Encryption; IP networks; Libraries; Logic gates; Noise (ID#: 15-8696)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092410&isnumber=7092347
Prashanthi, R., "A Hybrid Fragile High Capacity Watermarking Technique with Template Matching Detection Scheme," in Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, pp. 1-6, 9-10 Jan. 2015. doi: 10.1109/ISCO.2015.7282332
Abstract: With the rapid growth in the development of digital content, security and privacy have become very challenging issues. Watermarking is a broad and interesting field which provides a way of hiding formal or informal data within other data to maintain intellectual property rights. The key perspectives of watermarking are security, robustness, invisibility and capacity. In the proposed technique the capacity is increased by embedding two RGB images in one RGB image. The invisibility of the hidden information is obtained by employing least-significant-bit substitution. The security is enhanced using a novel template-matching detection scheme. The proposed fragile watermarking system destroys the watermarks when they are modified or tampered with, thus providing integrity and authentication features.
Keywords: image colour analysis; image matching; image watermarking; industrial property; message authentication; RGB image; authentication feature; digital content; fragile watermarking system; hidden information; high capacity watermarking technique; informal data; intellectual property right; template matching detection scheme; Indexes; Robustness; Watermarking; Fragile; Watermarking; capacity; invisibility; robustness; security (ID#: 15-8697)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7282332&isnumber=7282219
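The least-significant-bit substitution named in the abstract above can be sketched in a few lines: each cover byte's lowest bit is replaced by one bit of the hidden payload. Real schemes like the one proposed operate on RGB pixel arrays and add the template-matching detection step; the byte-string cover here is an illustrative simplification.

```python
# Minimal sketch of least-significant-bit (LSB) substitution embedding.
# Each cover byte's lowest bit carries one payload bit; the cover data
# and payload below are illustrative, not from the paper.

def embed_lsb(cover: bytes, payload_bits: list[int]) -> bytes:
    out = bytearray(cover)
    for i, bit in enumerate(payload_bits):
        out[i] = (out[i] & 0xFE) | bit   # clear the LSB, set payload bit
    return bytes(out)

def extract_lsb(stego: bytes, n_bits: int) -> list[int]:
    return [stego[i] & 1 for i in range(n_bits)]

bits = [1, 0, 1, 1]
stego = embed_lsb(b"\x10\x21\x32\x43", bits)
assert extract_lsb(stego, 4) == bits
```

Because only the lowest bit of each byte changes, the cover is perceptually almost unchanged, which is the "invisibility" property the abstract lists.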
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Network Reconnaissance 2015 |
The capacity to survey, analyze, and assess a network is a critical aspect of developing resilient systems. The work cited here addresses multiple methods and approaches to network reconnaissance. All were presented in 2015.
Jafarian, J.H.; Al-Shaer, E.; Qi Duan, "Adversary-Aware IP Address Randomization for Proactive Agility Against Sophisticated Attackers," in Computer Communications (INFOCOM), 2015 IEEE Conference on, pp. 738-746, April 26 2015-May 1 2015. doi: 10.1109/INFOCOM.2015.7218443
Abstract: Network reconnaissance of IP addresses and ports is prerequisite to many host and network attacks. Meanwhile, static configurations of networks and hosts simplify this adversarial reconnaissance. In this paper, we present a novel proactive-adaptive defense technique that turns end-hosts into untraceable moving targets, and establishes dynamics into static systems by monitoring the adversarial behavior and reconfiguring the addresses of network hosts adaptively. This adaptability is achieved by discovering hazardous network ranges and addresses and evacuating network hosts from them quickly. Our approach maximizes adaptability by (1) using fast and accurate hypothesis testing for characterization of adversarial behavior, and (2) achieving a very fast IP randomization (i.e., update) rate through separating randomization from end-hosts and managing it via network appliances. The architecture and protocols of our approach can be transparently deployed on legacy networks, as well as software-defined networks. Our extensive analysis and evaluation show that by adaptive distortion of adversarial reconnaissance, our approach slows down the attack and increases its detectability, thus significantly raising the bar against stealthy scanning, major classes of evasive scanning and worm propagation, as well as targeted (hacking) attacks.
Keywords: IP networks; computer network security; software defined networking; adversary-aware IP address randomization; network hosts; proactive agility; software-defined networks; sophisticated attackers; Conferences; IP networks; Logic gates; Probes; Protocols; Reconnaissance; Servers (ID#: 16-9110)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218443&isnumber=7218353
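The "fast and accurate hypothesis testing" that Jafarian et al. use to characterize adversarial behavior can be sketched as a sequential probability ratio test over connection outcomes, in the spirit of threshold-random-walk scan detection. The failure probabilities and error bounds below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of sequential hypothesis testing for scanner detection: compare
# the observed failed-connection stream against benign vs. scanner
# hypotheses and stop as soon as the evidence crosses a threshold.
# P_FAIL_* and the error bounds are illustrative assumptions.
from math import log

P_FAIL_BENIGN, P_FAIL_SCANNER = 0.2, 0.8

def sprt(outcomes, alpha=0.01, beta=0.01):
    upper = log((1 - beta) / alpha)    # accept "scanner" above this
    lower = log(beta / (1 - alpha))    # accept "benign" below this
    llr = 0.0
    for failed in outcomes:            # True = connection attempt failed
        p1 = P_FAIL_SCANNER if failed else 1 - P_FAIL_SCANNER
        p0 = P_FAIL_BENIGN if failed else 1 - P_FAIL_BENIGN
        llr += log(p1 / p0)
        if llr >= upper:
            return "scanner"
        if llr <= lower:
            return "benign"
    return "undecided"
```

In an adaptive defense of this kind, a "scanner" verdict would trigger evacuating hosts from the probed address range.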
Yue-Bin Luo; Bao-Sheng Wang; Xiao-Feng Wang; Xiao-Feng Hu; Gui-Lin Cai; Hao Sun, "RPAH: Random Port and Address Hopping for Thwarting Internal and External Adversaries," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, no., pp. 263-270, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.383
Abstract: Network servers and applications commonly use static IP addresses and communication ports, making themselves easy targets for network reconnaissance and attacks. Port and address hopping is a novel and effective moving target defense (MTD) which hides network servers and applications by constantly changing IP addresses and ports. In this paper, we develop a novel port and address hopping mechanism called Random Port and Address Hopping (RPAH), which constantly and unpredictably mutates IP addresses and communication ports at a high rate based on source identity, service identity, and time. RPAH provides a stronger and more effective MTD mechanism with three hopping frequencies, i.e., source hopping, service hopping and temporal hopping. In RPAH networks, the real IPs (rIPs) and real ports (rPorts) remain untouched, and packets are routed based on dynamic and temporary virtual IPs (vIPs) of servers. Therefore, messages from adversaries using static, invalid or inactive IP addresses/ports will be detected and denied. Our experiments and evaluation show that RPAH is effective in defending against various internal and external threats such as network scanning, SYN flooding attacks and worm propagation, while introducing an acceptable operation overhead.
Keywords: IP networks; computer network security; frequency hop communication; MTD; RPAH; SYN flooding attack; communication ports; moving target defense; network scanning; network servers; random port and address hopping; static IP address; worm propagation; Demultiplexing; IP networks; Internet; Ports (Computers); Security; Servers; Synchronization; dynamic mutation; moving target defense; network security; port and address hopping (ID#: 16-9111)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345291&isnumber=7345233
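The three-way hopping described in the RPAH abstract can be sketched by deriving a temporary virtual port from a shared secret, the source identity, the service identity, and the current time slot. The key, identity strings, and slot length below are illustrative assumptions, not RPAH's exact construction.

```python
# Sketch of identity- and time-keyed hopping: the virtual port mutates
# every SLOT_SECONDS and differs per source and service, yet both
# endpoints holding SECRET can compute it. All constants are illustrative.
import hashlib
import hmac

SECRET = b"shared-hopping-key"     # assumed pre-shared secret
SLOT_SECONDS = 30                  # assumed temporal hopping interval

def virtual_port(src_id: str, service_id: str, t: float) -> int:
    slot = int(t // SLOT_SECONDS)
    msg = f"{src_id}|{service_id}|{slot}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    # Map the MAC into the 1024-65535 port range.
    return 1024 + int.from_bytes(digest[:4], "big") % (65536 - 1024)
```

A packet arriving on anything other than the current virtual port for its source and service would be treated as stale or adversarial and dropped.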
Voyiatzis, A.G.; Katsigiannis, K.; Koubias, S., "A Modbus/TCP Fuzzer for Testing Internetworked Industrial Systems," in Emerging Technologies & Factory Automation (ETFA), 2015 IEEE 20th Conference on, pp. 1-6, 8-11 Sept. 2015. doi: 10.1109/ETFA.2015.7301400
Abstract: Modbus/TCP is a network protocol for industrial communications encapsulated in TCP/IP network packets. There is an increasing need to test existing Modbus protocol implementations for security vulnerabilities, as devices become accessible even from the Internet. Fuzz testing can be used to discover implementation bugs in a fast and economical way. We present the design and implementation of MTF, a Modbus/TCP Fuzzer. The MTF incorporates a reconnaissance phase in the testing procedure to assist in mapping the capabilities of the tested device and to adjust the attack vectors toward more guided and informed testing rather than plain random testing. The MTF was used to test eight implementations of the Modbus protocol and revealed bugs and vulnerabilities that crash the execution, effectively resulting in denial-of-service attacks using only a few network packets.
Keywords: Internet; computer network security; industrial control; program debugging; program testing; transport protocols; MTF design; MTF implementation; Modbus protocol implementations; Modbus/TCP fuzzer; TCP/IP network packets; attack vectors; denial-of-service attacks; fuzz testing; industrial communications; internetworked industrial system testing; network protocol; reconnaissance phase; security vulnerabilities; testing procedure; Computer crashes; Computer crime; Protocols; Reconnaissance; Sockets; Software; Testing (ID#: 16-9112)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7301400&isnumber=7301399
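A mutated Modbus/TCP request of the kind a fuzzer like MTF might emit can be sketched as an MBAP header followed by a randomized protocol data unit. The field choices below are illustrative; MTF's actual mutation strategy is more guided, informed by its reconnaissance phase.

```python
# Sketch of one fuzzed Modbus/TCP frame: MBAP header (transaction id,
# protocol id, length, unit id) plus a randomized function code and
# payload. Unit id 1 and the payload bounds are illustrative choices.
import random
import struct

def fuzz_modbus_frame(transaction_id: int) -> bytes:
    function_code = random.randint(0, 255)   # mutated, may be invalid
    payload = bytes(random.randint(0, 255)
                    for _ in range(random.randint(0, 8)))
    pdu = bytes([function_code]) + payload
    # MBAP: transaction id, protocol id (0 = Modbus), length, unit id.
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, 1)
    return mbap + pdu
```

Sending such frames to a Modbus/TCP endpoint and watching for crashes or hangs is the basic loop the abstract describes; the paper found implementations that fail after only a few packets.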
Albanese, M.; Battista, E.; Jajodia, S., "A Deception Based Approach for Defeating OS and Service Fingerprinting," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 317-325, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346842
Abstract: Cyber attacks are typically preceded by a reconnaissance phase in which attackers aim at collecting critical information about the target system, including information about network topology, services, operating systems, and unpatched vulnerabilities. Specifically, operating system fingerprinting aims at determining the operating system of a remote host in either a passive way, through sniffing and traffic analysis, or an active way, through probing. Similarly, service fingerprinting aims at determining what services are running on a remote host. In this paper, we propose an approach to defeat an attacker's fingerprinting effort through deception. To defeat OS fingerprinting, we manipulate outgoing traffic so that it resembles traffic generated by a host with a different operating system. Similarly, to defeat service fingerprinting, we modify the service banner by intercepting and manipulating certain packets before they leave the host or network. Experimental results show that our approach can efficiently and effectively deceive an attacker.
Keywords: computer network security; operating systems (computers); telecommunication network topology; telecommunication services; telecommunication traffic; OS fingerprinting; attacker fingerprinting; cyber attacks; deception based approach; network topology; operating system fingerprinting; outgoing traffic; reconnaissance phase; remote host; service banner; service fingerprinting; traffic analysis; Fingerprint recognition; IP networks; Operating systems; Ports (Computers); Probes; Protocols; Standards (ID#: 16-9113)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346842&isnumber=7346791
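The traffic-manipulation idea in the abstract above can be illustrated with a small sketch: rewrite the header fields that passive fingerprinters key on (initial TTL, TCP window size, don't-fragment flag) so outgoing traffic resembles a different OS. The profile values and function names below are our illustrative assumptions, not tables or code from the paper.

```python
# Sketch of deception against passive OS fingerprinting: rewrite the
# header fields that fingerprinters key on so outgoing traffic resembles
# a different OS. Profile values are illustrative, not the paper's tables.

OS_PROFILES = {
    "linux":   {"ttl": 64,  "window": 29200, "df": True},
    "windows": {"ttl": 128, "window": 8192,  "df": True},
    "freebsd": {"ttl": 64,  "window": 65535, "df": True},
}

def disguise_packet(headers, target_os):
    """Return a copy of the outgoing header-field dict rewritten to
    resemble traffic generated by `target_os`."""
    profile = OS_PROFILES[target_os]
    disguised = dict(headers)
    disguised["ttl"] = profile["ttl"]        # initial IP time-to-live
    disguised["window"] = profile["window"]  # TCP window size
    disguised["df"] = profile["df"]          # IP don't-fragment flag
    return disguised

pkt = {"ttl": 64, "window": 29200, "df": True, "sport": 443}
print(disguise_packet(pkt, "windows"))
```

A real deployment would apply such rewriting to live packets at the host or network edge, as the paper describes; the dict stands in for a parsed packet header.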
Costin, A., "All Your Cluster-Grids Are Belong to Us: Monitoring the (In)Security of Infrastructure Monitoring Systems," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 550-558, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346868
Abstract: Monitoring of the high-performance computing systems and their components, such as clusters, grids and federations of clusters, is performed using monitoring systems for servers and networks, or Network Monitoring Systems (NMS). These monitoring tools assist system administrators in assessing and improving the health of their infrastructure. A successful attack on the infrastructure monitoring tools grants the attacker elevated power over the monitoring tasks, and eventually over some management functionality of the interface or over hosts running those interfaces. Additionally, detailed and accurate fingerprinting and reconnaissance of a target infrastructure is possible when such interfaces are publicly exposed. A successful reconnaissance allows an attacker to craft efficient second-stage attacks, such as targeted, mimicry and blended attacks. We provide in this paper a comprehensive security analysis of some of the most popular infrastructure monitoring tools for grids, clusters and High-Performance Computing (HPC) systems. We also provide insights based on the infrastructure data openly exposed over the Internet. The wide use of some of the most popular infrastructure monitoring tools makes this data exposure possible. For example, we found such monitoring interfaces to expose infrastructure details of systems inside many high-profile organizations, including two top national laboratories for nuclear research and one top Internet non-profit foundation. We also present our findings on a plethora of web vulnerabilities in the entire version-span of such monitoring tools, and discuss at a high-level the possible attacks. The results of our research allow us to “monitor” an “alarming” mismanagement reality of grid infrastructure. The aim of this work is to raise awareness of this novel risk to cloud infrastructure.
Keywords: Internet; cloud computing; grid computing; parallel processing; security of data; system monitoring; workstation clusters; HPC systems; Internet; NMS; Web vulnerabilities; cloud infrastructure; clusters; comprehensive security analysis; grid infrastructure; high-performance computing; infrastructure monitoring systems; insecurity monitoring; network monitoring systems; open data exposure; Cloud computing; Kernel; Monitoring; Ports (Computers); Privacy; Security; Servers (ID#: 16-9114)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346868&isnumber=7346791
Ward, J.R.; Younis, M., "Distributed Beamforming Relay Selection to Increase Base Station Anonymity in Wireless Ad Hoc Networks," in Computer Communication and Networks (ICCCN), 2015 24th International Conference on, pp. 1-8, 3-6 Aug. 2015. doi: 10.1109/ICCCN.2015.7288399
Abstract: Wireless ad hoc networks have become valuable assets to both the commercial and military communities with applications ranging from industrial control on a factory floor to reconnaissance of a hostile border. In most applications, nodes act as data sources and forward information to a central base station (BS) that may also perform network management tasks. The critical role of the BS makes it a target for an adversary's attack. Even if an ad hoc network employs conventional security primitives such as encryption and authentication, an adversary can apply traffic analysis techniques to find the BS. Therefore, the BS should be kept anonymous to protect its identity, role, and location. Previous work has demonstrated distributed beamforming to be an effective technique to boost BS anonymity in wireless ad hoc networks; however, the increased anonymity and corresponding energy consumption depend on the quality and quantity of selected helper relays. In this paper we present a novel, distributed approach for determining a set of relays per hop that boosts BS anonymity using evidence theory analysis while minimizing energy consumption. The identified relay set is further prioritized using local wireless channel statistics. The simulation results demonstrate the effectiveness of our approach.
Keywords: ad hoc networks; array signal processing; relay networks (telecommunication); telecommunication network management; telecommunication power management; telecommunication security; wireless channels; central base station; commercial community; distributed beamforming relay selection; energy consumption minimization; evidence theory analysis; hostile border; identity protection; industrial control; local wireless channel statistics; military community; traffic analysis technique; wireless ad hoc network security; Array signal processing; Mobile ad hoc networks; Protocols; Relays; Synchronization; Wireless communication (ID#: 16-9115)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7288399&isnumber=7288342
Hirayama, Takayuki; Toyoda, Kentaroh; Sasase, Iwao, "Fast Target Link Flooding Attack Detection Scheme by Analyzing Traceroute Packets Flow," in Information Forensics and Security (WIFS), 2015 IEEE International Workshop on, pp. 1-6, 16-19 Nov. 2015. doi: 10.1109/WIFS.2015.7368594
Abstract: Recently, a botnet based DDoS (Distributed Denial of Service) attack, called target link flooding attack, has been reported that cuts off specific links over the Internet and disconnects a specific region from other regions. Detecting or mitigating the target link flooding attack is more difficult than legacy DDoS attack techniques, since attacking flows do not reach the target region. Although many mitigation schemes are proposed, they detect the attack after it occurs. In this paper, we propose a fast target link flooding attack detection scheme by leveraging the fact that the traceroute packets are increased before the attack caused by the attacker's reconnaissance. Moreover, by analyzing the characteristic of the target link flooding attack that the number of traceroute packets simultaneously increases in various regions over the network, we propose a detection scheme with multiple detection servers to eliminate false alarms caused by sudden increase of traceroute packets sent by legitimate users. We show the effectiveness of our scheme by computer simulations.
Keywords: Computational modeling; Reconnaissance (ID#: 16-9116)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7368594&isnumber=7368550
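The multi-server false-alarm filter described in the abstract above can be sketched as a correlated-threshold check: an alert is raised only when traceroute counts spike in several regions at once. The threshold values and region names are our illustrative assumptions, not the authors' parameters.

```python
def detect_recon(counts_by_region, baseline, factor=3.0, min_regions=2):
    """Flag suspected pre-attack reconnaissance only when traceroute
    packet counts exceed `factor` times the per-region baseline in at
    least `min_regions` regions at once; a spike confined to a single
    region is treated as a likely false alarm from legitimate users."""
    spiking = [region for region, count in counts_by_region.items()
               if count > factor * baseline.get(region, 1)]
    return len(spiking) >= min_regions, spiking

baseline = {"us-east": 10, "eu-west": 10, "ap-east": 10}
print(detect_recon({"us-east": 50, "eu-west": 42, "ap-east": 9}, baseline))
```

In the paper's setting, the per-region counts would be aggregated per time window from the distributed detection servers before a check of this shape is applied.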
Vukalovic, J.; Delija, D., "Advanced Persistent Threats - Detection and Defense," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 1324-1330, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160480
Abstract: The term “Advanced Persistent Threat” refers to a well-organized, malicious group of people who launch stealthy attacks against computer systems of specific targets, such as governments, companies or military. The attacks themselves are long-lasting, difficult to expose and often use very advanced hacking techniques. Since they are advanced in nature, prolonged and persistent, the organizations behind them have to possess a high level of knowledge, advanced tools and competent personnel to execute them. The attacks are usually performed in several phases - reconnaissance, preparation, execution, gaining access, information gathering and connection maintenance. In each of the phases attacks can be detected with different probabilities. There are several ways to increase the level of security of an organization in order to counter these incidents. First and foremost, it is necessary to educate users and system administrators on different attack vectors and provide them with knowledge and protection so that the attacks are unsuccessful. Second, implement strict security policies. That includes access control and restrictions (to information or network), protecting information by encrypting it and installing latest security upgrades. Finally, it is possible to use software IDS tools to detect such anomalies (e.g. Snort, OSSEC, Sguil).
Keywords: authorisation; cryptography; data protection; access control; advanced persistent threats; anomaly detection; attack vectors; computer systems; encryption; security policies; security upgrades; software IDS tools; Command and control systems; Data mining; Malware; Monitoring; Organizations; Servers (ID#: 16-9117)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160480&isnumber=7160221
Kotson, M.C.; Schulz, A., "Characterizing Phishing Threats with Natural Language Processing," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 308-316, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346841
Abstract: Spear phishing is a widespread concern in the modern network security landscape, but there are few metrics that measure the extent to which reconnaissance is performed on phishing targets. Spear phishing emails closely match the expectations of the recipient, based on details of their experiences and interests, making them a popular propagation vector for harmful malware. In this work we use Natural Language Processing techniques to investigate a specific real-world phishing campaign and quantify attributes that indicate a targeted spear phishing attack. Our phishing campaign data sample comprises 596 emails - all containing a web bug and a Curriculum Vitae (CV) PDF attachment - sent to our institution by a foreign IP space. The campaign was found to exclusively target specific demographics within our institution. Performing a semantic similarity analysis between the senders' CV attachments and the recipients' LinkedIn profiles, we conclude with high statistical certainty (p < 10^-4) that the attachments contain targeted rather than randomly selected material. Latent Semantic Analysis further demonstrates that individuals who were a primary focus of the campaign received CVs that are highly topically clustered. These findings differentiate this campaign from one that leverages random spam.
Keywords: computer crime; computer network security; invasive software; natural language processing; statistical analysis; unsolicited e-mail; Web bug; curriculum vitae PDF attachment; foreign IP space; latent semantic analysis; malware; modern network security landscape; natural language processing; propagation vector; recipient LinkedIn profiles; semantic similarity analysis; sender CV attachments; spear phishing emails; spear phishing threat characterization; statistical certainty; Reconnaissance (ID#: 16-9118)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346841&isnumber=7346791
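The semantic-similarity step described above can be approximated with a bag-of-words cosine similarity. The paper uses Latent Semantic Analysis; this stdlib-only sketch is a simplified stand-in showing the shape of comparing an attachment's text against a target's profile text.

```python
import math
import re
from collections import Counter

def tf_vector(text):
    """Term-frequency vector over lowercase word tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(text_a, text_b):
    """Cosine similarity between two bag-of-words vectors."""
    va, vb = tf_vector(text_a), tf_vector(text_b)
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

cv = "network security research on intrusion detection"
profile = "researcher in network intrusion detection and security"
print(round(cosine(cv, profile), 3))
```

A high score between a sender's CV attachment and a recipient's public profile would be one indicator, in this simplified model, that the lure was targeted rather than random.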
Patil Madhubala R., "Survey on Security Concerns in Cloud computing," in Green Computing and Internet of Things (ICGCIoT), 2015 International Conference on, pp. 1458-1462, 8-10 Oct. 2015. doi: 10.1109/ICGCIoT.2015.7380697
Abstract: A cloud consists of a vast number of servers and contains a tremendous amount of information. There are various problems in cloud computing, such as storage and bandwidth, environment problems like availability, heterogeneity and scalability, and security problems like reliability and privacy. Though many efforts have been made to solve these problems, some security problems remain [1]. Ensuring the security of this data is an important issue in cloud storage. Cloud computing security can be defined as the broad set of technologies, policies and controls deployed to protect applications, data and the corresponding infrastructure of cloud computing. Due to tremendous progress in technology, providing security for customers' data becomes more and more important. This paper explains the need for a third party auditor in cloud security and gives a brief idea of the security threats in cloud computing. It analyzes the various security objectives such as confidentiality, integrity, authentication, auditing, accountability, availability and authorization. It also studies various data security concerns such as reconnaissance techniques, denial of service, account cracking, hostile and self-replicating code, system or network penetration, buffer overflow and SQL injection attacks.
Keywords: Cloud computing; Computer crime; Data privacy; Reconnaissance; Servers; Data security concerns; Security objectives; Third party audit; cloud computing; cloud computing security (ID#: 16-9119)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7380697&isnumber=7380415
Bou-Harb, E.; Debbabi, M.; Assi, C., "A Time Series Approach for Inferring Orchestrated Probing Campaigns by Analyzing Darknet Traffic," in Availability, Reliability and Security (ARES), 2015 10th International Conference on, pp. 180-185, 24-27 Aug. 2015. doi: 10.1109/ARES.2015.9
Abstract: This paper aims at inferring probing campaigns by investigating dark net traffic. The latter probing events refer to a new phenomenon of reconnaissance activities that are distinguished by their orchestration patterns. The objective is to provide a systematic methodology to infer, in a prompt manner, whether or not the perceived probing packets belong to an orchestrated campaign. Additionally, the methodology could be easily leveraged to generate network traffic signatures to facilitate capturing incoming packets as belonging to the same inferred campaign. Indeed, this would be utilized for early cyber attack warning and notification as well as for simplified analysis and tracking of such events. To realize such goals, the proposed approach models such challenging task as a problem of interpolating and predicting time series with missing values. By initially employing trigonometric interpolation and subsequently executing state space modeling in conjunction with a time-varying window algorithm, the proposed approach is able to pinpoint orchestrated probing campaigns by monitoring only a few orchestrated flows. We empirically evaluate the effectiveness of the proposed model using 330 GB of real dark net data. By comparing the outcome with a previously validated work, the results indeed demonstrate the promptness and accuracy of the proposed approach.
Keywords: Internet; computer network security; interpolation; overlay networks; system monitoring; telecommunication congestion control; time series; cyber attack notification; cyber attack warning; darknet traffic analysis; event analysis; event tracking; incoming packet capture; network traffic signature; orchestrated flow monitoring; orchestrated probing campaign inference; orchestration pattern; probing packets; reconnaissance activity; state space modeling; time series approach; time-varying window algorithm; trigonometric interpolation; Clustering algorithms; IP networks; Internet; Interpolation; Kalman filters; Telescopes; Time series analysis (ID#: 16-9120)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299912&isnumber=7299862
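The missing-value interpolation stage of the pipeline described above can be sketched as follows. The paper uses trigonometric interpolation followed by state-space modeling; this linear gap-filler is only a simplified stand-in for that first step, shown to illustrate the shape of the problem.

```python
def fill_missing(series):
    """Fill None gaps in a probe-count time series by linear
    interpolation between the nearest known neighbours (edges are
    padded with the nearest known value). The cited paper uses
    trigonometric interpolation; linear filling just shows the shape
    of the missing-value step."""
    filled = list(series)
    known = [i for i, v in enumerate(filled) if v is not None]
    for i, v in enumerate(series):
        if v is None:
            left = max((k for k in known if k < i), default=None)
            right = min((k for k in known if k > i), default=None)
            if left is None:
                filled[i] = filled[right]
            elif right is None:
                filled[i] = filled[left]
            else:
                w = (i - left) / (right - left)
                filled[i] = filled[left] * (1 - w) + filled[right] * w
    return filled

print(fill_missing([4, None, None, 10, None]))
```

Once gaps are filled, a prediction model over a moving window can compare observed against predicted counts to flag correlated, orchestrated flows.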
Ward, J.R.; Younis, M., "Base Station Anonymity Distributed Self-Assessment in Wireless Sensor Networks," in Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, pp. 103-108, 27-29 May 2015. doi: 10.1109/ISI.2015.7165947
Abstract: In recent years, Wireless Sensor Networks (WSNs) have become valuable assets to both the commercial and military communities with applications ranging from industrial control on a factory floor to reconnaissance of a hostile border. In most applications, the sensors act as data sources and forward information generated by event triggers to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary that desires to achieve the most impactful attack possible against a WSN with the least amount of effort. Even if a WSN employs conventional security mechanisms such as encryption and authentication, an adversary may apply traffic analysis techniques to identify the BS. This motivates a significant need for improved BS anonymity to protect the identity, role, and location of the BS. Previous work has proposed anonymity-boosting techniques to improve the BS's anonymity posture, but all require some amount of overhead such as increased energy consumption, increased latency, or decreased throughput. If the BS understood its own anonymity posture, then it could evaluate whether the benefits of employing an anti-traffic analysis technique are worth the associated overhead. In this paper we propose two distributed approaches to allow a BS to assess its own anonymity and correspondingly employ anonymity-boosting techniques only when needed. Our approaches allow a WSN to increase its anonymity on demand, based on real-time measurements, and therefore conserve resources. The simulation results confirm the effectiveness of our approaches.
Keywords: security of data; wireless sensor networks; WSN; anonymity-boosting techniques; anti-traffic analysis technique; base station; base station anonymity distributed self-assessment; conventional security mechanisms; improved BS anonymity; wireless sensor networks; Current measurement; Energy consumption; Entropy; Protocols; Sensors; Wireless sensor networks; anonymity; location privacy; wireless sensor networks (ID#: 16-9121)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7165947&isnumber=7165923
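One simple way a BS could quantify its own anonymity posture, in the spirit of (though not identical to) the paper's self-assessment approach, is the entropy of observed per-node traffic volumes: the more traffic concentrates around one node, the more the BS stands out to traffic analysis. This metric is our illustration, not the authors' exact measure.

```python
import math

def traffic_entropy(packet_counts):
    """Shannon entropy (bits) of the distribution of observed traffic
    over nodes. Low entropy means traffic concentrates near one node,
    i.e. the base station is easy to single out by traffic analysis."""
    total = sum(packet_counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in packet_counts.values() if c)

print(traffic_entropy({"n1": 10, "n2": 10, "n3": 10, "n4": 10}))  # uniform traffic
print(traffic_entropy({"bs": 97, "n1": 1, "n2": 1, "n3": 1}))     # BS stands out
```

Under such a metric, the BS would enable anonymity-boosting (and pay its overhead) only when the measured value drops below some acceptable floor.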
Gillani, F.; Al-Shaer, E.; Lo, S.; Qi Duan; Ammar, M.; Zegura, E., "Agile Virtualized Infrastructure to Proactively Defend Against Cyber Attacks," in Computer Communications (INFOCOM), 2015 IEEE Conference on, pp. 729-737, April 26 2015-May 1 2015. doi: 10.1109/INFOCOM.2015.7218442
Abstract: DDoS attacks have been a persistent threat to network availability for many years. Most of the existing mitigation techniques attempt to protect against DDoS by filtering out attack traffic. However, as critical network resources are usually static, adversaries are able to bypass filtering by sending stealthy low traffic from large number of bots that mimic benign traffic behavior. Sophisticated stealthy attacks on critical links can cause a devastating effect such as partitioning domains and networks. In this paper, we propose to defend against DDoS attacks by proactively changing the footprint of critical resources in an unpredictable fashion to invalidate an adversary's knowledge and plan of attack against critical network resources. Our present approach employs virtual networks (VNs) to dynamically reallocate network resources using VN placement and offers constant VN migration to new resources. Our approach has two components: (1) a correct-by-construction VN migration planning that significantly increases the uncertainty about critical links of multiple VNs while preserving the VN placement properties, and (2) an efficient VN migration mechanism that identifies the appropriate configuration sequence to enable node migration while maintaining the network integrity (e.g., avoiding session disconnection). We formulate and implement this framework using SMT logic. We also demonstrate the effectiveness of our implemented framework on both PlanetLab and Mininet-based experimentations.
Keywords: computer network security; formal logic; virtualisation; DDoS attacks; Mininet; PlanetLab; SMT logic; VN migration; VN placement; agile virtualized infrastructure; attack mitigation techniques; critical network resources; cyber attacks; distributed denial-of-service attack; network availability; network resource reallocation; virtual networks; Computational modeling; Computer crime; Mathematical model; Reconnaissance; Routing protocols; Servers; Substrates (ID#: 16-9122)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218442&isnumber=7218353
Rushing, D.; Guidry, J.; Alkadi, I., "Collaborative Penetration-Testing and Analysis Toolkit (CPAT)," in Aerospace Conference, 2015 IEEE, pp. 1-9, 7-14 March 2015. doi: 10.1109/AERO.2015.7119262
Abstract: Penetration testing (or “pentesting”) is critical to both maintaining and increasing the reliability of computer networks while lessening their vulnerability. The number, importance and value of these networks has been growing over the past decade, and their capabilities and respective uses have been integrated into many aspects of our lives. Without penetration testing, our networks can fall victim to a myriad of malicious mayhem which has the potential for serious, large-scale ramifications, and when these networks are not operating as expected it is often individuals who suffer. However, penetration testing poses its own new and diverse set of problems to security analysts. Due to the abstract nature of performing a pentest, the near complete lack of design geared toward effective collaboration and teamwork in many widely used penetration testing tools can create a notable hindrance for security teams. This paper describes a software project surrounding network penetration testing from a collaborative standpoint and the problems associated with team-based efforts utilizing present network analysis tools and technologies.
Keywords: program testing; security of data; CPAT; collaborative penetration-testing and analysis toolkit; large-scale ramifications; malicious mayhem; network analysis tools; security teams; software project; team-based efforts; Biographies; Computer hacking; Integrated circuits; Reconnaissance; Meteor framework; penetration testing; real time data (ID#: 16-9123)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7119262&isnumber=7118873
Ramachandruni, R.S.; Poornachandran, P., "Detecting the Network Attack Vectors on SCADA Systems," in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, pp. 707-712, 10-13 Aug. 2015. doi: 10.1109/ICACCI.2015.7275694
Abstract: Currently, critical infrastructures such as SCADA systems are increasingly under threat, and attacks on them often go unreported. There is a great need to address them. Today the majority of industries use SCADA systems, so it is very critical to protect these systems. An attack on these systems could cause serious damage to the infrastructure and sometimes pose a threat to human life as well. To date there are very few solutions to address SCADA security, so it is important to take countermeasures against attacks on these systems. In this paper we analyze the use of honeypot systems in detecting network attack vectors on SCADA systems. We start by analyzing and testing various honeypot features which can help in providing additional security for SCADA systems. A honeypot is built to mimic the services of an ICS; it is exposed to the Internet to make it attractive to attackers and to monitor their activities. The goal is to model the attacking methodologies and suggest recommendations to make SCADA systems secure.
Keywords: Internet; SCADA systems; computer network security; critical infrastructures; ICS; Internet; SCADA security systems; critical infrastructures; honeypot systems; industrial control system; network attack vector detection; Internet; MIMICs; Monitoring; Protocols; Reconnaissance; SCADA systems; Honeypots; ICS; IDS; IPS; SCADA (ID#: 16-9124)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275694&isnumber=7275573
Ullrich, J.; Kieseberg, P.; Krombholz, K.; Weippl, E., "On Reconnaissance with IPv6: A Pattern-Based Scanning Approach," in Availability, Reliability and Security (ARES), 2015 10th International Conference on, pp. 186-192, 24-27 Aug. 2015. doi: 10.1109/ARES.2015.48
Abstract: Today's capability of fast Internet-wide scanning allows insights into the Internet ecosystem, but the on-going transition to the new Internet Protocol version 6 (IPv6) makes the approach of probing all possible addresses infeasible, even at current speeds of more than a million probes per second. As a consequence, the exploitation of frequent patterns has been proposed to reduce the search space. Current patterns are manually crafted and based on educated guesses of administrators. At the time of writing, their adequacy has not yet been evaluated. In this paper, we assess the idea of pattern-based scanning for the first time, and use an experimental set-up in combination with three real-world data sets. In addition, we developed a pattern-based algorithm that automatically discovers patterns in a sample and generates addresses for scanning based on its findings. Our experimental results confirm that pattern-based scanning is a promising approach for IPv6 reconnaissance, but also that currently known patterns are of limited benefit and are outperformed by our new algorithm. Our algorithm not only discovers more addresses, but also finds implicit patterns. Furthermore, it is more adaptable to future changes in IPv6 addressing and harder to mitigate than approaches with manually crafted patterns.
Keywords: IP networks; Internet; protocols; IPv6 addressing; IPv6 reconnaissance; Internet Protocol version 6; Internet ecosystem; Internet-wide scanning; pattern-based algorithm; pattern-based scanning approach; search space; Internet; Ports (Computers); Probes; Protocols; Reconnaissance; Servers; Standards; Addresses; IPv6; Network Security (ID#: 16-9125)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7299913&isnumber=7299862
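A minimal sketch of pattern-based address generation, as a rough simplification of (not a reproduction of) the authors' algorithm: learn which nibbles are constant across sampled addresses, then enumerate scan candidates by varying only the free positions instead of the full 2^128 space.

```python
from itertools import product

def discover_pattern(samples):
    """For 32-nibble IPv6 addresses (hex strings without colons), return
    a 32-entry pattern: the shared nibble where all samples agree, else
    None for a 'free' position."""
    pattern = []
    for pos in range(32):
        nibbles = {addr[pos] for addr in samples}
        pattern.append(nibbles.pop() if len(nibbles) == 1 else None)
    return pattern

def generate_candidates(pattern, limit=256):
    """Enumerate scan candidates by varying only the free nibbles,
    capped at `limit` addresses."""
    free = [i for i, nib in enumerate(pattern) if nib is None]
    candidates = []
    for combo in product("0123456789abcdef", repeat=len(free)):
        addr = list(pattern)
        for pos, nib in zip(free, combo):
            addr[pos] = nib
        candidates.append("".join(addr))
        if len(candidates) >= limit:
            break
    return candidates

samples = ["20010db8" + "0" * 23 + last for last in "12a"]
pattern = discover_pattern(samples)
print(sum(nib is None for nib in pattern))  # number of free nibbles found
```

The paper's algorithm is richer (it also finds implicit, position-spanning patterns), but the core search-space reduction is of this form.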
Robertson, S.; Alexander, S.; Micallef, J.; Pucci, J.; Tanis, J.; Macera, A., "CINDAM: Customized Information Networks for Deception and Attack Mitigation," in Self-Adaptive and Self-Organizing Systems Workshops (SASOW), 2015 IEEE International Conference on, pp. 114-119, 21-25 Sept. 2015. doi: 10.1109/SASOW.2015.23
Abstract: The topology of networks typically remains static over long periods of time, giving attackers the advantage of long planning cycles to develop, test, and refine targeted attacks. The CINDAM design preempts the attacker by creating ephemeral, per-host views of the protected enclave to transform the constant topology of computing networks into deceptive, mutable, and individualized ones that are able to impede nation-state attacks while still providing mission services to legitimate users. CINDAM achieves this deception without affecting network operations and without modifying client and server software. CINDAM is being implemented using software-defined networking technology for a cost-effective cyber deception solution.
Keywords: computer network security; software defined networking; telecommunication network planning; telecommunication network topology; CINDAM design; cost-effective cyber deception solution; customized information networks for deception and attack mitigation; nation-state attacks; network topology; software-defined networking technology; Conferences; IP networks; Network topology; Ports (Computers); Reconnaissance; Servers; Topology; Adaptive Networks; CINDAM; Deception; Networks; SDN (ID#: 16-9126)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7306566&isnumber=7306518
Chia-Nan Kao; Yung-Cheng Chang; Nen-Fu Huang; Salim, I.S.; I-Ju Liao; Rong-Tai Liu; Hsien-Wei Hung, "A Predictive Zero-Day Network Defense Using Long-Term Port-Scan Recording," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 695-696, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346890
Abstract: A zero-day attack is a critical network attack. The zero-day attack period (ZDAP) is the period from the release of malware/exploit until a patch becomes available. IDS/IPS cannot effectively block zero-day attacks because they use pattern-based signatures in general. This paper proposes a Prophetic Defender (PD) by which ZDAP can be minimized. Prior to an actual attack, hackers scan networks to identify hosts with vulnerable ports. If this port scanning can be detected early, zero-day attacks will become detectable. The PD architecture makes use of a honeypot-based pseudo server deployed to detect malicious port scans. We operated a port-scanning honeypot for 6 years, from 2009 to 2015. By analyzing the 6-year port-scanning log data, we understand that PD is effective for detecting and blocking zero-day attacks. The block rate of the proposed architecture is 98.5%.
Keywords: computer network security; digital signatures; PD architecture; ZDAP; critical network attack; honeypot-based pseudoserver; long-term port-scan recording; malicious port scan detection; malware; pattern-based signatures; port scanning detection; port-scanning honeypot; port-scanning log data; predictive zero-day network defense; prophetic defender; vulnerable ports; zero-day attack blocking; zero-day attack detection; zero-day attack period; Computer architecture; Computer hacking; Malware; Market research; Ports (Computers); Reconnaissance; Servers (ID#: 16-9127)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7346890&isnumber=7346791
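The honeypot-based detect-then-block loop described above can be caricatured in a few lines: a pseudo-server logs sources probing unused ports and blocklists any source whose probe count crosses a threshold, before a real exploit is ever delivered. The class name and threshold below are our illustrative choices, not the paper's.

```python
from collections import defaultdict

class PropheticDefenderSketch:
    """Toy model of the detect-then-block idea: record probes of unused
    ports per source IP and blocklist sources crossing a threshold."""

    def __init__(self, threshold=3):
        self.hits = defaultdict(int)   # probes seen per source IP
        self.blocked = set()           # blocklisted sources
        self.threshold = threshold

    def on_probe(self, src_ip, port):
        """Called when a source touches a port the pseudo-server exposes."""
        self.hits[src_ip] += 1
        if self.hits[src_ip] >= self.threshold:
            self.blocked.add(src_ip)

    def allow(self, src_ip):
        return src_ip not in self.blocked

pd = PropheticDefenderSketch()
for port in (22, 23, 3389):            # a scanner sweeping common ports
    pd.on_probe("198.51.100.7", port)
print(pd.allow("198.51.100.7"), pd.allow("203.0.113.5"))
```

The point, as in the paper, is that the scan happens during the ZDAP while no signature yet exists, so blocking scanners preempts the exploit itself.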
Al-Hakbani, M.M.; Dahshan, M.H., "Avoiding Honeypot Detection in Peer-to-Peer Botnets," in Engineering and Technology (ICETECH), 2015 IEEE International Conference on, pp. 1-7, 20-20 March 2015. doi: 10.1109/ICETECH.2015.7275017
Abstract: A botnet is a group of compromised computers that are controlled by a botmaster, who uses them to perform illegal activities. Centralized and P2P (Peer-to-Peer) botnets are the most commonly used botnet types. Honeypots have been used in many systems as a computer defense. They are used to attract botmasters into adding them to their botnets, where they act as spies exposing botnet attackers' behavior. In recent research works, improved mechanisms for honeypot detection have been proposed. Such mechanisms would enable botmasters to distinguish honeypots from real bots, making it more difficult for honeypots to join botnets. This paper presents a new method that can be used by security defenders to overcome the authentication procedure used by the advanced two-stage reconnaissance worm (ATSRW). The presented method utilizes the peer list information sent by an infected host during the ATSRW authentication process and uses a combination of IP address spoofing and a fake TCP three-way handshake. The paper provides an analytical study of the performance and the success probability of the presented method. We show that the presented method provides a higher chance for honeypots to join botnets despite security measures taken by botmasters.
Keywords: message authentication; peer-to-peer computing; ATSRW authentication process; IP address spoofing; advanced two-stage reconnaissance worm; centralized botnet; fake TCP three-way handshake; honeypot detection; peer-to-peer botnets; success probability; Authentication; Computers; Delays; Grippers; IP networks; Peer-to-peer computing; P2P; botnet; detecting; honeypot; honeypot aware; peer-to-peer (ID#: 16-9128)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275017&isnumber=7274993
Kotenko, I.; Doynikova, E., "The CAPEC Based Generator of Attack Scenarios for Network Security Evaluation," in Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), 2015 IEEE 8th International Conference on, vol. 1, pp. 436-441, 24-26 Sept. 2015. doi: 10.1109/IDAACS.2015.7340774
Abstract: The paper proposes a technique and a software tool for the generation of attack scenarios - random sequences of attack patterns and appropriate sequences of security events. The technique suggested is based on the application of open standards for the representation of attack patterns and vulnerabilities. The tool was developed within the scope of an integrated system for network security analysis, risk assessment and countermeasure generation. It is intended to test the effectiveness of this system by simulating the input data: random attacks against computer networks.
Keywords: computer network security; software tools; CAPEC based generator; common attack pattern enumeration and classification; computer network security evaluation; software tool; Computer networks; Dictionaries; Generators; Knowledge engineering; Reconnaissance; Software; attack graphs; attack patterns; cyber security; risk assessment; security evaluation; security events (ID#: 16-9129)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7340774&isnumber=7340677
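The generator's core idea, random sequences of attack patterns constrained by plausible successors, can be sketched in a few lines of Python. The pattern names and successor graph below are invented for illustration; they are not taken from CAPEC or from the paper:

```python
import random

# Hypothetical CAPEC-style attack patterns with plausible successor patterns.
PATTERNS = {
    "reconnaissance": ["scanning"],
    "scanning": ["exploit-vulnerability"],
    "exploit-vulnerability": ["privilege-escalation", "data-exfiltration"],
    "privilege-escalation": ["data-exfiltration"],
    "data-exfiltration": [],  # terminal pattern
}

def generate_scenario(start="reconnaissance", rng=random):
    """Random walk over the successor graph: one random attack scenario."""
    scenario = [start]
    while PATTERNS[scenario[-1]]:  # stop when a terminal pattern is reached
        scenario.append(rng.choice(PATTERNS[scenario[-1]]))
    return scenario
```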
Chavez, Adrian R.; Stout, William M.S.; Peisert, Sean, "Techniques for the Dynamic Randomization of Network Attributes," in Security Technology (ICCST), 2015 International Carnahan Conference on, pp. 1-6, 21-24 Sept. 2015. doi: 10.1109/CCST.2015.7389661
Abstract: Critical infrastructure control systems continue to foster predictable communication paths and static configurations that allow easy access to our networked critical infrastructure around the world. This makes them attractive and easy targets for cyber-attack. We have developed technologies that address these attack vectors by automatically reconfiguring network settings. Applying these protective measures will convert control systems into "moving targets" that proactively defend themselves against attack. This "Moving Target Defense" (MTD) revolves around the movement of network reconfiguration, securely communicating reconfiguration specifications to other network nodes as required, and ensuring that connectivity between nodes is uninterrupted. Software-Defined Networking (SDN) is leveraged to meet many of these goals. Our MTD approach eliminates adversaries targeting known static attributes of network devices and systems, and consists of the following three techniques: (1) network randomization for TCP/UDP ports; (2) network randomization for IP addresses; (3) network randomization for network paths. In this paper, we describe the implementation of the aforementioned technologies. We also discuss the individual and collective successes of the techniques, challenges for deployment, constraints and assumptions, and the performance implications of each technique.
Keywords: IP networks; Overlay networks; Ports (Computers); Protocols; Reconnaissance; Routing; Virtual private networks; Computer Security; Dynamic Defense; IP Address Hopping; Moving Target Defense; Software Defined Networking (ID#: 16-9130)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7389661&isnumber=7389647
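The port-randomization technique can be illustrated with a time-synchronized "hopping" function: peers that share a secret derive the same pseudo-random port for each time epoch, while an outside observer cannot predict the next hop. This is a generic sketch of the idea, not the authors' SDN implementation:

```python
import hashlib

def hop_port(secret: str, epoch: int, lo: int = 1024, hi: int = 65535) -> int:
    """Derive the service port for a time epoch from a shared secret,
    so only synchronized peers can predict the next hop."""
    digest = hashlib.sha256(f"{secret}:{epoch}".encode()).digest()
    return lo + int.from_bytes(digest[:4], "big") % (hi - lo + 1)
```

The same construction extends to IP-address randomization by drawing from an address pool instead of a port range.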
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Network Security Architecture and Resilience 2015 |
The requirement for resilience in network security architecture is part of the hard problem in the Science of Security. The work cited here on these interrelated subjects was presented in 2015.
Kishore, R.; Pappa, A.C.; Varshini S, I., "Light Weight Security Architecture for Cluster Based Wireless Sensor Networks," in Ubiquitous Wireless Broadband (ICUWB), 2015 IEEE International Conference on, pp. 1-5, 4-7 Oct. 2015. doi: 10.1109/ICUWB.2015.7324468
Abstract: Recent technological advancements and increasing potential applications have led to the use of Wireless Sensor Networks in various fields. Since the sensor nodes are usually deployed in places where there is no human surveillance, these networks are highly vulnerable to various security threats. The memory, power and processing constraints of the sensor nodes pose challenges to developing efficient security algorithms. Although many cryptographic techniques have been proposed to mitigate outside attacks, the securing of networks against inside attacks has not been addressed effectively. In this paper, a novel security system has been proposed to address the overall security requirement for Wireless Sensor Networks. Defense against outside attacks is provided using a cluster-based authentication and key management scheme based on Elliptic Curve Cryptography, while inside attacks are handled by a Hybrid Intrusion Detection System using a Bayesian probabilistic model for decision making. The simulation results show the overall effectiveness of the proposed scheme, revealing better performance in terms of resilience to outside attacks, memory capacity, energy efficiency and false alarm rate.
Keywords: Bayes methods; energy conservation; pattern clustering; public key cryptography; telecommunication security; wireless sensor networks; Bayesian probabilistic model; cluster based wireless sensor network; cluster-based authentication scheme; cryptographic technique; decision making; elliptic curve cryptography; energy efficiency; false alarm rate; hybrid intrusion detection system; key management scheme; light weight security architecture; memory capacity; outside attack mitigation; sensor node pose processing constraint; Authentication; Bayes methods; Decision making; Elliptic curve cryptography; Intrusion detection; Wireless sensor networks (ID#: 16-9492)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7324468&isnumber=7324387
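The Bayesian decision step in such a hybrid IDS amounts to computing the posterior probability that a node is malicious given the observed evidence; a minimal sketch with illustrative numbers (not taken from the paper):

```python
def posterior_intrusion(prior, p_ev_given_attack, p_ev_given_normal):
    """Bayes' rule: P(attack | evidence) from the attack prior and the
    likelihood of the observed evidence under each hypothesis."""
    num = p_ev_given_attack * prior
    den = num + p_ev_given_normal * (1 - prior)
    return num / den
```

For example, with a 10% prior and evidence nine times more likely under attack than under normal behavior, the posterior is 0.5; an alarm threshold on this posterior then trades detection rate against false alarms.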
Penera, E.; Chasaki, D., "Packet Scheduling Attacks on Shipboard Networked Control Systems," in Resilience Week (RWS), 2015, pp. 1-6, 18-20 Aug. 2015. doi: 10.1109/RWEEK.2015.7287421
Abstract: Shipboard networked control systems are based on a distributed control system architecture that provides remote and local control monitoring. In order to allow the network to scale, a hierarchical communication network is composed of high-speed Ethernet-based network switches. Ethernet is the prevalent medium to transfer control data, such as control signals, alarm signals, and sensor measurements on the network. However, communication capabilities bring new security vulnerabilities and make communication links a potential target for various kinds of cyber/physical attacks. The goal of this work is to implement and demonstrate a network layer attack against networked control systems, by tampering with temporal characteristics of the network, leading to time-varying delays and packet scheduling abnormalities.
Keywords: computer network security; delay systems; local area networks; networked control systems; scheduling; ships; telecommunication control; time-varying systems; alarm signal; communication capability; communication link; control data; control signal; cyber attack; distributed control system architecture; hierarchical communication network; highspeed Ethernet based network switch; network layer attack; packet scheduling abnormality; packet scheduling attack; physical attack; remote and local control monitoring; security vulnerability; sensor measurement; shipboard networked control system; temporal characteristics; time varying delay; Delays; IP networks; Network topology; Networked control systems; Security; Topology (ID#: 16-9493)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7287421&isnumber=7287407
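The attack's effect, time-varying delay on control traffic, can be modeled in a few lines; the jitter metric below is one simple way a defender might notice the scheduling abnormality. This is a hypothetical model, not the paper's testbed:

```python
import random

def inject_delays(send_times, max_delay, rng):
    """Attacker model: add a random, time-varying delay to each packet."""
    return [t + rng.uniform(0.0, max_delay) for t in send_times]

def max_jitter(arrival_times):
    """Spread of inter-arrival gaps; zero for perfectly periodic traffic."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return max(gaps) - min(gaps)
```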
Harshe, O.A.; Teja Chiluvuri, N.; Patterson, C.D.; Baumann, W.T., "Design and Implementation of a Security Framework for Industrial Control Systems," in Industrial Instrumentation and Control (ICIC), 2015 International Conference on, pp. 127-132, 28-30 May 2015. doi: 10.1109/IIC.2015.7150724
Abstract: We address the problems of network and reconfiguration attacks on an industrial control system (ICS) by describing a trustworthy autonomic interface guardian architecture (TAIGA) that provides security against attacks originating from both supervisory and plant control nodes. In contrast to the existing security techniques which attempt to bolster perimeter security at supervisory levels, TAIGA physically isolates trusted defense mechanisms from untrusted components and monitors the physical process to detect an attack. Trusted components in TAIGA are implemented in programmable logic (PL). Our implementation of TAIGA integrates a trusted safety-preserving backup controller, and a mechanism for preemptive switching to a backup controller when an attack is detected. A hardware implementation of our approach on an inverted pendulum system illustrates how TAIGA improves resilience against software reconfiguration and network attacks.
Keywords: control engineering computing; industrial control; nonlinear systems; pendulums; production engineering computing; programmable controllers; software engineering; switching systems (control); trusted computing; ICS; TAIGA; industrial control system; inverted pendulum system; network attack; perimeter security; plant control node; preemptive switching; programmable logic; reconfiguration attack; security framework; security technique; software reconfiguration; supervisory control node; supervisory level; trusted defense mechanism; trusted safety-preserving backup controller; trustworthy autonomic interface guardian architecture; untrusted component; Production; Safety; Security; Sensors; Servomotors; Switches (ID#: 16-9494)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7150724&isnumber=7150576
Lyn, K.G.; Lerner, L.W.; McCarty, C.J.; Patterson, C.D., "The Trustworthy Autonomic Interface Guardian Architecture for Cyber-Physical Systems," in Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, pp. 1803-1810, 26-28 Oct. 2015. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.263
Abstract: The growing connectivity of cyber-physical systems (CPSes) has led to an increased concern over the ability of cyber-attacks to inflict physical damage. Current cyber-security measures focus on preventing attacks from penetrating control supervisory networks. These reactive techniques, however, are often plagued with vulnerabilities and zero-day exploits. Embedded processors in CPS field devices often possess little security of their own, and are easily exploited once the network is penetrated. We identify four possible outcomes of a cyber-attack on a CPS embedded processor. We then discuss five trust requirements that a device must satisfy to guarantee correct behavior through the device's lifecycle. Next, we examine the Trustworthy Autonomic Interface Guardian Architecture (TAIGA) which monitors communication between the embedded controller and physical process. This autonomic architecture provides the physical process with a last line of defense against cyber-attacks. TAIGA switches process control to a trusted backup controller if an attack causes a system specification violation. We conclude with experimental results of an implementation of TAIGA on a hazardous cargo-carrying robot.
Keywords: cyber-physical systems; trusted computing; CPS embedded processor; TAIGA; cyber-attacks; cyber-physical systems; cyber-security measures; embedded controller; physical process; reactive techniques; trusted backup controller; trustworthy autonomic interface guardian architecture; Control systems; Process control; Program processors; Sensors; Trojan horses; Cyber-physical systems; autonomic control; embedded device security; resilience; trust (ID#: 16-9495)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363316&isnumber=7362962
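TAIGA's preemptive switch to the trusted backup controller can be captured as a guardian step that latches onto the backup once the plant state leaves its safety envelope. This is a sketch under simplified assumptions (a scalar state and a symmetric limit); the real architecture runs in programmable logic:

```python
def guardian_step(state, primary_cmd, backup_cmd, limit, using_backup):
    """One monitoring step: pass the primary command through unless the
    safety envelope is violated, then latch onto the backup controller."""
    if using_backup or abs(state) > limit:
        return backup_cmd, True  # latched: stay on backup from now on
    return primary_cmd, False
```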
Dayal, A.; Tbaileh, A.; Yi Deng; Shukla, S., "Distributed VSCADA: An Integrated Heterogeneous Framework for Power System Utility Security Modeling and Simulation," in Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES), 2015 Workshop on, pp. 1-6, 13-13 April 2015. doi: 10.1109/MSCPES.2015.7115408
Abstract: The economic machinery of the United States is reliant on complex large-scale cyber-physical systems which include electric power grids, oil and gas systems, transportation systems, etc. Protection of these systems and their control from security threats, and improvement of the robustness and resilience of these systems, are important goals. Since all these systems have Supervisory Control and Data Acquisition (SCADA) in their control centers, a number of test beds have been developed at various laboratories. Usually on such test beds, people are trained to operate and protect these critical systems. In this paper, we describe a virtualized distributed test bed that we developed for modeling and simulating SCADA applications and for carrying out related security research. The test bed is virtualized by integrating various heterogeneous simulation components. It can be reconfigured to simulate the SCADA of a power system, a transportation system or any other critical system, provided a back-end domain-specific simulator for such systems is attached to it. In this paper, we describe how we created a scalable architecture capable of simulating larger infrastructures and integrated communication models to simulate different network protocols. We also developed a series of middleware packages that integrate various simulation platforms into our test bed using the Python scripting language. To validate the usability of the test bed, we briefly describe how a power system SCADA scenario can be modeled and simulated in our test bed.
Keywords: SCADA systems; authoring languages; control engineering computing; middleware; power system security; power system simulation; Python scripting language; back-end domain specific simulator; complex large-scale cyber-physical systems; distributed VSCADA; economic machinery; heterogeneous simulation components; integrated heterogeneous framework; middleware packages; network protocols; power system utility security modeling; power system utility security simulation platform; supervisory control and data acquisition; system protection; transportation system; virtualized distributed test bed; Databases; Load modeling; Power systems; Protocols; SCADA systems; Servers; Software; Cyber Physical Systems; Cyber-Security; Distributed Systems; NetworkSimulation; SCADA (ID#: 16-9496)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7115408&isnumber=7115373
Januario, F.; Santos, A.; Palma, L.; Cardoso, A.; Gil, P., "A Distributed Multi-Agent Approach for Resilient Supervision Over a IPv6 WSAN Infrastructure," in Industrial Technology (ICIT), 2015 IEEE International Conference on, pp. 1802-1807, 17-19 March 2015. doi: 10.1109/ICIT.2015.7125358
Abstract: Wireless Sensor and Actuator Networks have become an important area of research. They can provide flexibility and low operational and maintenance costs, and they are inherently scalable. In the realm of the Internet of Things the majority of devices are able to communicate with one another, and in some cases they can be deployed with an IP address. This feature is undoubtedly very beneficial in wireless sensor and actuator network applications, such as monitoring and control systems. However, this kind of communication infrastructure is rather challenging, as it can compromise the overall system performance due to several factors, namely outliers, intermittent communication breakdown or security issues. In order to improve the overall resilience of the system, this work proposes a distributed hierarchical multi-agent architecture implemented over an IPv6 communication infrastructure. The Contiki Operating System and the RPL routing protocol were used together to provide IPv6-based communication between nodes and an external network. Experimental results collected from a laboratory IPv6-based WSAN test-bed show the relevance and benefits of the proposed methodology to cope with communication loss between nodes and the server.
Keywords: Internet of Things; multi-agent systems; routing protocols; wireless sensor networks; Contiki operating system; IP address; IPv6 WSAN infrastructure; IPv6 communication infrastructure; Internet of Things; RPL routing protocol; distributed hierarchical multiagent architecture; distributed multiagent approach; external network; intermittent communication; resilient supervision; wireless sensor and actuator networks; Actuators; Electric breakdown; Monitoring; Peer-to-peer computing; Routing protocols; Security (ID#: 16-9497)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7125358&isnumber=7125066
Hoefling, M.; Heimgaertner, F.; Menth, M.; Katsaros, K.V.; Romano, P.; Zanni, L.; Kamel, G., "Enabling Resilient Smart Grid Communication over the Information-Centric C-DAX Middleware," in Networked Systems (NetSys), 2015 International Conference and Workshops on, pp. 1-8, 9-12 March 2015. doi: 10.1109/NetSys.2015.7089080
Abstract: Limited scalability, reliability, and security of today’s utility communication infrastructures are the main obstacles to the deployment of smart grid applications. The C-DAX project aims at providing and investigating a communication middleware for smart grids to address these problems, applying the information-centric networking and publish/subscribe paradigm. We briefly describe the C-DAX architecture, and extend it with a flexible resilience concept based on resilient data forwarding and data redundancy. Different levels of resilience support are defined, and their underlying mechanisms are described. Experiments show fast and reliable performance of the resilience mechanism.
Keywords: middleware; power engineering computing; smart power grids; communication middleware; data redundancy; flexible resilience concept; information-centric C-DAX middleware; information-centric networking; publish/subscribe paradigm; resilient data forwarding; resilient smart grid communication; smart grids; utility communication infrastructures; Delays; Monitoring; Reliability; Resilience; Security; Subscriptions; Synchronization (ID#: 16-9498)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7089080&isnumber=7089054
Marnerides, A.K.; Bhandari, A.; Murthy, H.; Mauthe, A.U., "A Multi-Level Resilience Framework for Unified Networked Environments," in Integrated Network Management (IM), 2015 IFIP/IEEE International Symposium on, pp. 1369-1372, 11-15 May 2015. doi: 10.1109/INM.2015.7140498
Abstract: Networked infrastructures underpin most social and economical interactions nowadays and have become an integral part of the critical infrastructure. Thus, it is crucial that heterogeneous networked environments provide adequate resilience in order to satisfy the quality requirements of the user. In order to achieve this, a coordinated approach to confront potential challenges is required. These challenges can manifest themselves under different circumstances in the various infrastructure components. The objective of this paper is to present a multi-level resilience approach that goes beyond the traditional monolithic resilience schemes that focus mainly on one infrastructure component. The proposed framework considers four main aspects, i.e. users, application, network and system. The latter three are part of the technical infrastructure while the former profiles the service user. Under two selected scenarios this paper illustrates how an integrated approach coordinating knowledge from the different infrastructure elements allows a more effective detection of challenges and facilitates the use of autonomic principles employed during the remediation against challenges.
Keywords: security of data; anomaly detection; autonomic principles; critical infrastructure; heterogeneous networked environments; monolithic resilience schemes; multilevel resilience framework; unified networked environments; Computer architecture; Conferences; Malware; Monitoring; Resilience; Systematics; Anomaly Detection; Autonomic Networks; Network Architectures; Resilience; Security (ID#: 16-9499)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140498&isnumber=7140257
Bloom, G.; Narahari, B.; Simha, R.; Namazi, A.; Levy, R., "FPGA SoC architecture and runtime to prevent hardware Trojans from leaking secrets," in Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, pp. 48-51, 5-7 May 2015. doi: 10.1109/HST.2015.7140235
Abstract: Hardware Trojans compromise security by invalidating the assumption that hardware provides a root-of-trust for secure systems. We propose a novel approach for an FPGA system-on-chip (SoC) to ensure confidentiality of trusted software despite hardware Trojan attacks. Our approach employs defensive techniques that feature morphing on-chip resources for moving target defense against fabrication-time Trojans, onion-encryption for confidentiality, and replication of functionally-equivalent variants of processing elements with arbitrated voting for resilience to design-time Trojans. These techniques are enabled by partial runtime reconfiguration (PRR) and are managed by a hardware abstraction layer (HAL) that reduces developer burden. We call our approach the Morph Onion-encryption Replication PRR HAL, or MORPH. MORPH aims to provide a stable interface for embedded systems developers to use in deploying applications that are resilient to hardware Trojans.
Keywords: cryptography; embedded systems; field programmable gate arrays; system-on-chip; trusted computing; FPGA SoC architecture; HAL; MORPH; PRR; arbitrated voting; design-time Trojans; embedded systems developers; fabrication-time trojans; hardware abstraction layer; hardware trojans; morph onion-encryption replication PRR HAL; on-chip resource morphing; partial runtime reconfiguration; root-of-trust; secret leaking; secure systems; system-on-chip; trusted software; Cryptography; Field programmable gate arrays; Hardware; IP networks; System-on-chip; Trojan horses (ID#: 16-9500)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140235&isnumber=7140225
Chugh, J., "Resilience, Survivability and Availability in WDM Optical Mesh Network," in Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, pp. 222-227, 11-13 March 2015. doi: (not provided)
Abstract: The network has become essential to all aspects of modern life, and thus the consequences of network disruption have become increasingly severe. It is widely recognized that networks are generally not sufficiently resilient, survivable, available and dependable, and that significant research, development and engineering are necessary to improve the situation. This paper describes the high-level architecture of a WDM optical mesh network for resilience, survivability and availability. It also describes the protection and restoration schemes available for optical networks, and depicts how these schemes can be used to design a highly resilient, highly survivable and highly available (99.99999%) network.
Keywords: optical communication; telecommunication network reliability; telecommunication security; wavelength division multiplexing; wireless mesh networks; WDM optical mesh network; network disruption; protection schemes; restoration schemes; Availability; Optical fiber networks; Optical fibers; Resilience; Routing; Wavelength division multiplexing; Optical Network; Survivability; WDM (ID#: 16-9501)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100249&isnumber=7100186
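The 99.99999% ("seven nines") figure can be made concrete with two standard reliability calculations: the annual downtime implied by an availability figure, and the availability of 1+1 protection over two disjoint paths. This is generic reliability arithmetic, not a result from the paper:

```python
def downtime_minutes_per_year(availability):
    """Expected annual downtime implied by an availability figure."""
    return (1 - availability) * 365 * 24 * 60

def parallel_availability(a1, a2):
    """1+1 protection over disjoint paths: service fails only if both fail."""
    return 1 - (1 - a1) * (1 - a2)
```

Seven nines corresponds to roughly 0.05 minutes (about three seconds) of downtime per year, and two independent three-nines paths already combine to six nines.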
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Quick Response (QR) Code 2015 |
QR codes are used to store information in two-dimensional grids which can be decoded quickly. The work here deals with extending their encoding and decoding implementations for user authentication and access control as well as tagging. For the Science of Security community, the work is relevant to cyber physical systems, cryptography, and resilience. The work cited here was presented in 2015.
Xiangpeng Fu; Kaiying Feng; Changzhong Wang; Junxing Zhang, "Improving Fingerprint Based Access Control System Using Quick Response Code," in Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), 2015 IEEE International Conference on, pp.1-5, 12-14 June 2015. doi: 10.1109/CIVEMSA.2015.7158611
Abstract: Access control systems have been widely used in physical security to authenticate the passing people and control their entrance. The existing systems can be classified into the fingerprint based system, the proximity card based system, etc. according to the adopted authentication techniques. However, proximity cards are easy to lose, while fingerprints also become less reliable because they can be copied to make fakes. It appears access control systems relying on only one single authentication technique can be really risky. In this paper, we improve the traditional fingerprint based access control system with an additional authentication process and a remote authorization scheme, both of which are based on Quick Response Code (QR code). The second authentication process leverages the one-time password (OTP) and the personalized response to a challenge contained in the QR code to enhance security. The authorization scheme assists a remote manager to grant temporary access to otherwise unauthorized personnel using the time-stamped authorization information stored in the QR code. We have implemented the prototype of the proposed system in the .NET framework. Our experiments show the prototype takes about 77 ms to offer more rigorous authentication and 134 ms to provide both strengthened authentication and authorization.
Keywords: QR codes; authorisation; fingerprint identification; message authentication;.NET framework; OTP;QR code; Quick Response code; authentication process; fingerprint based access control system; one-time password; personalized response; physical security; remote authorization scheme; remote manager; time-stamped authorization information; Authentication; Authorization; Fingerprint recognition; Generators; Servers; QR code; authentication; physical security (ID#: 15-8924)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158611&isnumber=7158585
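The OTP component of such a scheme can be illustrated with the standard RFC 4226 HMAC-based one-time password, the kind of code a QR challenge could carry; the paper's exact OTP construction may differ:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation offset
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

With the RFC's test secret `12345678901234567890`, the first two codes are 755224 and 287082; the server verifies the code and advances its counter.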
Suresh, M.; Saravana Kumar, P.; Sundararajan, T.V.P., "IoT Based Airport Parking System," in Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, pp. 1-5, 19-20 March 2015. doi: 10.1109/ICIIECS.2015.7193216
Abstract: The proliferation of technology paves the way for new kinds of devices that can communicate with other devices to produce output, mostly over wireless communication. Wirelessly communicating embedded devices linked to one another over the Internet form the IoT (Internet of Things). If all objects and people in daily life were equipped with identifiers, computers could manage and inventory them. Besides RFID, the tagging of things may be achieved through technologies such as near field communication, barcodes, QR codes and digital watermarking. Here a new method of using embedded technology to provide such an application is presented: Arduino is used as an embedded controller to interface an Ethernet shield with a PC/laptop to provide IoT over Ethernet. A user can use this parking service in the airport scenario, provided by the airport authority with a user ID and password. Whenever users need to check on a vehicle in the parking lot, they use the ID and password to log on to the airport web link and view the status of the car in the parking lot using IoT. An IoT-based airport parking system is discussed here to implement the Arduino environment as an IoT application.
Keywords: Internet of Things; airports; embedded systems; intelligent transportation systems; local area networks; radio links; Arduino environment; Internet of Things; IoT based airport parking system; QR codes; RFID; airport web link; digital watermarking; embedded controller; embedded technology; interface Ethernet; laptop; near field communication; single link; user ID; wireless communication; Airports; Boards; Conferences; Internet of things; Technological innovation; Vehicles; IoT (Internet of Things); QR code (Quick Response Code); RFID (Radio-Frequency Identification) (ID#: 15-8925)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7193216&isnumber=7192777
Hogpracha, W.; Vongpradhip, S., "Recognition System for QR Code on Moving Car," in Computer Science & Education (ICCSE), 2015 10th International Conference on, pp. 14-18, 22-24 July 2015. doi: 10.1109/ICCSE.2015.7250210
Abstract: Most research on QR code recognition has addressed reading a QR code from a still image or a motionless object. This paper proposes a system which reads a QR code from a moving object, with the QR code attached to a windshield. The experiment was done by recording a video file while the car was moving at various speeds and then analyzing the video files. From the experimental results, an identification success rate of 100 percent was achieved when the car was moving under 30 kilometers per hour. At a speed of 60 kilometers per hour, the identification success rate was 30 percent.
Keywords: QR codes; image coding; image recognition; video recording; video signal processing; QR code recognition; motionless object; moving car; recognition system; video file; Automotive components; Bar codes; Image recognition;2D Barcode; QR Code; Quick Response Code; car; vehicle; windshield (ID#: 15-8926)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7250210&isnumber=7250193
Mahajan, J.R.; Patil, N.N., "Alpha Channel for Integrity Verification Using Digital Signature on Reversible Watermarking QR," in Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, pp. 602-606, 26-27 Feb. 2015. doi: 10.1109/ICCUBEA.2015.123
Abstract: Based on the increased awareness of digital rights in commercial activity, the internet and media demand protection to enhance security. This paper presents a multi-level reversible watermarking scheme for image integrity and tamper detection or localization. The proposed integrity analysis process is based on region-of-interest watermarking with signatures and the use of an alpha channel. Integrity is analyzed at three distinct levels, which are used for the detection of modification, the modified location, and forensic analysis respectively. Digital watermarking with 2D barcodes is a topic of wide interest in the security field, and this paper proposes a matrix barcode with three-level image watermarking using a QR code and a digital signature to verify image copyrights. A new watermarking method via the use of an alpha channel is proposed; the alpha channel is used for controlling the transparency of the image. The proposed watermarking framework allows a user with an appropriate SHA hash function to verify the authenticity, integrity and ownership of an image, and the hash is used as a metric in the localization of tampered regions. The scheme successfully localizes attacks, shows high fidelity, and is capable of localizing modified regions in a watermarked image. Channel watermarking can be added to make the system more robust in detecting tampering attacks. The sole purpose is to verify integrity, i.e., tamper verification.
Keywords: QR codes; copyright; cryptography; data integrity; digital signatures; feature extraction; image coding; image watermarking; 2D barcode; QR code; alpha channel; authenticity verification; digital signature; hash function; image copyright; image integrity; image watermarking; integrity verification; reversible watermarking scheme; tamper detection; Authentication; Digital images; Digital signatures; Multimedia communication; Robustness; Watermarking; Alpha Channel; Authentication; Image security; Quick Response Codes; Watermarking (ID#: 15-8927)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155918&isnumber=7155781
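The integrity-verification core of such a scheme, hashing a region of interest and comparing against the stored signature, reduces to a few lines. SHA-256 is shown for illustration; the paper's exact SHA variant and alpha-channel embedding are not reproduced here:

```python
import hashlib

def sign_region(pixels):
    """Digest of the region-of-interest bytes; this is what would be
    embedded (e.g. in a QR code via the alpha channel) as the signature."""
    return hashlib.sha256(bytes(pixels)).hexdigest()

def verify_region(pixels, signature):
    """Tamper check: recompute the digest and compare with the signature."""
    return sign_region(pixels) == signature
```

Any single-byte modification of the region changes the digest, so verification fails and the region is flagged as tampered.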
Misun Ahn; Seunghyun Hong; Sungwon Lee, "A Research on the QR Code Recognition Improvement Using the Cloud-Based Pre-Generated Image Matching Scheme," in Information Networking (ICOIN), 2015 International Conference on, pp. 356-357, 12-14 Jan. 2015. doi: 10.1109/ICOIN.2015.7057912
Abstract: This paper describes a novel method to recognize Quick Response (QR) Codes. The QR Code is a two-dimensional code that is currently used in various fields. With the rapid growth of the smartphone market, the recognition distance of QR Codes has increased, but the recognition angle is still limited. To tackle this issue, we propose a QR Code recognition method using `cloud-based pre-generated image matching'. Our experimental results show the efficiency of the proposed method.
Keywords: QR codes; cloud computing; image coding; image matching; smart phones; QR code recognition method; cloud-based pregenerated image matching scheme; quick response code; smartphone market; two-dimensional code; Cameras; Image matching; Image recognition; Mobile handsets; Servers; Shape; Cloud Computing; QR Code; Recognition (ID#: 15-8928)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7057912&isnumber=7057846
Bajpai, M.K., "Researching through QR Codes in Libraries," in Emerging Trends and Technologies in Libraries and Information Services (ETTLIS), 2015 4th International Symposium on, pp. 291-294, 6-8 Jan. 2015. doi: 10.1109/ETTLIS.2015.7048214
Abstract: Reaching information is now much easier and more effective in the age of information technology. Electronic documents and resources have changed the entire paradigm of education and research. Students, teachers, professionals, academicians and researchers may get their information on their desktop, laptop or smartphone through an electronic platform. The availability and use of Information and Communication Technology (ICT) has also posed challenges for library professionals in making optimum use of the information resources available. It has been observed that, in spite of awareness programs, several e-resources are underutilized or little used. Several tools may be used to raise the usage of e-resources to the maximum; one of them is QR Code technology. Quick Response codes, like barcodes, were originally designed for product promotion. This paper presents a working model that can be used in libraries to maximize the usage of e-books and other library products and services.
Keywords: QR codes; academic libraries; digital libraries; electronic publishing; information resources; library automation; ICT; QR code technology; academic libraries; bar-codes; desktop; e-books; e-resources; electronic documents; electronic platform; electronic resources; information and communication technology; information resources; laptop; library products; library professionals; library services; product promotion; quick response codes; smartphone; Educational institutions; Electronic publishing; Internet; Libraries; Market research; Software; Academic Libraries; QR Codes; Quick Response Code; Smartphone; e-books (ID#: 15-8929)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7048214&isnumber=7048153
Md Numan-Al-Mobin, A.; Cross, W.M.; Kellar, J.J.; Anagnostou, D.E., "RFID Integrated QR Code Tag Antenna," in Microwave Symposium (IMS), 2015 IEEE MTT-S International, pp. 1-3, 17-22 May 2015. doi: 10.1109/MWSYM.2015.7167044
Abstract: This paper presents an entirely new RFID tag antenna design that incorporates the QR (Quick Response) code for security purposes. The tag antenna is designed to operate at 2.45 GHz. The RFID-integrated QR code tag antenna is printed with an additive material deposition system, which enables production of a low-cost tag antenna with extended security.
Keywords: QR codes; UHF antennas; microstrip antennas; radiofrequency identification; telecommunication security; QR code tag antenna; RFID tag antenna design; frequency 2.45 GHz; quick response code; Antenna measurements; Antennas; Data communication; Electronic mail; Resonant frequency; ISM frequency band; QR code; RFID; antenna; bar code antenna; security applications; supply-chain management; tag (ID#: 15-8930)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167044&isnumber=7166703
Khandal, D.; Somwanshi, D., "A Novel Cost Effective Access Control and Auto Filling Form System Using QR Code," in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, pp. 1-5, 10-13 Aug. 2015. doi: 10.1109/ICACCI.2015.7275575
Abstract: QR codes store information in two-dimensional grids that can be decoded quickly. The work proposed here extends Quick Response (QR) code encoding and decoding to design a new articulated user authentication and access control mechanism. The work also proposes a new simultaneous registration system for offices and organizations. The proposed system retrieves a candidate's information from their QR identification code and transfers the data to a digital application form, while granting authentication to authorized QR images from the database. The system can improve quality of service and thus increase the productivity of any organization.
Keywords: QR codes; authorisation; cryptography; decoding; image coding; information retrieval; information storage; quality of service; QR identification code; articulated user authentication design; authorized QR image; auto filling form system; candidate information retrieval; cost effective access control system; data transfer; decoding implementation; digital application form; encoding implementation; information storage; offices; organizations; quality of service improvement; quick response code; registration system; two-dimensional grid; Decoding; Handwriting recognition; IEC; ISO; Image recognition; Magnetic resonance imaging; Monitoring; Authentication; Automated filling form; Code Reader; Embedded system; Encoding-Decoding; Proteus; QR codes; Security (ID#: 15-8931)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7275575&isnumber=7275573
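The auto-filling step the abstract describes can be sketched as follows: candidate details are serialized into the payload a QR code would carry, then decoded and mapped onto form fields at scan time. The JSON payload and the field names here are hypothetical; the paper does not specify its format.

```python
import json

def make_qr_payload(candidate: dict) -> str:
    """Serialize candidate details into the string a QR code would carry."""
    return json.dumps(candidate, sort_keys=True)

def fill_form(payload: str) -> dict:
    """Decode a scanned payload and map it onto digital-form fields.

    The field names are illustrative; a missing field is left blank."""
    data = json.loads(payload)
    return {field: data.get(field, "") for field in ("name", "id", "department")}

payload = make_qr_payload({"name": "A. Candidate", "id": "42", "department": "R&D"})
form = fill_form(payload)
assert form["name"] == "A. Candidate" and form["department"] == "R&D"
```

In a full system the payload would also be checked against the database of authorized QR images before the form is populated.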
Zhu Siyang, "Deformed Two-Dimension Code Quick Recognition Algorithm Design and Implementation in Uncertain Environment," in Measuring Technology and Mechatronics Automation (ICMTMA), 2015 Seventh International Conference on, pp.322-325, 13-14 June 2015. doi: 10.1109/ICMTMA.2015.83
Abstract: This paper proposes a recognition algorithm that solves the puzzle of Quick Response (QR) code identification at highly deviated shooting angles. The algorithm includes five procedures: binarization, edge detection, Hough transformation, morphological processing, and projective geometry transformation. It has been simulated and verified in Java and Android mobile terminal environments, and operations such as writing and retrieving information by scanning or reading the two-dimension code have been implemented.
Keywords: Hough transforms; Java; QR codes; edge detection; geometry; mobile computing; smart phones; Android mobile terminal environment; Hough transformation; Java language environment; QR code identification; Quick Response code recognition algorithm; binarization processing; edge detection; morphological processing; projective geometry transformation; Accuracy; Algorithm design and analysis; Encoding; Error correction codes; Image edge detection; Image segmentation; Deformed; Recognition algorithm; Two-dimension code (ID#: 15-8932)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7263576&isnumber=7263490
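The final procedure in the pipeline above, projective geometry transformation, rectifies the deformed code by mapping its pixels through a 3x3 homography. A minimal sketch of the point mapping follows; the matrices are illustrative, not from the paper.

```python
def apply_homography(H, x, y):
    """Map an image point through a 3x3 projective transform (row-major H).

    The homogeneous result is divided by its third coordinate."""
    xp = H[0][0] * x + H[0][1] * y + H[0][2]
    yp = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xp / w, yp / w

# The identity homography leaves points unchanged.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert apply_homography(I, 3.0, 4.0) == (3.0, 4.0)

# A pure translation by (5, -2).
T = [[1, 0, 5], [0, 1, -2], [0, 0, 1]]
assert apply_homography(T, 3.0, 4.0) == (8.0, 2.0)
```

In practice H is estimated from the code's corner points (e.g., the three finder patterns plus a fourth correspondence) and applied to every pixel to produce a fronto-parallel view for decoding.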
Yang Tian; Kaigui Bian; Guobin Shen; Xiaochen Liu; Xiaoguang Li; Moscibroda, T., "Contextual-Code: Simplifying Information Pulling from Targeted Sources in Physical World," in Computer Communications (INFOCOM), 2015 IEEE Conference on, pp. 2245-2253, April 26 2015-May 1 2015. doi: 10.1109/INFOCOM.2015.7218611
Abstract: The popularity of the QR code clearly indicates users' strong demand to acquire (or pull) further information from sources of interest (e.g., a poster) in the physical world. However, existing information-pulling practices such as a mobile search or QR code scanning require heavy user involvement to identify the targeted posters. Meanwhile, businesses (e.g., advertisers) are also interested in learning about the behavior of potential customers, such as where, when, and how users show interest in their offerings. Unfortunately, little such context information is provided by existing information-pulling systems. In this paper, we present Contextual-Code (C-Code), an information-pulling system that greatly reduces users' effort in pulling information from targeted posters while providing rich context information about user behavior to businesses. C-Code leverages the rich contextual information captured by smartphone sensors to automatically disambiguate information sources in different contexts. It assigns simple codes (e.g., a character) to sources whose contexts are not discriminating enough. To pull the information from a source of interest, users only need to input the simple code shown on the targeted source. Our experiments demonstrate the effectiveness of the C-Code design: users can effectively and uniquely identify targeted information sources with an average accuracy over 90%.
Keywords: binary codes; smart phones; ubiquitous computing; QR code; contextual code; information pulling; interested source; physical world; quick-response code; rich context information; smartphone sensors; targeted information sources; targeted sources; user behavior; Business; Context; IEEE 802.11 Standard; Interference; Magnetic separation; Sensor phenomena and characterization (ID#: 15-8933)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7218611&isnumber=7218353
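The disambiguation step described above, assigning a short code only when sensed contexts collide, can be sketched as follows. The data layout and the single-character code alphabet are illustrative assumptions, not the paper's actual design.

```python
from collections import defaultdict
from string import ascii_uppercase

def assign_codes(sources):
    """sources: mapping of source name -> sensed context tuple (e.g. location,
    orientation). A source with a unique context needs no code; sources whose
    contexts collide each get a short code ('A', 'B', ...) to display."""
    by_context = defaultdict(list)
    for name, ctx in sources.items():
        by_context[ctx].append(name)
    codes = {}
    for ctx, names in by_context.items():
        if len(names) == 1:
            codes[names[0]] = None            # context alone identifies it
        else:
            for code, name in zip(ascii_uppercase, sorted(names)):
                codes[name] = code            # character shown on the poster
    return codes

posters = {"p1": ("hall-3", "north"), "p2": ("hall-3", "north"),
           "p3": ("lobby", "east")}
codes = assign_codes(posters)
assert codes["p3"] is None and {codes["p1"], codes["p2"]} == {"A", "B"}
```

The user's phone then combines its own sensed context with the typed character (if any) to resolve exactly one source.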
Tzu-Han Chou; Chuan-Sheng Ho; Yan-Fu Kuo, "QR Code Detection Using Convolutional Neural Networks," in Advanced Robotics and Intelligent Systems (ARIS), 2015 International Conference on, pp. 1-5, 29-31 May 2015. doi: 10.1109/ARIS.2015.7158354
Abstract: Barcodes have long been used for data storage. Detecting and locating barcodes in images with complex backgrounds is an essential yet challenging step in automatic barcode reading. This work proposes an algorithm that localizes and segments two-dimensional quick response (QR) barcodes. The localization involves a convolutional neural network that can detect partial QR barcodes; majority voting is then applied to determine barcode locations, and image processing algorithms segment the barcodes from the background. Experimental results show that the proposed approach is robust in detecting QR barcodes under rotation and deformation.
Keywords: QR codes; image segmentation; object recognition; automatic barcode reading; barcode detection; barcode location; complex background images; convolutional neural networks; data storage; image processing algorithms; partial QR barcode detection; two-dimensional quick response barcode segmentation; Convolution; Face recognition; Feature extraction; Image recognition; Image segmentation; Neurons; Training (ID#: 15-8934)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7158354&isnumber=7158226
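The majority-voting step described above can be sketched as follows, assuming a sliding-window classifier that emits per-cell detections; the data layout and vote threshold are illustrative, not from the paper.

```python
from collections import Counter

def vote_barcode_locations(window_detections, min_votes=3):
    """window_detections: list of (grid_cell, is_barcode) pairs produced by a
    sliding-window CNN classifier. A cell is kept as a barcode location only
    when enough overlapping windows agree, which suppresses isolated false
    positives from partial-pattern matches."""
    votes = Counter(cell for cell, positive in window_detections if positive)
    return sorted(cell for cell, n in votes.items() if n >= min_votes)

detections = [((0, 1), True), ((0, 1), True), ((0, 1), True),
              ((2, 3), True), ((2, 3), False), ((4, 4), True)]
assert vote_barcode_locations(detections) == [(0, 1)]
```

The surviving cells are then handed to the segmentation stage, which separates the barcode region from the background.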
Leal, J.; Couto, R.; Costa, P.M.; Galvão, T., "Exploring Ticketing Approaches Using Mobile Technologies: QR Codes, NFC and BLE," in Intelligent Transportation Systems (ITSC), 2015 IEEE 18th International Conference on, pp. 7-12, 15-18 Sept. 2015. doi: 10.1109/ITSC.2015.9
Abstract: There is a growing interest in integrating public transportation with the smartphone, and mobile ticketing provides just that. Different technologies can be used to do so, such as Near Field Communication, Quick Response codes and Bluetooth Low Energy. This paper explores the possibility of implementing a mobile ticketing solution with these technologies, with a focus on the ticket validation process. The technologies are analyzed and compared at different levels, and two possible approaches are proposed. Both solutions are presented in terms of infrastructure and maintenance cost, as well as passenger interaction and benefit. The feasibility and performance of the technologies are analyzed in the context of the proposed approaches. As a result, a mobile ticketing solution can be implemented using different technologies, and the choice among them depends on factors such as the available funds, the intended interaction level, performance, and the size of the target audience.
Keywords: codes; mobile communication; near-field communication; smart phones; BLE; NFC; QR codes; exploring ticketing approaches; integrating public transportation; maintenance cost; mobile technologies; mobile ticketing; passenger interaction; smartphone; target audience; ticket validation process; Bluetooth; Context; Lighting; Mobile communication; Performance evaluation; Vehicles; ble; bluetooth; mobile ticketing solution; nfc; public transport; qr codes (ID#: 15-8935)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7313101&isnumber=7312804
John, R.A.; Raahemifar, K., "Designing a 2D Color Barcode," in Electrical and Computer Engineering (CCECE), 2015 IEEE 28th Canadian Conference on, pp. 297-301, 3-6 May 2015. doi: 10.1109/CCECE.2015.7129203
Abstract: Barcode technology has undergone many dramatic changes during the past few years, driven by the increased need to encode more data into the barcode. One of the latest developments in barcode technology is the color barcode, such as Microsoft's High Capacity Color Barcode (HCCB). With four or eight colors, color barcodes are now used extensively in areas such as airports, print advertising and mobile phones, compared with 2D black-and-white barcodes such as the Quick Response (QR) code and Data Matrix. This paper provides an overview of six papers related to color barcodes and then introduces an algorithm that encodes data into a color barcode by binarization and grouping of bits, and decodes data from the color barcode.
Keywords: bar codes; decoding; image colour analysis; 2D color barcode technology; HCCB; Microsoft high capacity color barcode; airports; bit binarization; bit grouping; data decoding; mobile phones; print advertising; Color; Computers; Decoding; Encoding; Image color analysis; Mobile communication; Mobile handsets (ID#: 15-8936)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7129203&isnumber=7129089
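The binarization-and-grouping idea described above can be sketched as follows for a four-color barcode: each 2-bit group selects one of four colors, and decoding inverts the mapping. The palette is an illustrative assumption, not the paper's actual color set.

```python
# Illustrative four-color palette: 2 bits encoded per module.
PALETTE = ["black", "cyan", "magenta", "yellow"]

def bits_to_colors(bits: str):
    """Group a bit string into 2-bit pairs and map each pair to one color."""
    assert len(bits) % 2 == 0, "bit string must split evenly into pairs"
    return [PALETTE[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2)]

def colors_to_bits(colors):
    """Invert the mapping when decoding a scanned color barcode."""
    return "".join(format(PALETTE.index(c), "02b") for c in colors)

encoded = bits_to_colors("00110110")
assert colors_to_bits(encoded) == "00110110"
```

With eight colors the same scheme packs 3 bits per module, which is the capacity advantage color barcodes hold over black-and-white 2D codes.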
Tretinjak, M.F., "The Implementation of QR Codes in the Educational Process," in Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2015 38th International Convention on, pp. 833-835, 25-29 May 2015. doi: 10.1109/MIPRO.2015.7160387
Abstract: Quick Response (QR) codes are two-dimensional (2-D) barcodes that can contain information such as URL links (e.g. a link to a YouTube video or website) and text (e.g. contact details, product details). These square pattern codes consist of black modules on a white background. A QR code generator is software that stores data (e.g. a URL link, text, or Google Maps location) into a QR code. This encoded data can be decoded by scanning the QR code symbol with a mobile device that is equipped with a camera and QR code reader software. QR codes serve a number of purposes; they are mostly used in manufacturing (e.g. product traceability, process control, inventory and equipment management), warehousing and logistics (e.g. item tracking), retailing (e.g. sales management), healthcare (e.g. medical records management, patient identification, equipment and device tracking), transportation (e.g. ticketing and boarding passes), office automation (e.g. document management), and marketing and advertising (e.g. mobile marketing, electronic tickets, coupons, payments). This paper describes various methods for the implementation of QR codes in the educational process. Experience from the School of Electrical Engineering in Zagreb shows that QR codes support both independent and collaborative learning and can create an interactive learning environment.
Keywords: QR codes; computer aided instruction; electrical engineering computing; electrical engineering education; image sensors; interactive systems; mobile handsets; QR code generator; QR code reader software; QR code symbol; URL links; Zagreb; advertising; black modules; camera; collaborative learning; educational process; electrical engineering school; healthcare; independent learning; interactive learning environment; logistics; marketing; mobile device; office automation; quick response codes; retailing; square pattern codes; transportation; two-dimensional barcodes; warehousing; white background; Cameras; Electrical engineering; Generators; Google; Mobile handsets; Software; Uniform resource locators (ID#: 15-8937)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7160387&isnumber=7160221
Nikou, S.A.; Economides, A.A., "The Effects of Perceived Mobility and Satisfaction on the Adoption of Mobile-Based Assessment," in Interactive Mobile Communication Technologies and Learning (IMCL), 2015 International Conference on, pp. 167-171, 19-20 Nov. 2015. doi: 10.1109/IMCTL.2015.7359579
Abstract: Mobile-based Assessment is increasingly used in different educational settings, but its successful implementation depends on user acceptance. While previous research provides evidence on the acceptance of mobile learning and computer-based assessment, few studies focus explicitly on the acceptance of Mobile-based Assessment. This study examines the impact of Perceived Mobility, Satisfaction, Perceived Usefulness and Perceived Ease of Use on students' Behavioral Intention to Use Mobile-based Assessment. Forty-seven secondary school students, using their mobile devices and Quick Response (QR) coding technology, participated in an outdoor Mobile-based Assessment procedure during their visit to a Botanic Garden. Partial Least Squares (PLS) was used for the data analysis of the recorded students' perceptions of the Mobile-based Assessment. Results show that Perceived Mobility, Satisfaction, Perceived Usefulness and Perceived Ease of Use are all significant determinants of Behavioral Intention to Use Mobile-based Assessment. Several important implications for designing and implementing Mobile-based Assessment procedures are discussed.
Keywords: behavioural sciences computing; least squares approximations; mobile learning; PLS; behavioral intention; computer-based assessment; educational settings; mobile learning; mobile-based assessment; partial least squares; perceived ease of use; perceived mobility; perceived usefulness; quick response coding technology; user acceptance; Data analysis; Education; Focusing; Instruments; Mobile communication; Mobile handsets; QR codes; mobile learning; mobile-based assessment; motivation; outdoor education; perceived mobility; technology acceptance model (ID#: 15-8938)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7359579&isnumber=7359535
Gao, Lei; Liu, Yu; Yuan, Yubo; Wang, Xiaokun, "Design and Implementation of Optical Cable Label System for Smart Substation Based on QR Code," in TENCON 2015 - 2015 IEEE Region 10 Conference, pp. 1-5, 1-4 Nov. 2015. doi: 10.1109/TENCON.2015.7372848
Abstract: The conventional optical cable labels in smart substations contain too little information to support operation or maintenance. To solve this problem, this paper introduces a QR (Quick Response) code system for labeling and identifying optical cables in China's smart substations. The label format is specified for three kinds of cables, and the implementation of the generation and identification modules is described. The generation module provides design drawings of the secondary system, QR code labels and a database for handheld terminals. The identification module quickly identifies an optical cable by scanning its QR code and shows its physical connection and virtual circuit to users. In this way, scan-and-read for signal circuits is achieved, which improves the efficiency of commissioning, operation and maintenance work in smart substations.
Keywords: Databases; Optical design; Optical fiber cables; Optical fibers; Substations; Wiring; QR code; database; optical cable label; smart substation (ID#: 15-8939)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7372848&isnumber=7372693