Publications of Interest
The Publications of Interest section contains bibliographical citations, abstracts (when available), and links on specific topics and research problems of interest to the Science of Security community.
How recent are these publications?
These bibliographies include recent scholarly research on topics that have been presented or published within the past year. Some represent updates of work presented in previous years; others are new topics.
How are topics selected?
The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness to current researchers.
How can I submit or suggest a publication?
Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.
Submissions and suggestions may be sent to: research (at) securedatabank.net
(ID#:14-2638)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
IP Piracy
Intellectual Property protection continues to be a matter of major research interest. The articles cited here look at hardware security, provenance, and piracy prevention. They were published between May and August of 2014.
- Rostami, M.; Koushanfar, F.; Karri, R., "A Primer on Hardware Security: Models, Methods, and Metrics," Proceedings of the IEEE, vol.102, no.8, pp.1283, 1295, Aug. 2014. doi: 10.1109/JPROC.2014.2335155 Abstract: The multinational, distributed, and multistep nature of integrated circuit (IC) production supply chain has introduced hardware-based vulnerabilities. Existing literature in hardware security assumes ad hoc threat models, defenses, and metrics for evaluation, making it difficult to analyze and compare alternate solutions. This paper systematizes the current knowledge in this emerging field, including a classification of threat models, state-of-the-art defenses, and evaluation metrics for important hardware-based attacks.
Keywords: pattern classification; security of data; IC production supply chain; ad hoc threat models; evaluation metrics; hardware security; hardware-based attacks; hardware-based vulnerabilities; integrated circuit; threat models classification; Computer security; Hardware; Integrated circuit modeling; Security; Supply chain management; Trojan horses; Watermarking; Counterfeiting; IP piracy; hardware Trojans; reverse engineering; side-channel attacks (ID#:14-2948)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6860363&isnumber=6860340
- Rajendran, J.; Sinanoglu, O.; Karri, R., "Regaining Trust in VLSI Design: Design-for-Trust Techniques," Proceedings of the IEEE , vol.102, no.8, pp.1266,1282, Aug. 2014. doi: 10.1109/JPROC.2014.2332154 Designers use third-party intellectual property (IP) cores and outsource various steps in their integrated circuit (IC) design flow, including fabrication. As a result, security vulnerabilities have been emerging, forcing IC designers and end-users to reevaluate their trust in hardware. If an attacker gets hold of an unprotected design, attacks such as reverse engineering, insertion of malicious circuits, and IP piracy are possible. In this paper, we shed light on the vulnerabilities in very large scale integration (VLSI) design and fabrication flow, and survey design-for-trust (DfTr) techniques that aim at regaining trust in IC design. We elaborate on four DfTr techniques: logic encryption, split manufacturing, IC camouflaging, and Trojan activation. These techniques have been developed by reusing VLSI test principles.
Keywords: VLSI; cryptography; integrated circuit design; logic circuits; microprocessor chips; reverse engineering; DfTr techniques; IP cores; IP piracy; VLSI design; design-for-trust techniques; integrated circuit camouflaging; integrated circuit design flow; logic encryption; malicious circuits; regaining trust; reverse engineering; security vulnerabilities; split manufacturing; third-party intellectual property cores; trojan activation; unprotected design; very large scale integration design; Design methodology; Encryption; Hardware; integrated circuit modeling; Logic gates; Very large scale integration; Design automation; design for testability; security (ID#:14-2949)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6856167&isnumber=6860340
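As a concrete illustration of the logic-encryption idea surveyed above, the short sketch below models a small combinational netlist locked with XOR/XNOR key gates, so that only the correct key restores the intended function. It is not code from the cited paper; the circuit, the 3-bit key, and the gate placement are invented purely for illustration.

    # Toy illustration of logic encryption (a key-gated netlist). This is a
    # minimal sketch, not the construction from the cited paper: XOR key gates
    # are inserted where the correct key bit is 0, XNOR key gates where it is 1,
    # so only the right key makes the gates transparent.

    def original_circuit(a, b, c):
        """Intended (unlocked) function: a 3-input majority gate."""
        return (a & b) | (b & c) | (a & c)

    def locked_circuit(a, b, c, key):
        """Key-gated version of the same netlist (correct key is 0b101)."""
        k0, k1, k2 = key
        w0 = ((a & b) ^ k0) ^ 1   # XNOR key gate -> transparent when k0 == 1
        w1 = (b & c) ^ k1         # XOR  key gate -> transparent when k1 == 0
        w2 = ((a & c) ^ k2) ^ 1   # XNOR key gate -> transparent when k2 == 1
        return w0 | w1 | w2

    if __name__ == "__main__":
        inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
        good = all(locked_circuit(a, b, c, (1, 0, 1)) == original_circuit(a, b, c)
                   for a, b, c in inputs)
        bad = sum(locked_circuit(a, b, c, (0, 0, 0)) != original_circuit(a, b, c)
                  for a, b, c in inputs)
        print("correct key matches on all inputs:", good)
        print("wrong key corrupts", bad, "of 8 input patterns")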
- Rahman, M.T.; Forte, D.; Quihang Shi; Contreras, G.K.; Tehranipoor, M., "CSST: An Efficient Secure Split-Test for Preventing IC Piracy," North Atlantic Test Workshop (NATW), 2014 IEEE 23rd,pp.43,47, 14-16 May 2014. doi: 10.1109/NATW.2014.17 With the high costs associated with modern IC fabrication, most semiconductor companies have gone fabless, i.e., they outsource manufacturing of their designs to contract foundries. This horizontal business model has led to many well documented issues associated with untrusted foundries including IC overproduction and shipping improperly or insufficiently tested chips. Entering such chips in the supply chain can be catastrophic for critical applications. We propose a new Secure Split-Test to give control over testing back to the IP owner. Each chip is locked during test. The IP owner is the only entity who can interpret the locked test results and unlock passing chips. In this way, SST can prevent shipping overproduction and defective chips from reaching the supply chain. The proposed method considerably simplifies the communication required between the foundry and IP owner compared to the original version of the secure split test. The results demonstrate that our new technique is more secure than the original and with less communication barriers.
Keywords: integrated circuit testing; supply chain management; CSST; IC fabrication; IC overproduction; IC piracy prevention; IC shipping; IP owner; communication barriers; efficient secure split-test; horizontal business model; outsource manufacturing; semiconductor companies; supply chain; Assembly; Foundries; IP networks; Integrated circuits; Security; Supply chains; Testing (ID#:14-2950)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875447&isnumber=6875429
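The "locked test" idea behind secure split-test can be sketched in a few lines: the foundry only ever sees scrambled test responses, and only the IP owner, holding a per-chip secret, can decide which chips pass and deserve to be unlocked. The sketch below uses a hypothetical XOR scrambling and key size; it is not the CSST protocol from the paper.

    # Generic sketch of the "locked test" idea behind secure split-test: the
    # foundry sees only scrambled responses; the IP owner, who holds the
    # per-chip key, descrambles them and decides which chips to unlock.
    # Key handling and scrambling here are hypothetical, not the CSST scheme.
    import secrets

    def scramble(response_bits: int, chip_key: int) -> int:
        """Chip returns test responses XOR-masked with its secret key."""
        return response_bits ^ chip_key

    def ip_owner_check(scrambled: int, chip_key: int, expected: int) -> bool:
        """Only the IP owner can unmask the response and compare it with the
        golden (expected) response; passing chips get an unlock token."""
        return (scrambled ^ chip_key) == expected

    chip_key = secrets.randbits(32)          # provisioned by the IP owner
    expected = 0b1011_0010_1110_0001_0101_1100_0011_1010

    # The foundry runs the test; it cannot tell pass from fail from this value.
    from_foundry = scramble(expected, chip_key)
    print("IP owner accepts chip:", ip_owner_check(from_foundry, chip_key, expected))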
Immersive Systems
Immersive systems, commonly known as "virtual reality", are used for a variety of functions such as gaming, rehabilitation, and training. These systems mix the virtual with the actual and have implications for cybersecurity, because attacks may make the jump from virtual systems to actual ones. The research cited here was presented between January and August of 2014.
- Gebhardt, S.; Pick, S.; Oster, T.; Hentschel, B.; Kuhlen, T., "An Evaluation Of A Smart-Phone-Based Menu System For Immersive Virtual Environments," 3D User Interfaces (3DUI), 2014 IEEE Symposium on , vol., no., pp.31,34, 29-30 March 2014. doi: 10.1109/3DUI.2014.6798837 System control is a crucial task for many virtual reality applications and can be realized in a broad variety of ways, whereat the most common way is the use of graphical menus. These are often implemented as part of the virtual environment, but can also be displayed on mobile devices. Until now, many systems and studies have been published on using mobile devices such as personal digital assistants (PDAs) to realize such menu systems. However, most of these systems have been proposed way before smartphones existed and evolved to everyday companions for many people. Thus, it is worthwhile to evaluate the applicability of modern smartphones as carrier of menu systems for immersive virtual environments. To do so, we implemented a platform-independent menu system for smartphones and evaluated it in two different ways. First, we performed an expert review in order to identify potential design flaws and to test the applicability of the approach for demonstrations of VR applications from a demonstrator's point of view. Second, we conducted a user study with 21 participants to test user acceptance of the menu system. The results of the two studies were contradictory: while experts appreciated the system very much, user acceptance was lower than expected. From these results we could draw conclusions on how smartphones should be used to realize system control in virtual environments and we could identify connecting factors for future research on the topic.
Keywords: human computer interaction; mobile computing; smart phones; user interfaces; virtual reality; VR applications; immersive virtual environments; platform-independent menu system; smart-phone-based menu system; system control; user acceptance; virtual reality; Control systems; Mobile communication; Navigation; Smart phones; Usability; Virtual environments (ID#:14-2939)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798837&isnumber=6798822
- Hansen, N.T.; Hald, K.; Stenholt, R., "Poster: Amplitude Test For Input Devices For System Control In Immersive Virtual Environment," 3D User Interfaces (3DUI), 2014 IEEE Symposium on, vol., no., pp.137,138, 29-30 March 2014. doi: 10.1109/3DUI.2014.6798858 In this study, the amplitudes best suited to compare four input devices are examined in the context of a pointer-based system control interface for immersive virtual environments. The interfaces are based on a pen and tablet, a touch tablet, hand-tracking using Kinect and a Wii Nunchuk analog stick. This is done as a preliminary study in order to be able to compare the interfaces with the goal of evaluating them in the context of using virtual environments in a class lecture. Five amplitudes are tested for each of the four interfaces by having test participants mark menu elements in an eight-part radial menu using each combination of amplitude and interface. The amplitudes to be used for future experiments were found. Also, the movement times for the interfaces do not fit the predictions of Fitts' law.
Keywords: interactive devices; notebook computers; user interfaces; virtual reality; Kinect; Wii Nunchuk analog stick; amplitude test; class lecture; hand-tracking; immersive virtual environments; input devices; menu elements; pen; pointer-based system control interface; touch tablet; Context; Control systems; Educational institutions; Indexes; Layout; Three-dimensional displays; Virtual environments (ID#:14-2940)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798858&isnumber=6798822
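For readers checking the Fitts' law claim above: the law predicts a movement time of MT = a + b * log2(2A/W), where A is the movement amplitude and W the target width. The sketch below uses illustrative constants (not values fitted to this experiment) to show how the predicted time grows with amplitude; the study reports that its measured times did not follow this prediction.

    # Fitts' law prediction MT = a + b * log2(2A / W).
    # The constants a, b, and the target width below are illustrative
    # placeholders, not values fitted to the cited experiment.
    import math

    a, b = 0.10, 0.15          # seconds, seconds/bit (hypothetical)
    target_width = 2.0         # target width, hypothetical units

    for amplitude in (5.0, 10.0, 20.0, 40.0, 80.0):
        index_of_difficulty = math.log2(2 * amplitude / target_width)
        mt = a + b * index_of_difficulty
        print(f"A={amplitude:5.1f}  ID={index_of_difficulty:4.2f} bits  MT={mt:.3f} s")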
- Khan, N.M.; Kyan, M.; Ling Guan, "ImmerVol: An Immersive Volume Visualization System," Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), 2014 IEEE International Conference on, pp.24,29, 5-7 May 2014. doi: 10.1109/CIVEMSA.2014.6841433 Volume visualization is a popular technique for analyzing 3D datasets, especially in the medical domain. An immersive visual environment provides easier navigation through the rendered dataset. However, visualization is only one part of the problem. Finding an appropriate Transfer Function (TF) for mapping color and opacity values in Direct Volume Rendering (DVR) is difficult. This paper combines the benefits of the CAVE Automatic Virtual Environment with a novel approach towards TF generation for DVR, where the traditional low-level color and opacity parameter manipulations are eliminated. The TF generation process is hidden behind a Spherical Self Organizing Map (SSOM). The user interacts with the visual form of the SSOM lattice on a mobile device while viewing the corresponding rendering of the volume dataset in real time in the CAVE. The SSOM lattice is obtained through high-dimensional features extracted from the volume dataset. The color and opacity values of the TF are automatically generated based on the user's perception. Hence, the resulting TF can expose complex structures in the dataset within seconds, which the user can analyze easily and efficiently through complete immersion.
Keywords: data visualisation; feature extraction; image colour analysis; medical computing; opacity; rendering (computer graphics); self-organising feature maps; transfer functions; vectors; 3D datasets analysis; CAVE; DVR; ImmerVol; SSOM; automatic virtual environment; color values; direct volume rendering; feature extraction; immersive volume visualization system; medical domain; navigation; opacity values; rendered dataset; spherical self organizing map; transfer function; Data visualization; Image color analysis; Lattices; Rendering (computer graphics); Three-dimensional displays; Training; Vectors (ID#:14-2941)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841433&isnumber=6841424
- Basu, A; Johnsen, K., "Ubiquitous Virtual Reality 'To-Go'," Virtual Reality (VR), 2014 IEEE, pp.161,162, March 29 2014-April 2 2014. doi: 10.1109/VR.2014.6802101 We propose to demonstrate a ubiquitous immersive virtual reality system that is highly scalable and accessible to a larger audience. With the advent of handheld and wearable devices, we have seen it gain considerable popularity among the common masses. We present a practical design of such a system that offers the core affordances of immersive virtual reality in a portable and untethered configuration. In addition, we have developed an extensive immersive virtual experience that involves engaging users visually and aurally. This is an effort towards integrating VR into the space and time of user workflows.
Keywords: notebook computers; ubiquitous computing; virtual reality; wearable computers; handheld devices; immersive virtual experience; portable configuration; ubiquitous immersive virtual reality system; ubiquitous virtual reality; untethered configuration; wearable devices; Educational institutions; Pediatrics; Positron emission tomography; Three-dimensional displays; Training; Virtual environments (ID#:14-2942)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6802101&isnumber=6802028
- Laha, B.; Bowman, D.A; Socha, J.J., "Effects of VR System Fidelity on Analyzing Isosurface Visualization of Volume Datasets," Visualization and Computer Graphics, IEEE Transactions on, vol.20, no.4, pp.513, 522, April 2014. doi: 10.1109/TVCG.2014.20 Volume visualization is an important technique for analyzing datasets from a variety of different scientific domains. Volume data analysis is inherently difficult because volumes are three-dimensional, dense, and unfamiliar, requiring scientists to precisely control the viewpoint and to make precise spatial judgments. Researchers have proposed that more immersive (higher fidelity) VR systems might improve task performance with volume datasets, and significant results tied to different components of display fidelity have been reported. However, more information is needed to generalize these results to different task types, domains, and rendering styles. We visualized isosurfaces extracted from synchrotron microscopic computed tomography (SR-mCT) scans of beetles, in a CAVE-like display. We ran a controlled experiment evaluating the effects of three components of system fidelity (field of regard, stereoscopy, and head tracking) on a variety of abstract task categories that are applicable to various scientific domains, and also compared our results with those from our prior experiment using 3D texture-based rendering. We report many significant findings. For example, for search and spatial judgment tasks with isosurface visualization, a stereoscopic display provides better performance, but for tasks with 3D texture-based rendering, displays with higher field of regard were more effective, independent of the levels of the other display components. We also found that systems with high field of regard and head tracking improve performance in spatial judgment tasks. Our results extend existing knowledge and produce new guidelines for designing VR systems to improve the effectiveness of volume data analysis.
Keywords: computerised tomography; data analysis; data visualisation; image texture; rendering (computer graphics); virtual reality; 3D texture-based rendering; CAVE-like display; SR-mCT scans; VR system fidelity; beetles; head tracking; isosurface visualization; synchrotron microscopic computed tomography; volume data analysis; volume datasets; volume visualization; Abstracts; Computed tomography; Isosurfaces; Measurement; Rendering (computer graphics); Three-dimensional displays; Visualization; Immersion; micro-CT; data analysis; volume visualization; 3D visualization; CAVE; virtual environments; virtual reality (ID#:14-2943)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777465&isnumber=6777423
- Grechkin, T.Y.; Plumert, J.M.; Kearney, J.K., "Dynamic Affordances in Embodied Interactive Systems: The Role of Display and Mode of Locomotion," Visualization and Computer Graphics, IEEE Transactions on, vol.20, no.4, pp.596,605, April 2014. doi: 10.1109/TVCG.2014.18 We investigated how the properties of interactive virtual reality systems affect user behavior in full-body embodied interactions. Our experiment compared four interactive virtual reality systems using different display types (CAVE vs. HMD) and modes of locomotion (walking vs. joystick). Participants performed a perceptual-motor coordination task, in which they had to choose among a series of opportunities to pass through a gate that cycled open and closed and then board a moving train. Mode of locomotion, but not type of display, affected how participants chose opportunities for action. Both mode of locomotion and display affected performance when participants acted on their choices. We conclude that technological properties of virtual reality system (both display and mode of locomotion) significantly affected opportunities for action available in the environment (affordances) and discuss implications for design and practical applications of immersive interactive systems.
Keywords: gait analysis; helmet mounted displays; virtual reality; CAVE; HMD; embodied interactive systems; full-body embodied interactions; head-mounted display; interactive virtual reality systems; locomotion mode; perceptual-motor coordination task; user behavior; Interactive systems; Legged locomotion; Logic gates; Psychology; Tracking; Virtual environments; Virtual reality; embodied interaction; affordances; perceptual-motor coordination; display type; interaction technique; mode of locomotion (ID#:14-2944)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777453&isnumber=6777423
- Masiero, B.; Vorlander, M., "A Framework for the Calculation of Dynamic Crosstalk Cancellation Filters," Audio, Speech, and Language Processing, IEEE/ACM Transactions on, vol.22, no.9, pp.1345, 1354, Sept. 2014. doi: 10.1109/TASLP.2014.2329184 Dynamic crosstalk cancellation (CTC) systems commonly find use in immersive virtual reality (VR) applications. Such dynamic setups require extremely high filter update rates, so filter calculation is usually performed in the frequency-domain for higher efficiency. This paper proposes a general framework for the calculation of dynamic CTC filters to be used in immersive VR applications. Within this framework, we introduce a causality constraint to the frequency-domain calculation to avoid undesirable wrap-around effects and echo artifacts. Furthermore, when regularization is applied to the CTC filter calculation, in order to limit the output levels at the loudspeakers, noncausal artifacts appear at the CTC filters and the resulting ear signals. We propose a global minimum-phase regularization to convert these anti-causal ringing artifacts into causal artifacts. Finally, an aspect that is especially critical for dynamic CTC systems is the filter switch between active loudspeakers distributed in a surround audio-visual display system with 360 deg of freedom of operator orientation. Within this framework we apply a weighted filter calculation to control the filter switch, which allows the loudspeakers' contribution to be windowed in space, resulting in a smooth filter transition.
Keywords: acoustic signal processing; crosstalk; filtering theory; frequency-domain analysis; interference suppression; loudspeakers; virtual reality; CTC filter calculation; VR applications; active loudspeakers; anticausal ringing artifacts; dynamic CTC filters; dynamic CTC systems; dynamic crosstalk cancellation filters; dynamic crosstalk cancellation systems; dynamic setups; ear signals; echo artifacts; filter switch; filter transition; filter update rates; frequency-domain calculation; minimum-phase regularization; noncausal artifacts; operator orientation; surround audio-visual display system; virtual reality applications; weighted filter calculation; wrap-around effects; Crosstalk; Ear; Frequency-domain analysis; Loudspeakers; Speech; Speech processing; Time-domain analysis; Binaural technique; causal implementation; dynamic crosstalk cancellation; minimum-phase regularization (ID#:14-2945)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6826553&isnumber=6851231
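As background for the filter calculation discussed above, the sketch below computes textbook frequency-domain crosstalk-cancellation filters by regularized inversion of the 2x2 loudspeaker-to-ear transfer matrix at each frequency bin. It deliberately omits the paper's contributions (the causality constraint, the global minimum-phase regularization, and the weighted filter switching), and the transfer functions used are random placeholders.

    # Baseline frequency-domain CTC filter calculation: per frequency bin,
    # invert the 2x2 loudspeaker-to-ear transfer matrix C with Tikhonov
    # regularization, F = (C^H C + beta*I)^-1 C^H.  Only the textbook
    # baseline; not the causality-constrained method of the cited paper.
    import numpy as np

    def ctc_filters(C, beta=1e-3):
        """C: (n_bins, 2, 2) transfer functions, ears x loudspeakers.
        Returns F: (n_bins, 2, 2) crosstalk-cancellation filters."""
        F = np.empty_like(C)
        eye = np.eye(2)
        for k in range(C.shape[0]):
            Ck = C[k]
            F[k] = np.linalg.solve(Ck.conj().T @ Ck + beta * eye, Ck.conj().T)
        return F

    # Smoke test with a random (hypothetical) plant: C @ F should be close to
    # identity at every bin when beta is small and the plant is well conditioned.
    rng = np.random.default_rng(0)
    C = rng.normal(size=(4, 2, 2)) + 1j * rng.normal(size=(4, 2, 2))
    F = ctc_filters(C, beta=1e-6)
    print(np.allclose(C @ F, np.eye(2), atol=1e-3))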
- Yifeng He; Ziyang Zhang; Xiaoming Nan; Ning Zhang; Fei Guo; Rosales, E.; Ling Guan, "vConnect: Connect the Real World To The Virtual World," Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), 2014 IEEE International Conference on, pp.30,35, 5-7 May 2014. doi: 10.1109/CIVEMSA.2014.6841434 The Cave Automatic Virtual Environment (CAVE) is a fully immersive Virtual Reality (VR) system. CAVE systems have been widely used in many applications, such as architectural and industrial design, medical training and surgical planning, museums and education. However, one limitation for most of the current CAVE systems is that they are separated from the real world. The user in the CAVE is not able to sense the real world around him or her. In this paper, we propose a vConnect architecture, which aims to establish real-time bidirectional information exchange between the virtual world and the real world. Furthermore, we propose finger interactions which enable the user in the CAVE to manipulate the information in a natural and intuitive way. We implemented a vHealth prototype, a CAVE-based real-time health monitoring system, through which we demonstrated that the user in the CAVE can visualize and manipulate the real-time physiological data of the patient who is being monitored, and interact with the patient.
Keywords: health care; patient monitoring; physiology; real-time systems; software architecture; virtual reality; CAVE; VR system; cave automatic virtual environment; health monitoring system; patient monitoring; physiological data; real-time bidirectional information exchange; vConnect architecture; vHealth prototype; virtual reality; virtual world; Biomedical monitoring; Computers; Data visualization; Medical services; Prototypes; Real-time systems; Three-dimensional displays (ID#:14-2946)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841434&isnumber=6841424
- Hodgson, E.; Bachmann, E.; Thrash, T., "Performance of Redirected Walking Algorithms in a Constrained Virtual World," Visualization and Computer Graphics, IEEE Transactions on, vol.20, no.4, pp.579, 587, April 2014. doi: 10.1109/TVCG.2014.34 Redirected walking algorithms imperceptibly rotate a virtual scene about users of immersive virtual environment systems in order to guide them away from tracking area boundaries. Ideally, these distortions permit users to explore large unbounded virtual worlds while walking naturally within a physically limited space. Many potential virtual worlds are composed of corridors, passageways, or aisles. Assuming users are not expected to walk through walls or other objects within the virtual world, these constrained worlds limit the directions of travel and as well as the number of opportunities to change direction. The resulting differences in user movement characteristics within the physical world have an impact on redirected walking algorithm performance. This work presents a comparison of generalized RDW algorithm performance within a constrained virtual world. In contrast to previous studies involving unconstrained virtual worlds, experimental results indicate that the steer-to-orbit keeps users in a smaller area than the steer-to-center algorithm. Moreover, in comparison to steer-to-center, steer-to-orbit is shown to reduce potential wall contacts by over 29%.
Keywords: virtual reality; aisles; constrained virtual world; corridors; generalized RDW algorithm; immersive virtual reality; passageways; physical world; redirected walking algorithm performance; steer-to-center algorithm; steer-to-orbit algorithm; unbounded virtual worlds; user movement characteristics; Extraterrestrial measurements; Legged locomotion; Navigation; Orbits; Rendering (computer graphics); Tracking; Virtual environments; Virtual environments; redirected walking; navigation; locomotion interface; algorithm comparison (ID#:14-2947)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777456&isnumber=6777423
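A minimal sketch of the steer-to-center strategy compared above: each frame, an imperceptibly small scene rotation nudges the walking user back toward the center of the tracked space. The rotation-rate cap and the controller itself are simplified assumptions, not the generalized RDW algorithms evaluated in the paper.

    # Minimal sketch of the steer-to-center idea: each frame, the virtual
    # scene is rotated by a small amount that nudges the user's walking
    # direction toward the center of the tracked space.  The rate limit is a
    # hypothetical constant; real RDW controllers are considerably richer.
    import math

    MAX_ROTATION_RATE = math.radians(10.0)   # rad/s of injected rotation (assumed)

    def steer_to_center(user_pos, user_heading, dt, center=(0.0, 0.0)):
        """Return the scene rotation (radians) to inject this frame."""
        to_center = math.atan2(center[1] - user_pos[1], center[0] - user_pos[0])
        # signed angular error between current heading and direction to center
        error = (to_center - user_heading + math.pi) % (2 * math.pi) - math.pi
        # cap the correction at the (assumed) imperceptible rotation rate
        rate = max(-MAX_ROTATION_RATE, min(MAX_ROTATION_RATE, error))
        return rate * dt

    print(steer_to_center(user_pos=(2.0, 1.0), user_heading=0.0, dt=1 / 90))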
Keystroke Analysis
Keystroke dynamics are one basis for behavioral biometrics. The rhythms and patterns of an individual user's typing can become the basis for a unique biometric identification. Research into this area of computer security is growing. The work cited here appeared between January and August of 2014.
- Montalvao, Jugurta; Freire, Eduardo O.; Bezerra, Murilo A.; Garcia, Rodolfo, "Empirical keystroke analysis in passwords," Biosignals and Biorobotics Conference (2014): Biosignals and Robotics for Better and Safer Living (BRC), 5th ISSNIP-IEEE, pp.1,6, 26-28 May 2014. doi: 10.1109/BRC.2014.6880989 Rhythmic patterns in passwords are addressed as a kind of biometrics. Experimental results are obtained through two publicly available databases. A preprocessing step (time interval equalization) is applied to both down-down keystroke latency and key hold-down time. Improvements from this preprocessing step are shown through experiments intentionally adapted from papers by the owners of both databases. Afterwards, our main experiments are guided by two questions. Q1: How long does it take for a typist to develop a proper timing signature associated to a new meaningless password? And Q2: How does the number of symbols affect biometric performance? Measurements show that for the password .tie5Roanl typists need many dozens of repetitions to stabilize their typing rhythm. As for question Q2, experimental results show better performance for the shorter password try4-mbs, and that even for the longest one studied, .tie5Roanl, there is room for performance improvement.
Keywords: Biometrics; keystroke; password (ID#:14-2634)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6880989&isnumber=6880949
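The two raw timing features used in the study above, down-down keystroke latency and key hold-down time, can be extracted from a keystroke event log as in the sketch below. The event log is fabricated, and the paper's time-interval-equalization preprocessing step is not reproduced.

    # Sketch of the two raw timing features the paper works with: down-down
    # keystroke latency and key hold-down time, extracted from a list of
    # (key, event, timestamp) records.  The event log is made up, and the
    # "time interval equalization" preprocessing is not reproduced.
    def keystroke_features(events):
        """events: list of (key, 'down'|'up', t_seconds) in chronological order."""
        down_times, dd_latency, hold_time = {}, [], []
        last_down = None
        for key, kind, t in events:
            if kind == "down":
                if last_down is not None:
                    dd_latency.append(t - last_down)       # down-down latency
                last_down = t
                down_times[key] = t
            elif kind == "up" and key in down_times:
                hold_time.append(t - down_times.pop(key))  # key hold-down time
        return dd_latency, hold_time

    events = [("t", "down", 0.00), ("t", "up", 0.09),
              ("i", "down", 0.21), ("i", "up", 0.29),
              ("e", "down", 0.37), ("e", "up", 0.47)]
    print(keystroke_features(events))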
- Ahmed, AA; Traore, I, "Biometric Recognition Based on Free-Text Keystroke Dynamics," Cybernetics, IEEE Transactions on, vol. 44, no.4, pp. 458, 472, April 2014. doi: 10.1109/TCYB.2013.2257745 Accurate recognition of free text keystroke dynamics is challenging due to the unstructured and sparse nature of the data and its underlying variability. As a result, most of the approaches published in the literature on free text recognition, except for one recent one, have reported extremely high error rates. In this paper, we present a new approach for the free text analysis of keystrokes that combines monograph and digraph analysis, and uses a neural network to predict missing digraphs based on the relation between the monitored keystrokes. Our proposed approach achieves an accuracy level comparable to the best results obtained through related techniques in the literature, while achieving a far lower processing time. Experimental evaluation involving 53 users in a heterogeneous environment yields a false acceptance ratio (FAR) of 0.0152% and a false rejection ratio (FRR) of 4.82%, at an equal error rate (EER) of 2.46%. Our follow-up experiment, in a homogeneous environment with 17 users, yields FAR=0% and FRR=5.01%, at EER=2.13%.
Keywords: biometrics (access control); neural nets; text analysis; EER; FAR; FRR; biometric recognition; digraph analysis; equal error rate; false acceptance ratio; false rejection ratio; free text analysis; free-text keystroke dynamics; monograph analysis; neural network; Biometrics; continuous authentication; free text recognition; keystroke analysis; neural networks (ID#:14-2635)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6515332&isnumber=6766657
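To make the quoted FAR/FRR/EER figures concrete, the sketch below shows the usual way such rates are computed: sweep a decision threshold over genuine and impostor match scores and report the rates where they approximately cross. The score lists are invented, and the scoring function of the cited approach is not reproduced.

    # How FAR, FRR, and EER are typically computed from match scores: sweep a
    # threshold and find the crossover of the two error rates.  The genuine
    # and impostor scores below are fabricated for illustration only.
    import numpy as np

    def far_frr_eer(genuine, impostor):
        thresholds = np.sort(np.concatenate([genuine, impostor]))
        best = None
        for t in thresholds:
            far = np.mean(impostor >= t)   # impostors wrongly accepted
            frr = np.mean(genuine < t)     # genuine users wrongly rejected
            if best is None or abs(far - frr) < abs(best[1] - best[2]):
                best = (t, far, frr)
        t, far, frr = best
        return t, far, frr, (far + frr) / 2   # EER approximated at the crossover

    genuine = np.array([0.81, 0.77, 0.92, 0.88, 0.69, 0.95])
    impostor = np.array([0.34, 0.52, 0.41, 0.73, 0.28, 0.60])
    print(far_frr_eer(genuine, impostor))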
- Kowtko, M.A, "Biometric Authentication For Older Adults," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, pp.1,6, 2-2 May 2014. doi: 10.1109/LISAT.2014.6845213 In recent times, cyber-attacks and cyber warfare have threatened network infrastructures from across the globe. The world has reacted by increasing security measures through the use of stronger passwords, strict access control lists, and new authentication means; however, while these measures are designed to improve security and Information Assurance (IA), they may create accessibility challenges for older adults and people with disabilities. Studies have shown the memory performance of older adults decline with age. Therefore, it becomes increasingly difficult for older adults to remember random strings of characters or passwords that have 12 or more character lengths. How are older adults challenged by security measures (passwords, CAPTCHA, etc.) and how does this affect their accessibility to engage in online activities or with mobile platforms? While username/password authentication, CAPTCHA, and security questions do provide adequate protection; they are still vulnerable to cyber-attacks. Passwords can be compromised from brute force, dictionary, and social engineering style attacks. CAPTCHA, a type of challenge-response test, was developed to ensure that user inputs were not manipulated by machine-based attacks. Unfortunately, CAPTCHA are now being exploited by new vulnerabilities and exploits. Insecure implementations through code or server interaction have circumvented CAPTCHA. New viruses and malware now utilize character recognition as means to circumvent CAPTCHA [1]. Security questions, another challenge response test that attempts to authenticate users, can also be compromised through social engineering attacks and spyware. Since these common security measures are increasingly being compromised, many security professionals are turning towards biometric authentication. Biometric authentication is any form of human biological measurement or metric that can be used to identify and authenticate an authorized user of a secure system. Biometric authentication- can include fingerprint, voice, iris, facial, keystroke, and hand geometry [2]. Biometric authentication is also less affected by traditional cyber-attacks. However, is Biometrics completely secure? This research will examine the security challenges and attacks that may risk the security of biometric authentication. Recently, medical professionals in the TeleHealth industry have begun to investigate the effectiveness of biometrics. In the United States alone, the population of older adults has increased significantly with nearly 10,000 adults per day reaching the age of 65 and older [3]. Although people are living longer, that does not mean that they are living healthier. Studies have shown the U.S. healthcare system is being inundated by older adults. As security with the healthcare industry increases, many believe that biometric authentication is the answer. However, there are potential problems; especially in the older adult population. The largest problem is authentication of older adults with medical complications. Cataracts, stroke, congestive heart failure, hard veins, and other ailments may challenge biometric authentication. Since biometrics often utilize metrics and measurement between biological features, anyone of the following conditions and more could potentially affect the verification of users. 
This research will analyze older adults and the impact of biometric authentication on the verification process.
Keywords: authorisation; biometrics (access control); invasive software; medical administrative data processing; mobile computing; CAPTCHA; Cataracts; IA; TeleHealth industry; US healthcare system; access control lists; authentication means; biometric authentication; challenge-response test; congestive heart failure; cyber warfare; cyber-attacks; dictionary; hard veins; healthcare industry; information assurance; machine-based attacks; medical professionals; mobile platforms; network infrastructures; older adults; online activities; security measures; security professionals; social engineering style attacks; spyware; stroke; username-password authentication; Authentication; Barium; CAPTCHAs; Computers; Heart; Iris recognition; Biometric Authentication; CAPTCHA; Cyber-attacks; Information Security; Older Adults; Telehealth (ID#:14-2636)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845213&isnumber=6845183
- Pei-Yuan Wu; Chi-Chen Fang; Chang, J.M.; Gilbert, S.B.; Kung, S.Y., "Cost-effective Kernel Ridge Regression Implementation For Keystroke-Based Active Authentication System," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp.6028,6032, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6854761 In this study a keystroke-based authentication system is implemented on a large-scale free-text keystroke data set, where cost effective kernel-based learning algorithms are designed to enable trade-off between computational cost and accuracy performance. The authentication process evaluates the user's typing behavior on a vocabulary of words, where the judgments based on each word are concatenated by weighted votes, whose weights are also trained to provide optimal fusion of independent judgments. A novel truncated-RBF kernel is also implemented to provide better cost-performance trade-off. Experimental results validate the cost-effectiveness of the developed authentication system.
Keywords: learning (artificial intelligence); message authentication; radial basis function networks; regression analysis; cost effective kernel-based learning algorithm; cost-effective kernel ridge regression; keystroke-based authentication system; large-scale free-text keystroke data set; truncated-RBF kernel; Accuracy; Authentication; Complexity theory; Kernel; Polynomials; Training; Vectors; active authentication; cost-effective; fusion methods; kernel methods; keystroke; truncated-RBF (ID#:14-2637)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854761&isnumber=6853544
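The kernel ridge regression machinery underlying the paper can be sketched with an ordinary RBF kernel: fit alpha = (K + lambda*I)^-1 y and predict with kernel evaluations against the training set. The truncated-RBF kernel and the trained word-level vote fusion described in the paper are not reproduced, and the feature vectors below are synthetic.

    # Generic kernel ridge regression, the learning machinery the paper builds
    # on.  The paper's truncated-RBF kernel and vote fusion are not reproduced
    # here; the toy "keystroke timing" data are made up.
    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def krr_fit(X, y, lam=1e-2, gamma=1.0):
        K = rbf_kernel(X, X, gamma)
        return np.linalg.solve(K + lam * np.eye(len(X)), y)   # alpha

    def krr_predict(X_train, alpha, X_test, gamma=1.0):
        return rbf_kernel(X_test, X_train, gamma) @ alpha

    # Hypothetical keystroke-timing feature vectors with +1/-1 user labels.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(40, 5))
    y = np.sign(X[:, 0] + 0.3 * rng.normal(size=40))
    alpha = krr_fit(X, y)
    pred = np.sign(krr_predict(X, alpha, X))
    print("training accuracy:", float((pred == y).mean()))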
Language Based Security
Application-level security is a key to defending against application-level attacks. Because these applications are typically specified and implemented in programming languages, this area is generally known as "language-based security". Research into language-based security focuses on a range of languages and approaches. The works cited here were presented between January and August of 2014.
- Almorsy, M.; Grundy, J., "SecDSVL: A Domain-Specific Visual Language to Support Enterprise Security Modelling," Software Engineering Conference (ASWEC), 2014 23rd Australian, pp.152, 161, 7-10 April 2014. doi: 10.1109/ASWEC.2014.18 Enterprise security management requires capturing different security and IT systems' details, analyzing and enforcing these security details, and improving employed security to meet new risks. Adopting structured models greatly helps in simplifying and organizing security specification and enforcement processes. However, existing security models are generally limited to specific security details and do not deliver a comprehensive security model. They also often do not have user-friendly notations, being complicated extensions of existing modeling languages (such as UML). In this paper, we introduce a comprehensive Security Domain Specific Visual Language (SecDSVL), which enables capturing of key security details to support enterprise systems security management process. We discuss our SecDSVL, tool support and the model-based enterprise security management approach it supports, give a usage example, and present evaluation experiments of SecDSVL.
Keywords: business data processing; risk management; security of data; specification languages; visual languages; IT system details; SecDSVL; UML; enterprise security modelling; enterprise system security management process; model-based enterprise security management approach; modeling languages; security domain specific visual language; security models; security specification; security system details; Analytical models; Color; Passive optical networks; Security; Shape; Unified modeling language; Visualization; Domain Specific Visual Language; model-based security management; visual modelling tools (ID#:14-2951)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6824120&isnumber=6824087
- Hatzivasilis, G.; Papaefstathiou, I; Manifavas, C.; Papadakis, N., "A Reasoning System for Composition Verification and Security Validation," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,4, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814001 The procedure to prove that a system-of-systems is composable and secure is a very difficult task. Formal methods are mathematically-based techniques used for the specification, development and verification of software and hardware systems. This paper presents a model-based framework for dynamic embedded system composition and security evaluation. Event Calculus is applied for modeling the security behavior of a dynamic system and calculating its security level with the progress in time. The framework includes two main functionalities: composition validation and derivation of security and performance metrics and properties. Starting from an initial system state and given a series of further composition events, the framework derives the final system state as well as its security and performance metrics and properties. We implement the proposed framework in an epistemic reasoner, the rule engine JESS with an extension of DECKT for the reasoning process and the JAVA programming language.
Keywords: Java; embedded systems; formal specification; formal verification; reasoning about programs; security of data; software metrics; temporal logic; DECKT; JAVA programming language; composition validation; composition verification; dynamic embedded system composition; epistemic reasoner; event calculus; formal methods; model-based framework; performance metrics; reasoning system; rule engine JESS; security evaluation; security validation; system specification; system-of-systems; Cognition; Computational modeling; Embedded systems; Measurement; Protocols; Security; Unified modeling language (ID#:14-2952)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814001&isnumber=6813963
- Dooley, Rion; Stubbs, Joe; Basney, Jim, "The MyProxy Gateway," Science Gateways (IWSG), 2014 6th International Workshop on, pp.6,11, 3-5 June 2014. doi: 10.1109/IWSG.2014.8 In 2000, the original My Proxy server was released to provide a centralized way to securely store and delegate grid credentials. In 2009, the OAuth for My Proxy (OA4MP) server was released in response to security concerns expressed by resource providers and a strong trend of science gateways moving to the web. OA4MP provided a standards-based way for users to delegate X.509 credentials from My Proxy to science gateways without exposing user passwords to third-party services. This addressed both a security concern for service providers and a desire by gateway developers for a standards-based approaches to security. While OA4MP solved some problems, it introduced others. The My Proxy Gateway Service (MPG) is a Restful API to My Proxy that picks up where OA4MP left off by supporting OAuth2 credential renewal, attribute insertion, trust root management, language agnostic access patterns, and improved accounting. In this paper we first start by looking at related work and detailing the evolution of My Proxy up to the writing of this paper. Next we briefly describe OAuth2 and highlight the differences between it and OAuth1. After that we describe the MPG, its multiple configurations, and security considerations. We conclude with finishing remarks.
Keywords: Authentication; Authorization; Browsers; Logic gates; Servers; Web services; REST; api; authentication; grid; myproxy; oauth; security; web service (ID#:14-2953)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6882061&isnumber=6882053
- Dong-Ah Lee; Eui-sub Kim; Junbeom Yoo; Jang-Soo Lee; Jong Gyun Choi, "FBDtoVerilog 2.0: An Automatic Translation of FBD into Verilog to Develop FPGA," Information Science and Applications (ICISA), 2014 International Conference on, pp.1,4, 6-9 May 2014. doi: 10.1109/ICISA.2014.6847402 The PLC (Programmable Logic Controller) is a digital computer which has been widely used for nuclear RPSs (Reactor Protection Systems). There is increasing concern that such RPSs are being threatened because of its complexity, maintenance cost, security problems, etc. Recently, nuclear industry is developing FPGA-based RPSs to provide diversity or to change the platform. Developing the new platform, however, is challenge for software engineers in nuclear domain because the two platform, PLC-based and FPGA-based, are too different to apply their knowledge. This paper proposes an automatic translation of FBD (Function Block Diagram: a programming language of PLC software) into HDL (Hardware Description Language). We implemented an automatic translation tool, 'FBDtoVerilog 2.0,' which helps software engineers design FPGA-based RPSs with their experience and knowledge. Case study using a prototype version of a real-world RPS in Korea shows 'FBDtoVerilog 2.0' translates FBD programs for PLC into HDL reasonably.
Keywords: control engineering computing; field programmable gate arrays; fission reactors; hardware description languages; nuclear engineering computing; nuclear power stations; power engineering computing; programmable controllers; FBDtoVerilog 2.0; FPGA-based RPS; HDL; Korea; PLC; digital computer; function block diagram; hardware description language; maintenance cost; nuclear RPS; nuclear domain; programmable logic controller; reactor protection systems; security problems; software engineers; Field programmable gate arrays; Hardware design languages; Libraries; Power generation; Safety; Software; Wires (ID#:14-2954)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847402&isnumber=6847317
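To give a flavor of FBD-to-HDL translation, the toy sketch below emits a Verilog continuous assignment for each logic function block in a small, made-up block description. Both the block format and the translation rule are simplifications for illustration and are not the rules implemented by FBDtoVerilog 2.0.

    # Toy flavor of an FBD-to-Verilog translation: each logic function block,
    # described as data, becomes a Verilog continuous assignment.  The block
    # format and translation rule are invented for illustration only.
    VERILOG_OP = {"AND": "&", "OR": "|", "XOR": "^"}

    def fbd_block_to_verilog(block):
        """block: {'type': 'AND', 'inputs': ['a', 'b'], 'output': 'y'}"""
        op = VERILOG_OP[block["type"]]
        expr = f" {op} ".join(block["inputs"])
        return f"assign {block['output']} = {expr};"

    def fbd_to_module(name, ports, blocks):
        lines = [f"module {name}({', '.join(ports)});"]
        lines += ["  " + fbd_block_to_verilog(b) for b in blocks]
        lines.append("endmodule")
        return "\n".join(lines)

    print(fbd_to_module(
        "trip_logic",
        ["input a", "input b", "input c", "output y", "output z"],
        [{"type": "AND", "inputs": ["a", "b"], "output": "y"},
         {"type": "OR",  "inputs": ["y", "c"], "output": "z"}]))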
- Zhongpai Gao; Guangtao Zhai; Xiongkuo Min, "Information Security Display System Based On Temporal Psychovisual Modulation," Circuits and Systems (ISCAS), 2014 IEEE International Symposium on, pp.449, 452, 1-5 June 2014. doi: 10.1109/ISCAS.2014.6865167 This paper introduces an information security display system using temporal psychovisual modulation (TPVM). TPVM was proposed as a new information display technology using the interplay of signal processing, optoelectronics and psychophysics. Since the human visual system cannot detect quick temporal changes above the flicker fusion frequency (about 60 Hz) and yet modern display technologies offer much higher refresh rates, there is a chance for a single display to simultaneously serve different contents to multiple observers. A TPVM display broadcasts a set of images called atom frames at a high speed, and those atom frames are then weighted by liquid crystal (LC) shutter based viewing devices that are synchronized with the display before entering the human visual system and fusing into the desired visual stimuli. And through different viewing devices, people can see different information. In this work, we develop a TPVM based information security display prototype. There are two kinds of viewers, those authorized viewers with the viewing devices who can see the secret information and those unauthorized viewers (bystanders) without the viewing devices who only see mask/disguise images. The prototype is built on a 120 Hz LCD screen with synchronized LC shutter glasses that were originally developed for stereoscopic display. The system is written in C++ language with SDKs of Nvidia 3D Vision, DirectX, CEGUI, MuPDF and etc. We also added human-computer interaction support of the system using Kinect. The information security display system developed in this work serves as a proof-of-concept of the TPVM paradigm, as well as a testbed for future research of TPVM technology.
Keywords: computer displays; human computer interaction; image sensors; security of data; stereo image processing; C++ language; CEGUI; DirectX; Kinect; LC shutter based viewing devices; MuPDF; Nvidia 3D Vision; TPVM; flicker fusion frequency; frequency 60 Hz; human visual system; human-computer interaction support; information display technology; information security display system; liquid crystal shutter; optoelectronics; psychophysics; signal processing; stereoscopic display; temporal psychovisual modulation; Brightness; Electronic publishing; Games; Glass; Information security; Synchronization; Three-dimensional displays (ID#:14-2955)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6865167&isnumber=6865048
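The atom-frame decomposition at the heart of TPVM can be sketched numerically: construct display frames whose time average equals a disguise image, while a synchronized shutter that opens on only one frame passes the secret image. The two-frame construction below is a deliberate simplification (it ignores brightness loss and display constraints) and is not the formulation used in the paper.

    # Numeric sketch of the atom-frame idea behind TPVM: glasses that open only
    # on frame 1 integrate the secret image, while an unaided eye integrates the
    # average of both frames, which is constructed to equal a disguise image.
    # This two-frame construction is a simplification, not the paper's method.
    import numpy as np

    def atom_frames(secret, disguise):
        """Return frames f1, f2 (clipped to [0, 1]) such that
        f1 = secret and (f1 + f2) / 2 ~= disguise."""
        f1 = secret
        f2 = np.clip(2 * disguise - secret, 0.0, 1.0)
        return f1, f2

    rng = np.random.default_rng(0)
    secret = rng.uniform(0.2, 0.8, size=(4, 4))    # keep away from the extremes
    disguise = np.full((4, 4), 0.5)
    f1, f2 = atom_frames(secret, disguise)
    print("authorized viewer sees secret:", np.allclose(f1, secret))
    print("bystander sees disguise:      ", np.allclose((f1 + f2) / 2, disguise))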
- Buchler, M.; Hossen, K.; Mihancea, P.F.; Minea, M.; Groz, R.; Oriat, C., "Model Inference And Security Testing In The Spacios Project," Software Maintenance, Reengineering and Reverse Engineering (CSMR-WCRE), 2014 Software Evolution Week - IEEE Conference on, pp.411, 414, 3-6 Feb. 2014. doi: 10.1109/CSMR-WCRE.2014.6747207 The SPaCIoS project has as goal the validation and testing of security properties of services and web applications. It proposes a methodology and tool collection centered around models described in a dedicated specification language, supporting model inference, mutation-based testing, and model checking. The project has developed two approaches to reverse engineer models from implementations. One is based on remote interaction (typically through an HTTP connection) to observe the runtime behaviour and infer a model in black-box mode. The other is based on analysis of application code when available. This paper presents the reverse engineering parts of the project, along with an illustration of how vulnerabilities can be found with various SPaCIoS tool components on a typical security benchmark.
Keywords: Web services; hypermedia; program diagnostics; program verification; reverse engineering; security of data; specification languages; transport protocols; HTTP connection; SPaCIoS project; Web applications; application code analysis; black-box mode; dedicated specification language; model checking; model inference; mutation-based testing; remote interaction; reverse engineering; runtime behaviour; security benchmark; security testing; tool collection; Abstracts; Analytical models; Concrete; Crawlers; Security; Semantics; Testing; Control Flow Inference; Data-Flow Inference; Reverse-Engineering; Security; Web Application (ID#:14-2956)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6747207&isnumber=6747152
- Lihong Guo; Jian Wang; Haitao Wu; He Du, "eXtensible Markup Language access Control Model With Filtering Privacy Based On Matrix Storage," Communications, IET , vol.8, no.11, pp.1919,1927, July 24 2014. doi: 10.1049/iet-com.2013.0570 With eXtensible Markup Language (XML) becoming a ubiquitous language for data storage and transmission in various domains, effectively safeguarding the XML document containing sensitive information is a critical issue. In this study, the authors propose a new access control model with filtering privacy. Based on the idea of separating the structure and content of the XML document, they provide a method to extract the main structure of the XML document and use matrix to save the structure information, at the same time, the start-end region encoding is used to combine the corresponding structure and content skillfully. These not only save the storage space but also efficiently speed up the search and make it convenient to find the relevant elements, especially the finding of the related content. In order to evaluate the security and efficiency of this model, the security analysis and simulation experiment verify its performance in this work.
Keywords: XML; authorisation; data privacy; document handling; information filtering; storage management; ubiquitous computing; XML document content; XML document structure; access control model; data storage space; data transmission; eXtensible Markup Language; filtering privacy; matrix storage; security analysis; sensitive information; start-end region encoding; structure information; ubiquitous language (ID#:14-2957)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855939&isnumber=6855933
- Woodruff, J.; Watson, R.N.M.; Chisnall, D.; Moore, S.W.; Anderson, J.; Davis, B.; Laurie, B.; Neumann, P.G.; Norton, R.; Roe, M., "The CHERI Capability Model: Revisiting RISC In An Age Of Risk," Computer Architecture (ISCA), 2014 ACM/IEEE 41st International Symposium on, pp.457,468, 14-18 June 2014. doi: 10.1109/ISCA.2014.6853201 Motivated by contemporary security challenges, we reevaluate and refine capability-based addressing for the RISC era. We present CHERI, a hybrid capability model that extends the 64-bit MIPS ISA with byte-granularity memory protection. We demonstrate that CHERI enables language memory model enforcement and fault isolation in hardware rather than software, and that the CHERI mechanisms are easily adopted by existing programs for efficient in-program memory safety. In contrast to past capability models, CHERI complements, rather than replaces, the ubiquitous page-based protection mechanism, providing a migration path towards deconflating data-structure protection and OS memory management. Furthermore, CHERI adheres to a strict RISC philosophy: it maintains a load-store architecture and requires only single-cycle instructions, and supplies protection primitives to the compiler, language runtime, and operating system. We demonstrate a mature FPGA implementation that runs the FreeBSD operating system with a full range of software and an open-source application suite compiled with an extended LLVM to use CHERI memory protection. A limit study compares published memory safety mechanisms in terms of instruction count and memory overheads. The study illustrates that CHERI is performance-competitive even while providing assurance and greater flexibility with simpler hardware.
Keywords: field programmable gate arrays; operating systems (computers); reduced instruction set computing; security of data; CHERI hybrid capability model; CHERI memory protection; FPGA implementation; FreeBSD operating system; MIPS ISA; OS memory management; RISC era; byte-granularity memory protection; capability hardware enhanced RISC instruction; compiler; data-structure protection; fault isolation; field programmable gate array; in-program memory safety; instruction count; instruction set architecture; language memory model enforcement; language runtime; load-store architecture; memory overhead; open-source application suite; reduced instruction set computing; single-cycle instructions; ubiquitous page-based protection mechanism; Abstracts; Coprocessors; Ground penetrating radar; Registers; Safety (ID#:14-2958)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853201&isnumber=6853187
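What a capability enforces on every memory access can be modeled in a few lines of software, as sketched below: a base, a length, and a permission set checked on each load and store. This is a conceptual model only; it reflects neither CHERI's capability encoding nor its single-cycle hardware checks.

    # Software model of what a memory capability enforces: every dereference is
    # checked against the capability's base, length, and permissions.  This is
    # a conceptual sketch, not CHERI's encoding or hardware behavior.
    class CapabilityError(Exception):
        pass

    class Capability:
        def __init__(self, memory, base, length, perms=("load", "store")):
            self.memory, self.base, self.length = memory, base, length
            self.perms = set(perms)

        def _check(self, offset, perm):
            if perm not in self.perms:
                raise CapabilityError(f"missing permission: {perm}")
            if not (0 <= offset < self.length):
                raise CapabilityError(f"offset {offset} outside [0, {self.length})")

        def load(self, offset):
            self._check(offset, "load")
            return self.memory[self.base + offset]

        def store(self, offset, value):
            self._check(offset, "store")
            self.memory[self.base + offset] = value

    mem = bytearray(64)
    cap = Capability(mem, base=16, length=8, perms=("load",))
    print(cap.load(3))          # in bounds, permitted
    try:
        cap.load(8)             # one past the end -> trapped
    except CapabilityError as e:
        print("trapped:", e)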
- Pura, M.L.; Buchs, D., "Model Checking ARAN Ad Hoc Secure Routing Protocol With Algebraic Petri Nets," Communications (COMM), 2014 10th International Conference on, pp.1,4, 29-31 May 2014. doi: 10.1109/ICComm.2014.6866692 Modeling and verifying the security protocols for ad hoc networks is a very complex task, because this type of networks is very complex. In this paper we present a new approach: the use of algebraic Petri nets as implemented by AlPiNA tool to model ad hoc networks and to verify some of the security properties of ARAN ad hoc secure routing protocol. The results we have obtained are in concordance with the other research on this protocol, and thus they validate the use of this methodology. Our approach has several advantages. An increase of performance was obtained, in the sense that we managed to verify the protocol for larger topologies than it was previous reported. The specification language of algebraic Petri nets is more expressive than the languages use by other tools, and it is more suited for model based code generation.
Keywords: Petri nets; ad hoc networks; routing protocols; telecommunication security; ARAN ad hoc secure routing protocol; AlPiNA tool; ad hoc networks; algebraic Petri nets; security protocols; Ad hoc networks; Object oriented modeling; Routing; Routing protocols; Security; Topology; ARAN; AlPiNA; ad hoc networks; algebraic Petri nets; model checking (ID#:14-2959)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866692&isnumber=6866648
- Karande, AM.; Kalbande, D.R., "Web Service Selection Based On Qos Using Tmodel Working On Feed Forward Network," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, pp.29,33, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781247 This paper address the selection of web services using tmodel of SOA which is designed using feed forward network. This construction will be done using XML language. Ontology provides a terminology about concepts and their relationships within a domain along with the activities taking place in that domain, and the theories, elementary principles governing that domain. Using supervised learning method of feed forward neural network, ontologies of different domain can be matched. Feed forward neural network can be used for pattern matching with back propagation techniques. Pattern defined here will be quality parameter. This quality parameter can be selected using tModel structure of UDDI. Web service Provider present in UDDI can differentiate services using Quality categorization by labeling the qualities i.e. performance, security. This differentiation be done using QoS ontology for service Identification. The registered service descriptions by the service provider contain the semantic profile and QoS parameters. ANN matching model consists of training phase and matching phase based on ontology domain.
Keywords: Web services; XML; backpropagation; feedforward neural nets; ontologies (artificial intelligence); pattern matching; quality of service; QoS; QoS ontology; SOA tmodel; UDDI tModel structure; Web service provider; Web service selection; XML language; backpropagation technique; feedforward neural network; pattern matching; quality categorization; service identification; supervised learning method; Artificial neural networks; Feeds; Neurons; Ontologies; Quality of service; Reliability; Service-oriented architecture; QoS; feedforward neural network; service-oriented architecture; tModel; web service repository builder (ID#:14-2960)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781247&isnumber=6781240
- Buinevich, M.; Izrailov, K., "Method and utility for recovering code algorithms of telecommunication devices for vulnerability search," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp.172,176, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778943 The article describes a method for searching vulnerabilities in machine code based on the analysis of its algorithmized representation, obtained with the help of a utility that is part of the method. The vulnerability search falls within the field of telecommunication devices. A phase-by-phase description of the method is discussed, as well as the software architecture of the utility, their limitations in terms of application, and preliminary effectiveness estimate results. A forecast is given as to developing the method and the utility in the near future.
Keywords: assembly language; binary codes; reverse engineering; security of data; algorithmized representation; code recovery algorithm; machine code; phase-by-phase description; software architecture; telecommunication devices; vulnerability search; Algorithm design and analysis; Assembly; Communications technology; Educational institutions; Information security; Software; Software algorithms; binary codes; information security; program language extension; reverse engineering and decompilation; telecommunications (ID#:14-2961)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778943&isnumber=6778899
- Lau, R.Y.K.; Yunqing Xia; Yunming Ye, "A Probabilistic Generative Model for Mining Cybercriminal Networks from Online Social Media," Computational Intelligence Magazine, IEEE, vol.9, no.1, pp.31,43, Feb. 2014. doi: 10.1109/MCI.2013.2291689 There has been a rapid growth in the number of cybercrimes that cause tremendous financial loss to organizations. Recent studies reveal that cybercriminals tend to collaborate or even transact cyber-attack tools via the "dark markets" established in online social media. Accordingly, it presents unprecedented opportunities for researchers to tap into these underground cybercriminal communities to develop better insights about collaborative cybercrime activities so as to combat the ever increasing number of cybercrimes. The main contribution of this paper is the development of a novel weakly supervised cybercriminal network mining method to facilitate cybercrime forensics. In particular, the proposed method is underpinned by a probabilistic generative model enhanced by a novel context-sensitive Gibbs sampling algorithm. Evaluated based on two social media corpora, our experimental results reveal that the proposed method significantly outperforms the Latent Dirichlet Allocation (LDA) based method and the Support Vector Machine (SVM) based method by 5.23% and 16.62% in terms of Area Under the ROC Curve (AUC), respectively. It also achieves comparable performance as the state-of-the-art Partially Labeled Dirichlet Allocation (PLDA) method. To the best of our knowledge, this is the first successful research of applying a probabilistic generative model to mine cybercriminal networks from online social media.
Keywords: data mining; digital forensics; sampling methods; social networking (online); AUC; PLDA method; area under the ROC curve; collaborative cybercrime activities; context-sensitive Gibbs sampling algorithm; cyber-attack tools; cybercrime forensics; dark markets; online social media; partially labeled Dirichlet allocation method; probabilistic generative model; social media corpora; supervised cybercriminal network mining method; underground cybercriminal communities; Computer crime; Computer security; Data mining; Hackers; Natural language processing; Network security; Probabilistic logic; Social network services (ID#:14-2962)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6710252&isnumber=6710231
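The entry above evaluates competing models with the Area Under the ROC Curve (AUC). As a minimal illustration of that metric only (not the authors' weakly supervised generative model), the Python sketch below scores a simple supervised baseline on an invented toy corpus; the posts, labels, and use of scikit-learn are assumptions made for the example.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.metrics import roc_auc_score

    # Hypothetical toy corpus: 1 = post from an underground "dark market", 0 = benign post.
    posts = [
        "selling fresh cvv dumps and fullz cheap",
        "botnet rental ddos service hourly rates",
        "private exploit kit download crypter included",
        "stolen accounts bulk price escrow accepted",
        "lovely weather for the marathon this weekend",
        "new photos from the family trip uploaded",
        "great recipe for homemade pasta tonight",
        "discussing the football results with friends",
    ]
    labels = [1, 1, 1, 1, 0, 0, 0, 0]

    vec = TfidfVectorizer()
    X = vec.fit_transform(posts)

    clf = LinearSVC().fit(X, labels)      # simple SVM baseline, comparable to the one cited
    scores = clf.decision_function(X)     # continuous scores needed for ranking
    print("training-set AUC:", roc_auc_score(labels, scores))

A real evaluation would, of course, compute AUC on held-out data rather than on the training set.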
- Bannour, B.; Escobedo, J.; Gaston, C.; Le Gall, P.; Pedroza, G., "Designing Sequence Diagram Models for Robustness to Attacks," Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on, pp.26,33, March 31 2014-April 4 2014. doi: 10.1109/ICSTW.2014.50 The omnipresence of complex distributed component-based systems offers numerous opportunities for malicious parties, especially thanks to the numerous communication mechanisms brought into play. This is particularly true for Smart Grids systems in which electricity networks and information technology are coupled to provide smarter and more efficient energy production-to-consumption chain. Indeed, Smart Grids are clearly security sensitive since a lot of components usually operate outside of the trusted company's border. In this paper, we propose a model-based methodology targeting the diagnostic of attacks with respect to some trusted components. The methodology combines UML sequence diagrams (SD) and formal symbolic techniques in order to model and analyze systems and threats from early design stages. We introduce a criterion that allows us to qualify or not a SD as robust with respect to an attack, also modeled as a SD. The criterion is defined by comparing traces as they are perceived by trusted components. We illustrate our approach with a UML sequence diagram issued from a Smart Grid case study.
Keywords: Unified Modeling Language; diagrams; security of data; smart power grids; UML sequence diagrams; attack diagnostics; complex distributed component-based systems; energy production-to-consumption chain; formal symbolic techniques; malicious parties; model-based methodology; security sensitivity; sequence diagram model design; smart grid systems; trusted components; Electricity; Registers; Security; Semantics; Smart grids; Unified modeling language; Robustness to attacks; attack diagnosis; model analysis; security watchdog testing; sequence diagrams; smart grids; symbolic execution (ID#:14-2963)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825635&isnumber=6825623
- Kishore, K.R.; Mallesh, M.; Jyostna, G.; Eswari, P.R.L.; Sarma, S.S., "Browser JS Guard: Detects and Defends Against Malicious Javascript Injection Based Drive By Download Attacks," Applications of Digital Information and Web Technologies (ICADIWT), 2014 Fifth International Conference on the, pp.92,100, 17-19 Feb. 2014. doi: 10.1109/ICADIWT.2014.6814705 In recent times, most of the systems connected to the Internet are getting infected with malware, and some of these systems are becoming zombies for the attacker. When a user knowingly or unknowingly visits a malware website, his system gets infected. Attackers do this by exploiting the vulnerabilities in the web browser and acquiring control over the underlying operating system. Once an attacker compromises the user's web browser, he can instruct the browser to visit the attacker's website through a number of redirections. During the process, the user's web browser downloads the malware without the intervention of the user. Once the malware is downloaded, it is placed in the file system and responds as per the instructions of the attacker. These types of attacks are known as Drive by Download attacks. Nowadays, Drive by Download is the major channel for delivering malware. In this paper, Browser JS Guard, an extension to the browser, is presented for detecting and defending against Drive by Download attacks via HTML tags and JavaScript.
Keywords: Java; Web sites; authoring languages; invasive software; online front-ends; operating systems (computers); security of data; HTML tags; Internet; browser JS guard; download attacks; drive by download attacks; file system; malicious JavaScript injection; malware Web site; operating system; user Web browser; Browsers; HTML; Malware; Monitoring; Web pages; Web servers; DOM Change Methods; Drive by Download Attacks; HTML tags; JavaScript Functions; Malware; Web Browser; Web Browser Extensions (ID#:14-2964)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814705&isnumber=6814661
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Microelectronics Security
Microelectronics are at the center of the IT world. Their security--provenance, integrity of their manufacture, and capacity for providing embedded security--is both an opportunity and a problem for cybersecurity research. The works cited here were presented between January and August of 2014 and cover a wide range of microelectronics security issues.
- Jagasivamani, M.; Gadfort, P.; Sika, M.; Bajura, M.; Fritze, M., "Split-fabrication Obfuscation: Metrics And Techniques," Hardware-Oriented Security and Trust (HOST), 2014 IEEE International Symposium on, vol., no., pp.7,12, 6-7 May 2014. doi: 10.1109/HST.2014.6855560 Split-fabrication has been proposed as an approach for secure and trusted access to advanced microelectronics manufacturing capability using un-trusted sources. Each wafer to be manufactured is processed by two semiconductor foundries, combining the front-end capabilities of an advanced but untrusted semiconductor foundry with the back-end capabilities of a trusted semiconductor foundry. Since the security of split fabrication relates directly to a front-end foundry's ability to interpret the partial circuit designs it receives, metrics are needed to evaluate the obfuscation of these designs as well as circuit design techniques to alter these metrics. This paper quantitatively examines several "front-end" obfuscation techniques and metrics inspired by information theory, and evaluates their impact on design effort, area, and performance penalties.
Keywords: integrated circuits; network synthesis; semiconductor technology; circuit design techniques; front-end obfuscation techniques; microelectronics manufacturing; partial circuit designs; semiconductor foundries; split-fabrication obfuscation; Entropy; Foundries; Libraries; Logic gates; Manufacturing; Measurement; Standards (ID#:14-2965)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855560&isnumber=6855557
- Farrugia, R.A, "Reversible De-Identification for lossless image compression using Reversible Watermarking," Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2014 37th International Convention on , vol., no., pp.1258,1263, 26-30 May 2014. doi: 10.1109/MIPRO.2014.6859760 De-Identification is a process which can be used to ensure privacy by concealing the identity of individuals captured by video surveillance systems. One important challenge is to make the obfuscation process reversible so that the original image/video can be recovered by persons in possession of the right security credentials. This work presents a novel Reversible De-Identification method that can be used in conjunction with any obfuscation process. The residual information needed to reverse the obfuscation process is compressed, authenticated, encrypted and embedded within the obfuscated image using a two-level Reversible Watermarking scheme. The proposed method ensures an overall single-pass embedding capacity of 1.25 bpp, where 99.8% of the images considered required less than 0.8 bpp while none of them required more than 1.1 bpp. Experimental results further demonstrate that the proposed method managed to recover and authenticate all images considered.
Keywords: data compression; image coding; image watermarking; message authentication; video surveillance; image authentication; image recovery; lossless image compression; obfuscation process; reversible de-identification; reversible watermarking; video surveillance systems; Cryptography; Face; Generators; Image color analysis; Payloads; Vectors; Watermarking (ID#:14-2966)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859760&isnumber=6859515
- Signorini, G.; Grivet-Talocia, S.; Stievano, IS.; Fanucci, L., "Macromodel-based Signal and Power Integrity simulations of an LP-DDR2 interface in mSiP," Microelectronics and Electronics (PRIME), 2014 10th Conference on Ph.D. Research in, pp.1,4, June 30 2014-July 3 2014. doi: 10.1109/PRIME.2014.6872719 Signal and Power Integrity (SI/PI) analyses assume a paramount importance to ensure a secure integration of high-speed communication interfaces in low-cost highly-integrated System-in-Package(s) (SiP) for mobile applications. In an iterative fashion, design and time-domain SI/PI verifications are alternated to assess and optimize system functionality. The resulting complexity of the analysis limits simulation coverage and requires extremely long runtimes (hours, days). In order to ensure post-silicon correlation, electrical macromodels of Package/PCB parasitics and high-speed I/Os can be generated and included in the testbenches to expedite simulations. Using as example an LP-DDR2 memory interface to support the operations of a mobile digital base-band processor, we have developed and applied a macromodelling flow to demonstrate simulation run-time speed-up factors (x1200+), and enable interface-level analyses to study the effects of Package/PCB parasitics on signals and PDNs, as well as the corresponding degradation in the timing budget.
Keywords: iterative methods; mobile handsets; printed circuits; system-in-package; time-domain analysis;LP-DDR2 interface; SI-PI analysis; electrical macromodel; high-speed I-O; high-speed communication interfaces; integration security; iterative fashion; low-cost highly-integrated system-in-packages; mSiP; macromodel-based signal-power integrity simulation; mobile application; package-PCB parasitics; post-silicon correlation; simulation coverage; system functionality assessment; system functionality optimization; time-domain SI-PI verification; Analytical models; Complexity theory; Mathematical model; Mobile communication; Packaging; Silicon; Time-domain analysis (ID#:14-2967)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6872719&isnumber=6872647
- Ristov, P.; Mrvica, A; Miskovic, T., "Secure Data Storage," Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2014 37th International Convention on, pp.1586,1591, 26-30 May 2014. doi: 10.1109/MIPRO.2014.6859818 Secure storage of data and the current availability of data and information are the most important aspects of any ICT (Information and Communications Technology) system. Data storage systems are mandatory components of modern information systems. The term backup refers to creating a backup or copy of data with the aim of restoring the data in case the original data become corrupted and inaccessible. Reliable and secure automated data storage is nowadays of great importance for business based on the smooth progress of information in an enterprise. Companies have to implement appropriate systems for securing the data storage. Some shipping companies use systems for saving data and applications in the so-called cloud. Cloud computing enables efficient and reliable fleet management. This technology reduces the cost of managing data and information resources, regardless of the size of the fleet.
Keywords: back-up procedures; cloud computing; information systems; information technology; naval engineering; security of data; cloud computing; data backup; fleet management; information and communications technology; modern information systems; secure data storage; shipping companies; Cloud computing; Companies; Computers; Media; Memory; Servers; Storage area networks (ID#:14-2968)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859818&isnumber=6859515
- Grznic, T.; Perhoc, D.; Maric, M.; Vlasic, F.; Kulcsar, T., "CROFlux -- Passive DNS method for Detecting Fast-Flux Domains," Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2014 37th International Convention on, pp.1376,1380, 26-30 May 2014. doi: 10.1109/MIPRO.2014.6859782 In this paper we present our approach to fast flux detection called CROFlux that relies on the passive DNS replication method. The presented model can significantly reduce the number of false positive detections, and can detect other suspicious domains that are used for fast flux. This algorithm is used and implemented in Advanced Cyber Defense Centre - a European project co-funded by the European Commission.
Keywords: Internet; security of data; Advanced Cyber Defense Centre; CROFlux; fast-flux domain detection; passive DNS replication method; Classification algorithms; Content distribution networks; Europe; IP networks; Malware; Peer-to-peer computing; Servers (ID#:14-2970)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859782&isnumber=6859515
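The abstract above does not describe CROFlux in enough detail to reproduce, but the general idea behind fast-flux detection from passive DNS data can be illustrated with a small, assumption-laden heuristic: flag domains that resolve to many distinct IP addresses with short TTLs. The records and thresholds in this Python sketch are invented for the example and are not the CROFlux algorithm itself.

    from collections import defaultdict

    # Hypothetical passive-DNS observations: (domain, resolved IP, TTL in seconds).
    records = [
        ("shop-example.com", "93.184.216.34", 86400),
        ("flux-example.net", "10.1.2.3", 120),
        ("flux-example.net", "10.9.8.7", 90),
        ("flux-example.net", "172.16.4.4", 60),
        ("flux-example.net", "192.168.7.7", 110),
    ]

    ips, ttls = defaultdict(set), defaultdict(list)
    for domain, ip, ttl in records:
        ips[domain].add(ip)
        ttls[domain].append(ttl)

    MIN_DISTINCT_IPS, MAX_AVG_TTL = 4, 300    # illustrative thresholds only
    for domain in ips:
        avg_ttl = sum(ttls[domain]) / len(ttls[domain])
        if len(ips[domain]) >= MIN_DISTINCT_IPS and avg_ttl <= MAX_AVG_TTL:
            print(domain, "is fast-flux-like:", len(ips[domain]), "IPs, avg TTL", avg_ttl)

A production system such as the one cited would additionally track ASN diversity over time and use whitelisting to reduce false positives.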
- Wahane, G.; Kanthe, A.M.; Simunic, D., "Technique for Detection Of Cooperative Black Hole Attack Using True-Link In Mobile Ad-Hoc Networks," Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2014 37th International Convention on, vol., no., pp.1428,1434, 26-30 May 2014. doi: 10.1109/MIPRO.2014.6859791 A Mobile Ad-hoc Network (MANET) is a collection of communication devices or nodes that wish to communicate without any fixed infrastructure and predetermined organization of available links. Security is a major challenge for these networks owing to their features of open medium and dynamically changing topologies. A black hole is a malicious node that falsely replies to any route request without having an active route to the specified destination and drops all the received packets. Sometimes the black hole nodes cooperate with each other with the aim of dropping packets; this is known as a cooperative black hole attack. This proposed work suggests a modification of the Ad-hoc on Demand Distance Vector (AODV) routing protocol. We used a technique for detecting as well as defending against a cooperative black hole attack using the True-link concept. True-link is a timing based countermeasure to the cooperative black hole attack. This paper shows that, with the proposed technique, MANET end-to-end delay and normalized routing overhead decrease, while throughput and packet delivery ratio increase.
Keywords: cooperative communication; mobile ad hoc networks; routing protocols; telecommunication links; telecommunication security; AODV routing protocol; MANET; active route; ad-hoc on demand distance vector; black hole nodes; communication devices; communication nodes; cooperative black hole attack; end-to-end delay; fixed infrastructure; malicious node; mobile ad-hoc networks; packet delivery ratio; predetermined organization; receiving packets; route requests; routing overhead; timing based countermeasure; true-link concept; Delays; Mobile ad hoc networks; Routing; Routing protocols; Security (ID#:14-2971)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859791&isnumber=6859515
- Kounelis, I.; Muftic, S.; Loschner, J., "Secure and Privacy-Enhanced E-Mail System Based on the Concept of Proxies," Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2014 37th International Convention on, pp.1405, 1410, 26-30 May 2014. doi: 10.1109/MIPRO.2014.6859787 Security and privacy on the Internet, and especially in e-mail, are becoming more and more important and crucial for the user. The requirements for the protection of e-mail include issues like tracking and privacy intrusions by hackers and commercial advertisers, intrusions by casual observers, and even spying by government agencies. With the expanding use of e-mail in the digital world, both Internet and mobile, the quantity and sensitivity of personal information has also tremendously expanded. Therefore, protection of data and transactions and privacy of user information is key and of interest for many users. Based on such motives, in this paper we present the design and current implementation of our secure and privacy-enhanced e-mail system. The system provides protection of e-mails, privacy of locations from which the e-mail system is accessed, and authentication of legitimate users. Differently from existing standard approaches, which are based on adding security extensions to e-mail clients, our system is based on the concept of proxy servers that provide security and privacy of users and their e-mails. It uses all required standards: S/MIME for formatting of secure letters, strong cryptographic algorithms, PKI protocols and certificates. We already have the first implementation, and an instance of the system is very easy to install and to use.
Keywords: Internet; cryptographic protocols; data privacy; electronic mail; public key cryptography; Internet; PKI protocols; S-MIME; casual observers; commercial advertisers; cryptographic algorithms; digital world; government agencies; legitimate user authentication; locations privacy; privacy intrusions; privacy-enhanced e-mail system; proxy concept; secure letters; security extensions; tracking intrusions; user information privacy; Cryptography; Electronic mail; Postal services; Privacy; Servers; Standards; E-mail; PKI; Proxy Server; S/MIME; X.509 certificates (ID#:14-2972)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859787&isnumber=6859515
- Cassettari, R.; Fanucci, L.; Boccini, G., "A New Hardware Implementation Of The Advanced Encryption Standard Algorithm For Automotive Applications," Microelectronics and Electronics (PRIME), 2014 10th Conference on Ph.D. Research in, vol., no., pp.1,4, June 30 2014-July 3 2014. doi: 10.1109/PRIME.2014.6872672 Modern cars are no longer mere mechanical devices; they are dominated by a large number of IT systems that guide a wide number of embedded systems called Electronic Control Units (ECUs). While this transformation has driven major advancements in efficiency and safety, it has also introduced a range of new potential risks. After a brief introduction to security in the automotive environment, we investigate how the automotive community has approached this problem. In order to ensure some security aspects in the automotive environment, a hardware implementation of the Advanced Encryption Standard (AES) algorithm with higher throughput than existing solutions is needed. For this purpose, a new hardware implementation of this cryptographic algorithm is presented. The implementation results are compared with previous works.
Keywords: automobiles; control engineering computing; cryptography; embedded systems; AES; ECU; IT systems; advanced encryption standard algorithm; automotive applications; automotive community; automotive environment; cryptographic algorithm; electronic control unit; embedded systems; hardware implementation; modern cars; Algorithm design and analysis; Automotive engineering; Encryption; Hardware; Throughput (ID#:14-2973)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6872672&isnumber=6872647
- Portelo, J.; Raj, B.; Abad, A.; Trancoso, I., "Privacy-preserving Speaker Verification Using Secure Binary Embeddings," Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2014 37th International Convention on, pp.1268, 1272, 26-30 May 2014. doi: 10.1109/MIPRO.2014.6859762 Remote speaker verification services typically rely on the system having access to the user's recordings, or features derived from them, and/or a model for the user's voice. This conventional approach raises several privacy concerns. In this work, we address this privacy problem in the context of a speaker verification system using a factor analysis based front-end extractor, the so-called i-vectors. Preserving privacy in our context means that neither the system observes voice samples or speech models from the user, nor the user observes the universal model owned by the system. This is achieved by transforming speaker i-vectors to bit strings in a way that allows for the computation of approximate distances, instead of exact ones. The key to the transformation uses a hashing scheme known as secure binary embeddings. Then, an SVM classifier with a modified kernel operates on the hashes. Experiments showed that the secure system yielded similar results as its non-private counterpart. The approach may be extended to other types of biometric authentication.
Keywords: approximation theory; data privacy; speaker recognition; SVM classifier; approximate distance computation; binary embeddings security; biometric authentication; factor analysis; front-end extractor; i-vectors; privacy-preserving speaker verification; remote speaker verification; Euclidean distance; Hamming distance; Privacy; Quantization (signal); Speech; Support vector machines; Vectors (ID#:14-2974)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859762&isnumber=6859515
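The hashing scheme named above, secure binary embeddings, is commonly built from random projections followed by dithered scalar quantization, so that the Hamming distance between hashes approximates the Euclidean distance only for nearby vectors. The numpy sketch below follows that published construction in outline; the dimensions, quantization step, and stand-in i-vectors are assumptions, and the paper's i-vector extractor and modified-kernel SVM are omitted.

    import numpy as np

    rng = np.random.default_rng(0)
    DIM, BITS, DELTA = 400, 1024, 4.0        # i-vector size, hash length, quantization step (illustrative)

    A = rng.normal(size=(BITS, DIM))         # random projection matrix (part of the secret key)
    w = rng.uniform(0, DELTA, size=BITS)     # random dither (also part of the key)

    def sbe(x):
        # Map a real-valued vector to a bit string via universal scalar quantization.
        return (np.floor((A @ x + w) / DELTA).astype(int) % 2).astype(np.uint8)

    x = rng.normal(size=DIM)                 # enrolled speaker i-vector (stand-in)
    y = x + 0.05 * rng.normal(size=DIM)      # same speaker, slightly perturbed
    z = rng.normal(size=DIM)                 # different speaker

    print("same-speaker Hamming distance:", int(np.sum(sbe(x) != sbe(y))))
    print("diff-speaker Hamming distance:", int(np.sum(sbe(x) != sbe(z))))

Without the key (A and w), the hashes of distant vectors reveal essentially nothing about the underlying i-vectors, which is what gives the scheme its privacy property.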
- Picek, S.; Batina, L.; Jakobovic, D.; Carpi, R.B., "Evolving Genetic Algorithms For Fault Injection Attacks," Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2014 37th International Convention on, pp.1106, 1111, 26-30 May 2014. doi: 10.1109/MIPRO.2014.6859734 Genetic algorithms are used today to solve numerous difficult problems. However, they often need to be specialized and adapted further in order to successfully tackle a specific problem. One such example is the fault injection attack, where the goal is to find a specific set of parameters that can lead to a successful cryptographic attack in a minimum amount of time. In this paper we address the process of specializing a genetic algorithm from its standard form to the final, highly-specialized one. In this process we needed to customize the crossover operator, add a mapping between the values in the cryptographic domain and the genetic algorithm domain, and finally adapt the genetic algorithm to work on-the-fly. For the last phase of development we plan to move to a memetic algorithm by adding a local search strategy. Furthermore, we give a comparison between our algorithm and random search, which is currently the most commonly employed method for this problem. Our experiments show that our algorithm significantly outperforms random search.
Keywords: cryptography; genetic algorithms; search problems; crossover operator; cryptographic attack; fault injection attacks; genetic algorithms; local search strategy; memetic algorithm; Genetic algorithms; Monte Carlo methods; Optimization; Search problems; Security; Sociology (ID#:14-2975)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859734&isnumber=6859515
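To make the genetic-algorithm idea concrete, the sketch below evolves a small population of candidate fault-injection parameter sets with selection, crossover, and mutation. The parameter space and the fitness function are invented stand-ins; in the cited work, fitness would come from actually running injection attempts against the target, and the authors' on-the-fly adaptation and planned memetic local search are not shown.

    import random
    random.seed(1)

    # Hypothetical search space (not from the paper): glitch voltage offset (V),
    # glitch length (ns), and trigger delay (ns).
    BOUNDS = [(-3.0, 0.0), (2.0, 200.0), (0.0, 5000.0)]

    def fitness(ind):
        # Stand-in for running a real injection campaign and scoring the outcome;
        # here we simply reward closeness to an arbitrary "sweet spot".
        target = (-1.2, 40.0, 1500.0)
        return -sum((a - b) ** 2 / (hi - lo) ** 2 for a, b, (lo, hi) in zip(ind, target, BOUNDS))

    def random_individual():
        return [random.uniform(lo, hi) for lo, hi in BOUNDS]

    def crossover(p1, p2):
        return [random.choice(pair) for pair in zip(p1, p2)]

    def mutate(ind, rate=0.2):
        return [random.uniform(lo, hi) if random.random() < rate else gene
                for gene, (lo, hi) in zip(ind, BOUNDS)]

    pop = [random_individual() for _ in range(20)]
    for generation in range(30):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:5]                                            # keep the best candidates
        pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(15)]

    print("best parameters found:", [round(g, 2) for g in max(pop, key=fitness)])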
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Natural Language Processing
Natural Language Processing research focuses on developing efficient algorithms to process texts and to make their information accessible to computer applications. Texts can contain information with different complexities ranging from simple word or token-based representations, to rich hierarchical syntactic representations, to high-level logical representations across document collections. Research cited in this area was presented between January and August of 2014. Specific languages addressed include Turkish, Hindi, Bangla, and Farsi, as well as English.
- Cambria, E.; White, B., "Jumping NLP Curves: A Review of Natural Language Processing Research [Review Article]," Computational Intelligence Magazine, IEEE, vol.9, no.2, pp.48,57, May 2014. doi: 10.1109/MCI.2014.2307227 Natural language processing (NLP) is a theory-motivated range of computational techniques for the automatic analysis and representation of human language. NLP research has evolved from the era of punch cards and batch processing (in which the analysis of a sentence could take up to 7 minutes) to the era of Google and the likes of it (in which millions of webpages can be processed in less than a second). This review paper draws on recent developments in NLP research to look at the past, present, and future of NLP technology in a new light. Borrowing the paradigm of `jumping curves' from the field of business management and marketing prediction, this survey article reinterprets the evolution of NLP research as the intersection of three overlapping curves, namely the Syntactics, Semantics, and Pragmatics curves, which will eventually lead NLP research to evolve into natural language understanding.
Keywords: Internet; natural language processing; search engines; Google; NLP research evolution; NLP technology; Webpages; automatic human language analysis; automatic human language representation; batch processing; business management; computational techniques; jumping NLP curves; marketing prediction; natural language processing research; natural language understanding; pragmatics curve; punch cards; semantics curve; syntactics curve (ID#:14-2976)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786458&isnumber=6786379
- Estiri, A.; Kahani, M.; Ghaemi, H.; Abasi, M., "Improvement of an Abstractive Summarization Evaluation Tool Using Lexical-Semantic Relations And Weighted Syntax Tags In Farsi Language," Intelligent Systems (ICIS), 2014 Iranian Conference on, pp.1,6, 4-6 Feb. 2014. doi: 10.1109/IranianCIS.2014.6802594 In recent years, the sharp increase in the amount of published web content and the need to store, classify, retrieve, and process it have intensified the importance of natural language processing and its related tools, such as automatic summarizers and machine translators. In this paper, a novel approach for evaluating automatic abstractive summarization systems is proposed, which can also be used in other Natural Language Processing and Information Retrieval applications. By comparing auto-abstracts (abstracts created by machine) with human abstracts (ideal abstracts created by humans), the metrics introduced in the proposed tool can automatically measure the quality of auto-abstracts. Evidently, the texts of abstractive summaries cannot be compared semantically on the basis of word appearance alone, so it is necessary to use a lexical database such as WordNet. For the Farsi language we use FerdowsNet, which notably improves the evaluation results. The tool, which has been assessed by linguistic experts, combines this lexical database with a Farsi parser in order to identify the groups forming sentences, and the evaluation results improve significantly.
Keywords: database management systems; information retrieval; language translation; natural language processing; Farsi language; Web elements; WordNet; abstractive summaries; abstractive summarization evaluation tool; automatic abstractive summarization system; human abstracts; information retrieval applications; lexical database; lexical semantic relations; linguistic experts; machine translators; natural language processing; weighted syntax tags; Abstracts; Databases; Equations; Measurement; Natural language processing; Semantics; Standards; Automatic Abstractive Summarizer; Evaluation; Farsi Natural Language Processing (NLP); Parse tree; Semantics; Sentences groups; parser (ID#:14-2977)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6802594&isnumber=6798982
- Mills, M.T.; Bourbakis, N.G., "Graph-Based Methods for Natural Language Processing and Understanding--A Survey and Analysis," Systems, Man, and Cybernetics: Systems, IEEE Transactions on, vol.44, no.1, pp.59, 71, Jan. 2014. doi: 10.1109/TSMCC.2012.2227472 This survey and analysis presents the functional components, performance, and maturity of graph-based methods for natural language processing and natural language understanding and their potential for mature products. Resulting capabilities from the methods surveyed include summarization, text entailment, redundancy reduction, similarity measure, word sense induction and disambiguation, semantic relatedness, labeling (e.g., word sense), and novelty detection. Estimated scores for accuracy, coverage, scalability, and performance are derived from each method. This survey and analysis, with tables and bar graphs, offers a unique abstraction of functional components and levels of maturity from this collection of graph-based methodologies.
Keywords: graph theory; natural language processing; bar graphs; functional components; graph-based methodologies; graph-based methods; labeling; mature products; maturity level; natural language processing; natural language understanding; novelty detection; redundancy reduction; scores estimation; semantic relatedness; similarity measure; summarization;tables; text entailment; word disambiguation; word sense induction; Accuracy; Clustering algorithms; Context; Natural language processing; Semantics; Signal processing algorithms; Syntactics; Graph methods; natural language processing (NLP); natural language understanding (NLU) (ID#:14-2978)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6576885&isnumber=6690269
- Kandasamy, K.; Koroth, P., "An Integrated Approach To Spam Classification On Twitter Using URL Analysis, Natural Language Processing And Machine Learning Techniques," Electrical, Electronics and Computer Science (SCEECS), 2014 IEEE Students' Conference on, pp.1,5, 1-2 March 2014. doi: 10.1109/SCEECS.2014.6804508 In the present day world, people are so much habituated to Social Networks. Because of this, it is very easy to spread spam contents through them. One can access the details of any person very easily through these sites. No one is safe inside the social media. In this paper we are proposing an application which uses an integrated approach to the spam classification in Twitter. The integrated approach comprises the use of URL analysis, natural language processing and supervised machine learning techniques. In short, this is a three step process.
Keywords: classification; learning (artificial intelligence);natural language processing; social networking (online);unsolicited e-mail; Twitter; URL analysis; natural language processing; social media; social networks; spam classification; spam contents; supervised machine learning techniques; Accuracy; Machine learning algorithms; Natural language processing; Training; Twitter; Unsolicited electronic mail; URLs; machine learning; natural language processing; tweets (ID#:14-2979)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6804508&isnumber=6804412
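As a rough illustration of the three-step idea described above (URL analysis, NLP-style text features, supervised learning), the following sketch combines a crude URL-shortener check with a TF-IDF text classifier in scikit-learn. The tweets, labels, shortener list, and model choice are assumptions made for the example, not the authors' system.

    import re
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical labeled tweets: 1 = spam, 0 = legitimate.
    tweets = [
        "win a free iphone now click http://bit.ly/xxxx",
        "cheap followers instantly visit http://tinyurl.com/yyyy",
        "earn money from home fast http://spam.example/offer",
        "get free gift cards today http://bit.ly/zzzz",
        "had a great time at the conference keynote",
        "the new library opens downtown next monday",
        "our paper on model checking was accepted",
        "watching the game with friends tonight",
    ]
    labels = [1, 1, 1, 1, 0, 0, 0, 0]

    SHORTENERS = ("bit.ly", "tinyurl.com", "goo.gl")   # crude URL-analysis feature

    def url_feature(text):
        urls = re.findall(r"https?://(\S+)", text)
        return any(u.startswith(SHORTENERS) for u in urls)

    vec = TfidfVectorizer()
    X = vec.fit_transform(tweets)
    clf = LogisticRegression().fit(X, labels)

    new = "limited offer free vouchers http://bit.ly/abcd"
    print("text model says spam:", bool(clf.predict(vec.transform([new]))[0]))
    print("URL shortener present:", url_feature(new))

In the cited approach the URL and text signals are combined into a single decision; here they are simply printed side by side to keep the sketch short.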
- Vincze, V.; Farkas, R., "De-identification in Natural Language Processing," Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2014 37th International Convention on, pp.1300,1303, 26-30 May 2014. doi: 10.1109/MIPRO.2014.6859768 Natural language processing (NLP) systems usually require a huge amount of textual data but the publication of such datasets is often hindered by privacy and data protection issues. Here, we discuss the questions of de-identification related to three NLP areas, namely, clinical NLP, NLP for social media and information extraction from resumes. We also illustrate how de-identification is related to named entity recognition and we argue that de-identification tools can be successfully built on named entity recognizers.
Keywords: data privacy; natural language processing; NLP areas; NLP systems; data protection; information extraction; natural language processing; privacy protection; social media; textual data; Databases; Educational institutions; Electronic mail; Informatics; Information retrieval; Media; Natural language processing (ID#:14-2980)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859768&isnumber=6859515
- Leopold, H.; Mendling, J.; Polyvyanyy, A, "Supporting Process Model Validation through Natural Language Generation," Software Engineering, IEEE Transactions on, vol.40, no.8, pp.818, 840, Aug. 1 2014. doi: 10.1109/TSE.2014.2327044 The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill set of domain experts and system analysts. As domain experts often do not feel confident in judging the correctness and completeness of process models that system analysts create, the validation often has to regress to a discourse using natural language. In order to support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. However, there is currently no sophisticated technique available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.
Keywords: information systems; natural language processing; business process models completeness complexity; linguistic complexity; natural language generation; natural language text generation; natural-looking text generation; process model completeness; process model correctness; process model validation; process-aware information systems; structure complexity; verbalization techniques; Adaptation models; Analytical models; Business; Context; Context modeling; Natural languages; Unified modeling language; Business process model validation; natural language text generation; verbalization (ID#:14-2981)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823180&isnumber=6874608
- Khanaferov, David; Luc, Christopher; Wang, Taehyung, "Social Network Data Mining Using Natural Language Processing and Density Based Clustering," Semantic Computing (ICSC), 2014 IEEE International Conference on, pp.250,251, 16-18 June 2014. doi: 10.1109/ICSC.2014.48 There is a growing need to make sense of all the raw data available on the Internet, hence, the purpose of this study is to explore the capabilities of data mining algorithms applied to social networks. We propose a system to mine public Twitter data for information relevant to obesity and health as an initial case study. This paper details the findings of our project and critiques the use of social networks for data mining purposes.
Keywords: Cleaning; Clustering algorithms ;Data mining; Natural language processing; Semantics; Twitter; NLP; clustering; data mining; sentiment analysis; social network (ID#:14-2982)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6882032&isnumber=6881979
- Ozturk, S.; Sankur, B.; Gungor, T.; Yilmaz, M.B.; Koroglu, B.; Agin, O.; Isbilen, M.; Ulas, C.; Ahat, M., "Turkish Labeled Text Corpus," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp.1395,1398, 23-25 April 2014. doi: 10.1109/SIU.2014.6830499 A labeled text corpus made up of Turkish papers' titles, abstracts and keywords is collected. The corpus covers 35 different disciplines, with 200 documents per subject. This study presents the text corpus' collection and content. The classification performance of Term Frequency - Inverse Document Frequency (TF-IDF) features and Latent Dirichlet Allocation (LDA) topic probability features is compared on the text corpus. The text corpus is shared as open source so that it can be used for natural language processing applications with academic purposes.
Keywords: natural language processing; pattern classification; probability; text analysis; LDA features; TF-IDF; Turkish labeled text corpus; Turkish paper abstracts; Turkish paper keywords; Turkish paper titles; academic purposes; classification performance; latent Dirichlet allocation features; natural language processing applications; term frequency-inverse document frequency; text corpus collection; text corpus content; topic probabilities; Abstracts; Conferences; Natural language processing; Resource management; Signal processing; Support vector machines; XML; Classification; Corpus; Inverse Document Frequency; Latent Dirichlet Allocation; NLP; Natural Language Processing; Paper; TF-IDF; Term Frequency ; Turkish (ID#:14-2983)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830499&isnumber=6830164
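The comparison described above can be mimicked in a few lines of scikit-learn: build TF-IDF features and LDA topic-probability features for the same documents and cross-validate a classifier on each. The tiny English stand-in corpus, the three topics, and the logistic-regression classifier below are assumptions for the sketch; the actual study uses 35 disciplines of Turkish abstracts.

    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical stand-in documents and discipline labels.
    docs = [
        "protein folding simulation molecular dynamics energy",
        "enzyme kinetics substrate binding molecular assay",
        "network routing protocol latency packet congestion",
        "wireless channel routing throughput packet delay",
        "poetry narrative metaphor literary translation style",
        "novel narrative author literary criticism translation",
    ] * 5
    labels = ["bio", "bio", "net", "net", "lit", "lit"] * 5

    tfidf = TfidfVectorizer().fit_transform(docs)

    counts = CountVectorizer().fit_transform(docs)
    topics = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(counts)

    for name, X in [("TF-IDF", tfidf), ("LDA topics", topics)]:
        acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=3).mean()
        print(name, "cross-validated accuracy: %.2f" % acc)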
- Ucan, S.; Huanying Gu, "A Platform For Developing Privacy Preserving Diagnosis Mobile Applications," Biomedical and Health Informatics (BHI), 2014 IEEE-EMBS International Conference on, pp.509, 512, 1-4 June 2014. doi: 10.1109/BHI.2014.6864414 Healthcare Information Technology has been in great vogue in the world today due to the dominant need of computational intelligence for processing, retrieval, and the use of health care information. This paper presents a platform system for developing self-diagnosis mobile applications. The mobile application developers can use this platform to develop applications that give the possible diagnosis according to users' symptoms without revealing any sensitive information about the users. The system consists of stop word removal, natural language processing, privacy preserving information retrieval, and decision support.
Keywords: data privacy; health care; information retrieval; information use; mobile computing; computational intelligence; decision support; health care information processing; health care information retrieval; health care information use; healthcare information technology; natural language processing; privacy preserving diagnosis mobile application development; privacy preserving information retrieval; stop word removal; Databases; Diseases; Medical diagnostic imaging; Mobile communication; Natural language processing; Servers; Decision Support; Healthcare; Natural Language Processing; Privacy (ID#:14-2984)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6864414&isnumber=6864286
- Kats, Yefim, "Semantic Search and NLP-Based Diagnostics," Computer-Based Medical Systems (CBMS), 2014 IEEE 27th International Symposium on, pp.277,280, 27-29 May 2014 doi: 10.1109/CBMS.2014.68 This study considers issues in semantic representation of written texts, especially in the context of entropy-based approach to natural language processing in biomedical applications. These issues lie at the intersection of Web search methodologies, ontology studies, lexicon studies, and natural language processing. The presented in the article entropy-based methodology is aimed at enhancing search techniques and diagnostics by capturing semantic properties of written texts. The range of possible applications ranges from forensic linguistics to psychological diagnostics and evaluation. The presented case study assumes that for texts written under atypical mental conditions, the level of relative text entropy may fall below a certain threshold and the distribution of entropy across the text may show unusual patterns, thus contributing to the semantic assessment of a subject's mental state. Further processing methods potentially contributing to psychological evaluation diagnosis and ontology-based search are discussed.
Keywords: Cultural differences; Entropy; Medical diagnostic imaging; Natural language processing; Ontologies; Psychology; Semantics; Semantic Web; diagnostics; lexicon; natural language processing; ontology; psychological evaluation; text entropy (ID#:14-2985)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881891&isnumber=6881826
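The abstract above does not give the paper's exact definition of relative text entropy, but one common simplified version is the Shannon entropy of a text's word distribution, optionally normalized by its maximum possible value. The sketch below uses that simplified definition on two invented sentences; the thresholds and clinical interpretation discussed in the paper are outside its scope.

    import math
    import re
    from collections import Counter

    def word_entropy(text):
        # Shannon entropy of the word distribution, plus a value normalized by the
        # maximum entropy for the observed vocabulary size (a simplified "relative" entropy).
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(words)
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        h_max = math.log2(len(counts)) if len(counts) > 1 else 1.0
        return h, h / h_max

    repetitive = "I really really really really really like it like it like it like it"
    varied = "The committee reviewed seventeen proposals covering robotics, cryptography and marine biology"

    print("repetitive:", tuple(round(v, 3) for v in word_entropy(repetitive)))
    print("varied:    ", tuple(round(v, 3) for v in word_entropy(varied)))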
- Heimerl, F.; Lohmann, S.; Lange, S.; Ertl, T., "Word Cloud Explorer: Text Analytics Based on Word Clouds," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp.1833, 1842, 6-9 Jan, 2014. doi: 10.1109/HICSS.2014.231 Word clouds have emerged as a straightforward and visually appealing visualization method for text. They are used in various contexts as a means to provide an overview by distilling text down to those words that appear with highest frequency. Typically, this is done in a static way as pure text summarization. We think, however, that there is a larger potential to this simple yet powerful visualization paradigm in text analytics. In this work, we explore the usefulness of word clouds for general text analysis tasks. We developed a prototypical system called the Word Cloud Explorer that relies entirely on word clouds as a visualization method. It equips them with advanced natural language processing, sophisticated interaction techniques, and context information. We show how this approach can be effectively used to solve text analysis tasks and evaluate it in a qualitative user study.
Keywords: data visualisation; natural language processing; text analysis; context information; natural language processing; sophisticated interaction techniques; text analysis tasks; text analytics; text summarization; visualization method; visualization paradigm; word cloud explorer; word clouds; Context; Layout; Pragmatics; Tag clouds; Text analysis; User interfaces; Visualization ;interaction; natural language processing; tag clouds; text analytics; visualization; word cloud explorer; word clouds (ID#:14-2986)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758829&isnumber=6758592
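The core of any word cloud is stop-word removal, frequency counting, and scaling font size by frequency; the Word Cloud Explorer described above adds NLP enrichment and interaction on top of that. A minimal sketch of the frequency-and-scaling step follows, with an invented text, stop-word list, and font range.

    import re
    from collections import Counter

    text = ("Word clouds distill a text down to its most frequent words and scale "
            "each word by how often it appears, giving a quick visual overview of "
            "the text before any deeper analysis of the text is attempted.")

    STOPWORDS = {"a", "an", "and", "any", "by", "how", "is", "it", "its", "of",
                 "the", "to", "down", "each", "most", "before", "giving", "often"}

    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    counts = Counter(words)

    MAX_FONT, MIN_FONT = 48, 12
    top = counts.most_common(8)
    biggest = top[0][1]
    for word, freq in top:
        size = MIN_FONT + (MAX_FONT - MIN_FONT) * freq / biggest   # scale font by frequency
        print(f"{word:<10} count={freq}  font_size={size:.0f}pt")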
- Jain, A.; Lobiyal, D.K., "A New Method For Updating Word Senses in Hindi WordNet," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, vol., no., pp.666,671, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781359 Hindi WordNet, a rich computational lexicon, is widely used for many Hindi Natural Language Processing (NLP) applications. However, it does not presently provide an exhaustive list of senses for every word, which degrades the performance of such NLP applications. In this paper, we propose a graph based model and its associated techniques to automatically acquire word senses. In the literature no method is available that is capable of automatically identifying the senses of Hindi words. We use a Hindi part-of-speech tagged corpus for building the graph model. The linkage between noun-noun concepts is extracted on the basis of syntactic and semantic relationships. All of the senses of a word, including senses not present in Hindi WordNet, are extracted. Our method also finds the categories of similar words. Using this model, NLP applications can be achieved at a higher level.
Keywords: graph theory; natural language processing; Hindi WordNet; Hindi natural language processing; Hindi part of speech tagged corpus; NLP applications; computational lexicon; graph based model; noun-noun concepts; semantic relationships; syntactic relationships; word sense updating; Speech; Hindi WordNet; Natural Language processing; Word sense Disambiguation (ID#:14-2987)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781359&isnumber=6781240
- Rabbani, M.; Alam, K.M.R.; Islam, M., "A New Verb Based Approach For English To Bangla Machine Translation," Informatics, Electronics & Vision (ICIEV), 2014 International Conference on , vol., no., pp.1,6, 23-24 May 2014. doi: 10.1109/ICIEV.2014.6850684 This paper proposes verb based machine translation (VBMT), a new approach of machine translation (MT) from English to Bangla (EtoB). For translation, it simplifies any form (i.e. simple, complex, compound, active and passive form) of English sentence into the simplest form of English sentence i.e. subject plus verb plus object. When compared with existing rule based EtoB MT schemes, VBMT doesn't employ exclusive or individual structural rules of various English sentences; it only detects the main verb from any form of English sentence and then transforms it into the simplest form of English sentence. Thus VBMT can translate from EtoB very simply, correctly and efficiently. Rule based EtoB MT is tough because it requires the matching of sentences with the stored rules. Moreover, many existing EtoB MT schemes which deploy rules are almost inefficient to translate complex or complicated sentences because it is difficult to match them with well-established rules of English grammar. VBMT is efficient because after identifying the main verb of any form of English sentence, it binds the remaining parts of speech (POS) as subject and object. VBMT has been successfully implemented for the MT of Assertive, Interrogative, Imperative, Exclamatory, Active-Passive, Simple, Complex, and Compound form of English sentences applicable in both desktop and mobile applications.
Keywords: language translation; natural language processing; English to Bangla machine translation; EtoB MT schemes; VBMT; human language technology; natural language processing; verb based machine translation; Compounds; Conferences; Databases; Informatics; Knowledge based systems; Natural languages; Tagging; English to Bangla; Human Language Technology; Natural Language Processing; Rule based Machine Translation (ID#:14-2988)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850684&isnumber=6850678
- Sen, M.U.; Erdogan, H., "Learning Word Representations for Turkish," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp.1742, 1745, 23-25 April 2014. doi: 10.1109/SIU.2014.6830586 High-quality word representations have been very successful in recent years at improving performance across a variety of NLP tasks. These word representations are the mappings of each word in the vocabulary to a real vector in the Euclidean space. Besides high performance on specific tasks, learned word representations have been shown to perform well on establishing linear relationships among words. The recently introduced skip-gram model improved performance on unsupervised learning of word embeddings that contain rich syntactic and semantic word relations, both in terms of accuracy and speed. Word embeddings, which have been used frequently for the English language, have not yet been applied to Turkish. In this paper, we apply the skip-gram model to a large Turkish text corpus and measure its performance quantitatively with the "question" sets that we generated. The learned word embeddings and the question sets are publicly available at our website.
Keywords: learning (artificial intelligence); natural language processing; text analysis; English language; Euclidean space; NLP tasks; Turkish text corpus; high-quality word representations; learned word embeddings; learned word representations; linear relationships; question sets; skip-gram model; unsupervised learning; word embeddings; Conferences; Natural language processing; Probabilistic logic; Recurrent neural networks; Signal processing; Vectors; Deep Learning; Natural Language Processing; Word embeddings (ID#:14-2989)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830586&isnumber=6830164
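A minimal way to train skip-gram embeddings of the kind described above, assuming the gensim library is available (the parameter names below follow gensim 4.x; older versions used size instead of vector_size), is shown on a handful of invented tokenized Turkish sentences. The cited work trains on a much larger corpus and evaluates with analogy-style question sets.

    from gensim.models import Word2Vec

    # A few tokenized Turkish sentences as stand-ins for the large corpus used in the paper.
    sentences = [
        ["kedi", "bahçede", "uyuyor"],
        ["köpek", "bahçede", "koşuyor"],
        ["kedi", "süt", "içiyor"],
        ["köpek", "kemik", "yiyor"],
        ["çocuk", "bahçede", "oynuyor"],
    ]

    # sg=1 selects the skip-gram architecture (sg=0 would be CBOW).
    model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=200, seed=1)

    print(model.wv.most_similar("kedi", topn=3))   # nearest neighbours in the embedding space
    print("vector length:", len(model.wv["kedi"]))

With a corpus this small the neighbours are essentially noise; the point is only to show the training call and the sg flag.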
- Anastasakos, T.; Young-Bum Kim; Deoras, A., "Task Specific Continuous Word Representations For Mono And Multi-Lingual Spoken Language Understanding," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, vol., no., pp.3246,3250, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6854200 Models for statistical spoken language understanding (SLU) systems are conventionally trained using supervised discriminative training methods. In many cases, however, labeled data necessary for these supervised techniques is not readily available, necessitating a laborious data collection and annotation effort. This often results in data sets that are not expansive enough to cover adequately all patterns of natural language phrases that occur in the target applications. Word embedding features alleviate data and feature sparsity issues by learning mathematical representations of words and word associations in the continuous space. In this work, we present techniques to obtain task and domain specific word embeddings and show their usefulness over those obtained from generic unsupervised data. We also show how we transfer these embeddings from one language to another, enabling training of a multilingual spoken language understanding system.
Keywords: learning (artificial intelligence); natural language processing; SLU system; data annotation; data collection; domain specific word embeddings; monolingual spoken language understanding; multilingual spoken language understanding; natural language phrases; supervised discriminative training methods; task specific continuous word representation; Context; Encyclopedias; Games; Motion pictures; Semantics; Training; Vocabulary; named entity recognition; natural language processing; spoken language understanding; vector space models; word embedding (ID#:14-2990)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854200&isnumber=6853544
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Network Security Architecture
Network security is one of the main areas for cybersecurity research. The works cited here cover a range of transmission media, architectures, and data in transit. These works were presented or published in the first half of 2014.
- Zaalouk, A; Khondoker, R.; Marx, R.; Bayarou, K., "OrchSec: An Orchestrator-based Architecture For Enhancing Network-Security Using Network Monitoring And SDN Control Functions," Network Operations and Management Symposium (NOMS), 2014 IEEE, pp.1,9, 5-9 May 2014. doi: 10.1109/NOMS.2014.6838409 The original design of the Internet did not take network security aspects into consideration, instead it aimed to facilitate the process of information exchange between end-hosts. Consequently, many protocols that are part of the Internet infrastructure expose a set of vulnerabilities that can be exploited by attackers. To reduce these vulnerabilities, several security approaches were introduced as a form of add-ons to the existing Internet architecture. However, these approaches have their drawbacks (e.g., lack of centralized control, and automation). In this paper, to address these drawbacks, the features provided by Software Defined Networking (SDN) such as network-visibility, centralized management and control are considered for developing security applications. Although the SDN architecture provides features that can aid in the process of network security, it has some deficiencies when it comes to using SDN for security. To address these deficiencies, several architectural requirements are derived to adapt the SDN architecture for security use cases. For this purpose, OrchSec, an Orchestrator-based architecture that utilizes Network Monitoring and SDN Control functions to develop security applications is proposed. The functionality of the proposed architecture is demonstrated, tested, and validated using a security application.
Keywords: Internet; computer network security; Internet architecture; Internet infrastructure; OrchSec; SDN control functions; centralized control; network monitoring; network security aspects; network security enhancement; orchestrator based architecture; software defined networking; Monitoring; Prototypes; Switches (ID#:14-2991)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838409&isnumber=6838210
- Rengaraju, P.; Chung-Horng Lung; Srinivasan, A, "QoS-Aware Distributed Security Architecture for 4G Multihop Wireless Networks," Vehicular Technology, IEEE Transactions on, vol.63, no.6, pp.2886,2900, July 2014. doi: 10.1109/TVT.2013.2292882 Vehicular communications have received a great deal of attention in recent years due to the demand for multimedia applications during travel and for improvements in safety. Safety applications often require fast message exchanges but do not use much bandwidth. On the other hand, multimedia services require high bandwidth for vehicular users. Hence, to provide mobile broadband services at a vehicular speed of up to 350 km/h, Worldwide interoperable for Microwave Access (WiMAX) and Long-Term Evolution (LTE) are considered the best technologies for vehicular networks. WiMAX and LTE are Fourth-Generation (4G) wireless technologies that have well-defined quality of service (QoS) and security architectures. However, some security threats, such as denial of service (DoS), an introduction of rogue node, etc., still exist in WiMAX and LTE networks, particularly in multihop networks. Therefore, strong security architecture and hasty authentication methods are needed to mitigate the existing security threats in 4G multihop wireless networks. Conversely, the network QoS should not be degraded while enhancing security. Thus, we propose QoS-aware distributed security architecture using the elliptic curve Diffie-Hellman (ECDH) protocol that has proven security strength and low overhead for 4G wireless networks. In this paper, we first describe the current security standards and security threats in WiMAX and LTE networks. Then, the proposed distributed security architecture for 4G multihop wireless networks is presented. Finally, we compare and analyze the proposed solution using testbed implementation and simulation approaches for WiMAX. From the simulation and testbed results for WiMAX networks, it is evident that the proposed scheme provides strong security and hasty authentication for handover users without affecting the QoS performance. For LTE networks, we present the theoretical analysis of the proposed scheme to show that similar performance can also be achieved.
Keywords: Long Term Evolution; WiMax; broadband networks; cryptographic protocols; electronic messaging; message authentication; mobility management (mobile radio); multimedia communication; public key cryptography; quality of service; telecommunication security; vehicular ad hoc networks; 4G multihop wireless network; ECDH protocol; LTE networks; QoS; WiMAX network; distributed security architecture; elliptic curve Diffie-Hellman protocol; handover user; hasty authentication; long term evolution; message exchange; mobile broadband services; multimedia application; multimedia service; quality of service; safety application; security standard; security threat mitigation; vehicular communication; vehicular network; vehicular user; worldwide interoperable for microwave access; Authentication; Long Term Evolution; Quality of service; Spread spectrum communication; WiMAX; Distributed security; ECDH; LTE; Long-Term Evolution (LTE); Multihop; WiMAX; Worldwide interoperable for Microwave Access (WiMAX); elliptic curve Diffie-Hellman (ECDH); multihop (ID#:14-2992)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6675873&isnumber=6851966
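The central primitive of the cited architecture is the elliptic curve Diffie-Hellman exchange. As a rough illustration of that primitive only (not the authors' distributed 4G/handover design), the following Python sketch derives a shared session key with the widely used cryptography package; the key names and the HKDF context string are illustrative assumptions.
```python
# Minimal ECDH key agreement sketch using the 'cryptography' package.
# Illustrative only; the cited paper integrates ECDH into a distributed
# 4G multihop architecture, which is not reproduced here.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each party generates an ephemeral key pair on the same curve.
station_key = ec.generate_private_key(ec.SECP256R1())
handover_node_key = ec.generate_private_key(ec.SECP256R1())

# Each side combines its private key with the peer's public key.
shared_1 = station_key.exchange(ec.ECDH(), handover_node_key.public_key())
shared_2 = handover_node_key.exchange(ec.ECDH(), station_key.public_key())
assert shared_1 == shared_2  # both sides derive the same secret

# Derive a fixed-length session key from the raw shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"handover-session").derive(shared_1)
```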
- Zhang, Lei; An, Chengjin; Spinsante, Susanna; Tang, Chaojing, "Adaptive Link Layer Security Architecture For Telecommand Communications In Space Networks," Systems Engineering and Electronics, Journal of, vol.25, no.3, pp.357, 372, June 2014. doi: 10.1109/JSEE.2014.00041 Impressive advances in space technology are enabling complex missions, with potentially significant and long term impacts on human life and activities. In the vision of future space exploration, communication links among planets, satellites, spacecrafts and crewed vehicles will be designed according to a new paradigm, known as the disruption tolerant networking. In this scenario, space channel peculiarities impose a massive reengineering of many of the protocols usually adopted in terrestrial networks; among them, security solutions are to be deeply reviewed, and tailored to the specific space requirements. Security is to be provided not only to the payload data exchanged on the network, but also to the telecommands sent to a spacecraft, along possibly differentiated paths. Starting from the secure space telecommand design developed by the Consultative Committee for Space Data Systems as a response to agency-based requirements, an adaptive link layer security architecture is proposed to address some of the challenges for future space networks. Based on the analysis of the communication environment and the error diffusion properties of the authentication algorithms, a suitable mechanism is proposed to classify frame retransmission requests on the basis of the originating event (error or security attack) and reduce the impact of security operations. An adaptive algorithm to optimize the space control protocol, based on estimates of the time varying space channel, is also presented. The simulation results clearly demonstrate that the proposed architecture is feasible and efficient, especially when facing malicious attacks against frame transmission.
Keywords: Aerospace electronics; Authentication; Encryption; Network security; Protocols; Space technology; Space vehicles; adaptive estimate; misbehavior detection; performance optimization; space network; telecommand security (ID#:14-2993)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850213&isnumber=6850209
- Liu, Shuhao; Cai, Zhiping; Xu, Hong; Xu, Ming, "Security-aware Virtual Network Embedding," Communications (ICC), 2014 IEEE International Conference on, pp.834,840, 10-14 June 2014. doi: 10.1109/ICC.2014.6883423 Network virtualization is a promising technology to enable multiple architectures to run on a single network. However, virtualization also introduces additional security vulnerabilities that may be exploited by attackers. It is necessary to ensure that the security requirements of virtual networks are met by the physical substrate, which however has not received much attention thus far. This paper represents an early attempt to consider the security issue in virtual network embedding, the process of mapping virtual networks onto physical nodes and links. We model the security demands of virtual networks by proposing a simple taxonomy of abstractions, which is enough to meet the variations of security requirements. Based on the abstraction, we formulate security-aware virtual network embedding as an optimization problem, proposing objective functions and mathematical constraints which involve both resource and security restrictions. Then a heuristic algorithm is developed to solve this problem. Our simulation results indicate its high efficiency and effectiveness.
Keywords: Bandwidth; Heuristic algorithms; Mathematical model; Network topology; Security; Substrates; Virtualization (ID#:14-2994)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883423&isnumber=6883277
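To make the embedding step concrete, the hypothetical Python sketch below captures one simple reading of a security-aware node-mapping heuristic: a virtual node may only be placed on a substrate node whose security level meets its demand. The field names and the greedy rule are illustrative assumptions, not the paper's optimization formulation.
```python
# Hypothetical greedy sketch of security-aware virtual network embedding:
# map each virtual node to a substrate node whose security level is at
# least the virtual node's demand and which has enough residual CPU.
def embed_nodes(virtual_nodes, substrate_nodes):
    """virtual_nodes: list of dicts {'id', 'cpu', 'sec_demand'}
    substrate_nodes: list of dicts {'id', 'cpu_free', 'sec_level'}"""
    mapping = {}
    for vn in sorted(virtual_nodes, key=lambda v: v['sec_demand'], reverse=True):
        candidates = [sn for sn in substrate_nodes
                      if sn['sec_level'] >= vn['sec_demand']
                      and sn['cpu_free'] >= vn['cpu']
                      and sn['id'] not in mapping.values()]
        if not candidates:
            return None  # embedding request rejected
        best = max(candidates, key=lambda sn: sn['cpu_free'])  # most residual CPU
        mapping[vn['id']] = best['id']
        best['cpu_free'] -= vn['cpu']
    return mapping
```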
- Al-Anzi, F.S.; Salman, A.A.; Jacob, N.K.; Soni, J., "Towards Robust, Scalable And Secure Network Storage in Cloud Computing," Digital Information and Communication Technology and its Applications (DICTAP), 2014 Fourth International Conference on, pp.51,55, 6-8 May 2014. doi: 10.1109/DICTAP.2014.6821656 The term cloud computing did not appear overnight; it dates back to when computer systems first accessed applications and services remotely. Cloud computing is a ubiquitous technology that is receiving considerable attention in the scientific and industrial community. It is a next-generation information technology architecture that offers on-demand network access through a dynamic, virtualized, scalable, pay-per-use model over the Internet. In a cloud computing environment, a cloud service provider offers a "house of resources" that includes applications, data, runtime, middleware, operating systems, virtualization, servers, data storage and sharing, and networking, and tries to take on most of the client's overhead. Cloud computing offers many benefits, but its adoption is not without pitfalls, because most services are outsourced to third parties, which adds a considerable level of risk. Among the most significant open issues are security, privacy, service availability, confidentiality, integrity, authentication, and compliance. Security is a shared responsibility of both the client and the service provider, and we believe security must be information centric, adaptive, proactive and built in. Cloud computing and its security are emerging study areas. In this paper, we discuss data security in the cloud at the service provider end and propose a network storage architecture for data that ensures availability, reliability, scalability and security.
Keywords: cloud computing; data integrity; data privacy; security of data; storage management; ubiquitous computing; virtualisation; Internet; adaptive security; authentication; built in security; client overhead; cloud computing environment; cloud service provider; compliance; confidentiality; data security; data sharing;data storage; information centric security; integrity; middleware; network storage architecture; networking; on-demand access; operating system; pay per use model; privacy; proactive security; remote application access remote service access; robust scalable secure network storage; server; service availability; service outsourcing; ubiquitous next generation information technology architecture; virtualization; Availability; Cloud computing; Computer architecture; Data security; Distributed databases; Servers; Cloud Computing; Data Storage; Data security; RAID (ID#:14-2995)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821656&isnumber=6821645
- Aiash, M.; Mapp, G.; Lasebae, A.; Loo, J., "A Secure Framework for Communications in Heterogeneous Networks," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp.841,846, 13-16 May 2014. doi: 10.1109/WAINA.2014.132 Heterogeneous networks represent an open architecture in which two different domains need to cooperate in order to provide ubiquitous connectivity. The first is the network operators' domain, where multiple network operators share the core network to provide network accessibility over a wide variety of wireless technologies such as WiFi and mobile network technologies. The other is the application-service providers' domain, which launches various services ranging from ordinary video streaming to highly confidential e-commerce services. This highlights the fact that any efficient security solution for heterogeneous networks has to consider security in these different domains. Therefore, this paper introduces a security framework that comprises two Authentication and Key Agreement protocols to secure transactions at the network and service levels. The proposed protocols have been formally verified using a formal methods approach based on the Casper/FDR tool.
Keywords: computer network security; cryptographic protocols; formal verification; wireless LAN; Casper/FDR tool; E-commerce services; WiFi network technologies; application-service providers domain; authentication and key agreement protocols; communication security framework; formal methods; heterogeneous networks; mobile network technologies; multiple network operators; network accessibility; network operators domain; normal video streaming; ubiquitous connectivity; wireless technologies; Authentication; Communication system security; Mobile communication; Protocols; Quality of service; Wireless communication (ID#:14-2996)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844744&isnumber=6844560
- Bhar, C.; Das, G.; Dixit, A.; Lannoo, B.; Colle, D.; Pickavet, M.; Demeester, P., "A Novel Hybrid WDM/TDM PON Architecture Using Cascaded AWGs and Tunable Components," Lightwave Technology, Journal of, vol.32, no.9, pp.1708, 1716, May 1, 2014. doi: 10.1109/JLT.2014.2310653 The paper introduces a novel architecture for optical access networks that simultaneously provides complete flexibility and security. At the same time, the distribution architecture is completely passive. Unlike other architectures in the literature, the proposed architecture does not possess a security-flexibility tradeoff. Complete flexibility allows an appropriate number of active components to be switched OFF at low network loads, making this design a green technology. The discussed architecture has a long reach, which is independent of the number of users in the network.
Keywords: arrayed waveguide gratings; optical tuning; passive optical networks; telecommunication security; time division multiplexing; wavelength division multiplexing; active components; arrayed waveguide gratings; cascaded AWG; distribution architecture; green technology; hybrid WDM-TDM PON architecture; network loads; optical access networks; passive optical networks; tunable components; Bandwidth; Laser applications; Laser tuning; Optical fiber networks; Ports (Computers);Security; Switches; Arrayed waveguide grating; bandwidth flexibility; network security; passive optical networks (ID#:14-2997)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6762824&isnumber=6781021
- Bakshi, K., "Secure Hybrid Cloud Computing: Approaches And Use Cases," Aerospace Conference, 2014 IEEE, pp.1, 8, 1-8 March 2014. doi: 10.1109/AERO.2014.6836198 Hybrid cloud is defined as a cloud infrastructure composed of two or more cloud infrastructures (private, public, and community clouds) that remain unique entities, but are bound together via technologies and approaches for the purposes of application and data portability. This paper will review a novel approach for implementing a secure hybrid cloud. Specifically, public and private cloud entities will be discussed for a hybrid cloud approach. The approach is based on extension of virtual Open Systems Interconnection (OSI) Layer 2 switching functions from a private cloud to public clouds, tunneled on an OSI Layer 3 connection. As a result of this hybrid cloud approach, virtual workloads can be migrated from the private cloud to the public cloud and continue to be part of the same Layer 2 domain as in the private cloud, thereby maintaining consistent operational paradigms in both the public and private clouds. This paper will introduce and discuss the virtual switching technologies which are fundamental underpinnings of the secure hybrid approach. This paper will not only discuss the virtual Layer 2 technical architecture of this approach, but also related security components. Specifically, data in motion security between the public and private clouds and interworkload secure communication in the public cloud will be reviewed. As part of the hybrid cloud approach, security aspects like encrypted communication tunnels, key management, and security management will be discussed. Moreover, management consoles, control points, and integration with cloud orchestration systems will also be discussed. Additionally, hybrid cloud considerations for network services like network firewalls, server load balancers, application accelerators, and network routing functions will be examined. Finally, several practical use cases which can be applicable in the aerospace industry, like workload bursting, application development environments, and Disaster Recovery as a Service will be explored.
Keywords: cloud computing; open systems; security of data; OSI; aerospace industry; application accelerators; cloud infrastructure; cloud orchestration systems; community clouds; data portability; disaster recovery; encrypted communication tunnels; key management; motion security; network firewall; network routing functions; open systems interconnection; private clouds ;public clouds; secure hybrid cloud computing; security aspects; security components; security management; server load balancers; switching functions; virtual switching technologies; Cloud computing; Computer architecture; Switches; Virtual machine monitors; Virtual machining (ID#:14-2998)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6836198&isnumber=6836156
- Manley, E.D., "Low Complexity All-Optical Network Coder Architecture," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp.1046, 1050, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785482 Network coding, a networking paradigm in which different pieces of data are coded together at various points along a transmission, has been proposed for providing a number of benefits to networks, including increased throughput, robustness, and security. For optical networks, the potential for using network coding to provide survivability is especially noteworthy, as it may be possible to allow for the ultra-fast recovery time of dedicated protection schemes with the bandwidth efficiency of shared protection schemes. However, the need to perform computations at intermediate nodes along the optical route leads to the undesirable necessity of either electronically buffering and processing the data at intermediate nodes or outfitting the network with complex photonic circuits capable of performing the computations entirely within the optical domain. In this paper, we take the latter approach but attempt to mitigate the impact of the device complexity by proposing a low-complexity, all-optical network coder architecture. Our design provides easily scalable, powerful digital network coding capabilities at the optical layer, and we show that existing network coding algorithms can be adjusted to accommodate it.
Keywords: integrated optics; network coding; optical fibre networks; telecommunication network routing; telecommunication security; bandwidth efficiency; complex photonic circuits; digital network coding capabilities; electronic buffering; intermediate nodes; low complexity all-optical network coder architecture; optical layer; optical route; shared protection schemes; ultra-fast recovery time; Encoding; Logic gates; Network coding; Optical buffering; Optical fiber networks; Optical switches (ID#:14-2999)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785482&isnumber=6785290
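As background, the elementary operation a network coder performs can be illustrated with the classic two-flow XOR example. The sketch below is generic and does not model the paper's all-optical architecture; the packet contents are placeholders.
```python
# Classic two-flow XOR illustration of network coding: a relay combines
# two packets into one, and a sink that already holds one of the originals
# can recover the other. The optical coder must realize this operation in
# the optical domain, which is not modeled here.
def encode(packet_a: bytes, packet_b: bytes) -> bytes:
    """Relay codes two equal-length packets into one by bitwise XOR."""
    return bytes(a ^ b for a, b in zip(packet_a, packet_b))

def decode(coded: bytes, known: bytes) -> bytes:
    """A sink that already holds one original packet recovers the other."""
    return bytes(c ^ k for c, k in zip(coded, known))

a, b = b"flow-A01", b"flow-B02"   # placeholder payloads of equal length
coded = encode(a, b)
assert decode(coded, a) == b and decode(coded, b) == a
```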
- Jaic, K.; Smith, M.C.; Sarma, N., "A Practical Network Intrusion Detection System for Inline FPGAs on 10GbE Network Adapters," Application-specific Systems, Architectures and Processors (ASAP), 2014 IEEE 25th International Conference on, pp.180,181, 18-20 June 2014. doi: 10.1109/ASAP.2014.6868655 A network intrusion detection system (NIDS), such as SNORT, analyzes incoming packets to identify potential security threats. Pattern matching is arguably the most important and most computationally intensive component of a NIDS. Software-based NIDS implementations drop up to 90% of packets during increased network load, even at lower network bandwidths. We propose an alternative hybrid NIDS that couples an FPGA with a network adapter to provide hardware support for pattern matching and software support for post processing. The proposed system, SFAOENIDS, offers an extensible open-source NIDS for Solarflare AOE devices. The pattern matching engine, the primary component of the hardware architecture, was designed based on the requirements of typical NIDS implementations. In testing in a real network environment, the SFAOENIDS hardware implementation, operating at 200 MHz, handles a 10 Gbps data rate without dropping packets while simultaneously minimizing the server CPU load.
Keywords: field programmable gate arrays; security of data; SFAOENIDS; SNORT; Solarflare AOE devices; inline FPGA; lower network bandwidth; network adapters; network load; open-source NIDS; pattern matching; pattern matching engine; practical network intrusion detection system; real network environment; security threats; software based NIDS implementations; Engines; Field programmable gate arrays; Hardware; Intrusion detection; Memory management; Pattern matching; Software (ID#:14-3000)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868655&isnumber=6868606
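To make the role of the pattern-matching stage concrete, here is a deliberately naive software sketch of multi-pattern payload scanning. The signatures are hypothetical, and real systems such as SNORT (and the cited FPGA engine) use far more efficient automata than per-signature substring search.
```python
# Naive multi-pattern payload scan, included only to show what the
# pattern-matching stage of a NIDS does; production engines use
# automaton-based matching (e.g., Aho-Corasick) instead.
SIGNATURES = {
    b"/etc/passwd": "path traversal attempt",   # hypothetical rules
    b"' OR '1'='1": "SQL injection probe",
}

def scan_payload(payload: bytes):
    """Return the labels of all signatures found in a packet payload."""
    alerts = []
    for pattern, label in SIGNATURES.items():
        if pattern in payload:          # substring search per signature
            alerts.append(label)
    return alerts

print(scan_payload(b"GET /index.php?id=' OR '1'='1 HTTP/1.1"))
```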
- Silva Delgado, J.S.; Mendez Penuela, D.J.; Morales Medina, L.V.; Rueda Rodriguez, S.J., "Automatic Network Reconfiguration Because Of Security Events," Communications and Computing (COLCOM), 2014 IEEE Colombian Conference on, pp.1,6, 4-6 June 2014. doi: 10.1109/ColComCon.2014.6860412 Over the last years, networks have changed in size, traffic, and requirements. There are more nodes, the traffic has increased, and there are frequent requests that imply modifications to the underlying infrastructure. Some examples of these requirements are cloud computing, virtualized environments, and data centers. SDN has been developed to address some of these issues. By separating control and data planes, SDN enables the programming of the control plane and the dynamic reconfiguration of the data plane thus making it possible to automatize some tasks. SDN makes it possible to dynamically reconfigure a network as a response to a security event. This work studies the advantages and disadvantages of the platform for programming a network to react to security events. The number of security events that may happen in a network is considerable, therefore, we defined an architecture that may be used in different cases and implemented it to evaluate the behavior for two types of events: DoS attacks and intrusions. The platform offers several tools for programming and testing, but they are still in development. In fact, we found a problem with one tool and some inconveniences with others which we reported to the development team. The participation of the community by debugging and finding ways to improve the platform is key to SDN's development.
Keywords: computer debugging; computer network security; DoS attacks; SDN development; automatic network reconfiguration; debugging; intrusion detection; security events; software defined networks; Control systems; Hardware; IP networks; Monitoring; Programming; Security; Software (ID#:14-3001)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6860412&isnumber=6860394
- Premnath, A.P.; Ju-Yeon Jo; Yoohwan Kim, "Application of NTRU Cryptographic Algorithm for SCADA Security," Information Technology: New Generations (ITNG), 2014 11th International Conference on, pp.341,346, 7-9 April 2014. doi: 10.1109/ITNG.2014.38 Critical Infrastructure represents the basic facilities, services and installations necessary for the functioning of a community, such as water, power lines, transportation, or communication systems. Any act or practice that causes a real-time Critical Infrastructure System to impair its normal function and performance will have a debilitating impact on security and the economy, with direct implications for society. A SCADA (Supervisory Control and Data Acquisition) system is a control system widely used in Critical Infrastructure Systems to monitor and control industrial processes autonomously. As SCADA architecture relies on computers, networks, applications and programmable controllers, it is more vulnerable to security threats/attacks. Traditional SCADA communication protocols such as IEC 60870, DNP3, IEC 61850, or Modbus did not provide any security services. Newer standards such as IEC 62351 and AGA-12 offer security features to handle attacks on SCADA systems. However, there are performance issues with the cryptographic solutions of these specifications when applied to SCADA systems. This research is aimed at improving the performance of SCADA security standards by employing NTRU, a fast and lightweight public key algorithm, to provide end-to-end security.
Keywords: SCADA systems; critical infrastructures; cryptographic protocols; process control; process monitoring; production engineering computing; programmable controllers; public key cryptography; transport protocols; AGA-12; DNP3; IEC 60870; IEC 61850; IEC 62351; Modbus; NTRU cryptographic algorithm; NTRU public key algorithm; SCADA architecture; SCADA communication protocols; SCADA security standards; TCP/IP; communication systems; end-to-end security; industrial process control; industrial process monitoring; power lines; programmable controllers; real-time critical infrastructure system; security threats-attacks; supervisory control and data acquisition system; transportation; water; Authentication; Digital signatures; Encryption; IEC standards; SCADA systems; AGA-12; Critical Infrastructure System; IEC 62351; NTRU cryptographic algorithm; SCADA communication protocols over TCP/IP (ID#:14-3002)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822221&isnumber=6822158
- Santamaria, Amilcare Francesco; Sottile, Cesare; Lupia, Andrea; Raimondo, Pierfrancesco, "An Efficient Traffic Management Protocol Based On IEEE802.11p Standard," Performance Evaluation of Computer and Telecommunication Systems (SPECTS 2014), International Symposium on, pp.634,641, 6-10 July 2014. doi: 10.1109/SPECTS.2014.6880004 Nowadays one of the hot themes in wireless environment research is the application of the newest technologies to road security problems. The interest of companies and researchers, with the cooperation of car manufacturers, brought to life and promoted Vehicular Ad-Hoc Network (VANET) technology. In this work an innovative security system based on a VANET architecture is proposed. The system is capable of increasing road safety through inter-communication among vehicles and road infrastructure, also known as Vehicle to Vehicle (V2V) and Vehicle to Infrastructure (V2I) communication, matching market and manufacturers' requests in a convenient and useful way. We design a network protocol called Geocasting Wave (GeoWave) that takes advantage of the IEEE802.11p standard and tries to enhance it by adding useful messages in order to strengthen active and passive safety systems. In the proposed protocol, vehicles share information with neighbors and Road Side Units (RSUs). We also propose a network infrastructure able to continuously gather information on the environment, road conditions and traffic flows. Once one of these occurrences is detected, all gathered information is spread through the network. This knowledge makes it possible to take precautionary actions in time, such as decreasing travel speed or switching to a safer road path when a dangerous situation approaches. In addition, the Control and Management Center (CMC) exploits the gathered information for smart traffic management, avoiding traffic blocks by trying to maintain a constant average speed inside city blocks. This can help reduce vehicles' carbon dioxide (CO2) emissions in the city, improving air quality.
Keywords: Accidents; Cities and towns; Protocols; Roads; Safety; Sensors; Vehicles; Data Dissemination; Geocasting; IEEE 802.11p WAVE protocol; Road Safety; VANET (ID#:14-3003)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6880004&isnumber=6879988
- Jin Cao; Maode Ma; Hui Li; Yueyu Zhang; Zhenxing Luo, "A Survey on Security Aspects for LTE and LTE-A Networks," Communications Surveys & Tutorials, IEEE, vol.16, no.1, pp.283,302, First Quarter 2014. doi: 10.1109/SURV.2013.041513.00174 High demands for broadband mobile wireless communications and the emergence of new wireless multimedia applications constitute the motivation for the development of broadband wireless access technologies in recent years. The Long Term Evolution/System Architecture Evolution (LTE/SAE) system has been specified by the Third Generation Partnership Project (3GPP) on the way towards fourth-generation (4G) mobile systems, to ensure that 3GPP keeps its dominance in cellular communication technologies. Through the design and optimization of new radio access techniques and a further evolution of the LTE systems, the 3GPP is developing the future LTE-Advanced (LTE-A) wireless networks as the 4G standard of the 3GPP. Since the 3GPP LTE and LTE-A architectures are designed to support flat Internet Protocol (IP) connectivity and full interworking with heterogeneous wireless access networks, the new unique features bring some new challenges in the design of the security mechanisms. This paper makes a number of contributions to the security aspects of the LTE and LTE-A networks. First, we present an overview of the security functionality of the LTE and LTE-A networks. Second, the security vulnerabilities existing in the architecture and the design of the LTE and LTE-A networks are explored. Third, the existing solutions to these problems are classically reviewed. Finally, we show the potential research issues for future research works.
Keywords: 4G mobile communication; IP networks; Long Term Evolution; broadband networks; cellular radio; multimedia communication; radio access networks; telecommunication security;3GPP; 4G mobile; LTE-A networks; LTE-Advanced; LTE/SAE; Long Term Evolution-system architecture evolution; broadband mobile wireless communications; broadband wireless access technology; cellular communication; flat Internet Protocol connectivity; security vulnerabilities ;telecommunication security aspects; wireless multimedia applications; Authentication; Handover; Long Term Evolution; Mobile communication; Servers; HeNB security; IMS security; LTE; LTE security; LTE-A; MTC security (ID#:14-3004)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6506141&isnumber=6734841
- Chan-Kyu Han; Hyoung-Kee Choi, "Security Analysis of Handover Key Management in 4G LTE/SAE Networks," Mobile Computing, IEEE Transactions on, vol.13, no.2, pp.457, 468, Feb. 2014. doi: 10.1109/TMC.2012.242 The goal of 3GPP Long Term Evolution/System Architecture Evolution (LTE/SAE) is to move mobile cellular wireless technology into its fourth generation. One of the unique challenges of fourth-generation technology is how to close a security gap through which a single compromised or malicious device can jeopardize an entire mobile network because of the open nature of these networks. To meet this challenge, handover key management in the 3GPP LTE/SAE has been designed to revoke any compromised key(s) and as a consequence isolate corrupted network devices. This paper, however, identifies and details the vulnerability of this handover key management to what are called desynchronization attacks; such attacks jeopardize secure communication between users and mobile networks. Although periodic updates of the root key are an integral part of handover key management, our work here emphasizes how essential these updates are to minimizing the effect of desynchronization attacks that, as of now, cannot be effectively prevented. Our main contribution, however, is to explore how network operators can determine for themselves an optimal interval for updates that minimizes the signaling load they impose while protecting the security of user traffic. Our analytical and simulation studies demonstrate the impact of the key update interval on such performance criteria as network topology and user mobility.
Keywords: 3G mobile communication; 4G mobile communication Long Term Evolution; cellular radio; mobility management (mobile radio) ;telecommunication network topology; telecommunication security; 3GPP Long Term Evolution-system architecture evolution; 4G LTE-SAE networks; communication security; compromised key; corrupted network devices; desynchronization attacks; fourth-generation technology; handover key management; key update interval; malicious device; mobile cellular wireless technology; mobile network; network operators; network topology; periodic updates; security analysis; security gap; signaling load; user mobility; user network; user traffic security protection; Base stations; Computer architecture; Mobile communication; Mobile computing; Security; Authentication and key agreement; evolved packet system; handover key management; long-term evolution security; mobile networks; system architecture evolution (ID#:14-3005)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6365188&isnumber=6689256
- Suto, K.; Nishiyama, H.; Kato, N.; Nakachi, T.; Fujii, T.; Takahara, A, "An Overlay Network Construction Technique For Minimizing The Impact Of Physical Network Disruption In Cloud Storage Systems," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp.68,72, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785307 Cloud storage exploiting overlay networks is considered to be a scalable and autonomous architecture. While this technology can ensure the security of the storage service, it requires addressing the "server breakdown" problem, which may arise due to malicious attacks on servers or mechanical server failures. In the existing literature, an overlay network based on a bimodal degree distribution was proposed to achieve high connectivity against these two types of server breakdown. However, it cannot ensure high connectivity against a physical network disruption that removes numerous nodes from the overlay network. To deal with this issue, in this paper, we propose a physical network aware overlay network, in which neighboring nodes are connected with one another in the overlay. Moreover, the numerical analysis indicates that the proposed system considerably outperforms the conventional system in terms of service availability.
Keywords: cloud computing; computer network security; network servers; overlay networks; bimodal degree distribution; cloud storage systems; numerical analysis; overlay network construction technique; physical network disruption impact minimization; server breakdown; storage service security; Cloud computing; Computer crime; Electric breakdown; Overlay networks; Peer-to-peer computing; Servers; Tin (ID#:14-3006)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785307&isnumber=6785290
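One way to reason about the problem the authors address is to measure how much of an overlay survives when a region of the physical network fails. The sketch below (using networkx, with an assumed host_of mapping from overlay nodes to physical hosts) illustrates such a survivability check; it does not reproduce the paper's construction rules or its bimodal-degree baseline.
```python
# Sketch of evaluating overlay survivability under a regional physical
# failure: remove all overlay nodes hosted on failed physical nodes and
# measure the surviving giant component.
import networkx as nx

def surviving_fraction(overlay: nx.Graph, host_of: dict, failed_hosts: set) -> float:
    """Fraction of surviving overlay nodes that stay in the largest component."""
    g = overlay.copy()
    g.remove_nodes_from([n for n in overlay if host_of[n] in failed_hosts])
    if g.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.connected_components(g), key=len)
    return len(giant) / g.number_of_nodes()

ring = nx.cycle_graph(6)                 # toy overlay topology
hosts = {n: n // 2 for n in ring}        # two overlay nodes per physical host
print(surviving_fraction(ring, hosts, failed_hosts={0}))
```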
- Zahid, A; Masood, R.; Shibli, M.A, "Security Of Sharded NoSQL Databases: A Comparative Analysis," Information Assurance and Cyber Security (CIACS), 2014 Conference on, pp.1,8, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861323 NoSQL databases are easy to scale out because of their flexible schema and support for BASE (Basically Available, Soft State and Eventually Consistent) properties. The process of scaling out in most of these databases is supported by sharding, which is considered the key feature in providing faster reads and writes to the database. However, securing the data sharded over various servers is a challenging problem, because the data is processed in a distributed fashion and transmitted over an unsecured network. Although extensive research has been performed on NoSQL sharding mechanisms, no specific criterion has been defined to analyze the security of a sharded architecture. This paper proposes an assessment criterion comprising various security features for the analysis of sharded NoSQL databases. It presents a detailed view of the security features offered by NoSQL databases and analyzes them with respect to the proposed assessment criteria. The presented analysis helps organizations select an appropriate and reliable database in accordance with their preferences and security requirements.
Keywords: SQL; security of data; BASE; NoSQL sharding mechanisms assessment criterion; security features; sharded NoSQL databases; Access control; Authentication; Distributed databases Encryption; Servers; comparative Analysis; Data and Applications Security; Database Security; NoSQL; Sharding (ID#:14-3007)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861323&isnumber=6861314
Policy Analysis
Policy-based access controls and security policies are intertwined in most commercial systems. Analytics use abstraction and reduction to improve policy-based security. The work cited here was presented in the first half of 2014.
- Gupta, P.; Stoller, S.; Xu, Z., "Abductive Analysis of Administrative Policies in Rule-based Access Control," Dependable and Secure Computing, IEEE Transactions on, vol. PP, no.99, pp.1, 1, Jan 2014. doi: 10.1109/TDSC.2013.42 In large organizations, access control policies are managed by multiple users (administrators). An administrative policy specifies how each user in an enterprise may change the policy. Fully understanding the consequences of an administrative policy in an enterprise system can be difficult, because of the scale and complexity of the access control policy and the administrative policy, and because sequences of changes by different users may interact in unexpected ways. Administrative policy analysis helps by answering questions such as user-permission reachability, which asks whether specified users can together change the policy in a way that achieves a specified goal, namely, granting a specified permission to a specified user. This paper presents a rule-based access control policy language, a rule-based administrative policy model that controls addition and removal of facts and rules, and an abductive analysis algorithm for user-permission reachability. Abductive analysis means that the algorithm can analyze policy rules even if the facts initially in the policy (e.g., information about users) are unavailable. The algorithm does this by computing minimal sets of facts that, if present in the initial policy, imply reachability of the goal.
Keywords: Access control; Algorithm design and analysis; Grammar; Hospitals; Organizations; Semantics; Access controls; Verification (ID#:14-3008)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6616529&isnumber=4358699
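Plain (non-abductive) user-permission reachability can be illustrated by a forward-chaining fixed point over administrative rules, as in the hypothetical sketch below; the paper's abductive algorithm additionally computes the minimal initial facts that would make the goal reachable. The rule and fact encodings here are illustrative assumptions.
```python
# Simplified forward-chaining sketch: given administrative rules of the form
# (preconditions -> fact that may be added), check whether a goal
# user-permission fact is reachable from the initial facts.
def reachable(initial_facts, rules, goal):
    """rules: list of (frozenset_of_precondition_facts, fact_to_add)."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for pre, new_fact in rules:
            if pre <= facts and new_fact not in facts:  # rule fires
                facts.add(new_fact)
                changed = True
    return goal in facts

rules = [  # hypothetical administrative rules
    (frozenset({("role", "alice", "manager")}), ("perm", "alice", "approve")),
    (frozenset({("perm", "alice", "approve")}), ("role", "bob", "auditor")),
]
print(reachable({("role", "alice", "manager")}, rules, ("role", "bob", "auditor")))
```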
- ang, Xiaoyan; Xia, Chunhe; Jiao, Jian; Hu, Junshun; Li, Xiaojian, "Modeling and Global Conflict Analysis Of Firewall Policy," Communications, China, vol.11, no.5, pp.124,135, May 2014. doi: 10.1109/CC.2014.6880468 A global view of firewall policy conflicts is important for administrators to optimize the policy. Appropriate global conflict analysis of firewall policies has been lacking; existing methods focus on local conflict detection. In this paper, we study a global conflict detection algorithm. We present a semantic model that captures more complete classifications of the policy using the knowledge concept from rough set theory. Based on this model, we present a formal model of global conflicts and represent it with an OBDD (Ordered Binary Decision Diagram). We then develop the GFPCDA (Global Firewall Policy Conflict Detection Algorithm) to detect global conflicts. In experiments, we evaluated the usability of our semantic model by eliminating the false positives and false negatives of a classical algorithm that are caused by an incomplete policy semantic model, and we compared that algorithm with the GFPCDA algorithm. The results show that GFPCDA detects conflicts more precisely and independently, and has better performance.
Keywords: Algorithm design and analysis; Analytical models; Classification algorithms; Detection algorithms; Firewalls (computing); Semantics; conflict analysis; conflict detection; firewall policy; semantic model (ID#:14-3009)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6880468&isnumber=6880452
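For readers new to the topic, the following toy Python check illustrates one concrete kind of policy conflict, namely shadowing between rules with overlapping match fields. The cited GFPCDA algorithm instead works on a rough-set semantic model and an OBDD representation; the rule format below is an illustrative assumption.
```python
# Pairwise shadowing check between ordered firewall rules: a later rule is
# shadowed if an earlier rule fully covers its match fields with a
# different action, so the later rule can never take effect.
def covers(outer, inner):
    """True if every field range of `inner` lies within the same field of `outer`."""
    return all(outer[f][0] <= inner[f][0] and inner[f][1] <= outer[f][1]
               for f in ('src', 'dst', 'port'))

def shadowed_pairs(rules):
    return [(i, j) for i, hi in enumerate(rules)
            for j, lo in enumerate(rules)
            if i < j and covers(hi, lo) and hi['action'] != lo['action']]

rules = [  # fields are (lo, hi) ranges over abstract address/port spaces
    {'src': (0, 255), 'dst': (0, 255), 'port': (1, 65535), 'action': 'deny'},
    {'src': (10, 20), 'dst': (0, 255), 'port': (80, 80),   'action': 'allow'},
]
print(shadowed_pairs(rules))  # [(0, 1)]: rule 1 can never match
```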
- Prasad, S.; Raina, G., "Local Hopf bifurcation analysis of Compound TCP with an Exponential-RED queue management policy," Control and Decision Conference (2014 CCDC), The 26th Chinese , vol., no., pp.2588,2594, May 31 2014-June 2 2014 doi: 10.1109/CCDC.2014.6852610 Abstract: The analysis of TCP, along with queue management policies, forms an important aspect of performance evaluation for the Internet. In this paper, we analyse a non-linear fluid model for Compound TCP (C-TCP) coupled with an Exponential-RED (E-RED) queue management policy. Compound is an important flavor of TCP, as it is the default transport protocol in the Windows operating system. For the bifurcation analysis, we motivate an exogenous and non-dimensional bifurcation parameter. Using this parameter, we first derive the Hopf bifurcation condition for the underlying model. Then, employing Poincare normal forms and the center manifold theory, we outline the analysis to characterise the type of the Hopf bifurcation, and determine the stability of the bifurcating periodic solutions. Some numerical analysis and stability charts complement our theoretical analysis.
Keywords: Internet; bifurcation; computer network management; computer network performance evaluation; queueing theory; transport protocols; C-TCP ;E-RED; Internet; Poincare normal forms; Windows operating system; bifurcating periodic solution; center manifold theory; compound TCP analysis; default transport protocol; exogenous bifurcation parameter; exponential-RED queue management policy; local Hopf bifurcation analysis; nondimensional bifurcation parameter; nonlinear fluid model; numerical analysis; performance evaluation; stability charts; Bifurcation; Compounds; Delays; Mathematical model; Numerical stability; Stability analysis; Compound TCP; Exponential-RED; Hopf bifurcation; queue management; stability (ID#:14-3010)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852610&isnumber=6852105
- Ali, M.Q.; Al-Shaer, E.; Samak, T., "Firewall Policy Reconnaissance: Techniques and Analysis," Information Forensics and Security, IEEE Transactions on, vol.9, no.2, pp.296, 308, Feb. 2014. doi: 10.1109/TIFS.2013.2296874 In the past decade, scanning has been widely used as a reconnaissance technique to gather critical network information to launch a follow-up attack. To combat this, numerous intrusion detectors have been proposed. However, scanning methodologies have shifted to a next-generation paradigm to be evasive. The next-generation reconnaissance techniques are intelligent and stealthy. These techniques use a low-volume packet sequence and intelligent calculation for victim selection to be more evasive. Previously, we proposed models for firewall policy reconnaissance that are used to set bounds on learning accuracy as well as to put minimum requirements on the number of probes. We presented techniques for reconstructing the firewall policy by intelligently choosing the probing packets based on the responses of previous probes. In this paper, we show the statistical analysis of these techniques and discuss their evasiveness along with improvements. First, we present the two previously proposed techniques, followed by the statistical analysis and their evasiveness to current detectors. Based on the statistical analysis, we show that these techniques still exhibit a pattern and thus can be detected. We then develop a hybrid approach to maximize the benefit by combining the two heuristics.
Keywords: firewalls; learning (artificial intelligence);statistical analysis ;critical network information; current detectors; firewall policy reconnaissance; firewall policy reconstruction; intelligent calculation; intrusion detectors; learning accuracy; next-generation paradigm; next-generation reconnaissance techniques; probing packets; scanning methodology; statistical analysis; victim selection; volume packet sequence; Adaptation models; Boolean functions; Detectors; Next generation networking; Ports (Computers); Probes; Reconnaissance; Security; intrusion detection; reconnaissance (ID#:14-3011)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6698376&isnumber=6705647
- Yuan Zhao; Shunfu Jin; Wuyi Yue, "A Novel Spectrum Access Strategy With α-Retry Policy In Cognitive Radio Networks: A Queueing-Based Analysis," Communications and Networks, Journal of, vol.16, no.2, pp.193, 201, April 2014. doi: 10.1109/JCN.2014.000030 In cognitive radio networks, the packet transmissions of the secondary users (SUs) can be interrupted randomly by the primary users (PUs). That is to say, the PU packets have preemptive priority over the SU packets. In order to enhance the quality of service (QoS) for the SUs, we propose a spectrum access strategy with an α-Retry policy. A buffer is deployed for the SU packets. An interrupted SU packet will return to the buffer with probability α for later retrial, or leave the system with probability (1 - α). For mathematical analysis, we build a preemptive priority queue and model the spectrum access strategy with an α-Retry policy as a two-dimensional discrete-time Markov chain (DTMC). We give the transition probability matrix of the Markov chain and obtain the steady-state distribution. Accordingly, we derive the formulas for the blocked rate, the forced dropping rate, the throughput and the average delay of the SU packets. With numerical results, we show the influence of the retrial probability for the strategy proposed in this paper on different performance measures. Finally, based on the tradeoff between different performance measures, we construct a cost function and optimize the retrial probabilities with respect to different system parameters by employing an iterative algorithm.
Keywords: Markov processes; cognitive radio; iterative methods; matrix algebra; optimisation; packet radio networks; probability; quality of service; queueing theory; radio spectrum management; α-Retry policy; 2D DTMC; PU packet; SU packets; buffer deployment; cognitive radio network; cost function; discrete time Markov chain; dropping rate; iterative algorithm; mathematical analysis; packet transmission; preemptive priority queue; primary user; quality of service; queueing-based analysis; retrial probability optimization; secondary users; spectrum access strategy; steady-state distribution; system parameter; transition probability matrix; Analytical models; Cognitive radio; Delays; Markov processes; Queueing analysis; Throughput; Vectors; α-Retry policy; cognitive radio networks; discrete-time Markov chain (DTMC); priority queue; spectrum access strategy (ID#:14-3012)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6812083&isnumber=6812073
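The queueing analysis rests on computing the steady-state distribution of a discrete-time Markov chain. A generic numpy sketch of that step is shown below, using a toy transition matrix rather than the paper's two-dimensional chain for the α-Retry strategy.
```python
# Stationary distribution pi of a small discrete-time Markov chain,
# solving pi P = pi together with the normalization sum(pi) = 1.
import numpy as np

P = np.array([[0.7, 0.2, 0.1],     # hypothetical transition probabilities
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])   # (P^T - I) pi = 0 and 1^T pi = 1
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # steady-state probabilities feeding delay/throughput formulas
```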
- Prasad, S.; Raina, G., "Stability and Hopf Bifurcation Analysis Of TCP With a RED-Like Queue Management Policy," Control and Decision Conference (2014 CCDC), The 26th Chinese, vol., no., pp.2599, 2605, May 31 2014-June 2 2014. doi: 10.1109/CCDC.2014.6852612 We analyze a non-linear fluid model of TCP coupled with a Random Early Detection (RED)-like queue management policy. We first show that the conditions for local stability, as parameters vary, will be violated via a Hopf bifurcation. Thus, a stable equilibrium would give rise to limit cycles. To identify the type of the Hopf bifurcation, and to determine the stability of the bifurcating limit cycles, we apply the theory of normal forms and the center manifold analysis. Some numerical computations accompany our theoretical work.
Keywords: bifurcation; queueing theory; stability; telecommunication network management; transport protocols; Hopf bifurcation; RED-like queue management policy; TCP; bifurcating limit cycles stability; center manifold analysis; local stability; nonlinear fluid model; normal forms theory; random early detection-like queue management policy; Bifurcation; Compounds; Limit-cycles; Mathematical model; Numerical stability; Stability analysis; Hopf bifurcation; TCP; congestion control; queue management; stability (ID#:14-3013)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852612&isnumber=6852105
- Singh, R.; I-Hong Hou; Kumar, P.R., "Fluctuation Analysis Of Debt Based Policies For Wireless Networks With Hard Delay Constraints," INFOCOM, 2014 Proceedings IEEE, pp.2400,2408, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6848185 Hou et al. have analyzed wireless networks where clients served by an access point require a timely-throughput of packets to be delivered by hard per-packet deadlines and also proved the timely-throughput optimality of certain debt-based policies. However, this is a weak notion of optimality; there might be long time intervals in which a client does not receive any packets, undesirable for real-time applications. Motivated by this, the authors, in an earlier work, introduced a pathwise cost function based on the law of the iterated logarithm, studied in fluctuation theory, which captures the deviation from a steady stream of packet deliveries and showed that a debt-based policy is optimal if the frame length is one. This work extends the analysis of debt-based policies to general frame lengths greater than one, as is important for general applications.
Keywords: iterative methods; radio networks; access point; debt fluctuation analysis; debt-based policy; hard delay constraint; iterated logarithm; pathwise cost function; steady packet delivery stream; wireless network; Computers; Conferences; Delays Limiting; Markov processes; Throughput; Vectors (ID#:14-3014)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848185&isnumber=6847911
- Lu Ge; Gaojie Chen; Yu Gong; Chambers, J., "Performance Analysis Of Multi-Antenna Selection Policies Using The Golden Code In Multiple-Input Multiple-Output Systems," Communications, IET, vol.8, no.12, pp.2147,2152, August 14, 2014. doi: 10.1049/iet-com.2013.0686 In multiple-input multiple-output (MIMO) systems, multiple-antenna selection has been proposed as a practical scheme for improving the signal transmission quality as well as reducing realisation cost because of minimising the number of radio-frequency chains. In this study, the authors investigate transmit antenna selection for MIMO systems with the Golden Code. Two antenna selection schemes are considered: max-min and max-sum approaches. The outage and pairwise error probability performance of the proposed approaches are analysed. Simulations are also given to verify the analysis. The results show the proposed methods provide useful schemes for antenna selection.
Keywords: Gold codes; MIMO communication; cost reduction; error statistics; transmitting antennas; MIMO systems; cost reduction; error probability; golden code; multiantenna selection policies; multiple-antenna selection; multiple-input-multiple-output systems; radio-frequency chains; signal transmission quality improvement; transmitting antenna selection (ID#:14-3015)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6871477&isnumber=6871466
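A rough illustration of the two selection rules is given below: choose the pair of transmit antennas that maximizes either the minimum or the sum of per-antenna channel gains. The gain metric used here is an assumption made for illustration and may differ from the exact criterion analysed in the paper.
```python
# Max-min and max-sum transmit-antenna selection over the column gains of a
# channel matrix H (e.g., select 2 of Nt antennas to carry the Golden code).
import itertools
import numpy as np

def select_antennas(H, k=2, rule="max-min"):
    gains = np.linalg.norm(H, axis=0) ** 2          # gain per transmit antenna
    best, best_metric = None, -np.inf
    for subset in itertools.combinations(range(H.shape[1]), k):
        g = gains[list(subset)]
        metric = g.min() if rule == "max-min" else g.sum()
        if metric > best_metric:
            best, best_metric = subset, metric
    return best

# Rayleigh-fading example channel with 2 receive and 4 transmit antennas.
H = (np.random.randn(2, 4) + 1j * np.random.randn(2, 4)) / np.sqrt(2)
print(select_antennas(H, rule="max-min"), select_antennas(H, rule="max-sum"))
```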
- Su, Shen; Zhang, Hongli; Fang, Binxing; Ye, Lin, "Quantifying AS-level Routing Policy Changes," Communications (ICC), 2014 IEEE International Conference on, pp.1148,1153, 10-14 June 2014. doi: 10.1109/ICC.2014.6883476 To study the Internet's routing behavior at the granularity of Autonomous Systems (ASes), one needs to understand inter-domain routing policy. Routing policy changes over time and may cause route oscillation, network congestion, and other problems. However, there are few works on routing policy changes and their impact on BGP's routing behavior. In this paper, we model inter-domain routing policy as the preference given to neighboring ASes, denoted neighbor preference, and propose an algorithm for quantifying routing policy changes based on neighbor preference. As a further analysis, we study the routing policy changes for the year 2012 and find that, in general, an AS may experience a routing policy change for at least 20% of its prefixes within 6 months. An AS changes its routing policy mainly by exchanging the preference of two neighboring ASes. In most cases, an AS changes the routing policy of a stable fraction of its prefixes, but non-tier-1 ASes may endure large-scale routing policy change events. We also analyse the main reasons for routing policy changes, and exclude the possibilities of AS business relationship changes and topology changes.
Keywords: Business; Internet ; Monitoring; Quality of service; Reliability; Routing; Topology (ID#:14-3016)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883476&isnumber=6883277
- Anduo Wang; Gurney, AJ.T.; Xianglong Han; Jinyan Cao; Boon Thau Loo; Talcott, C.; Scedrov, A, "A Reduction-Based Approach Towards Scaling Up Formal Analysis Of Internet Configurations," INFOCOM, 2014 Proceedings IEEE, pp.637,645, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6847989 The Border Gateway Protocol (BGP) is the single inter-domain routing protocol that enables network operators within each autonomous system (AS) to influence routing decisions by independently setting local policies on route filtering and selection. This independence leads to fragile networking and makes analysis of policy configurations very complex. To aid the systematic and efficient study of the policy configuration space, this paper presents network reduction, a scalability technique for policy-based routing systems. In network reduction, we provide two types of reduction rules that transform policy configurations by merging duplicate and complementary router configurations to simplify analysis. We show that the reductions are sound, dual of each other and are locally complete. The reductions are also computationally attractive, requiring only local configuration information and modification. We have developed a prototype of network reduction and demonstrated that it is applicable on various BGP systems and enables significant savings in analysis time. In addition to making possible safety analysis on large networks that would otherwise not complete within reasonable time, network reduction is also a useful tool for discovering possible redundancies in BGP systems.
Keywords: Internet; routing protocols; AS; BGP systems; Internet configurations; autonomous system; border gateway protocol; formal analysis; network reduction; policy based routing systems; policy configurations; reduction based approach; route filtering; route selection; safety analysis; scalability technique; single interdomain routing protocol; Computers; Conferences; Merging; Protocols; Redundancy; Routing; Safety (ID#:14-3017)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847989&isnumber=6847911
- Kan Yang; Xiaohua Jia; Kui Ren; Ruitao Xie; Liusheng Huang, "Enabling Efficient Access Control With Dynamic Policy Updating For Big Data In The Cloud," INFOCOM, 2014 Proceedings IEEE, pp.2013,2021, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6848142 Due to the high volume and velocity of big data, storing big data in the cloud is an effective option, because the cloud has the capability to store big data and process high volumes of user access requests. Attribute-Based Encryption (ABE) is a promising technique to ensure the end-to-end security of big data in the cloud. However, policy updating has always been a challenging issue when ABE is used to construct access control schemes. A trivial implementation is to let data owners retrieve the data and re-encrypt it under the new access policy, and then send it back to the cloud. This method incurs a high communication overhead and a heavy computation burden on data owners. In this paper, we propose a novel scheme that enables efficient access control with dynamic policy updating for big data in the cloud. We focus on developing an outsourced policy updating method for ABE systems. Our method can avoid the transmission of encrypted data and minimize the computation work of data owners, by making use of the previously encrypted data with old access policies. Moreover, we also design policy updating algorithms for different types of access policies. The analyses show that our scheme is correct, complete, secure and efficient.
Keywords: Big Data; authorisation; cloud computing; cryptography; ABE; Big Data; access control; access policy; attribute-based encryption; cloud; dynamic policy updating; end-to-end security; outsourced policy updating method; Access control; Big data; Encryption; Public key; Servers; ABE; Access Control; Big Data; Cloud; Policy Updating (ID#:14-3018)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848142&isnumber=6847911
- Li, B.; Li, R.; Eryilmaz, A, "Throughput-Optimal Scheduling Design with Regular Service Guarantees in Wireless Networks," Networking, IEEE/ACM Transactions on, vol. PP, no.99, pp.1, 1, July 2014 doi: 10.1109/TNET.2014.2333008 Motivated by the regular service requirements of video applications for improving quality of experience (QoE) of users, we consider the design of scheduling strategies in multihop wireless networks that not only maximize system throughput but also provide regular interservice times for all links. Since the service regularity of links is related to the higher-order statistics of the arrival process and the policy operation, it is challenging to characterize and analyze directly. We overcome this obstacle by introducing a new quantity, namely the time-since-last-service (TSLS), which tracks the time since the last service. By combining it with the queue length in the weight, we propose a novel maximum-weight-type scheduling policy, called Regular Service Guarantee (RSG) Algorithm. The unique evolution of the TSLS counter poses significant challenges for the analysis of the RSG Algorithm. To tackle these challenges, we first propose a novel Lyapunov function to show the throughput optimality of the RSG Algorithm. Then, we prove that the RSG Algorithm can provide service regularity guarantees by using the Lyapunov-drift-based analysis of the steady-state behavior of the stochastic processes. In particular, our algorithm can achieve a degree of service regularity within a factor of a fundamental lower bound we derive. This factor is a function of the system statistics and design parameters and can be as low as two in some special networks. Our results, both analytical and numerical, exhibit significant service regularity improvements over the traditional throughput-optimal policies, which reveals the importance of incorporating the metric of time-since-last-service into the scheduling policy for providing regulated service.
Keywords: Algorithm design and analysis; Delays; Quality of service; Steady-state; Throughput; Vectors; Wireless networks; Quality of experience (QoE);real-time traffic; service regularity; throughput optimality; wireless scheduling (ID#:14-3019)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6851952&isnumber=4359146
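The policy's key idea, mixing queue length with a time-since-last-service counter in a max-weight rule, can be sketched for a single-channel toy setting as follows. The weight form, arrival model, and parameter beta are simplifications assumed for illustration, not the paper's exact RSG Algorithm or multihop model.
```python
# Toy single-channel scheduler: each slot, serve the link with the largest
# weight, where the weight combines queue length and time-since-last-service
# (TSLS). Larger beta trades throughput pressure for service regularity.
import random

def tsls_schedule(n_links=3, slots=20, beta=1.0):
    queue = [0] * n_links
    tsls = [0] * n_links
    for _ in range(slots):
        for i in range(n_links):                 # Bernoulli packet arrivals
            queue[i] += random.random() < 0.3
        served = max(range(n_links), key=lambda i: queue[i] + beta * tsls[i])
        if queue[served]:
            queue[served] -= 1                   # one packet served per slot
        for i in range(n_links):                 # update the TSLS counters
            tsls[i] = 0 if i == served else tsls[i] + 1
    return queue, tsls

print(tsls_schedule())
```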
- Xiaonong Lu; Baoqun Yin; Haipeng Zhang, "Switching-POMDP Based Admission Control Policies For Service Systems With Distributed Architecture," Networking, Sensing and Control (ICNSC), 2014 IEEE 11th International Conference on, pp.209,214, 7-9 April 2014. doi: 10.1109/ICNSC.2014.6819627 Many of today's network systems with a distributed structure, such as streaming media systems and resource-sharing systems, can be modeled as a distributed network service system with multiple service nodes. Admission control technology is an essential way to enhance such systems. A model-based optimization approach such as a Markov decision process (MDP) is a good way to analyze and compute the optimal admission control policy that maximizes system performance. However, due to the "curse of dimensionality", computing such an optimal admission control policy for practical distributed systems is a rather difficult task. Therefore, we describe the admission control process of the distributed network service system as a switching partially observable Markov decision process (SPOMDP) with a two-level structure. The upper level decides whether to switch the operation mode of the system, and the lower level decides whether to admit or block new service requests. Based on the partially observable Markov decision process (POMDP) model, a distributed admission control algorithm is presented in which the service nodes in the system make decisions without knowledge of the other service nodes. A randomized policy is applied to optimize system performance, and a policy-gradient iteration algorithm is used to compute the optimal admission control policy. Then, an operation mode switching mechanism is presented to detect changes in the system and determine the switch epoch of the operation mode. Through numerical experiments, we demonstrate the efficiency of the presented approach.
Keywords: Markov processes; distributed algorithms; optimal control; SPOMDP model; admission control process; admission control technology; dimensionality; distributed admission control algorithm; distributed architecture; distributed network service system; distributed structure today; model-based optimization; multiple service nodes; network systems; operation mode switching mechanism; optimal admission control policy; policy-gradient iteration algorithm; randomized policy; resource-sharing systems; service systems; streaming media systems; switch epoch; switching partially observable Markov decision process; switching-POMDP based admission control policies; Artificial neural networks; Gain; Switches; SPOMDP; distributed admission control algorithm; distributed network service system; operation mode; policy-gradient iteration; two-level structure (ID#:14-3020)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6819627&isnumber=6819588
- Huang Qinlong; Ma Zhaofeng; Yang Yixian; Niu Xinxin; Fu Jingyi, "Improving Security And Efficiency For Encrypted Data Sharing In Online Social Networks," Communications, China, vol.11, no.3, pp.104, 117, March 2014. doi: 10.1109/CC.2014.6825263 Although existing data sharing systems in online social networks (OSNs) propose to encrypt data before sharing, the multiparty access control of encrypted data has become a challenging issue. In this paper, we propose a secure data sharing scheme in OSNs based on ciphertext-policy attribute-based proxy re-encryption and secret sharing. In order to protect users' sensitive data, our scheme allows users to customize access policies of their data and then outsource encrypted data to the OSNs service provider. Our scheme presents a multiparty access control model, which enables the disseminator to update the access policy of the ciphertext if their attributes satisfy the existing access policy. Further, we present a partial decryption construction in which the computation overhead of the user is largely reduced by delegating most of the decryption operations to the OSNs service provider. We also provide checkability on the results returned from the OSNs service provider to guarantee the correctness of the partially decrypted ciphertext. Moreover, our scheme presents an efficient attribute revocation method that achieves both forward and backward secrecy. The security and performance analysis results indicate that the proposed scheme is secure and efficient in OSNs.
Keywords: authorisation; cryptography; social networking (online); attribute based proxy reencryption; ciphertext policy; data security; decryption operations; encrypted data sharing efficiency; multiparty access control model; online social networks; secret sharing; secure data sharing; Access control; Amplitude shift keying; Data sharing; Encryption; Social network services; attribute revocation; attribute-based encryption; data sharing; multiparty access control; online social networks (ID#:14-3021)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825263&isnumber=6825249
- Torrez Rojas, M.A; Takeo Ueda, E.; Melo de Brito Carvalho, T.C., "Modelling and Verification Of Security Rules In An OpenFlow Environment with Coloured Petri Nets," Information Systems and Technologies (CISTI), 2014 9th Iberian Conference on, pp.1,7, 18-21 June 2014. doi: 10.1109/CISTI.2014.6876890 The discussion of alternatives to the Internet architecture has been the subject of research for several years, resulting in a number of solutions and mechanisms that can enhance even the current approach. Within this context, the paradigm of Software Defined Networking (SDN) is becoming popular due to recent initiatives based on OpenFlow. This article presents an analysis of security policy rules applied in an environment based on OpenFlow. The analysis of the security policy rules is realized based on data obtained from a simulation of a scenario, modeled using Colored Petri Nets (CPN), and validated by the state space generated from the outputs of this model. The collected results are for a specific scenario. However, the approach is useful for analyzing several types of systems. Thus, this research demonstrates that it is feasible to employ CPN to model and validate security rules in an OpenFlow-based SDN.
Keywords: Petri nets; computer network security; protocols; CPN; Internet architecture; OpenFlow environment; OpenFlow-based SDN; coloured Petri nets; security policy rules analysis; security rule modelling; security rule validation; security rule verification; software defined networking; state space; Analytical models; Computational modeling; Data models; Internet; Petri nets; Security; Software; Coloured Petri Nets; OpenFlow; SDN; Validate security rules (ID#:14-3022)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876890&isnumber=6876860
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Provenance
Provenance refers to information about the origin and activities of system data and processes. With the growth of shared services and systems, including social media, cloud computing, and service-oriented architectures, finding tamperproof methods for tracking files is a major challenge. Research into the security of software of unknown provenance (SOUP) is also included. The works cited here were presented between January and August 2014.
- Rezvani, M.; Ignjatovic, A; Bertino, E.; Jha, S., "Provenance-aware Security Risk Analysis For Hosts And Network Flows," Network Operations and Management Symposium (NOMS), 2014 IEEE, vol., no., pp. 1, 8, 5-9 May 2014. doi: 10.1109/NOMS.2014.6838250 Detection of high risk network flows and high risk hosts is becoming ever more important and more challenging. In order to selectively apply deep packet inspection (DPI), one has to isolate in real time high risk network activities within a huge number of monitored network flows. To help address this problem, we propose an iterative methodology for a simultaneous assessment of risk scores for both hosts and network flows. The proposed approach measures the risk scores of hosts and flows in an interdependent manner; thus, the risk score of a flow influences the risk score of its source and destination hosts, and also the risk score of a host is evaluated by taking into account the risk scores of flows initiated by or terminated at the host. Our experimental results show that such an approach is not only effective in detecting high risk hosts and flows but, when deployed in high throughput networks, is also more efficient than PageRank-based algorithms.
Keywords: computer network security; risk analysis; deep packet inspection; high risk hosts; high risk network flows; provenance aware security risk analysis; risk score; Computational modeling; Educational institutions; Iterative methods; Monitoring; Ports (Computers); Risk management; Security (ID#:14-3023)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838250&isnumber=6838210
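The interdependent host/flow risk iteration described in the Rezvani et al. abstract can be pictured as a small fixed-point loop: a flow's score blends its own suspicion with the risk of its endpoint hosts, and a host's score aggregates the risk of the flows it participates in. The sketch below is a hedged illustration only; the blending weights, the simple averaging, and the initial per-flow suspicion scores are assumptions, not the authors' formulation.

```python
def iterate_risk(flows, initial_flow_risk, rounds=20):
    """flows: list of (src_host, dst_host) tuples.
    initial_flow_risk: per-flow base suspicion in [0, 1], e.g. from sampled DPI.
    Host and flow risk scores are updated in an interdependent manner:
    a flow inherits risk from its endpoints, and a host aggregates the risk
    of flows it initiates or terminates (illustrative formulation)."""
    hosts = {h for f in flows for h in f}
    host_risk = {h: 0.5 for h in hosts}          # neutral starting point
    flow_risk = list(initial_flow_risk)
    for _ in range(rounds):
        # flow risk: base suspicion blended with the risk of both endpoints
        flow_risk = [
            0.5 * base + 0.5 * (host_risk[s] + host_risk[d]) / 2
            for (s, d), base in zip(flows, initial_flow_risk)
        ]
        # host risk: average risk of the flows touching that host
        for h in hosts:
            touching = [r for (s, d), r in zip(flows, flow_risk) if h in (s, d)]
            host_risk[h] = sum(touching) / len(touching)
    return host_risk, flow_risk

# toy example: host "C" initiates two suspicious flows and ends up highest ranked
flows = [("A", "B"), ("C", "B"), ("C", "D")]
host_scores, flow_scores = iterate_risk(flows, [0.1, 0.9, 0.8])
print(host_scores)
```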
- Beserra Sousa, R.; Cintra Cugler, D.; Gonzales Malaverri, J.E.; Bauzer Medeiros, C., "A Provenance-Based Approach To Manage Long Term Preservation Of Scientific Data," Data Engineering Workshops (ICDEW), 2014 IEEE 30th International Conference on , vol., no., pp.162,133, March 31 2014-April 4 2014. doi: 10.1109/ICDEW.2014.6818316 Long term preservation of scientific data goes beyond the data, and extends to metadata preservation and curation. While several researchers emphasize curation processes, our work is geared towards assessing the quality of scientific (meta)data. The rationale behind this strategy is that scientific data are often accessible via metadata - and thus ensuring metadata quality is a means to provide long term accessibility. This paper discusses our quality assessment architecture, presenting a case study on animal sound recording metadata. Our case study is an example of the importance of periodically assessing (meta)data quality, since knowledge about the world may evolve, and quality decrease with time, hampering long term preservation.
Keywords: data handling; meta data; animal sound recording metadata; long term scientific data preservation management; metadata curation process; metadata preservation; metadata quality; provenance-based approach; quality assessment architecture; Animals; Biodiversity; Computer architecture; Data models; Measurement; Quality assessment; Software (ID#:14-3024)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6818316&isnumber=6816871
- Rodes, B.D.; Knight, J.C., "Speculative Software Modification and its Use in Securing SOUP," Dependable Computing Conference (EDCC), 2014 Tenth European, vol., no., pp.210,221, 13-16 May 2014. doi: 10.1109/EDCC.2014.29 Abstract: We present an engineering process model for generating software modifications that is designed to be used when either most or all development artifacts about the software, including the source code, are unavailable. This kind of software, commonly called Software Of Unknown Provenance (SOUP), raises many doubts about the existence and adequacy of desired dependability properties, for example security. These doubts motivate some users to apply modifications to enhance dependability properties of the software; however, without the necessary development artifacts, modifications are made in a state of uncertainty and risk. We investigate enhancing dependability through software modification in the presence of these risks as an engineering problem and introduce an engineering process for generating software modifications called Speculative Software Modification (SSM). We present the motivation and guiding principles of SSM, and a case study of SSM applied to protect software against buffer overflow attacks when only the binary is available.
Keywords: security of data; software reliability; source code (software);SOUP security; SSM; software dependability property; software development artifacts; software engineering process model; software of unknown provenance; source code; speculative software modification; Complexity theory; Hardware; Maintenance engineering; Measurement; Security; Software; Uncertainty; Assurance Case; Security; Software Modification; Software Of Unknown Provenance (SOUP) (ID#:14-3025)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821107&isnumber=6821069
- Dong Wang; Amin, M.T.; Shen Li; Abdelzaher, T.; Kaplan, L.; Siyu Gu; Chenji Pan; Liu, H.; Aggarwal, C.C.; Ganti, R.; Xinlei Wang; Mohapatra, P.; Szymanski, B.; Hieu Le, "Using Humans As Sensors: An Estimation-Theoretic Perspective," Information Processing in Sensor Networks, IPSN-14 Proceedings of the 13th International Symposium on , vol., no., pp.35,46, 15-17 April 2014. doi: 10.1109/IPSN.2014.6846739 The explosive growth in social network content suggests that the largest "sensor network" yet might be human. Extending the participatory sensing model, this paper explores the prospect of utilizing social networks as sensor networks, which gives rise to an interesting reliable sensing problem. In this problem, individuals are represented by sensors (data sources) who occasionally make observations about the physical world. These observations may be true or false, and hence are viewed as binary claims. The reliable sensing problem is to determine the correctness of reported observations. From a networked sensing standpoint, what makes this sensing problem formulation different is that, in the case of human participants, not only is the reliability of sources usually unknown but also the original data provenance may be uncertain. Individuals may report observations made by others as their own. The contribution of this paper lies in developing a model that considers the impact of such information sharing on the analytical foundations of reliable sensing, and embeds it into a tool called Apollo that uses Twitter as a "sensor network" for observing events in the physical world. Evaluation, using Twitter-based case-studies, shows good correspondence between observations deemed correct by Apollo and ground truth.
Keywords: Internet; estimation theory; sensors; social networking (online); Apollo; Twitter-based case-studies; estimation-theoretic perspective; humans; information sharing; largest sensor network; networked sensing standpoint; participatory sensing model; reliable sensing problem; sensing problem formulation; sensors; social network content; Computer network reliability; Maximum likelihood estimation; Reliability; Sensors; Silicon; Twitter; data reliability; expectation maximization; humans as sensors; maximum likelihood estimation; social sensing; uncertain data provenance (ID#:14-3026)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846739&isnumber=6846727
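The reliable-sensing problem in the Wang et al. abstract amounts to jointly estimating which binary claims are true and how reliable each source is. The following sketch shows a generic iterative truth-discovery loop in that spirit; it is not the Apollo tool's estimator, and the initial reliability value, the "probability that at least one supporter is reliable" belief rule, and the example data are assumptions made purely for illustration.

```python
import math

def truth_discovery(claims, rounds=10):
    """claims: dict mapping source -> set of claim ids that source reported.
    Alternately estimate claim correctness and source reliability, in the
    spirit of the expectation-maximization approach named in the abstract
    (a generic sketch, not the paper's estimator)."""
    all_claims = {c for cs in claims.values() for c in cs}
    reliability = {s: 0.8 for s in claims}      # initial guess for every source
    belief = {}
    for _ in range(rounds):
        # belief in a claim: probability that at least one supporter is reliable
        for c in all_claims:
            supporters = [reliability[s] for s, cs in claims.items() if c in cs]
            belief[c] = 1.0 - math.prod(1.0 - r for r in supporters)
        # a source's reliability: average belief of the claims it reported
        for s, cs in claims.items():
            reliability[s] = sum(belief[c] for c in cs) / len(cs) if cs else 0.5
    return belief, reliability

claims = {
    "alice": {"fire_on_main_st"},
    "bob": {"fire_on_main_st"},
    "carol": {"fire_on_main_st", "ufo_landing"},
    "dave": {"ufo_landing"},
}
belief, reliability = truth_discovery(claims)
print(belief)        # the more widely reported claim ends up with higher belief
print(reliability)
```

The information-sharing problem the paper emphasizes (a source repeating another source's observation as its own) is exactly what this naive loop cannot account for, which is why the authors extend the estimation model rather than using an off-the-shelf scheme like this one.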
- He, L.; Yue, P.; Di, L.; Zhang, M.; Hu, L., "Adding Geospatial Data Provenance into SDI--A Service-Oriented Approach," Selected Topics in Applied Earth Observations and Remote Sensing, IEEE Journal of, vol. PP, no.99, pp.1, 11, August 2014. doi: 10.1109/JSTARS.2014.2340737 Geospatial data provenance records the derivation history of a geospatial data product. It is important in evaluating the quality of data products. In a Geospatial Web Service environment where data are often disseminated and processed widely and frequently in an unpredictable way, it is even more important in identifying original data sources, tracing workflows, updating or reproducing scientific results, and evaluating reliability and quality of geospatial data products. Geospatial data provenance has become a fundamental issue in establishing the spatial data infrastructure (SDI). This paper investigates how to support provenance awareness in SDI. It addresses key issues including provenance modeling, capturing, and sharing in a SDI enabled by interoperable geospatial services. A reference architecture for provenance tracking is proposed, which can accommodate geospatial feature provenance at different levels of granularity. Open standards from ISO, World Wide Web Consortium (W3C), and OGC are leveraged to facilitate the interoperability. At the feature type level, this paper proposes extensions of W3C PROV-XML for ISO 19115 lineage and "Parent Level" provenance registration in the geospatial catalog service. At the feature instance level, light-weight lineage information entities for feature provenance are proposed and managed by Web Feature Services. Experiments demonstrate the applicability of the approach for creating provenance awareness in an interoperable geospatial service-oriented environment.
Keywords: Catalogs; Geospatial analysis; ISO standards; Interoperability; Remote sensing; Web services; Geoprocessing workflow; Geospatial Web Service; ISO 19115 lineage; World Wide Web Consortium (W3C) PROV; geospatial data provenance; spatial data infrastructure (ID#:14-3027)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6873222&isnumber=4609444
- Zerva, P.; Zschaler, S.; Miles, S., "A Provenance Model of Composite Services in Service-Oriented Environments," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.1, 12, 7-11 April 2014. doi: 10.1109/SOSE.2014.8 Provenance awareness adds a new dimension to the engineering of service-oriented systems, requiring them to be able to answer questions about the provenance of any data produced. This need is even more evident where atomic services are aggregated into added-value composite services to be delivered with certain non-functional characteristics. Prior work in the area of provenance for service-oriented systems has primarily focused on the collection and storage infrastructure required for answering provenance questions. In contrast, in this paper we study the structure of the data thus collected, considering the service's infrastructure as a whole, and how this affects provenance collection for answering different types of provenance questions. In particular, we define an extension of W3C's PROV ontological model with concepts that can be used to express the provenance of how services were discovered, selected, aggregated and executed. We demonstrate the conceptual adequacy of our model by reasoning over provenance instances for a composite service scenario.
Keywords: data structures; ontologies (artificial intelligence); service-oriented architecture; W3C PROV ontological model; added-value composite services; atomic services; collection infrastructure; conceptual adequacy; data structure; nonfunctional characteristics; provenance awareness; service-oriented environments; storage infrastructure; Data models; Informatics; Ontologies; Protocols; Servers; Service-oriented architecture; ontology; provenance model; service composition; service-oriented systems (ID#:14-3028)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825958&isnumber=6825948
- Imran, A; Nahar, N.; Sakib, K., "Watchword-oriented and Time-Stamped Algorithms For Tamper-Proof Cloud Provenance Cognition," Informatics, Electronics & Vision (ICIEV), 2014 International Conference on, vol., no., pp.1,6, 23-24 May 2014. doi: 10.1109/ICIEV.2014.6850747 Provenance is derivative journal information about the origin and activities of system data and processes. For a highly dynamic system like the cloud, provenance can be accurately detected and securely used in cloud digital forensic investigation activities. This paper proposes watchword oriented provenance cognition algorithm for the cloud environment. Additionally time-stamp based buffer verifying algorithm is proposed for securing the access to the detected cloud provenance. Performance analysis of the novel algorithms proposed here yields a desirable detection rate of 89.33% and miss rate of 8.66%. The securing algorithm successfully rejects 64% of malicious requests, yielding a cumulative frequency of 21.43 for MR.
Keywords: cloud computing; digital forensics; formal verification; software performance evaluation; cloud digital forensic investigation activities; cloud security; derivative journal information; detection rate; miss rate; performance analysis; system data; system processes; tamper-proof cloud provenance cognition; time-stamp based buffer verifying algorithm; watchword oriented provenance cognition algorithm; Cloud computing; Cognition; Encryption; Informatics; Software as a service; Cloud computing; cloud security; empirical evaluation; provenance detection (ID#:14-3029)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850747&isnumber=6850678
- Dong Dai; Yong Chen; Kimpe, D.; Ross, R., "Provenance-Based Prediction Scheme for Object Storage System in HPC," Cluster, Cloud and Grid Computing (CCGrid), 2014 14th IEEE/ACM International Symposium on, pp.550,551, 26-29 May 2014. doi: 10.1109/CCGrid.2014.27 The object-based storage model has recently been widely adopted in both industry and academia to support increasingly data-intensive applications in high-performance computing. However, the I/O prediction strategies that have proven effective in traditional parallel file systems have not been thoroughly studied under this new object-based storage model. Object storage introduces new challenges that prevent traditional prediction systems from working properly. In this paper, we propose a new I/O access prediction system based on provenance analysis of both applications and objects. We argue that provenance, which contains metadata describing the history of data, reveals detailed information about applications and data sets that can be used to capture the system status and provide accurate I/O prediction efficiently. Our current evaluations, based on simulation with real-world trace data (Darshan datasets), also confirm that the provenance-based prediction system is able to provide accurate predictions for object storage systems.
Keywords: meta data; object-oriented databases; parallel processing; storage management; Darshan datasets simulation; HPC; I/O access prediction system; I/O prediction strategy; data intensive application; high-performance computing; metadata; object storage system; object-based storage model; parallel file system; provenance analysis; provenance-based prediction scheme; provenance-based prediction system; real-world trace data; Accuracy; Algorithm design and analysis; Buildings; Clustering algorithms; Computer architecture; History; Prediction algorithms; I/O Prediction; Object Storage; Provenance (ID#:14-3030)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846497&isnumber=6846423
- De Souza, L.; Marcon Gomes Vaz, M.S.; Sfair Sunye, M., "Modular Development of Ontologies for Provenance in Detrending Time Series," Information Technology: New Generations (ITNG), 2014 11th International Conference on, vol., no., pp.567, 572, 7-9 April 2014. doi: 10.1109/ITNG.2014.106 Scientific knowledge, in many areas, is obtained from time series analysis, which is usually done in two phases, preprocessing and data analysis. Trend extraction (detrending) is one important step of the preprocessing phase, where detrending software using different statistical methods can be applied to the same time series to correct it. In this context, knowledge about the time series data is relevant for the researcher to choose the appropriate statistical methods. Also, knowledge about how and how often the time series were corrected is essential for the choice of detrending methods that can be applied to obtain better results. This knowledge is not always explicit and easy to interpret. Provenance using Web Ontology Language (OWL) ontologies helps the researcher gain knowledge about the data and the processes executed. Provenance information allows knowing how data were detrended, improving decision making and contributing to the generation of scientific knowledge. The main contribution of this paper is presenting the modular development of ontologies combined with the Open Provenance Model (OPM), which is extended to facilitate the understanding of how detrending processes were executed on time series data, semantically enriching the preprocessing phase of time series analysis.
Keywords: data analysis; decision making; knowledge representation languages; ontologies (artificial intelligence); time series; OPM; OWL ontologies; Web Ontology Language; decision making; detrending software; open provenance model; preprocessing phase; provenance information; scientific knowledge; scientific knowledge generation; time series data analysis; trend extraction; Analytical models; Market research; OWL; Ontologies; Semantics; Statistical analysis; Time series analysis; OWL; modules; provenance model; time series analysis; trend extraction (ID#:14-3031)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822257&isnumber=6822158
- Hamadache, K.; Zerva, P., "Provenance of Feedback in Cloud Services," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, vol., no., pp.23,34, 7-11 April 2014. doi: 10.1109/SOSE.2014.10 With the fast adoption of Services Computing, driven even more by the emergence of the Cloud, the need to ensure accountability for quality of service (QoS) for service-based systems/services has reached a critical level. This need has triggered numerous research efforts in the fields of trust, reputation and provenance. Most of the research on trust and reputation has focused on their evaluation or computation. In the case of provenance, researchers have tried to track down how the service has processed and produced data during its execution. While some of them have investigated credibility models and mechanisms, only a few have looked into the way reputation information is produced. In this paper we propose an innovative design for the evaluation of feedback authenticity and credibility by considering the feedback's provenance. This innovative consideration brings up a new level of security and trust in Services Computing, by fighting against malicious feedback and reducing the impact of irrelevant ones.
Keywords: cloud computing; trusted computing; QoS; cloud services; credibility models; feedback authenticity; feedback credibility; feedback provenance innovative design; malicious feedback; quality of service; reputation information; security; service-based systems/services; services computing; trust; Context; Hospitals; Monitoring; Ontologies; Quality of service; Reliability; Schedules; cloud computing; credibility; feedback; provenance; reputation (ID#:14-3032)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825960&isnumber=6825948
- Dong Wang; Al Amin, M.T.; Abdelzaher, T.; Roth, D.; Voss, C.R.; Kaplan, L.M.; Tratz, S.; Laoudi, J.; Briesch, D., "Provenance-Assisted Classification in Social Networks," Selected Topics in Signal Processing, IEEE Journal of, vol.8, no.4, pp.624,637, Aug. 2014. doi: 10.1109/JSTSP.2014.2311586 Signal feature extraction and classification are two common tasks in the signal processing literature. This paper investigates the use of source identities as a common mechanism for enhancing the classification accuracy of social signals. We define social signals as outputs, such as microblog entries, geotags, or uploaded images, contributed by users in a social network. Many classification tasks can be defined on such outputs. For example, one may want to identify the dialect of a microblog contributed by an author, or classify information referred to in a user's tweet as true or false. While the design of such classifiers is application-specific, social signals share in common one key property: they are augmented by the explicit identity of the source. This motivates investigating whether or not knowing the source of each signal (in addition to exploiting signal features) allows the classification accuracy to be improved. We call it provenance-assisted classification. This paper answers the above question affirmatively, demonstrating how source identities can improve classification accuracy, and derives confidence bounds to quantify the accuracy of results. Evaluation is performed in two real-world contexts: (i) fact-finding that classifies microblog entries into true and false, and (ii) language classification of tweets issued by a set of possibly multi-lingual speakers. We also carry out extensive simulation experiments to further evaluate the performance of the proposed classification scheme over different problem dimensions. The results show that provenance features significantly improve classification accuracy of social signals, even when no information is known about the sources (besides their ID). This observation offers a general mechanism for enhancing classification results in social networks.
Keywords: computational linguistics; feature extraction; maximum likelihood estimation; pattern classification; social networking (online); application-specific classifiers; maximum likelihood estimation; microblog; multilingual speakers; provenance-assisted classification; signal classification; signal feature extraction; signal processing; social network; social signals; tweet language classification; Accuracy; Equations; Mathematical model; Maximum likelihood estimation; Signal processing algorithms; Social network services; Social signals; classification; expectation maximization; maximum likelihood estimation; signal feature extraction; uncertain provenance (ID#:14-3033)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6766747&isnumber=6856242
- Gray, AJ.G., "Dataset Descriptions for Linked Data Systems," Internet Computing, IEEE , vol.18, no.4, pp.66,69, July-Aug. 2014. doi: 10.1109/MIC.2014.66 Linked data systems rely on the quality of, and linking between, their data sources. However, existing data is difficult to trace to its origin and provides no provenance for links. This article discusses the need for self-describing linked data.
Keywords: data handling; data sources quality; dataset descriptions; linked data systems; self-describing linked data; Data systems; Electronic mail; Facsimile; Heating; Resource description framework; Vocabulary; data publishing; dataset descriptions; linked data; provenance (ID#:14-3034)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841557&isnumber=6841503
- Jain, R.; Prabhakar, S., "Guaranteed Authenticity And Integrity Of Data From Untrusted Servers," Data Engineering (ICDE), 2014 IEEE 30th International Conference on, vol., no., pp.1282, 1285, March 31 2014-April 4 2014. doi: 10.1109/ICDE.2014.6816761 Data are often stored at untrusted database servers. The lack of trust arises naturally when the database server is owned by a third party, as in the case of cloud computing. It also arises if the server may have been compromised, or there is a malicious insider. Ensuring the trustworthiness of data retrieved from such untrusted databases is of utmost importance. Trustworthiness of data is defined by faithful execution of valid and authorized transactions on the initial data. Earlier work on this problem is limited to cases where data are either not updated, or data are updated by a single trustworthy entity. However, for a truly dynamic database, multiple clients should be allowed to update data without having to route the updates through a central server. In this demonstration, we present a system to establish authenticity and integrity of data in a dynamic database where the clients can run transactions directly on the database server. Our system provides provable authenticity and integrity of data with absolutely no requirement for the server to be trustworthy. Our system also provides assured provenance of data. This demonstration is built using the solutions proposed in our previous work. Our system is built on top of Oracle with no modifications to the database internals. We show that the system can be easily adopted in existing databases without any internal changes to the database. We also demonstrate how our system can provide authentic provenance.
Keywords: data integrity; database management systems; trusted computing; Oracle; cloud computing; data authenticity; data integrity; data provenance; data transactions; data trustworthiness; database internals; database servers; dynamic database; malicious insider; trustworthy entity; Cloud computing; Hardware; Indexes; Protocols; Servers (ID#:14-3035)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816761&isnumber=6816620
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Web Browsers
Web browsers are vulnerable to a range of threats. The challenge of securing browsers against them is the subject of these research efforts. The works cited here were presented between January and August of 2014.
- Abgrall, E.; Le Traon, Y.; Gombault, S.; Monperrus, M., "Empirical Investigation of the Web Browser Attack Surface under Cross-Site Scripting: An Urgent Need for Systematic Security Regression Testing," Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on, pp.34,41, March 31 2014-April 4 2014. doi: 10.1109/ICSTW.2014.63 One of the major threats against web applications is Cross-Site Scripting (XSS). The final target of XSS attacks is the client running a particular web browser. During this last decade, several competing web browsers (IE, Netscape, Chrome, Firefox) have evolved to support new features. In this paper, we explore whether the evolution of web browsers is done using systematic security regression testing. Beginning with an analysis of their current exposure degree to XSS, we extend the empirical study to a decade of the most popular web browser versions. We use XSS attack vectors as unit test cases and we propose a new method supported by a tool to address this XSS vector testing issue. The analysis of a decade of releases of the most popular web browsers, including mobile ones, shows an urgent need for XSS regression testing. We advocate the use of a shared security testing benchmark as a good practice and propose a first set of publicly available XSS vectors as a basis to ensure that security is not sacrificed when a new version is delivered.
Keywords: online front-ends; regression analysis; security of data; Web applications; Web browser attack surface; XSS vector testing; cross-site scripting; systematic security regression testing; Browsers; HTML; Mobile communication; Payloads; Security; Testing; Vectors; XSS; browser; regression; security; testing; web (ID#:14-3036)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825636&isnumber=6825623
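Abgrall et al. advocate using XSS attack vectors as unit test cases. A minimal regression harness in that spirit might replay a list of payloads through the rendering path and assert that none survives; the toy sanitizer, the vector list, and the neutralization check below are placeholders rather than the shared benchmark the paper proposes.

```python
import html
import unittest

# A few classic reflected-XSS payloads used as regression-test vectors;
# a real shared benchmark, as advocated in the paper, would contain hundreds.
XSS_VECTORS = [
    "<script>alert(1)</script>",
    "<img src=x onerror=alert(1)>",
    '"><svg onload=alert(1)>',
    '<a href="javascript:alert(1)">click</a>',
]

def render_comment(user_input: str) -> str:
    """Toy 'application under test': escapes user input before echoing it."""
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

def is_neutralized(rendered: str) -> bool:
    """Crude check: the user-controlled part must contain no raw '<' or '"'
    that could open a new tag or break out of an attribute."""
    inner = rendered[len("<p>"):-len("</p>")]
    return "<" not in inner and '"' not in inner

class XSSRegressionTest(unittest.TestCase):
    def test_vectors_are_neutralized(self):
        for vector in XSS_VECTORS:
            with self.subTest(vector=vector):
                self.assertTrue(is_neutralized(render_comment(vector)),
                                f"payload survived rendering: {vector!r}")

if __name__ == "__main__":
    unittest.main()
```

Keeping such vectors in a shared, versioned test suite is what turns them into the regression benchmark the authors call for: each browser or filter release is simply re-run against the same corpus.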
- Xin Wu, "Secure Browser Architecture Based on Hardware Virtualization," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp.489, 495, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779009 Ensuring that the entire code base of a browser deals with the security concerns of integrity and confidentiality is a daunting task. The basic method is to split it into different components and place each of them in its own protection domain. OS processes are the prevalent isolation mechanism to implement the protection domain, which results in expensive context-switching overheads produced by Inter-Process Communication (IPC). Besides, the dependences of multiple web instance processes on a single set of privileged ones reduce the overall concurrency. In this paper, we present a secure browser architecture design based on processor virtualization technique. First, we divide the browser code base into privileged components and constrained components which consist of distrusted web page renderer components and plugins. All constrained components are in the form of shared object (SO) libraries. Second, we create an isolated execution environment for each distrusted shared object library using the hardware virtualization support available in modern Intel and AMD processors. Different from current research, we design a custom kernel module to gain the hardware virtualization capabilities. Third, to enhance the overall security of the browser, we implement a validation mechanism to check the OS resource accesses from the distrusted web page renderer to the privileged components. Our validation rules are similar to Google Chrome's. By utilizing VMENTER and VMEXIT, which are both CPU instructions, our approach can substantially improve system performance.
Keywords: microprocessor chips; online front-ends; operating systems (computers); security of data; software libraries; virtualisation; AMD processors; CPU instructions; Google chrome; IPC; Intel processors; OS processes; OS resource checking; SO libraries; VMENTER; VMEXIT; browser security; context-switching overheads; distrusted Web page renderer components; distrusted shared object library; hardware virtualization capabilities; interprocess communication; isolated execution environment; isolation mechanism; multiple Web instance processes; processor virtualization technique; secure browser architecture design; validation mechanism; Browsers; Google; Hardware; Monitoring; Security; Virtualization; Web pages; Browser security; Component isolation; Hardware virtualization; System call interposition (ID#:14-3037)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779009&isnumber=6778899
- Wadkar, H.; Mishra, A; Dixit, A, "Prevention of Information Leakages In A Web Browser By Monitoring System Calls," Advance Computing Conference (IACC), 2014 IEEE International, pp.199,204, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779320 The web browser has become one of the most accessed processes/applications in recent years. The latest website security statistics report that about 30% of vulnerability attacks happen due to information leakage by the browser application and its use by hackers to exploit an individual's privacy. This leaked information is one of the main sources for hackers to attack an individual's PC or to make the PC part of a botnet. A software controller is proposed to track system calls invoked by the browser process. The designed prototype deals with the system calls that perform operations related to reading, writing, and accessing personal and/or system information. The objective of the controller is to confine the leakage of information by a browser process.
Keywords: Web sites; online front-ends; security of data; Web browser application; Web site security statistics report; botnet; browser process; monitoring system calls; software controller; system information leakages; track system calls; vulnerability attacks; Browsers; Computer hacking; Monitoring; Privacy; Process control; Software; browser security; confinement; information leakage (ID#:14-3038)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779320&isnumber=6779283
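The software controller proposed by Wadkar, Mishra, and Dixit tracks system calls invoked by the browser process. On Linux, a rough approximation of that monitoring (reporting rather than confining) can be built around strace; the sensitive-path list and the alerting logic below are illustrative assumptions, and attaching to a running process requires ptrace permission.

```python
import subprocess
import sys

# Illustrative watch list; a real controller would derive this from policy.
SENSITIVE_PATHS = ("/etc/passwd", "/home", ".ssh", "cookies.sqlite")

def monitor_browser(pid: int):
    """Attach strace to a running browser process and flag file-related
    system calls that touch sensitive paths. A real controller, as in the
    paper, would confine or block the call; here we only report it."""
    cmd = ["strace", "-f", "-e", "trace=openat,read,write", "-p", str(pid)]
    proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, text=True)
    try:
        for line in proc.stderr:                 # strace writes to stderr
            if any(path in line for path in SENSITIVE_PATHS):
                print("ALERT:", line.strip())
    except KeyboardInterrupt:
        proc.terminate()

if __name__ == "__main__":
    monitor_browser(int(sys.argv[1]))            # usage: script.py <browser-pid>
```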
- Shamsi, J.A; Hameed, S.; Rahman, W.; Zuberi, F.; Altaf, K.; Amjad, A, "Clicksafe: Providing Security Against Clickjacking Attacks," High-Assurance Systems Engineering (HASE), 2014 IEEE 15th International Symposium on, pp.206,210, 9-11 Jan. 2014. doi: 10.1109/HASE.2014.36 Clickjacking is an act of hijacking user clicks in order to perform undesired actions which are beneficial for the attacker. We propose Clicksafe, a browser-based tool to provide increased security and reliability against clickjacking attacks. Clicksafe is based on three major components. The detection unit detects malicious components in a web page that redirect users to external links. The mitigation unit intercepts user clicks and gives educated warnings to users, who can then choose whether to continue. Clicksafe also incorporates a feedback unit which records the user's actions, converts them into ratings, and allows future interactions to be more informed. Clicksafe stands apart from other similar tools in that its detection and mitigation are based on a comprehensive framework which combines detection of malicious web components with user feedback. We explain the mechanism of Clicksafe, describe its performance, and highlight its potential in providing safety against clickjacking to a large number of users.
Keywords: Internet; online front-ends; security of data; Clicksafe; Web page; browser-based tool; click safe; clickjacking attacks; detection unit; feedback unit; malicious Web component detection; mitigation unit; Browsers; Communities; Computers; Context; Loading; Safety; Security; Browser Security; Clickjacking; Safety; Security; Soft assurance of safe browsing (ID#:14-3039)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6754607&isnumber=6754569
- Mohammad, R.M.; Thabtah, F.; McCluskey, L., "Intelligent Rule-Based Phishing Websites Classification," Information Security, IET, vol.8, no.3, pp.153,160, May 2014. doi: 10.1049/iet-ifs.2013.0202 Phishing is described as the art of echoing a website of a creditable firm intending to grab user's private information such as usernames, passwords and social security number. Phishing websites comprise a variety of cues within its content-parts as well as the browser-based security indicators provided along with the website. Several solutions have been proposed to tackle phishing. Nevertheless, there is no single magic bullet that can solve this threat radically. One of the promising techniques that can be employed in predicting phishing attacks is based on data mining, particularly the `induction of classification rules' since anti-phishing solutions aim to predict the website class accurately and that exactly matches the data mining classification technique goals. In this study, the authors shed light on the important features that distinguish phishing websites from legitimate ones and assess how good rule-based data mining classification techniques are in predicting phishing websites and which classification technique is proven to be more reliable.
Keywords: Web sites; data mining; data privacy; pattern classification; security of data; unsolicited e-mail; Web site echoing; Website class; antiphishing solutions; browser-based security indicators; creditable firm; intelligent rule-based phishing Web site classification; phishing attack prediction; rule-based data mining classification techniques; social security number; user private information (ID#:14-3040)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786863&isnumber=6786849
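Rule-based phishing classification of the kind studied by Mohammad, Thabtah, and McCluskey works from content and URL cues. The sketch below checks a handful of commonly cited URL features and applies a toy threshold rule; the specific features, the threshold, and the example URLs are illustrative assumptions, not the rules induced in the cited study.

```python
import re
from urllib.parse import urlparse

def extract_features(url: str) -> dict:
    """Compute a few commonly cited URL-based phishing cues. Thresholds and
    feature choice are illustrative, not the exact induced rule set."""
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "uses_ip_address": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}(:\d+)?", host)),
        "has_at_symbol": "@" in url,
        "long_url": len(url) > 75,
        "many_subdomains": host.count(".") > 3,
        "https_token_in_host": "https" in host.lower(),
    }

def classify(url: str) -> str:
    """Toy rule: flag as phishing if two or more suspicious cues fire."""
    score = sum(extract_features(url).values())
    return "phishing" if score >= 2 else "legitimate"

print(classify("http://192.168.10.5/paypal.com/login@verify"))   # phishing
print(classify("https://www.example.org/account"))               # legitimate
```

Induction of classification rules, as described in the abstract, would learn which of these cues (and their cut-offs) actually separate phishing from legitimate sites rather than hand-picking them as done here.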
- Byungho Min; Varadharajan, V., "A New Technique for Counteracting Web Browser Exploits," Software Engineering Conference (ASWEC), 2014 23rd Australian, pp.132, 141, 7-10 April 2014. doi: 10.1109/ASWEC.2014.28 Over the last few years, exploit kits have been increasingly used for system compromise and malware propagation. As they target the web browser which is one of the most commonly used software in the Internet era, exploit kits have become a major concern of security community. In this paper, we propose a proactive approach to protecting vulnerable systems from this prevalent cyber threat. Our technique intercepts communications between the web browser and web pages, and proactively blocks the execution of exploit kits using version information of web browser plugins. Our system, AFFAF, is a zero-configuration solution, and hence users do not need to do anything but just simply install it. Also, it is an easy-to-employ methodology from the perspective of plugin developers. We have implemented a lightweight prototype, which has demonstrated that AFFAF protected vulnerable systems can counteract 50 real-world and one locally deployed exploit kit URLs. Tested exploit kits include popular and well-maintained ones such as Blackhole 2.0, Redkit, Sakura, Cool and Bleeding Life 2. We have also shown that the false positive rate of AFFAF is virtually zero, and it is robust enough to be effective against real web browser plugin scanners.
Keywords: Internet; invasive software; online front-ends; AFFAF protected vulnerable systems; Internet; Web browser exploits; Web browser plugin scanners; Web pages; cyber threat; exploit kit URL; lightweight prototype; malware propagation; security community; system compromise; version information; zero-configuration solution; Browsers; Java; Malware; Prototypes; Software; Web sites; Defensive Techniques; Exploit Kits; Security Attacks (ID#:14-3041)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6824118&isnumber=6824087
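AFFAF, as described by Min and Varadharajan, blocks exploit-kit execution using version information of web browser plugins. A version-threshold check captures the core of that idea; the plugin names and "known-patched" version numbers below are hypothetical placeholders rather than a real vulnerability database, and the comparison rule is a deliberately simplified stand-in for the system's policy.

```python
# Hypothetical thresholds: versions strictly below these are treated as
# known exploit-kit targets. A real deployment would pull this from a
# maintained vulnerability feed.
VULNERABLE_BELOW = {
    "java": (7, 0, 51),
    "flash-player": (12, 0, 0),
    "adobe-reader": (11, 0, 6),
}

def parse_version(version: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of integers."""
    return tuple(int(part) for part in version.split("."))

def should_block(plugin: str, installed_version: str) -> bool:
    """Block page interaction with a plugin whose installed version is below
    the known-patched threshold (the version-based policy sketched above)."""
    threshold = VULNERABLE_BELOW.get(plugin.lower())
    if threshold is None:
        return False                 # unknown plugin: allow (or warn)
    return parse_version(installed_version) < threshold

print(should_block("Java", "7.0.45"))           # True: typical exploit-kit target
print(should_block("Flash-Player", "13.0.0"))   # False: above the threshold
```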
- Mewara, Bhawna; Bairwa, Sheetal; Gajrani, Jyoti, "Browser's Defenses Against Reflected Cross-Site Scripting Attacks," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, vol., no., pp.662,667, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884928 Due to the frequent use of online web applications for various day-to-day activities, web applications are becoming a most suitable target for attackers. Cross-Site Scripting, also known as an XSS attack, is one of the most prominent defacing web-based attacks; it can lead to compromise of the whole browser rather than just the actual web application from which the attack originated. Securing web applications using server-side solutions is not always effective, as developers are not necessarily security aware. Therefore, browser vendors have tried to evolve client-side filters to defend against these attacks. This paper shows that even the foremost prevailing XSS filters deployed by the latest versions of the most widely used web browsers do not provide appropriate defense. We evaluate three browsers - Internet Explorer 11, Google Chrome 32, and Mozilla Firefox 27 - for reflected XSS attacks against different types of vulnerabilities. We find that none of the above is completely able to defend against all possible types of reflected XSS vulnerabilities. Further, we evaluate Firefox after installing an add-on named XSS-Me, which is widely used for testing reflected XSS vulnerabilities. Experimental results show that this client-side solution can shield against a greater percentage of vulnerabilities than the other browsers. It is witnessed to be more propitious if this add-on is integrated inside the browser instead of being enforced as an extension.
Keywords: JavaScript; Reflected XSS; XSS-Me; attacker; bypass; exploit; filter (ID#:14-3042)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884928&isnumber=6884878
- Manatunga, D.; Lee, J.H.; Kim, H., "Hardware Support for Safe Execution of Native Client Applications," Computer Architecture Letters, vol. PP, no.99, pp.1, 1, March 2014. doi: 10.1109/LCA.2014.2309601 Over the past few years, there has been vast growth in the area of the web browser as an applications platform. One example of this trend is Google's Native Client (NaCl) platform, which is a software-fault isolation mechanism that allows the running of native x86 or ARM code on the browser. One of the security mechanisms employed by NaCl is that all branches must jump to the start of a valid instruction. In order to meet this criterion, though, all return instructions are replaced by a specific branch instruction sequence, which we call NaCl returns, that are guaranteed to return to a valid instruction. However, these NaCl returns lose the advantage of the highly accurate return-address stack (RAS) in exchange for the less accurate indirect branch predictor. In this paper, we propose a NaCl-RAS mechanism that can identify NaCl returns and predict them accurately 76.9% of the time on average, compared to 39.5% for a traditional BTB predictor.
Keywords: Accuracy; Benchmark testing; Detectors; Google; Hardware; security; Software (ID#:14-3043)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6766786&isnumber=4357966
- Sayed, B.; Traore, I, "Protection against Web 2.0 Client-Side Web Attacks Using Information Flow Control," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp.261, 268, 13-16 May 2014. doi: 10.1109/WAINA.2014.52 The dynamic nature of the Web 2.0 and the heavy obfuscation of web-based attacks complicate the job of the traditional protection systems such as Firewalls, Anti-virus solutions, and IDS systems. It has been witnessed that using ready-made toolkits, cyber-criminals can launch sophisticated attacks such as cross-site scripting (XSS), cross-site request forgery (CSRF) and botnets to name a few. In recent years, cyber-criminals have targeted legitimate websites and social networks to inject malicious scripts that compromise the security of the visitors of such websites. This involves performing actions using the victim browser without his/her permission. This poses the need to develop effective mechanisms for protecting against Web 2.0 attacks that mainly target the end-user. In this paper, we address the above challenges from information flow control perspective by developing a framework that restricts the flow of information on the client-side to legitimate channels. The proposed model tracks sensitive information flow and prevents information leakage from happening. The proposed model when applied to the context of client-side web-based attacks is expected to provide a more secure browsing environment for the end-user.
Keywords: Internet; computer crime; data protection; invasive software; IDS systems; Web 2.0 client-side Web attacks; antivirus solutions; botnets; cross-site request forgery; cross-site scripting; cyber-criminals; firewalls; information flow control; information leakage; legitimate Web sites; malicious script injection; protection systems; secure browsing environment; social networks; Browsers; Feature extraction; Security; Semantics; Servers; Web 2.0; Web pages; AJAX; Client-side web attacks; Information Flow Control; Web 2.0 (ID#:14-3044)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844648&isnumber=6844560
- Khobragade, P.K.; Malik, L.G., "Data Generation and Analysis for Digital Forensic Application Using Data Mining," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, pp.458,462, 7-9 April 2014. doi: 10.1109/CSNT.2014.97 Cyber crime produces huge amounts of log and transactional data, which leads to plenty of data to store and analyze. It is difficult for forensic investigators to spend the considerable time needed to find clues and analyze those data. Network forensic analysis involves network traces and the detection of attacks. The traces involve Intrusion Detection System and firewall logs, logs generated by network services and applications, and packet captures by sniffers. In the network, lots of data is generated by every event, so it is difficult for forensic investigators to find clues and analyze those data. Network forensics deals with the monitoring, capturing, recording, and analysis of network traffic for detecting intrusions and investigating them. This paper focuses on data collection from the cyber system and the web browser. FTK 4.0 is discussed for memory forensic analysis and remote system forensics, which are to be used as evidence for aiding investigation.
Keywords: computer crime; data analysis; data mining; digital forensics; firewalls; storage management; FTK 4.0;Web browser; cyber crime huge log data; cyber system; data analysis; data collection; data generation; data mining; data storage; digital forensic application; firewall logs; intrusion detection system; memory forensic analysis; network attack detection; network forensic analysis; network traces; network traffic; packet captures; remote system forensic; transactional data; Computers; Data mining; Data visualization; Databases; Digital forensics; Security; Clustering; Data Collection; Digital forensic tool; Log Data collection (ID#:14-3045)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821438&isnumber=6821334
- Hubbard, J.; Weimer, K.; Yu Chen, "A Study Of SSL Proxy Attacks On Android And iOS Mobile Applications," Consumer Communications and Networking Conference (CCNC), 2014 IEEE 11th, pp.86,91, 10-13 Jan. 2014. doi: 10.1109/CCNC.2014.6866553 According to recent articles on popular technology websites, some mobile applications function in an insecure manner when presented with untrusted SSL certificates. These non-browser-based applications seem to, in the absence of a standard way of alerting a user of an SSL error, accept any certificate presented to them. This paper intends to research these claims and show whether or not an invisible proxy-based SSL attack can indeed steal a user's credentials from mobile applications, and which types of applications are most likely to be vulnerable to this attack vector. To ensure coverage of the most popular platforms, applications on both Android 4.2 and iOS 6 are tested. The results of our study showed that stealing credentials is indeed possible using invisible proxy man-in-the-middle attacks.
Keywords: Android (operating system); iOS (operating system); mobile computing; security of data; Android 4.2; SSL error; SSL proxy attacks; attack vector; iOS 6; iOS mobile applications; invisible proxy man-in-the-middle attacks; untrusted SSL certificates; user credentials; Androids; Humanoid robots; Mobile communication; Security; Servers; Smart phones; Android; Man-in-the-middle; Mobile Devices; Proxy; SSL; Security; TLS; iOS (ID#:14-3046)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866553&isnumber=6866537
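The vulnerability studied by Hubbard, Weimer, and Chen comes from clients that accept any certificate, which lets an invisible proxy terminate TLS with its own certificate. The Python sketch below contrasts a verifying TLS client with a deliberately unverified one to show the failure mode; it is a demonstration of the general mechanism, not code from the study, and the host name is just an example.

```python
import socket
import ssl

def fetch_certificate_subject(host: str, verify: bool = True) -> str:
    """Open a TLS connection and return the peer certificate subject.
    With verify=False the client behaves like the vulnerable mobile apps in
    the study: it accepts whatever certificate an interposed proxy presents."""
    ctx = ssl.create_default_context()           # verifies chain and hostname
    if not verify:
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE          # insecure: accept any certificate
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # getpeercert() returns {} when the certificate was not validated
            cert = tls.getpeercert()
            return str(cert.get("subject", "unvalidated certificate"))

if __name__ == "__main__":
    # Behind a man-in-the-middle proxy using a self-signed certificate, the
    # verify=True call raises ssl.SSLCertVerificationError, while verify=False
    # silently succeeds and would expose any credentials the client sends.
    print(fetch_certificate_subject("www.example.com", verify=True))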
- Nikiforakis, N.; Acar, G.; Saelinger, D., "Browse at Your Own Risk," Spectrum, IEEE, vol.51, no.8, pp.30, 35, August 2014. doi: 10.1109/MSPEC.2014.6866435 The paper states that even without cookies, fingerprinting lets advertisers track your every online move. In the past, clearing cookies after each session or selecting your browser's "Do Not Track" setting could prevent third-party tracking. But the advent of browser fingerprinting makes it very difficult to prevent others from monitoring your online activities. The diagram at right outlines how an online advertising network can track the sites you visit using fingerprinting.
Keywords: advertising data processing; online front-ends; security of data; browser fingerprinting; cookies; online advertising network; third-party tracking; Access control; Authentication; Browsers; Fingerprint recognition; Internet; Privacy (ID#:14-3047)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866435&isnumber=6866354
- Kishore, K.R.; Mallesh, M.; Jyostna, G.; Eswari, P.R.L.; Sarma, S.S., "Browser JS Guard: Detects and Defends Against Malicious Javascript Injection Based Drive By Download Attacks," Applications of Digital Information and Web Technologies (ICADIWT), 2014 Fifth International Conference on the, pp.92, 100, 17-19 Feb. 2014. doi: 10.1109/ICADIWT.2014.6814705 In recent times, most of the systems connected to the Internet are getting infected with malware, and some of these systems are becoming zombies for the attacker. When a user knowingly or unknowingly visits a malware website, his system gets infected. Attackers do this by exploiting the vulnerabilities in the web browser and acquiring control over the underlying operating system. Once the attacker compromises the user's web browser, he can instruct the browser to visit the attacker's website by using a number of redirections. During the process, the user's web browser downloads the malware without the intervention of the user. Once the malware is downloaded, it is placed in the file system and responds as per the instructions of the attacker. These types of attacks are known as Drive by Download attacks. Nowadays, Drive by Download is the major channel for delivering malware. In this paper, Browser JS Guard, an extension to the browser, is presented for detecting and defending against Drive by Download attacks via HTML tags and JavaScript.
Keywords: Java; Web sites; authoring languages; invasive software; online front-ends; operating systems (computers); security of data; HTML tags; Internet; browser JS guard; download attacks; drive by download attacks; file system; malicious JavaScript injection; malware Web site; operating system; user Web browser; Browsers; HTML; Malware; Monitoring; Web pages; Web servers; DOM Change Methods; Drive by Download Attacks; HTML tags; JavaScript Functions; Malware; Web Browser; Web Browser Extensions (ID#:14-3048)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814705&isnumber=6814661
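Browser JS Guard, per the abstract above, inspects HTML tags and JavaScript to detect drive-by-download behavior. The standalone scanner below flags two classic telltales, hidden zero-sized iframes and obfuscated inline scripts; the heuristics and markers are illustrative assumptions and not the extension's actual detection rules.

```python
from html.parser import HTMLParser

# Markers frequently seen in obfuscated injected scripts (illustrative list).
SUSPICIOUS_JS = ("document.write(unescape", "eval(", "fromCharCode", "unescape(")

class DriveByScanner(HTMLParser):
    """Flags HTML constructs often associated with drive-by-download pages:
    hidden or zero-sized iframes and heavily obfuscated inline scripts."""
    def __init__(self):
        super().__init__()
        self.findings = []
        self._in_script = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "iframe":
            width = attrs.get("width") or ""
            style = (attrs.get("style") or "").replace(" ", "")
            if width in ("0", "1") or "display:none" in style:
                self.findings.append(("hidden-iframe", attrs.get("src", "")))
        if tag == "script":
            self._in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_script = False

    def handle_data(self, data):
        if self._in_script and any(marker in data for marker in SUSPICIOUS_JS):
            self.findings.append(("obfuscated-script", data[:40]))

page = ('<iframe src="http://evil.example/kit" width="0" height="0"></iframe>'
        '<script>eval(unescape("%61%6c%65%72%74"))</script>')
scanner = DriveByScanner()
scanner.feed(page)
print(scanner.findings)
```

An in-browser extension, unlike this offline sketch, sees the DOM after redirections and dynamic modification, which is why the paper monitors DOM change methods rather than only the static page source.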
- Larson, D.; Jigang Liu; Yanjun Zuo, "Performance Analysis Of Javascript Injection Detection Techniques," Electro/Information Technology (EIT), 2014 IEEE International Conference on, pp.140,148, 5-7 June 2014. doi: 10.1109/EIT.2014.6871752 JavaScript injection is inserting unwanted JavaScript into Web pages with the intent of violating the security and privacy standards of the Web pages. A number of techniques have been developed for the detection and prevention of JavaScript injection, and all have performance costs. While the performance issues of JavaScript injection detection techniques have mainly been studied in running systems, we propose a simulation approach using UML SPT and JavaSim. The new approach not only reduces the cost of such analysis but also provides a framework for modeling injection detection techniques and analyzing the performance implications of design decisions.
Keywords: Java; security of data; JavaScript injection detection techniques; JavaSim; UML SPT; Web pages; privacy standards; security standards; Browsers; Instruction sets; Performance analysis; Time factors; Unified modeling language; Web servers; Computer Security Intrusion Detection; JavaScript; performance analysis (ID#:14-3049)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6871752&isnumber=6871745
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.