Bibliography
Random numbers are one of the most sensitive parts of a cryptographic system, since cryptographic keys must be based entirely on them. The security of a communication relies on the key that has been established between two users: if an attacker is able to deduce that key, the communication is compromised. This is why key generation must rely completely on random number generators, so that nobody can deduce the keys. This paper describes a set of public and free Random Number Generators (RNGs) for Android-based smartphones that exploit different sensors, along with the approach used to achieve this goal. Moreover, the paper presents tests and results for each of them.
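A minimal sketch of the usual harvest-and-whiten design behind such sensor-based RNGs: sample a noisy sensor, keep the noise-dominated low-order bits, and condition the pool with a cryptographic hash. The Android sensor API is not reproduced here; the accelerometer read below is a simulated stand-in, and the whole sketch is an assumption about the general technique rather than the paper's generators.

```python
# Sketch: turning noisy sensor readings into random bytes.
# The paper targets real Android sensors; the readings here are simulated.
import hashlib
import struct
import random

def read_accelerometer():
    """Stand-in for a real sensor read (simulated measurement noise)."""
    return [random.gauss(0.0, 0.05) for _ in range(3)]  # x, y, z in m/s^2

def harvest_entropy(n_samples=256):
    """Collect raw samples and keep only their noise-dominated low-order bits."""
    pool = bytearray()
    for _ in range(n_samples):
        for axis in read_accelerometer():
            # Pack the float little-endian and keep the lowest mantissa byte,
            # where measurement noise dominates.
            pool.append(struct.pack('<d', axis)[0])
    return bytes(pool)

def random_bytes(n=32):
    """Whiten the biased pool with a cryptographic hash (extractor step)."""
    return hashlib.sha256(harvest_entropy()).digest()[:n]

print(random_bytes().hex())
```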
This paper reports the results and findings of a historical analysis of open source intelligence (OSINT) information (namely Twitter data) surrounding the events of the September 11, 2012 attack on the US diplomatic mission in Benghazi, Libya. In addition to this historical analysis, two prototype capabilities were combined in a tabletop exercise to explore the effectiveness of using OSINT together with a context-aware handheld situational awareness framework and application to better inform potential responders as the events unfolded. Our experience shows that the ability to model sentiment, trends, and keyword hits in streaming social media, coupled with the ability to share that information with edge operators, can increase their ability to respond effectively to contingency operations as they unfold.
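A rough sketch of the keyword-monitoring component of such a pipeline, assuming messages arrive as plain-text strings; the watch list and sample messages are illustrative assumptions, and the paper's sentiment model and handheld application are not reproduced.

```python
# Sketch: keyword monitoring over a stream of messages, pushing alerts
# to edge operators as hits accumulate.
from collections import Counter

KEYWORDS = {"attack", "fire", "embassy", "evacuate"}  # hypothetical watch list

def monitor(stream):
    counts = Counter()
    for message in stream:
        tokens = set(message.lower().split())
        for hit in tokens & KEYWORDS:
            counts[hit] += 1
            yield hit, counts[hit], message  # shareable alert for responders

sample = ["Fire reported near the embassy", "Traffic normal downtown"]
for hit, n, msg in monitor(sample):
    print(f"[alert #{n}] keyword '{hit}': {msg}")
```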
Traffic from mobile wireless networks has been growing at a fast pace in recent years and is expected to surpass wired traffic very soon. Service providers face significant challenges at such scales, including providing seamless mobility, efficient data delivery, security, and provisioning capacity at the wireless edge. In the MobilityFirst project, we have been exploring clean-slate enhancements to the network protocols that can inherently provide support for at-scale mobility and trustworthiness in the Internet. An extensible data plane using pluggable compute-layer services is a key component of this architecture. We believe these extensions can be used to implement in-network services that enhance the mobile end-user experience, either by off-loading work and/or traffic from mobile devices, or by enabling en-route service adaptation through context awareness (e.g., knowing the current access bandwidth). In this work we present details of the architectural support for in-network services within MobilityFirst, and propose protocol and service-API extensions to flexibly address these pluggable services from end-points. As a demonstrative example, we implement an in-network service that performs rate adaptation when delivering video streams to mobile devices that experience variable connection quality. We present details of our deployment and evaluation of the non-IP protocols, along with the compute-layer extensions, on the GENI testbed, where we used a set of programmable nodes across seven distributed sites to configure a MobilityFirst network with hosts, routers, and in-network compute services.
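A hedged sketch of what the decision core of such an en-route rate-adaptation service could look like: pick the highest video bitrate that fits the receiver's currently known access bandwidth. The bitrate ladder and safety margin are illustrative assumptions, not MobilityFirst's actual service logic.

```python
# Sketch: an in-network service selects a video bitrate for a mobile
# receiver based on its reported (context-aware) access bandwidth.
BITRATE_LADDER_KBPS = [250, 500, 1000, 2500, 5000]  # assumed ladder

def select_bitrate(access_bandwidth_kbps, safety=0.8):
    """Highest ladder rung that fits within a damped bandwidth budget."""
    budget = access_bandwidth_kbps * safety
    candidates = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return candidates[-1] if candidates else BITRATE_LADDER_KBPS[0]

for bw in (300, 1400, 6000):
    print(f"{bw} kbps link -> stream at {select_bitrate(bw)} kbps")
```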
Data is one of the most valuable assets of an organization. It can enable users or organizations to meet their diverse goals, ranging from scientific advances to business intelligence. Due to the tremendous growth of data, the notion of big data has gained considerable momentum in recent years. Cloud computing is a key technology for storing, managing, and analyzing big data. However, such large, complex, and growing data, typically collected from various sources such as sensors and social media, can often contain personally identifiable information (PII), and thus the organizations collecting the big data may want to protect their outsourced data from the cloud. In this paper, we survey our research towards the development of efficient and effective privacy-enhancing (PE) techniques for the management and analysis of big data in cloud computing. We propose initial approaches to two important PE applications: (i) privacy-preserving data management and (ii) privacy-preserving data analysis in the cloud environment. Additionally, we point out research issues that still need to be addressed to develop comprehensive solutions to the problem of effective and efficient privacy-preserving use of data.
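One generic privacy-enhancing building block in this spirit, sketched under the assumption that records are encrypted client-side before outsourcing so the cloud provider only ever stores ciphertext; this uses the third-party cryptography package and is not the paper's specific scheme.

```python
# Sketch: protect outsourced records from the cloud by encrypting them
# client-side; the provider stores and serves only ciphertext.
# Requires the third-party 'cryptography' package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # kept by the data owner, never outsourced
f = Fernet(key)

record = b'{"name": "Alice", "ssn": "123-45-6789"}'   # contains PII
ciphertext = f.encrypt(record)   # what actually goes to the cloud

# The owner can decrypt after retrieval; the provider cannot.
assert f.decrypt(ciphertext) == record
```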
Complexity is ever increasing within our information environment and organisations, as interdependent dynamic relationships within sociotechnical systems result in high variety and uncertainty arising from a lack of information or control. A net-centric approach is a strategy to improve information value, enabling stakeholders to extend their reach to additional data sources, share Situational Awareness (SA), synchronise effort, and optimise resource use to deliver maximum (or proportionate) effect in support of goals. This paper takes a systems perspective to understand the dynamics within a net-centric information system. It presents the first stages of the Soft Systems Methodology (SSM), used to develop a conceptual model of the human activity system and a system dynamics model of system behaviour, which will inform future research into a net-centric approach to information security. Our model supports the net-centric hypothesis that participation in an information-sharing community extends information reach and improves organisational SA, allowing proactive action to mitigate vulnerabilities and reduce overall risk within the community. The system dynamics model provides organisations with tools to better understand the value of a net-centric approach, a framework to determine their own maturity, and a means to evaluate strategic relationships with collaborative communities.
Very high resolution satellite imagery used to be a rare commodity, with infrequent satellite pass-over times over a specific area of interest precluding many useful applications. Today, more and more such satellite systems are available, and visual analysis and interpretation of imagery remain important for deriving relevant features and changes from satellite data. To allow efficient, robust, and routine image analysis for humanitarian purposes, semi-automated feature extraction is of increasing importance for operational emergency mapping tasks. Within the European Earth Observation programme COPERNICUS and related research activities under the European Union's Seventh Framework Programme, substantial scientific developments and mapping services are dedicated to satellite-based humanitarian mapping and monitoring. In this paper, recent results in methodological research and in the development of routine satellite mapping services for humanitarian situational awareness are reviewed and discussed. Ethical aspects of the sensitivity and security of humanitarian mapping are deliberated. Furthermore, methods for monitoring and analysis of camps for refugees and internally displaced persons in humanitarian settings are assessed. Advantages and limitations of object-based image analysis, sample-supervised segmentation, and feature extraction are presented and discussed.
Conventional photoacoustic microscopy (PAM) involves detection of optically induced thermo-elastic waves using ultrasound transducers. This approach requires acoustic coupling, and the spatial resolution is limited by the focusing properties of the transducer. We present an all-optical PAM approach that detects the photoacoustically induced surface displacements using an adaptive, two-wave mixing interferometer. The interferometer consisted of a 532-nm CW laser and a 5×5×5 mm³ Bismuth Silicon Oxide photorefractive crystal (PRC). The laser beam was expanded to 3 mm and split into two paths: a reference beam that passed directly through the PRC, and a signal beam that was focused at the surface through a 100×, infinity-corrected objective and returned to the PRC. The PRC matched the wave front of the reference beam to that of the signal beam for optimal interference. The interference of the two beams produced optical-intensity modulations that were correlated with surface displacements. A GHz-bandwidth photoreceiver, a low-noise 20-dB amplifier, and a 12-bit digitizer were employed for time-resolved detection of the surface-displacement signals. In combination with a 5-ns, 532-nm pump laser, the interferometric probe was employed for imaging ink patterns, such as a fingerprint, on a glass slide. The signal beam was focused at a reflective cover slip that was separated from the fingerprint by 5 mm of acoustic-coupling gel. A 3×5 mm² area of the cover slip was raster scanned with 100-μm steps, and the surface-displacement signals at each location were averaged 20 times. Image reconstruction based on time reversal of the PA-induced displacement signals produced the photoacoustic image of the ink patterns. The reconstructed image of the fingerprint was consistent with its photograph, demonstrating the ability of our system to resolve micron-scale features at a depth of 5 mm.
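The paper reconstructs by time reversal of the displacement signals; as a rough, simplified stand-in for that step, the sketch below back-projects recorded A-scans onto an image grid with a delay-and-sum rule. The geometry, sound speed, and sampling rate are assumed values, and the random signals in the usage demo are placeholders.

```python
# Sketch: delay-and-sum back-projection of surface-displacement A-scans
# (a simplified stand-in for the paper's time-reversal reconstruction).
import numpy as np

C = 1500.0   # m/s, assumed sound speed in the coupling gel
FS = 1e9     # Hz, assumed digitizer sampling rate

def delay_and_sum(signals, sensor_xy, grid_xy, depth):
    """signals: (n_sensors, n_samples); sensor_xy, grid_xy: (n, 2) in metres."""
    image = np.zeros(len(grid_xy))
    for s, (sx, sy) in zip(signals, sensor_xy):
        dx = grid_xy[:, 0] - sx
        dy = grid_xy[:, 1] - sy
        dist = np.sqrt(dx**2 + dy**2 + depth**2)       # source to detection point
        idx = np.clip((dist / C * FS).astype(int), 0, s.size - 1)
        image += s[idx]                                # coherent summation
    return image

# Toy usage: 4 detection points on a line, grid 2 mm below the surface.
sensors = np.array([[i * 1e-3, 0.0] for i in range(4)])
grid = np.array([[x * 1e-4, 0.0] for x in range(50)])
sigs = np.random.randn(4, 4096)                        # placeholder A-scans
img = delay_and_sum(sigs, sensors, grid, depth=2e-3)
```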
Acoustic microscopy is characterized by relatively long scanning times, required for the motion of the transducer over the entire scanning area. This time may be reduced by using a multi-channel acoustic system in which several identical transducers are arranged as an array and mounted on a mechanical scanner, so that each transducer scans only a fraction of the total area. The resulting image is formed as a combination of all acquired partial data sets. Mechanical instability of the scanner, as well as differences in the parameters of the individual transducers, causes misalignment of the image fragments. This distortion may be partially compensated for by introducing constant or dynamic signal-leveling and data-shift procedures. However, reducing the random instability component requires more advanced algorithms, including auto-adjustment of processing parameters. The described procedure was implemented in the prototype of an ultrasonic fingerprint reading system. The specialized cylindrical scanner provides a helical spiral lens trajectory, which eliminates repeated acceleration, reduces vibration, and allows a constant data flow at the maximum rate. It is equipped with an array of four spherically focused 50 MHz acoustic lenses operating in pulse-echo mode. Each transducer is connected to a separate channel including a pulser, receiver, and digitizer. The output 3D data volume contains interlaced B-scans coming from each channel. Data processing then includes pre-determined procedures of constant layer shift to compensate for the transducer displacement, and phase shift and amplitude leveling to compensate for variation in transducer characteristics. Analysis of the statistical parameters of individual scans allows adaptive elimination of axial misalignment and mechanical vibrations. Further 2D correlation of overlapping partial C-scans realizes an interpolative adjustment that substantially improves the output image. Implementation of this adaptive algorithm in the data processing sequence allows us to significantly reduce misreading due to hardware noise and finger motion during scanning. The system provides a high-quality acoustic image of the fingerprint comprising different levels of information: the fingerprint pattern, sweat pore locations, and internal dermis structures. These additional features can effectively facilitate fingerprint-based identification. The developed principles and algorithm implementations improve the quality, stability, and reliability of acoustic data obtained with a mechanical scanner accommodating several transducers. The general principles developed during this work can be applied to other configurations of advanced ultrasonic systems designed for various biomedical and NDE applications. The data processing algorithm, developed for a specific biometric task, can be adapted to compensate for the mechanical imperfections of other devices.
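A minimal sketch of the 2D-correlation step described above: estimate the lateral offset between two overlapping partial C-scans from the peak of their circular cross-correlation, then shift one channel into registration. The FFT-based formulation and equal-shape inputs are assumptions, not the system's actual processing chain.

```python
# Sketch: correlation-based registration of overlapping partial C-scans.
import numpy as np

def estimate_offset(ref, moving):
    """Both inputs are 2D arrays of equal shape (the overlapping region)."""
    ref0 = ref - ref.mean()
    mov0 = moving - moving.mean()
    corr = np.fft.ifft2(np.fft.fft2(ref0) * np.conj(np.fft.fft2(mov0))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap the circular-correlation peak into a signed shift per axis.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

def align(ref, moving):
    """Shift 'moving' so it registers with 'ref'."""
    dr, dc = estimate_offset(ref, moving)
    return np.roll(np.roll(moving, dr, axis=0), dc, axis=1)

ref = np.random.rand(64, 64)               # placeholder C-scan patch
moving = np.roll(ref, (3, -2), axis=(0, 1))  # same patch, misaligned
assert np.allclose(align(ref, moving), ref)
```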
It is expected that clean-slate network designs will be implemented for wide-area network applications. Multi-tenancy in OpenFlow networks is an effective method for supporting a clean-slate network design, because cost-effectiveness is improved by sharing substrate networks. To guarantee the programmability of OpenFlow for tenants, complete virtualization of the flow space (i.e., the header values of data packets) is necessary. Wide-area substrate networks typically have multiple administrators, so flow space virtualization must work across multiple administrative domains. In existing techniques, a third party is solely responsible for managing the mapping of header values for flow space virtualization on behalf of substrate network administrators and tenants, despite the severity of a potential third-party failure. In this paper, we propose AutoVFlow, a mechanism that allows flow space virtualization in wide-area networks without the need for a third party. Substrate network administrators implement flow space virtualization autonomously: each is responsible for virtualizing the flow space involving the switches in its own substrate network. Using a prototype of AutoVFlow, we measured the virtualization overhead, and the results show that it is negligible.
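A small sketch of the autonomous mapping idea, assuming each substrate administrator keeps its own bidirectional table between tenant-visible header values and the substitute values used inside its network. The single-integer header field is an illustrative simplification, not AutoVFlow's actual header handling.

```python
# Sketch: per-administrator mapping of tenant (virtual) header values onto
# locally unique substrate values, and back again at the egress.
class FlowSpaceMapper:
    def __init__(self):
        self._to_substrate = {}   # (tenant, virtual_value) -> substrate_value
        self._to_tenant = {}      # substrate_value -> (tenant, virtual_value)
        self._next_free = 1

    def virtual_to_substrate(self, tenant, virtual_value):
        key = (tenant, virtual_value)
        if key not in self._to_substrate:
            sub = self._next_free          # allocate a fresh substrate value
            self._next_free += 1
            self._to_substrate[key] = sub
            self._to_tenant[sub] = key
        return self._to_substrate[key]

    def substrate_to_virtual(self, substrate_value):
        return self._to_tenant[substrate_value]

# Two tenants may use the same virtual value without colliding.
m = FlowSpaceMapper()
assert m.virtual_to_substrate("tenantA", 100) != m.virtual_to_substrate("tenantB", 100)
```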
Power grids are monitored by gathering data through remote sensors and estimating the state of the grid. Bad data detection schemes detect and remove erroneous measurements. False data injection is a special type of attack designed to evade typical bad data detection schemes and compromise state estimates, possibly leading to improper control of the grid. Topology perturbation is a situational awareness method that uses distributed flexible AC transmission system devices to alter the impedance on optimally chosen lines, updating the grid topology and exposing the presence of false data. The success of topology perturbation in improving grid control and exposing false data in AC state estimation is demonstrated. A technique is developed for identifying the false data injection attack vector and quantifying the compromised measurements. The proposed method provides successful false data detection and identification in the IEEE 14-, 24-, and 39-bus test systems using AC state estimation.
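To make the detection logic concrete, here is a simplified residual-based bad-data check on a DC linear measurement model (the paper works with AC state estimation). It also illustrates why topology perturbation matters: a stealthy injection of the form z + Hc leaves the residual unchanged, so exposing it requires changing H. The measurement matrix and noise values are toy assumptions.

```python
# Sketch: residual test for bad data in a DC (linear) state estimator.
import numpy as np

H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]])   # toy measurement matrix
x_true = np.array([1.0, 0.5])
z = H @ x_true + np.random.normal(0, 0.01, 3)          # noisy measurements

def residual_norm(H, z):
    """Least-squares state estimate, then the norm of the residual."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

print("clean:           ", residual_norm(H, z))
print("random bad data: ", residual_norm(H, z + np.array([0.5, 0.0, 0.0])))
# A stealthy attack z + H@c shifts the estimate but NOT the residual:
print("stealthy attack: ", residual_norm(H, z + H @ np.array([0.3, -0.2])))
```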
Over the past decade, we have witnessed a huge upsurge in social networking, which continues to touch and transform our lives to the present day. Social networks help us communicate with acquaintances and friends with whom we share similar interests on a common platform. Globally, there are more than 200 million visually impaired people. Visual impairment has many issues associated with it, but the one that stands out is the lack of accessible content for entertainment and for socializing safely. This paper deals with the development of a keyboard-less social networking website for the visually impaired. The term keyboard-less signifies minimal use of the keyboard: the user explores the contents of the website using assistive technologies such as screen readers and speech-to-text (STT) conversion, which provides a user-friendly experience for the target audience. As soon as a user with minimal computer proficiency opens the website, the screen reader identifies the username and password fields. The user speaks the username and, with the help of STT conversion (using the Web Speech API), the username is entered. The control then moves to the password field and, similarly, the user's password is obtained and matched against the one saved in the website database. The concept of acoustic fingerprinting has been implemented to successfully validate the passwords of registered users and foil malicious attackers. On a successful password match, the user is able to enjoy the services of the website without further hassle. Once the access obstacles associated with social networking sites are resolved and proper technologies are put in place, social networking sites can be a rewarding, fulfilling, and enjoyable experience for visually impaired people.
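A rough sketch of the acoustic-fingerprint matching idea: reduce a spoken-password recording to a coarse, normalized spectral signature and compare it with the enrolled template by cosine similarity. The features, threshold, and toy sine-wave signals are assumptions; the website's actual fingerprinting scheme is not reproduced.

```python
# Sketch: coarse spectral signature of an audio clip, compared by
# cosine similarity against an enrolled template.
import numpy as np

def spectral_signature(samples, n_bands=32):
    spectrum = np.abs(np.fft.rfft(samples))
    bands = np.array_split(spectrum, n_bands)       # coarse frequency bands
    sig = np.array([band.mean() for band in bands])
    return sig / (np.linalg.norm(sig) + 1e-12)      # unit-norm signature

def matches(enrolled, attempt, threshold=0.95):
    return float(np.dot(enrolled, attempt)) >= threshold

t = np.linspace(0, 1, 16000)
enrolled = spectral_signature(np.sin(2 * np.pi * 440 * t))  # toy "recording"
attempt = spectral_signature(np.sin(2 * np.pi * 440 * t)
                             + 0.01 * np.random.randn(t.size))
print(matches(enrolled, attempt))
```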
In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest (i.e., the study area) to reduce the computational cost associated with transient stability studies. This paper presents a method of deriving the reduced dynamic model of the external area based on dynamic response measurements. The method consists of three steps, namely dynamic-feature extraction, attribution, and reconstruction (DEAR). In this method, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step as those matching the extracted dynamic features with the highest similarity, forming a suboptimal “basis” of the system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated by a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method yields a better reduction ratio and smaller response errors than traditional coherency-based reduction methods.
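A hedged NumPy sketch of the three DEAR steps on measured trajectories: SVD feature extraction, a greedy stand-in for the attribution step (picking the generator most correlated with each dominant mode), and least-squares reconstruction from the characteristic generators. The matrix shapes and the matching rule are simplifications of the paper's method, and the input trajectories are placeholders.

```python
# Sketch: dynamic-feature Extraction, Attribution, and Reconstruction (DEAR).
import numpy as np

def dear_reduce(X, n_features=3):
    """X: (n_generators, n_timesteps) post-disturbance trajectories."""
    # Extraction: dominant temporal modes of the measured dynamics.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    features = Vt[:n_features]
    # Attribution (greedy stand-in): per mode, the most correlated generator.
    scores = np.abs(X @ features.T)                  # (n_gen, n_features)
    chars = list(dict.fromkeys(int(np.argmax(scores[:, j]))
                               for j in range(n_features)))
    basis = X[chars]                                 # characteristic generators
    # Reconstruction: least-squares fit of every generator onto the basis.
    coeffs, *_ = np.linalg.lstsq(basis.T, X.T, rcond=None)
    return chars, (basis.T @ coeffs).T

X = np.cumsum(np.random.randn(10, 200), axis=1)      # placeholder trajectories
chars, X_hat = dear_reduce(X)
print(chars, np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```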
Over the last decade, demand for Internet access in heterogeneous environments has kept growing, principally on mobile platforms such as buses, airplanes, and trains. Consequently, several extensions and schemes have been introduced to achieve seamless handoff of mobile networks from one subnet to another. Even with these enhancements, the problems of security and availability have not yet been resolved; in particular, the absence of an authentication mechanism between network entities leaves the system vulnerable to attacks. To eliminate the threats on the interface between the mobile access gateway (MAG) and the mobile router (MR) in the improved fast PMIPv6-based network mobility (IFP-NEMO) protocol, we propose a lightweight mutual authentication mechanism for the improved fast PMIPv6-based network mobility scheme (LMAIFPNEMO). This scheme uses authentication, authorization and accounting (AAA) servers to enhance the security of the IFP-NEMO protocol, which integrates improved fast proxy mobile IPv6 (PMIPv6) into network mobility (NEMO). We use only symmetric cryptography, generated nonces, and hash operations to ensure a secure authentication procedure. We then analyze the security of the proposed scheme and evaluate it using the Automated Validation of Internet Security Protocols and Applications (AVISPA) software, which shows that the authentication goals are achieved.
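A minimal sketch of nonce-and-hash mutual authentication over a pre-shared symmetric key, in the spirit of the scheme described above; the message layout, field order, and key distribution are assumptions, not the LMAIFPNEMO specification.

```python
# Sketch: challenge-response mutual authentication between MR and MAG
# using only a shared key, fresh nonces, and a keyed hash (HMAC).
import hmac, hashlib, os

K = os.urandom(32)   # pre-shared symmetric key (e.g., provisioned via AAA)

def mac(*parts):
    return hmac.new(K, b"|".join(parts), hashlib.sha256).digest()

# MR -> MAG: a fresh challenge
nonce_mr = os.urandom(16)
# MAG -> MR: its own nonce plus proof it knows K and saw nonce_mr
nonce_mag = os.urandom(16)
mag_proof = mac(b"MAG", nonce_mr, nonce_mag)
# MR verifies the MAG, then proves itself with the nonces swapped
assert hmac.compare_digest(mag_proof, mac(b"MAG", nonce_mr, nonce_mag))
mr_proof = mac(b"MR", nonce_mag, nonce_mr)
# MAG verifies the MR; both sides are now mutually authenticated
assert hmac.compare_digest(mr_proof, mac(b"MR", nonce_mag, nonce_mr))
```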
Techniques for network security analysis have historically focused on the actions of the network hosts. Outside of forensic analysis, little has been done to detect or predict malicious or infected nodes strictly based on their association with other known malicious nodes. This methodology is highly prevalent in the graph analytics world, however, and is referred to as community detection. In this paper, we present a method for detecting malicious and infected nodes on both monitored networks and the external Internet. We leverage prior community detection and graphical modeling work by propagating threat probabilities across network nodes, given an initial set of known malicious nodes. We enhance prior work by employing constraints that remove the adverse effect of cyclic propagation that is a byproduct of current methods. We demonstrate the effectiveness of probabilistic threat propagation on the tasks of detecting botnets and malicious web destinations.
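A minimal sketch of threat propagation on an adjacency list: known-malicious seeds stay pinned at probability 1, and every other node repeatedly takes a damped average of its neighbours' threat. The damping value is an assumption, and the paper's constraint that removes cyclic self-reinforcement is omitted here for brevity.

```python
# Sketch: iterative probabilistic threat propagation from seed nodes.
def propagate(graph, seeds, damping=0.5, iterations=20):
    threat = {n: (1.0 if n in seeds else 0.0) for n in graph}
    for _ in range(iterations):
        nxt = dict(threat)
        for node, neighbours in graph.items():
            if node in seeds or not neighbours:
                continue   # seeds stay fixed; isolated nodes stay at 0
            nxt[node] = damping * sum(threat[m] for m in neighbours) / len(neighbours)
        threat = nxt
    return threat

g = {"bot1": ["c2"], "c2": ["bot1", "bot2", "benign"],
     "bot2": ["c2"], "benign": ["c2"]}
print(propagate(g, seeds={"bot1"}))   # bot2 inherits threat through c2
```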
Analysing cyber attack environments yields tremendous insight into adversary behaviour, strategy, and capabilities. Designing cyber-intensive games that promote offensive and defensive activities to capture or protect assets assists in the understanding of cyber situational awareness. Tangible metrics exist for characterizing games such as CTFs, resolving the intensity and aggression of a cyber attack. This paper synthesizes the characteristics of InCTF (India CTF) and provides an understanding of the types of vulnerabilities that have the potential to cause significant damage when exploited by trained hackers. Two metrics, toxicity and effectiveness, and their relation to the final performance of each team are detailed in this context.
We are currently living in the age of Big Data, which comes with the challenge of grasping the golden opportunities at hand. This mixed blessing also dominates the relation between Big Data and trust. On one side, large amounts of trust-related data can be utilized to establish innovative data-driven approaches for reputation-based trust management. On the other side, this is intrinsically tied to the trust we can put in the origins and quality of the underlying data. In this paper, we address both sides of trust and Big Data by structuring the problem domain and presenting current research directions and interdependencies. Based on this, we define focal issues that serve as future research directions on the path to our vision of Next Generation Online Trust within the FORSEC project.
This article discusses online deception, considered here as a deliberate act intended to mislead others, where the recipients are unaware that such an act is taking place and the deceiver's goal is to transfer a false belief to those deceived. Understanding how online deception works through social media and future technologies remains a significant challenge. To address this challenge, one needs to design social media applications with rules and norms that our traditional physical space does not have.
Zero-day polymorphic worms pose a serious threat to Internet security. With their ability to propagate rapidly, these worms increasingly threaten Internet hosts and services. Not only can they exploit unknown vulnerabilities, they can also change their own representation on each new infection or encrypt their payloads with a different key per infection. The signatures of the same worm thus have many variations, making fingerprinting very difficult. Therefore, signature-based defenses and traditional security layers miss these stealthy and persistent threats. This paper provides a detailed survey outlining research efforts on the detection of modern zero-day malware in the form of zero-day polymorphic worms.
Our vision in this paper is that agency, the individual ability to intervene and tailor the system, is a crucial element in building trust in IoT technologies. Following up on this vision, we first address the issue of agency, namely the individual capability to make free decisions, as a relevant driver in building trusted human-IoT relations, and discuss how agency should be embedded in digital systems. We then present the main challenges posed by existing approaches to implementing this vision. Finally, we present our proposal for a model-based approach that realizes the agency concept, including a prototype implementation.