Biblio
Interactive visualization provides valuable support for exploring, analyzing, and understanding textual documents. Certain tasks, however, require that insights derived from visual abstractions be verified by a human expert perusing the source text. To date, this problem has typically been addressed with overview-detail techniques, which present different views at different levels of abstraction; this often leads to problems with visual continuity. Focus-context techniques, on the other hand, succeed in accentuating interesting subsections of large text documents but are normally not suited for integrating visual abstractions. With VarifocalReader we present a technique that mitigates some of the problems of both approaches by combining characteristics of each. In particular, our method simplifies working with large and potentially complex text documents by simultaneously offering abstract representations of varying detail, based on the inherent structure of the document, and access to the text itself. In addition, VarifocalReader supports intra-document exploration through advanced navigation concepts and facilitates visual analysis tasks. The approach enables users to apply machine learning techniques and search mechanisms, as well as to assess and adapt these techniques, which helps to extract entities, concepts, and other artifacts from texts. In combination with the automatic generation of intermediate text levels through topic segmentation for thematic orientation, users can test hypotheses or develop interesting new research questions. To illustrate the advantages of our approach, we provide usage examples from literature studies.
Metropolitan-scale WiFi deployments face several challenges, including controllability and management, which prohibit the provision of seamless access, quality of service (QoS), and security to mobile users; thus, they remain largely an untapped networking resource. In this work, an SDN-based network architecture is proposed; it is comprised of a distributed network-wide controller and a novel datapath for wireless access points. Virtualization of network functions is employed for configurable user access control as well as for supporting an IP-independent forwarding scheme. The proposed architecture forms a flat network across the deployment area, providing seamless connectivity and reachability without the need for intermediary servers over the Internet, thus enabling a wide variety of localized applications, such as video surveillance. Also, the provided interface allows for transparent implementation of intra-network distributed cross-layer traffic control protocols that can optimize the multihop performance of the wireless network.
By representing large corpora with concise and meaningful elements, topic-based generative models aim to reduce the dimension and understand the content of documents. These techniques originally analyzed only the words in documents, but their extensions now accommodate meta-data such as authorship information, which has proved useful for textual modeling. Learning authorship matters because it allows extracting author interests and attributing anonymous texts to authors. The Author-Topic (AT) model, an unsupervised learning technique, successfully exploits authorship information to model both documents and author interests using topic representations. However, the AT model makes the simplifying assumption that each author contributes equally to a multiple-author document. To overcome this limitation, we assume that authors contribute to a document in different degrees, modeled with a Dirichlet distribution. This automatically transforms the unsupervised AT model into the Supervised Author-Topic (SAT) model, which adds the novel capability of authorship prediction on anonymous texts. The SAT model outperforms the AT model at identifying the authors of documents written by either single or multiple authors, with a better Receiver Operating Characteristic (ROC) curve and a significantly higher Area Under the Curve (AUC). The SAT model not only achieves performance competitive with state-of-the-art techniques such as random forests but also retains the characteristics of unsupervised models for information discovery, i.e., word distributions of topics, author interests, and author contributions.
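As a hedged illustration of the baseline machinery this paper extends, the sketch below fits gensim's unsupervised AuthorTopicModel on a toy corpus. Note that gensim does not implement the SAT extension, so the Dirichlet-weighted author contributions described above are not shown; documents shared by two authors are split equally, which is precisely the limitation the SAT model removes. All data here are illustrative.

```python
from gensim.corpora import Dictionary
from gensim.models import AuthorTopicModel

# toy corpus: pre-tokenized documents and an author -> documents mapping
docs = [["topic", "model", "corpus", "words"],
        ["author", "interests", "topic", "model"],
        ["bayesian", "inference", "corpus", "words"]]
author2doc = {"alice": [0, 1], "bob": [1, 2]}  # document 1 has two authors

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# the plain AT model: document 1 is attributed equally to alice and bob
at = AuthorTopicModel(corpus=corpus, author2doc=author2doc,
                      id2word=dictionary, num_topics=2, passes=50)
print(at.get_author_topics("alice"))  # alice's inferred topic distribution
```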
When a user accesses a resource, the accounting process on the server side keeps track of the resource usage so that the user can be charged. In cloud computing, a user may use more than one service provider and may need two independent service providers to work together. In this user-centric context, the user is the owner of the information and has the right to authorize a third-party application to access the protected resource on the user's behalf. Therefore, the user also needs to monitor the authorized resource usage granted to third-party applications. However, existing accounting protocols were designed to monitor only how the user consumes resources from the service provider. This paper proposes a user-centric accounting model, called AccAuth, which adds an accounting layer to the OAuth protocol. A prototype was implemented, and the proposed model was evaluated against the standard requirements. The results showed that AccAuth passed all the requirements.
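The abstract does not detail AccAuth's internals, so the following is only a minimal hypothetical sketch of the idea of a user-side accounting layer: a ledger that records each delegated access by an OAuth client, keyed by client and scope, so the resource owner can audit what was exercised on her behalf. All names here are invented for illustration.

```python
import time
from collections import defaultdict

# hypothetical user-side ledger: (client_id, scope) -> list of usage records
ledger = defaultdict(list)

def record_access(client_id, scope, resource, amount=1):
    """Log a third-party client's access to the user's protected resource."""
    ledger[(client_id, scope)].append(
        {"resource": resource, "amount": amount, "ts": time.time()})

def usage_report(client_id):
    """Summarize how much of each granted scope a client has exercised."""
    return {scope: sum(r["amount"] for r in recs)
            for (cid, scope), recs in ledger.items() if cid == client_id}

record_access("photo-printer-app", "photos.read", "/albums/2014/img1.jpg")
print(usage_report("photo-printer-app"))  # {'photos.read': 1}
```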
The existence of mixed pixels is a major problem in remote-sensing image classification. Although soft classification and spectral unmixing techniques can obtain the abundances of the different classes within a pixel, the subpixel spatial attribution of those classes remains unknown. The subpixel mapping technique can effectively solve this problem by providing a fine-resolution map of class labels from coarser, spectrally unmixed fraction images. However, most traditional subpixel mapping algorithms treat all mixed pixels as an identical type, either boundary-mixed pixels or linear subpixels, leading to incomplete and inaccurate results. To improve subpixel mapping accuracy, this paper proposes an adaptive subpixel mapping framework based on a multiagent system for remote-sensing imagery. In the proposed multiagent subpixel mapping framework, three kinds of agents, namely, feature detection agents, subpixel mapping agents, and decision agents, are designed to solve the subpixel mapping problem. Experiments with artificial images and synthetic remote-sensing images were performed to evaluate the performance of the proposed subpixel mapping algorithm in comparison with the hard classification method and two other subpixel mapping algorithms: subpixel mapping based on a back-propagation neural network and on the spatial attraction model. The experimental results indicate that the proposed algorithm outperforms the other two subpixel mapping algorithms in reconstructing the different structures in mixed pixels.
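To make the subpixel mapping task concrete, here is a minimal single-class sketch of the spatial attraction model, one of the baseline algorithms the paper compares against (not the proposed multiagent method): each coarse pixel's fraction fixes how many of its subpixels get the class label, and those labels go to the subpixels most attracted to neighboring fractions.

```python
import numpy as np

def spatial_attraction_map(frac, S):
    """Simplified spatial attraction subpixel mapping for one class.
    frac: coarse fraction image with values in [0, 1]; S: zoom factor.
    Returns a binary fine-resolution map of shape (H*S, W*S)."""
    H, W = frac.shape
    fine = np.zeros((H * S, W * S), dtype=np.uint8)
    for i in range(H):
        for j in range(W):
            n = int(round(frac[i, j] * S * S))  # subpixels to label here
            if n == 0:
                continue
            attr = np.zeros((S, S))
            for a in range(S):
                for b in range(S):
                    cy, cx = i + (a + 0.5) / S, j + (b + 0.5) / S
                    for di in (-1, 0, 1):        # 8-neighbor attraction,
                        for dj in (-1, 0, 1):    # inverse-distance weighted
                            ni, nj = i + di, j + dj
                            if (di or dj) and 0 <= ni < H and 0 <= nj < W:
                                d = np.hypot(cy - (ni + 0.5), cx - (nj + 0.5))
                                attr[a, b] += frac[ni, nj] / d
            # label the n most strongly attracted subpixels
            top = np.argsort(attr, axis=None)[::-1][:n]
            ys, xs = np.unravel_index(top, (S, S))
            fine[i * S + ys, j * S + xs] = 1
    return fine

frac = np.array([[1.0, 0.5], [0.5, 0.0]])
print(spatial_attraction_map(frac, 4))
```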
High-speed IP address lookup is essential to achieve wire-speed packet forwarding in Internet routers. Ternary content addressable memory (TCAM) technology has been adopted to solve the IP address lookup problem because of its ability to perform fast parallel matching. However, the applicability of TCAMs is limited by cost and power dissipation issues. Various algorithms and hardware architectures have been proposed to perform IP address lookup using ordinary memories such as SRAMs or DRAMs instead of TCAMs. Among these, we focus on two efficient algorithms providing high-speed IP address lookup: the parallel multiple-hashing (PMH) algorithm and the binary search on levels algorithm. This paper shows how effectively an on-chip Bloom filter can improve those algorithms. A performance evaluation using actual backbone routing data with 15,000-220,000 prefixes shows that adding a Bloom filter to the parallel multiple-hashing algorithm removes the complicated hardware for parallel access without a search performance penalty, and that adding a Bloom filter to the binary search on levels algorithm improves search speed by 30-40 percent.
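A minimal sketch of the underlying idea: a Bloom filter per prefix length answers membership queries cheaply (on-chip), so the slow (off-chip) hash tables are probed only on Bloom hits. For simplicity this sketch scans lengths linearly from longest to shortest rather than performing the paper's binary search on levels, and the data structures are illustrative.

```python
import hashlib

class BloomFilter:
    """Simple Bloom filter: m bits, k hash probes derived from SHA-256."""
    def __init__(self, m=8192, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8 + 1)
    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)
    def __contains__(self, item):  # false positives possible, no false negatives
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# one hash table and one on-chip Bloom filter per prefix length
tables = {l: {} for l in range(1, 33)}
blooms = {l: BloomFilter() for l in range(1, 33)}

def insert(prefix, length, next_hop):
    key = prefix >> (32 - length)
    tables[length][key] = next_hop
    blooms[length].add(key)

def lookup(ip):
    """Longest-prefix match; the Bloom filter screens out most prefix
    lengths before the slow hash table would be probed."""
    for length in range(32, 0, -1):
        key = ip >> (32 - length)
        if key in blooms[length] and key in tables[length]:
            return tables[length][key]
    return None

insert(0x0A000000, 8, "eth0")       # 10.0.0.0/8 -> eth0
print(lookup(0x0A0A0A0A))           # 10.10.10.10 matches -> eth0
```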
Software components are software units designed to interact with other independently developed software components. These components are assembled by third parties into software applications. The success of the final software application largely depends upon selecting components that are appropriate for, and fit easily into, the application according to the customer's needs. It is therefore a primary requirement to evaluate the quality of components before using them in the final software application. Not all quality characteristics are of the same significance for a particular software application in a specific domain, so it is necessary to identify only those characteristics and sub-characteristics that have higher importance than the others. The Analytic Network Process (ANP) is used to solve this decision problem, in which the attributes of the decision parameters form dependency networks. The objective of this paper is to propose an ANP-based model to prioritize the characteristics and sub-characteristics of quality and to estimate a numeric value for software quality.
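A hedged numeric sketch of the ANP core step: the pairwise influences between elements are arranged in a column-stochastic supermatrix, and raising it to powers until the columns stabilize (the limit supermatrix) yields global priorities that account for the interdependencies. The matrix entries and the four quality elements below are purely illustrative, not taken from the paper.

```python
import numpy as np

# illustrative weighted supermatrix over four quality elements
# (e.g., functionality, reliability, usability, maintainability);
# entry [i, j] is the influence of element i on element j
W = np.array([[0.40, 0.30, 0.20, 0.25],
              [0.30, 0.40, 0.30, 0.25],
              [0.20, 0.20, 0.40, 0.25],
              [0.10, 0.10, 0.10, 0.25]])
W = W / W.sum(axis=0)            # make the supermatrix column-stochastic

# the limit supermatrix: raise W to powers until its columns stabilize;
# each column then holds the global priorities of the elements
limit = W.copy()
for _ in range(100):
    nxt = limit @ W
    if np.allclose(nxt, limit, atol=1e-12):
        break
    limit = nxt
print(limit[:, 0])               # priority of each quality characteristic
```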
The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., the relatively static background). To this end, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP), which uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP), which predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency by using the higher-quality background as the reference, whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, with only a slight increase in encoding complexity. Moreover, for foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
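A minimal sketch of why a modeled background makes a better long-term reference: when a moving object re-exposes background, the previous frame still contains the object there, but a background model built from the input frames does not, so the BRP residual is far smaller. The scene, the moving block, and the use of a median model are all illustrative assumptions (the paper's modeling and BDP machinery are more elaborate).

```python
import numpy as np

rng = np.random.default_rng(0)
bg_true = rng.integers(0, 200, (64, 64)).astype(float)    # static scene

def frame_at(t):
    """Synthetic frame: static background, sensor noise, a block moving down."""
    f = bg_true + rng.normal(0, 2, bg_true.shape)
    f[10 + 2 * t:20 + 2 * t, 30:40] = 255.0               # foreground object
    return f

# background modeled from the original input frames (median over a window)
frames = np.stack([frame_at(t) for t in range(20)])
bg_model = np.median(frames, axis=0)

cur, prev = frame_at(20), frame_at(19)
mse = lambda r: float(np.mean(r ** 2))

# a block of background just re-exposed by the moving object: the previous
# frame still shows the object there, the background model does not
blk = (slice(44, 50), slice(30, 40))
print("inter prediction residual:", mse(cur[blk] - prev[blk]))      # large
print("BRP residual:             ", mse(cur[blk] - bg_model[blk]))  # small
```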
In this paper, the use of some of the most popular adaptive filtering algorithms for the purpose of linearizing power amplifiers by the well-known digital predistortion (DPD) technique is investigated. First, an introduction to the problem of power amplifier linearization is given, followed by a discussion of the model used for this purpose. Next, a variety of adaptive algorithms are used to construct the digital predistorter function for a highly nonlinear power amplifier and their performance is comparatively analyzed. Based on the simulations presented in this paper, conclusions regarding the choice of algorithm are derived.
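The abstract above does not name the specific algorithms compared, so as a hedged sketch the code below uses LMS, one of the most popular adaptive filters, in the standard indirect learning architecture: a postdistorter is adapted on the PA's output and then copied in front of the PA as the predistorter. The memoryless polynomial PA model and all coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20000
x = 0.2 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))  # baseband input

def pa(u):
    """Assumed memoryless polynomial PA model (illustrative coefficients)."""
    return u + 0.3 * u * np.abs(u) ** 2 - 0.1 * u * np.abs(u) ** 4

def basis(u):
    """Odd-order polynomial basis shared by post- and predistorter."""
    return np.stack([u, u * np.abs(u) ** 2, u * np.abs(u) ** 4], axis=-1)

y, G = pa(x), 1.0                 # G: desired linear gain

# indirect learning: adapt a postdistorter mapping y/G back to x with LMS,
# then copy its coefficients in front of the PA as the predistorter
w, mu = np.zeros(3, dtype=complex), 0.5
for n in range(N):
    phi = basis(y[n] / G)
    e = x[n] - phi @ w            # a-priori error
    w += mu * np.conj(phi) * e    # complex LMS update

z = basis(x) @ w                  # predistorted input signal
nmse = lambda out: 10 * np.log10(np.mean(np.abs(out / G - x) ** 2)
                                 / np.mean(np.abs(x) ** 2))
print(f"NMSE without DPD: {nmse(pa(x)):.1f} dB, with DPD: {nmse(pa(z)):.1f} dB")
```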
Recently, the threat of previously unknown cyber-attacks has been increasing because existing security systems are not able to detect them. Past cyber-attacks had the simple goals of leaking personal information by attacking PCs or destroying systems. However, the goal of recent hacking attacks has shifted from leaking information and disrupting services to attacking large-scale systems such as critical infrastructures and state agencies. Existing defence technologies to counter these attacks are based on pattern matching methods, which are very limited: in the event of new and previously unknown attacks, the detection rate becomes very low and false negatives increase. To defend against such unknown attacks, which cannot be detected with existing technology, we propose a new model based on big data analysis techniques that can extract information from a variety of sources to detect future attacks. We expect our model to serve as the basis of future Advanced Persistent Threat (APT) detection and prevention system implementations.
Sophisticated technologies built on biometric identification are increasingly applied in entrance security management systems, private document protection, and security access control. Common biometric identification relies on voice, attitude, keystroke dynamics, signature, iris, face, palmprints, or fingerprints, etc. In addition, novel identification technologies based on other individual biometric features are under development [1-4].
Monitoring surveillance video streams is an important task that surveillance operators usually carry out. The distribution of video surveillance facilities over multiple premises and the mobility of surveillance users require that they be able to view surveillance video seamlessly from their mobile devices. In order to satisfy this requirement, we propose a cloud-based IPTV (Internet Protocol Television) solution that leverages the power of cloud infrastructure and the benefits of IPTV technology to seamlessly deliver surveillance video content to different client devices anytime and anywhere. The proposed mechanism also supports user-controlled frame rate adjustment of video streams and sharing of these streams with other users. In this paper, we describe the overall approach of this idea and identify and address the key technical challenges for its practical implementation. In addition, initial experimental results are presented to justify the viability of the proposed cloud-based IPTV surveillance framework over the traditional IPTV surveillance approach.
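The paper does not specify how the user-controlled frame rate adjustment is implemented; one simple, hypothetical realization is server-side frame decimation, sketched below, where the stream keeps only enough frames to approximate the rate a mobile client requests.

```python
def adjust_frame_rate(frames, src_fps, dst_fps):
    """Server-side frame decimation: keep a subset of frames so that the
    delivered stream approximates the client-requested rate (a sketch)."""
    if dst_fps >= src_fps:
        return list(frames)
    step, out, nxt = src_fps / dst_fps, [], 0.0
    for i, frame in enumerate(frames):
        if i >= nxt:
            out.append(frame)
            nxt += step
    return out

# a 30 fps capture delivered to a mobile client at 10 fps keeps every 3rd frame
print(adjust_frame_rate(list(range(12)), 30, 10))   # [0, 3, 6, 9]
```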
The security of the Smart Grid, being one of the most important aspects of the Smart Grid system, is studied in this paper. We first discuss different pitfalls in the security of the Smart Grid system, considering the communication infrastructure among the sensors, actuators, and control systems. Following that, we derive a mathematical model of the system and propose a robust security framework for the power grid. To effectively estimate the variables of a wide range of state processes in the model, we adopt a Kalman Filter in the framework. The Kalman Filter estimates and system readings are then fed into the χ² detectors and the proposed Euclidean detectors, which can detect various attacks and faults in the power system, including False Data Injection Attacks. The χ² detector is a proven, effective exploratory method used with the Kalman Filter to measure the relationship between dependent variables and a series of predictor variables; it can detect system faults and attacks such as replay and DoS attacks. However, the study shows that the χ² detector is unable to detect statistically derived False Data Injection Attacks, while the Euclidean distance metric can identify such sophisticated injection attacks.
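A minimal sketch of the detection loop on a scalar stand-in for a grid state process (the real framework uses a full state-space model of the grid): the Kalman innovation is tested both with a χ² statistic and with a plain Euclidean distance. Here a crude injection trips both detectors; the paper's point is that a *stealthy*, statistically crafted injection keeps the χ² statistic small while the Euclidean check can still flag it. Model parameters are assumed for illustration.

```python
import numpy as np
from scipy.stats import chi2

# assumed scalar model: x_{k+1} = A x_k + w_k,  z_k = H x_k + v_k
A, H, Q, R = 1.0, 1.0, 1e-4, 1e-2
threshold = chi2.ppf(0.99, df=1)           # chi-square alarm threshold

def kf_step(x, P, z):
    x_pred, P_pred = A * x, A * P * A + Q            # predict
    nu = z - H * x_pred                              # innovation
    S = H * P_pred * H + R                           # innovation covariance
    chi2_alarm = (nu * nu / S) > threshold           # chi-square detector
    euclid_alarm = abs(nu) > 3 * np.sqrt(S)          # Euclidean-distance detector
    K = P_pred * H / S                               # Kalman gain, update
    return x_pred + K * nu, (1 - K * H) * P_pred, chi2_alarm, euclid_alarm

rng = np.random.default_rng(2)
x_true, x_est, P = 1.0, 1.0, 1.0
for k in range(50):
    x_true = A * x_true + rng.normal(0, np.sqrt(Q))
    z = H * x_true + rng.normal(0, np.sqrt(R))
    if k == 30:
        z += 0.5                                     # crude injected measurement
    x_est, P, a_chi2, a_euclid = kf_step(x_est, P, z)
    if a_chi2 or a_euclid:
        print(f"k={k}: chi2 alarm={a_chi2}, euclidean alarm={a_euclid}")
```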
Despite all the current controversies, the email service remains a success. The ease of use of its various features contributed to its widespread adoption. In general, the email system provides all its users with the same set of features, controlled by a single monolithic policy. Such solutions are efficient but limited, because they leave no room for the concept of usage, which denotes a user's intention of communication: private, professional, administrative, official, military. The ability to efficiently send emails from mobile devices creates interesting new opportunities. We argue that the context (location, time, device, operating system, access network...) of the email sender is a new dimension that must be taken into account to complete the picture. Context is clearly orthogonal to usage, because the same usage may require different features depending on the context. Clearly, no global policy can meet the requirements of all possible usages and contexts. To address this problem, we propose a correspondence model which, for a given usage and context, derives a correspondence type encapsulating the exact set of required features. With this model, it becomes possible to define an advanced email system that can cope with multiple policies instead of a single monolithic one. By allowing a user to select the exact policy matching her needs, we argue that our approach reduces the risk of the email system sliding from a trusted system to a merely confident one.
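The derivation at the heart of the model is a mapping (usage, context) → feature set. The paper does not give a concrete encoding, so the sketch below is a deliberately simple hypothetical one: a lookup table of correspondence types, with all usage names, context names, and features invented for illustration.

```python
# hypothetical correspondence types: (usage, context) -> required feature set
FEATURES = {
    ("professional", "corporate_laptop"): {"signature", "archiving", "dlp_scan"},
    ("professional", "mobile"):           {"signature", "small_attachments"},
    ("private", "mobile"):                {"plain_text", "small_attachments"},
    ("official", "corporate_laptop"):     {"signature", "encryption", "receipt"},
}

def correspondence_type(usage, context):
    """Derive the exact feature set required for a (usage, context) pair."""
    try:
        return FEATURES[(usage, context)]
    except KeyError:
        raise ValueError(f"no policy defined for {usage!r} in context {context!r}")

print(correspondence_type("professional", "mobile"))
```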
Electrical distribution networks face new challenges with the deployment of the Smart Grid. The required metering infrastructures add new vulnerabilities that need to be taken into account in order to achieve Smart Grid functionality without a considerable reliability trade-off. In this paper, a qualitative assessment of the impact of cyber attacks on the Advanced Metering Infrastructure (AMI) is first attempted. Attack simulations have been conducted on a realistic grid topology; the simulated network consisted of smart meters, routers, and utility servers. Finally, the impact of Denial-of-Service and Distributed Denial-of-Service (DoS/DDoS) attacks on distribution system reliability is discussed through a qualitative analysis of reliability indices.
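For orientation, the reliability indices in question are typically the IEEE 1366 customer-based indices; the sketch below computes SAIFI and SAIDI and shows how a DoS scenario that delays restoration (longer outage durations for the same faults) inflates SAIDI. The interruption data are invented for illustration.

```python
def saifi(interruptions, total_customers):
    """SAIFI = total customer interruptions / customers served (IEEE 1366)."""
    return sum(n for n, _ in interruptions) / total_customers

def saidi(interruptions, total_customers):
    """SAIDI = total customer interruption-hours / customers served."""
    return sum(n * hours for n, hours in interruptions) / total_customers

# (customers affected, outage hours) per event; illustrative numbers only
baseline = [(500, 1.0), (200, 2.5)]
under_dos = [(500, 1.6), (200, 3.4)]   # same faults, slower restoration

N = 10_000                              # customers served
print("baseline: SAIFI=%.3f SAIDI=%.3f" % (saifi(baseline, N), saidi(baseline, N)))
print("DoS:      SAIFI=%.3f SAIDI=%.3f" % (saifi(under_dos, N), saidi(under_dos, N)))
```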
This paper proposes a methodology to assess cyber-related risks and to identify critical assets at both the power grid and substation levels. The methodology is based on a two-pass engine model. The first-pass engine identifies the most critical substation(s) in a power grid; a mixture of the Analytic Hierarchy Process (AHP) and (N-1) contingency analysis is used to calculate risks. The second-pass engine identifies risky assets within a substation and reduces the vulnerability of the substation against intrusion and malicious acts by cyber hackers. The risk methodology uniquely combines asset reliability, vulnerability, and cost of attack into a risk index. A methodology is also presented to improve the overall security of a substation by optimally placing security agent(s) on the automation system.
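A hedged sketch of the AHP building block used in the first pass: a pairwise comparison matrix over candidate substations is reduced to a priority vector via its principal eigenvector, with Saaty's consistency ratio as a sanity check. The comparison values are illustrative, not the paper's.

```python
import numpy as np

# pairwise comparison matrix over three substations (illustrative judgments)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                          # priority vector (criticality weights)

lam = vals[k].real
ci = (lam - len(A)) / (len(A) - 1)    # consistency index
cr = ci / 0.58                        # random index for n=3 is 0.58 (Saaty)
print("priorities:", w, "consistency ratio:", cr)  # accept if CR < 0.1
```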
An IP-spoofing-based DDoS attack relies on multiple compromised hosts in the network to attack the victim. With IP spoofing, IP addresses can be forged easily, which makes it difficult to filter illegitimate packets from legitimate ones in aggregated traffic. A number of mitigation techniques have been proposed in the literature by various researchers. Research on conventional Hop Count Filtering (HCF) and probabilistic HCF indicates problems of high computational time and a low detection rate of illegitimate packets. In this paper, the DPHCF-RTT technique, a distributed probabilistic HCF scheme based on round-trip time (RTT), has been implemented and analysed for a variable number of hops. The goal is to overcome the limitations of conventional and probabilistic HCF by maximizing the detection rate of illegitimate packets and reducing the computation time. The technique runs on an intermediate system, which has the advantage of avoiding network bandwidth exhaustion and the depletion of host resources. MATLAB 7 has been used for simulations. Mitigating DDoS attacks with the DPHCF-RTT technique has shown a maximum detection rate of up to 99% of malicious packets.
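For background, here is a minimal sketch of the classic Hop Count Filtering idea that DPHCF-RTT builds on: the hop count is inferred from the received TTL under the assumption of a common initial TTL, and a packet whose inferred hop count disagrees with the value previously learned for its claimed source IP is treated as spoofed. The table entries are illustrative; the paper's distributed, RTT-based refinements are not shown.

```python
def infer_hop_count(observed_ttl):
    """Guess hops from TTL, assuming a common initial TTL (32/64/128/255)."""
    for initial in (32, 64, 128, 255):
        if observed_ttl <= initial:
            return initial - observed_ttl
    return None

# source IP -> hop count, learned during attack-free periods (illustrative)
hop_table = {"203.0.113.7": 14}

def is_spoofed(src_ip, observed_ttl, tolerance=0):
    """Flag a packet whose TTL-inferred hop count contradicts the table."""
    hops = infer_hop_count(observed_ttl)
    known = hop_table.get(src_ip)
    return known is not None and abs(hops - known) > tolerance

print(is_spoofed("203.0.113.7", 64 - 14))   # False: TTL matches stored hops
print(is_spoofed("203.0.113.7", 128 - 3))   # True: spoofer is only 3 hops away
```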
Botnets are among the most widespread and serious malware threats in today's cyber attacks. A botnet is a group of Internet-connected computer programs communicating with other similar programs in order to perform various attacks, and HTTP-based botnets are among the most dangerous of the botnets in circulation today. In botnet detection, behaviour-based approaches in particular suffer from the unavailability of benchmark datasets; this leads to a lack of precise evaluation, comparison, and deployment of botnet detection systems. Most of the datasets in the botnet field come from local environments, cannot be used at large scale due to privacy problems, do not reflect common trends, and lack certain statistical features. To the best of our knowledge, no benchmark dataset is available that is infected by an HTTP-based botnet (HBB) performing Distributed Denial of Service (DDoS) attacks against Web servers using the HTTP-GET flooding method; nor is any botnet-infected Web access log available to researchers. Therefore, in this paper, a complete test-bed is described that implements a real-time HTTP-based botnet performing a variety of DDoS attacks against Web servers using the HTTP-GET flooding method. In addition, Web access logs with HTTP bot traces are generated. These real-time datasets and Web access logs can be useful for studying the behaviour of HTTP-based botnets as well as for evaluating the different solutions proposed by various researchers to detect HTTP-based botnets.
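On the defensive side, a first-cut analysis of such Web access logs might simply count GET requests per source IP and flag abnormally high-rate sources as flood candidates. This is a minimal illustrative sketch over common-log-format lines, not the detection methodology of any particular paper; the threshold is an assumption.

```python
import re
from collections import Counter

GET_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "GET ')

def flag_flooders(log_lines, max_requests=1000):
    """Count GET requests per source IP over the log window; sources far
    above normal rates are candidates for HTTP-GET flooding bots."""
    counts = Counter()
    for line in log_lines:
        m = GET_LINE.match(line)
        if m:
            counts[m.group(1)] += 1
    return [ip for ip, n in counts.items() if n > max_requests]

logs = ['10.0.0.9 - - [01/Jan/2014:10:00:00 +0000] '
        '"GET /index.html HTTP/1.1" 200 512'] * 1500
print(flag_flooders(logs))   # ['10.0.0.9']
```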
This paper proposes a new network-based cyber intrusion detection system (NIDS) using multicast messages in substation automation systems (SASs). The proposed NIDS monitors anomalies and malicious activities in multicast messages based on IEC 61850, e.g., Generic Object Oriented Substation Event (GOOSE) and Sampled Value (SV) messages. The NIDS detects anomalies and intrusions that violate predefined security rules using a specification-based algorithm. Performance tests have been conducted for different cyber intrusion scenarios (e.g., packet modification, replay, and denial-of-service attacks) using a cyber security testbed, and the IEEE 39-bus system model has been used to test the proposed intrusion detection method against simultaneous cyber attacks. The false negative ratio (FNR), i.e., the number of misclassified abnormal packets divided by the total number of abnormal packets, is used as the evaluation metric; the results demonstrate that the proposed NIDS achieves a low FNR.
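To give a flavor of specification-based rules on GOOSE traffic, the sketch below checks the two IEC 61850 counters carried in every GOOSE message: stNum (incremented on a state change) must never decrease, and sqNum (incremented per retransmission) must increase within an unchanged stNum. These two checks are only a minimal illustration of the rule style; the paper's full rule set and message parsing are assumed to happen upstream.

```python
def check_goose(state, msg):
    """Minimal specification-based checks on GOOSE counters (a sketch):
    - stNum must never decrease;
    - sqNum must strictly increase while stNum is unchanged."""
    alerts = []
    if msg["stNum"] < state["stNum"]:
        alerts.append("replayed or out-of-order stNum")
    elif msg["stNum"] == state["stNum"] and msg["sqNum"] <= state["sqNum"]:
        alerts.append("replayed sqNum")
    state.update(stNum=msg["stNum"], sqNum=msg["sqNum"])
    return alerts

state = {"stNum": 4, "sqNum": 7}
print(check_goose(state, {"stNum": 4, "sqNum": 8}))  # [] -> normal retransmission
print(check_goose(state, {"stNum": 4, "sqNum": 8}))  # ['replayed sqNum']
```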
A major issue in securing wireless sensor networks is key distribution. Current key distribution schemes are not fully adapted to tiny, low-cost, and fragile sensors with limited computation capability, reduced memory size, and battery-based power supply. This paper investigates the design of an efficient key distribution and management scheme for wireless sensor networks. The proposed scheme can ensure the generation and distribution of different encryption keys intended to secure individual and group communications. This is performed using elliptic curve public key encryption with a Diffie-Hellman-like key exchange and secret sharing techniques applied at different levels of the network topology. The scheme is more efficient and less complex than existing approaches, due to the reduced communication and processing overheads required to accomplish key exchange. Furthermore, only a few keys of reduced size are managed in sensor nodes, which optimizes memory usage and enhances scalability to large networks.
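As a hedged sketch of the elliptic-curve Diffie-Hellman-like building block (using the Python `cryptography` library; the curve choice and key-derivation parameters are assumptions, and the paper's secret-sharing layers are not shown): two nodes derive the same shared secret from their own private key and the peer's public key, then derive a short symmetric link key from it.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# each sensor node holds an EC key pair (curve choice is an assumption)
node_a = ec.generate_private_key(ec.SECP256R1())
node_b = ec.generate_private_key(ec.SECP256R1())

# ECDH: both sides compute the same shared secret from their private key
# and the peer's public key
secret_a = node_a.exchange(ec.ECDH(), node_b.public_key())
secret_b = node_b.exchange(ec.ECDH(), node_a.public_key())
assert secret_a == secret_b

# derive a short symmetric link key, keeping per-node key storage small
link_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                info=b"wsn-link-key").derive(secret_a)
print(link_key.hex())
```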
Cloud computing delivers services to users over a reliable Internet connection. In a secure cloud, services are stored and shared by multiple users because of the low cost and easy data maintenance; sharing data is a central purpose of cloud data centres. On the other hand, storing sensitive information raises privacy concerns, so the cloud service provider has to protect clients' stored documents and applications by encrypting the data to provide data integrity. Designing efficient document sharing among group members in the cloud is a difficult task because of changes in group membership and the need to preserve the confidentiality of documents and of group users' identities. We propose a fortified scheme for sharing data secretly that uses the Advanced Encryption Standard (AES) to provide efficient group revocation. The proposed system contributes efficient group authorization, authentication, confidentiality, access control, and document security; to further strengthen data security, the AES algorithm is used to encrypt the documents. By asserting security and confidentiality, this method allows documents to be shared securely among multiple cloud users.
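A minimal sketch of AES-based group document sharing (using AES-GCM from the Python `cryptography` library; the rekey-on-revocation policy shown in the comment is a common pattern and an assumption here, not a confirmed detail of the paper's scheme):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

group_key = AESGCM.generate_key(bit_length=256)  # held by current group members
aead = AESGCM(group_key)

def encrypt_document(plaintext: bytes) -> bytes:
    """Encrypt a shared document; the random nonce is prepended to the blob."""
    nonce = os.urandom(12)
    return nonce + aead.encrypt(nonce, plaintext, None)

def decrypt_document(blob: bytes) -> bytes:
    return aead.decrypt(blob[:12], blob[12:], None)

blob = encrypt_document(b"quarterly report")
print(decrypt_document(blob))

# on member revocation, a fresh group key would be generated and documents
# re-encrypted (eagerly or lazily), so revoked members cannot read new versions
```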
Botnet detection represents one of the most crucial prerequisites of successful botnet neutralization. This paper explores how accurate and timely detection can be achieved by using supervised machine learning as the tool for inferring malicious botnet traffic. To that end, the paper introduces a novel flow-based detection system that relies on supervised machine learning to identify botnet network traffic. For use in the system, we consider eight highly regarded machine learning algorithms and indicate the best-performing one. Furthermore, the paper evaluates how much traffic needs to be observed per flow in order to capture the patterns of malicious traffic. The proposed system has been tested through a series of experiments using traffic traces originating from two well-known P2P botnets and diverse non-malicious applications. The results indicate that the system is able to detect botnet traffic accurately and in a timely manner using purely flow-based traffic analysis and supervised machine learning. Additionally, the results show that accurate detection requires monitoring traffic flows for only a limited time period and number of packets per flow. This indicates the strong potential of using the proposed approach within a future on-line detection framework.
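A hedged end-to-end sketch of flow-based supervised classification: flows are summarized as fixed-length feature vectors and fed to a classifier. A random forest is used here as a representative choice (the abstract does not name its eight algorithms or its winner), and the feature set and synthetic data are illustrative stand-ins for labeled botnet/benign traces.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# synthetic stand-in for labeled flow features: duration, packet count,
# byte count, mean packet size, mean inter-arrival time (all assumed)
rng = np.random.default_rng(3)
benign = rng.normal([5.0, 20, 9000, 450, 0.25],
                    [2.0, 8, 3000, 120, 0.10], (500, 5))
botnet = rng.normal([1.0, 6, 800, 130, 0.05],
                    [0.5, 2, 300, 40, 0.02], (500, 5))
X = np.vstack([benign, botnet])
y = np.array([0] * 500 + [1] * 500)   # 0 = benign, 1 = botnet

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["benign", "botnet"]))
```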