Bibliography
Educational software systems have an increasingly significant presence in engineering sciences. They aim to improve students' attitudes and knowledge acquisition, typically through visual representation and simulation of complex algorithms and mechanisms or of hardware systems that are often not available to educational institutions. This paper presents a novel software system for CryptOgraphic ALgorithm visuAl representation (COALA), which was developed to support a Data Security course at the School of Electrical Engineering, University of Belgrade. The system allows users to follow the execution of several complex algorithms (DES, AES, RSA, and Diffie-Hellman) on real-world examples in a detailed step-by-step view with the possibility of forward and backward navigation. Benefits of the COALA system for students are observed through the increase in the percentage of students who passed the exam and in the average exam grade during one school year.
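The entry above describes stepping through algorithms such as Diffie-Hellman one operation at a time. A minimal sketch of the kind of step trace such a tool presents, with toy parameters chosen for illustration (not taken from COALA itself):

```python
# Step-by-step Diffie-Hellman key exchange, recorded as (description, value)
# pairs so each step can be displayed and navigated. The parameters below
# are toy values for illustration only.

def diffie_hellman_steps(p, g, a, b):
    """Return the list of protocol steps for prime p, generator g,
    and private exponents a (Alice) and b (Bob)."""
    steps = []
    A = pow(g, a, p)                       # Alice's public value g^a mod p
    steps.append(("Alice computes A = g^a mod p", A))
    B = pow(g, b, p)                       # Bob's public value g^b mod p
    steps.append(("Bob computes B = g^b mod p", B))
    s_alice = pow(B, a, p)                 # shared secret on Alice's side
    steps.append(("Alice computes s = B^a mod p", s_alice))
    s_bob = pow(A, b, p)                   # shared secret on Bob's side
    steps.append(("Bob computes s = A^b mod p", s_bob))
    assert s_alice == s_bob                # both sides derive the same secret
    return steps

for desc, val in diffie_hellman_steps(p=23, g=5, a=6, b=15):
    print(f"{desc} -> {val}")
```

Recording each step as data, rather than printing directly, is what makes forward and backward navigation possible in a viewer.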
Cyber crime generates huge volumes of log and transactional data, which must be stored and analyzed. It is difficult for forensic investigators, who must spend a great deal of time finding clues in such data. Network forensic analysis involves network traces and the detection of attacks. The traces include Intrusion Detection System and firewall logs, logs generated by network services and applications, and packet captures from sniffers. Because a network generates data for every event, it is difficult for forensic investigators to find clues and analyze the data. Network forensics deals with the monitoring, capturing, recording, and analysis of network traffic for detecting intrusions and investigating them. This paper focuses on data collection from the cyber system and the web browser. FTK 4.0 is discussed for memory forensic analysis and remote system forensics, the results of which are to be used as evidence for aiding investigation.
The need to protect big data, particularly data relating to information security (IS) maintenance (ISM) of an enterprise's IT infrastructure, is shown. Worldwide experience of addressing big data ISM issues is briefly summarized and a big data protection problem statement is formulated. An infrastructure for big data ISM is proposed. New application areas for big data IT, once ISM issues are addressed, are listed in the conclusion.
Keeping up with rapid advances in research in various fields of Engineering and Technology is a challenging task. Decision makers including academics, program managers, venture capital investors, industry leaders and funding agencies not only need to be abreast of the latest developments but also be able to assess the effect of growth in certain areas on their core business. Though analyst agencies like Gartner and McKinsey provide such reports for some areas, thought leaders of all organisations still need to amass data from heterogeneous collections like research publications, analyst reports, patent applications and competitor information to help them finalize their own strategies. Text mining and data analytics researchers have been looking at integrating statistics, text analytics and information visualization to aid the process of retrieval and analytics. In this paper, we present our work on automated topical analysis and insight generation from large heterogeneous text collections of publications and patents. While most of the earlier work in this area provides search-based platforms, ours is an integrated platform for search and analysis. We have presented several methods and techniques that help in analysis and better comprehension of search results. We have also presented methods for generating insights about emerging and popular trends in research along with contextual differences between academic research and patenting profiles. We also present novel techniques for presenting topic evolution that help users understand how a particular area has evolved over time.
The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, it also poses a great challenge to analyze the behavior and glean insights from such complex, large data. In this paper, we introduce LoyalTracker, a visual analytics system to track user loyalty and switching behavior towards multiple search engines from the vast amount of user log data. We propose a new interactive visualization technique (flow view) based on a flow metaphor, which conveys a proper visual summary of the dynamics of user loyalty of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and interviews with domain experts were conducted to demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines.
Interactive visualization provides valuable support for exploring, analyzing, and understanding textual documents. Certain tasks, however, require that insights derived from visual abstractions are verified by a human expert perusing the source text. So far, this problem is typically solved by offering overview-detail techniques, which present different views with different levels of abstractions. This often leads to problems with visual continuity. Focus-context techniques, on the other hand, succeed in accentuating interesting subsections of large text documents but are normally not suited for integrating visual abstractions. With VarifocalReader we present a technique that helps to solve some of these approaches' problems by combining characteristics from both. In particular, our method simplifies working with large and potentially complex text documents by simultaneously offering abstract representations of varying detail, based on the inherent structure of the document, and access to the text itself. In addition, VarifocalReader supports intra-document exploration through advanced navigation concepts and facilitates visual analysis tasks. The approach enables users to apply machine learning techniques and search mechanisms as well as to assess and adapt these techniques. This helps to extract entities, concepts and other artifacts from texts. In combination with the automatic generation of intermediate text levels through topic segmentation for thematic orientation, users can test hypotheses or develop interesting new research questions. To illustrate the advantages of our approach, we provide usage examples from literature studies.
This paper presents a novel visual analytics technique developed to support exploratory search tasks over event data document collections. The technique supports discovery and exploration by clustering results and overlaying cluster summaries onto coordinated timeline and map views. Users can also explore and interact with search results by selecting clusters to filter and re-cluster the data, with animation used to smooth the transition between views. The technique demonstrates a number of advantages over alternative methods for displaying and exploring geo-referenced search results and spatio-temporal data. Firstly, cluster summaries can be presented in a manner that makes them easy to read and scan. Listing representative events from each cluster also helps the process of discovery by preserving the diversity of results. Also, clicking on visual representations of geo-temporal clusters provides a quick and intuitive way to navigate across space and time simultaneously. This removes the need to overload users with the display of too many event labels at any one time. The technique was evaluated with a group of nineteen users and compared with an equivalent text-based exploratory search engine.
Security is becoming a major concern in computing. New techniques are evolving every day; one of these techniques is Hash Visualization, which uses complex randomly generated images for security. These images can also be used to hide data (watermarking). This paper proposes a new technique that improves hash visualization by using genetic algorithms, a search optimization technique based on the evolution of living creatures. The genetic algorithm used was far faster than traditional earlier ones, and it improved hash visualization by evolving the tree used to generate the images, in order to obtain a better and larger tree that generates images with higher security. Security was ensured by calculating the fitness value of each chromosome with a specifically designed algorithm.
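The entry above relies on the standard genetic-algorithm loop of selection, crossover, and mutation driven by a fitness function. A minimal sketch of that loop; the bitstring chromosomes and the count-the-ones fitness below are illustrative stand-ins, not the paper's image-tree encoding or its security-oriented fitness:

```python
# Generic genetic-algorithm loop: fitness-ranked selection, one-point
# crossover, and bit-flip mutation. Chromosome encoding and fitness are
# toy stand-ins for the paper's tree representation.
import random

random.seed(0)

def fitness(chrom):
    # Toy fitness: number of 1-bits (the paper uses a purpose-built
    # security-oriented fitness instead).
    return sum(chrom)

def evolve(pop_size=20, length=16, generations=30, mut_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, length)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < mut_rate)  # bit-flip mutation
                     for b in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The same loop structure applies when chromosomes encode expression trees, as in the paper; only the crossover/mutation operators and the fitness function change.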
Nowadays, the process bus is increasingly used for communication between equipment in substations. In addition to signaling various device statuses using GOOSE messages, it is possible to transmit measured values, which can be used for system diagnostics or other advanced functions. Transmission of such values via Ethernet is well defined in the IEC 61850-9-2 protocol. The paper introduces a tool designed for verification of sampled values generated by various devices using this protocol.
In the pursuit of cyber security for organizations, there are tens of thousands of tools, guidelines, best practices, forensics, platforms, toolkits, diagnostics, and analytics available. However, according to the Verizon 2014 Data Breach Report: "after analysing 10 years of data... organizations cannot keep up with cyber crime-and the bad guys are winning." Although billions are expended worldwide on cyber security, organizations struggle with complexity, e.g., the NISTIR 7628 guidelines for cyber-physical systems are over 600 pages of text, and there is a lack of information visibility. Organizations must bridge the gap between technical cyber operations and business/social priorities, since both sides are essential for ensuring cyber security. Identifying visual structures for information synthesis could help reduce complexity while increasing information visibility within organizations. This paper lays the foundation for investigating such visual structures by first identifying where current visual structures are succeeding or failing. To do this, we examined publicly available analyses related to three types of security issues: 1) epidemic, 2) cyber attacks on an industrial network, and 3) threat of terrorist attack. We found that existing visual structures are largely inadequate for reducing complexity and improving information visibility. However, based on our analysis, we identified a range of different visual structures and their possible trade-offs/limitations in framing strategies for cyber policy. These structures form the basis of evolving visualization to support information synthesis for policy actions, which has rarely been done but is promising based on the efficacy of existing visualizations for cyber incident detection, attacks, and situation awareness.
As the centers of knowledge, discovery, and intellectual exploration, US universities provide appealing cybersecurity targets. Cyberattack origin patterns and relationships are not evident until data is visualized in maps and tested with statistical models. The current cybersecurity threat detection software utilized by University of North Florida's IT department records large amounts of attacks and attempted intrusions by the minute. This paper presents GIS mapping and spatial analysis of cybersecurity attacks on UNF. First, locations of cyberattack origins were detected by geographic Internet Protocol (GEO-IP) software. Second, GIS was used to map the cyberattack origin locations. Third, we used advanced spatial statistical analysis functions (exploratory spatial data analysis and spatial point pattern analysis) and R software to explore cyberattack patterns. The spatial perspective we promote is novel because there are few studies employing location analytics and spatial statistics in cyber-attack detection and prevention research.
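The entry above applies exploratory spatial statistics to attack origins resolved by GeoIP lookup. A sketch of two of the most basic point-pattern statistics of that kind, the mean center and the average nearest-neighbor distance; the coordinates below are made up, and the paper's actual analysis uses R, not this code:

```python
# Exploratory spatial statistics over attack-origin points. The (lat, lon)
# pairs are illustrative placeholders for coordinates a GeoIP lookup
# would return; distances are plain Euclidean in degrees, adequate for a
# sketch but not for real GIS work.
import math

attacks = [(55.75, 37.62), (39.90, 116.40), (40.71, -74.01),
           (51.51, -0.13), (39.91, 116.39)]   # illustrative points

def mean_center(points):
    """Centroid of the point pattern."""
    lat = sum(p[0] for p in points) / len(points)
    lon = sum(p[1] for p in points) / len(points)
    return lat, lon

def avg_nearest_neighbor(points):
    """Mean distance from each point to its closest other point;
    small values indicate clustered attack origins."""
    def d(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    nn = [min(d(p, q) for q in points if q is not p) for p in points]
    return sum(nn) / len(nn)

print(mean_center(attacks))
print(avg_nearest_neighbor(attacks))
```

Comparing the observed nearest-neighbor distance against the value expected for a random pattern is the basis of the clustering tests used in spatial point pattern analysis.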
Most network traffic analysis applications are designed to discover malicious activity by relying only on high-level flow-based message properties. However, to detect security breaches that are specifically designed to target one network (e.g., Advanced Persistent Threats), deep packet inspection and anomaly detection are indispensable. In this paper, we focus on how we can support experts in discovering whether anomalies at message level imply a security risk at network level. In SNAPS (Semantic Network traffic Analysis through Projection and Selection), we provide a bottom-up pixel-oriented approach for network traffic analysis where the expert starts with low-level anomalies and iteratively gains insight into higher-level events through the creation of multiple selections of interest in parallel. The tight integration between visualization and machine learning enables the expert to iteratively refine anomaly scores, making the approach suitable for both post-traffic analysis and online monitoring tasks. To illustrate the effectiveness of this approach, we present example explorations on two real-world data sets for the detection and understanding of potential Advanced Persistent Threats in progress.
Intrusion Detection Systems (IDSs) are an important defense tool against sophisticated and ever-growing network attacks. These systems need to be evaluated against high-quality datasets for correctly assessing their usefulness and comparing their performance. We present an Intrusion Detection Dataset Toolkit (ID2T) for the creation of labeled datasets containing user-defined synthetic attacks. The architecture of the toolkit is provided for examination, and an example of an attack injected into real network traffic is visualized and analyzed. We further discuss the toolkit's ability to create realistic synthetic attacks of high quality and low bias.
Recently, the increase of interconnectivity has led to a rising number of IoT-enabled devices in botnets. Such botnets are currently used for large-scale DDoS attacks. To keep track of these malicious activities, honeypots have proven to be a vital tool. We developed and set up a distributed and highly scalable WAN honeypot with an attached backend infrastructure for sophisticated processing of the gathered data. To make the processed data understandable, we designed a graphical frontend that displays all relevant information obtained from the data. We group attacks originating from one source within a short period of time into sessions. This enriches the data and enables a more in-depth analysis. We produced common statistics like usernames, passwords, username/password combinations, password lengths, originating country and more. From the information gathered, we were able to identify common dictionaries used for brute-force login attacks, as well as more sophisticated statistics like login attempts per session and attack efficiency.
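The session grouping described in the entry above, merging attacks from one source that arrive within a short time of each other, can be sketched as follows. The record fields and the 60-second gap threshold are assumptions for illustration, not the paper's exact parameters:

```python
# Group honeypot login attempts into per-source sessions: consecutive
# attempts from the same source separated by at most GAP seconds belong
# to one session. Field layout and GAP are assumed for this sketch.
from collections import defaultdict

GAP = 60  # max seconds between attempts within one session (assumed)

def sessionize(attempts):
    """attempts: iterable of (timestamp, source_ip, username, password)."""
    by_src = defaultdict(list)
    for t, src, user, pw in sorted(attempts):      # chronological order
        by_src[src].append((t, user, pw))
    sessions = []
    for src, events in by_src.items():
        current = [events[0]]
        for ev in events[1:]:
            if ev[0] - current[-1][0] <= GAP:      # same burst of activity
                current.append(ev)
            else:                                  # gap too large: new session
                sessions.append((src, current))
                current = [ev]
        sessions.append((src, current))
    return sessions

data = [(0, "1.2.3.4", "root", "1234"), (10, "1.2.3.4", "root", "admin"),
        (500, "1.2.3.4", "admin", "admin"), (5, "5.6.7.8", "pi", "raspberry")]
for src, evs in sessionize(data):
    print(src, len(evs))
```

Statistics such as login attempts per session fall out directly from the grouped structure (`len(evs)` per session).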
Malware writers often develop malware with automated measures, so the number of malware samples has increased dramatically. Automated measures tend to repeatedly use significant modules, which form the basis for identifying malware variants and discriminating malware families. Thus, we propose a novel visualization analysis method for researching malware similarity. This method converts malicious Windows Portable Executable (PE) files into local entropy images for observing internal features of malware, and then normalizes local entropy images into entropy pixel images for malware classification. We take advantage of the Jaccard index to measure similarities between entropy pixel images and the k-Nearest Neighbor (kNN) classification algorithm to assign entropy pixel images to different malware families. Preliminary experimental results show that our visualization method can discriminate malware families effectively.
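The pipeline in the entry above, local byte entropy, quantized into pixels, compared with the Jaccard index, and classified by nearest neighbor, can be sketched minimally. The window size, quantization to integer entropy levels, and k=1 are assumptions for brevity, not the paper's parameters:

```python
# Sliding-window byte entropy -> quantized "entropy pixels" -> Jaccard
# similarity -> nearest-neighbor family assignment. Window size and
# quantization are assumed; the paper's exact settings differ.
import math
from collections import Counter

def window_entropy(data, win=32):
    """Shannon entropy per non-overlapping window, quantized to ints."""
    out = []
    for i in range(0, len(data) - win + 1, win):
        counts = Counter(data[i:i + win])
        h = -sum(c / win * math.log2(c / win) for c in counts.values())
        out.append(int(h))                 # quantize entropy level
    return out

def jaccard(a, b):
    """Jaccard index over the sets of (position, level) entropy pixels."""
    sa, sb = set(enumerate(a)), set(enumerate(b))
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def classify_1nn(sample, labeled):
    """labeled: list of (entropy_pixels, family); returns nearest family."""
    return max(labeled, key=lambda lf: jaccard(sample, lf[0]))[1]

fam_a = window_entropy(bytes(range(32)) * 4)   # high-entropy exemplar
fam_b = window_entropy(b"\x00" * 128)          # low-entropy exemplar
sample = window_entropy(bytes(range(32)) * 3 + b"\x00" * 32)
print(classify_1nn(sample, [(fam_a, "family-A"), (fam_b, "family-B")]))
```

The sample shares three of four entropy pixels with the first exemplar and only one with the second, so 1-NN assigns it to family-A.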
Power system simulation environments with appropriate time-fidelity are needed to enable rapid testing of new smart grid technologies and for coupled simulations of the underlying cyber infrastructure. This paper presents such an environment which operates with power system models in the PMU time frame, including data visualization and interactive control action capabilities. The flexible and extensible capabilities are demonstrated by interfacing with a cyber infrastructure simulation.
Due to improvements in defensive systems, network threats are becoming increasingly sophisticated and complex as cybercriminals use various methods to cloak their actions. This includes, among others, the application of network steganography, e.g., to hide the communication between an infected host and a malicious control server by embedding commands into innocent-looking traffic. Recently, a new subtype of such methods, called inter-protocol steganography, has emerged. It utilizes relationships between two or more overt protocols to hide data. In this paper, we present new inter-protocol hiding techniques which are suitable for real-time services. Afterwards, we introduce and present preliminary results of a novel steganography detection approach which relies on network traffic coloring.
Network analysts have long used two-dimensional security visualizations to make sense of overwhelming amounts of network data. As networks grow larger and more complex, two-dimensional displays can become convoluted, compromising user cyber-threat perspective. Using augmented reality to display data with cyber-physical context creates a naturally intuitive interface that helps restore perspective and comprehension sacrificed by complicated two-dimensional visualizations. We introduce Mobile Augmented Reality for Cybersecurity, or MARCS, as a platform to visualize a diverse array of data in real time and space to improve user perspective and threat response. Early work centers around CovARVT and ConnectAR, two proof of concept, prototype applications designed to visualize intrusion detection and wireless association data, respectively.
This paper demonstrates how the Insider Threat Cybersecurity Framework (ITCF) web tool and methodology help provide a more dynamic, defense-in-depth security posture against insider cyber and cyber-physical threats. ITCF includes over 30 cybersecurity best practices to help organizations identify, protect against, detect, respond to, and recover from sophisticated insider threats and vulnerabilities. The paper tests the efficacy of this approach and helps validate and verify ITCF's capabilities and features through various insider attack use-cases. Two case studies were explored to determine how organizations can leverage ITCF to increase their overall security posture against insider attacks. The paper also highlights how ITCF facilitates implementation of the goals outlined in two Presidential Executive Orders to improve the security of classified information and help owners and operators secure critical infrastructure. In realization of these goals, ITCF: provides an easy-to-use rapid assessment tool to perform an insider threat self-assessment; determines the current insider threat cybersecurity posture; defines investment-based goals to achieve a target state; connects the cybersecurity posture with business processes, functions, and continuity; and finally, helps develop plans to answer critical organizational cybersecurity questions. In this paper, the webtool and its core capabilities are tested by performing an extensive comparative assessment over two different high-profile insider threat incidents.