Biblio
The need for cybersecurity knowledge and skills is constantly growing as our lives become more integrated with the digital world. In order to meet this demand, educational institutions must continue to innovate within the field of cybersecurity education and make this educational process as effective and efficient as possible. We seek to accomplish this goal by taking an existing cybersecurity educational technology and adding automated grading and assessment functionality to it. This will reduce costs and maximize scalability by reducing or even eliminating the need for human graders.
This paper introduces a secured and distributed Big Data storage scheme with multiple authorizations. It divides the Big Data into small chunks and distributes them across multiple Cloud locations. Shamir's Secret Sharing and the Secure Hash Algorithm are employed to provide security and authenticity. The proposed methodology consists of two phases: distribution and retrieval. The distribution phase comprises dividing, encrypting, and distributing operations; the retrieval phase performs collecting and verifying operations. To raise the security level, the encryption key is divided into secret shares using Shamir's algorithm. Moreover, the Secure Hash Algorithm is used to verify the Big Data after it is retrieved from the Cloud. The experimental results show that the proposed design can reconstruct distributed Big Data quickly while preserving its security and authenticity properties.
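The abstract above combines Shamir's Secret Sharing for the encryption key with hash digests for chunk verification. A minimal Python sketch of those two building blocks follows; the prime field, share counts, and chunk contents are illustrative assumptions, not the paper's actual parameters.

```python
import hashlib
import random

# Illustrative prime field; real deployments would use appropriately sized parameters.
PRIME = 2**127 - 1

def split_secret(secret, n_shares, threshold):
    """Split `secret` into n_shares Shamir shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, n_shares + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct_secret(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

# Digest each chunk before distribution so integrity can be checked on retrieval.
chunks = [b"chunk-0 ...", b"chunk-1 ...", b"chunk-2 ..."]
digests = [hashlib.sha256(c).hexdigest() for c in chunks]

key = random.randrange(PRIME)               # stand-in for the data-encryption key
shares = split_secret(key, n_shares=5, threshold=3)
assert reconstruct_secret(shares[:3]) == key
assert all(hashlib.sha256(c).hexdigest() == d for c, d in zip(chunks, digests))
```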
With the recent boom in the cryptocurrency market, hackers have been on the lookout for novel ways of commandeering users' machines for covert and stealthy mining operations. In an attempt to expose such under-the-hood practices, this paper explores the issue of browser cryptojacking, whereby miners are secretly deployed inside browser code without the user's knowledge. To this end, we analyze the top 50k websites from Alexa and find a noticeable percentage of sites indulging in this exploitative practice, often using heavily obfuscated code. Furthermore, mining-prevention plug-ins, such as NoMiner, fail to flag such cleverly concealed instances. Hence, we propose a machine learning solution based on hardware-assisted profiling of browser code in real time. A fine-grained micro-architectural footprint allows us to classify mining applications with >99% accuracy and to flag them even when the mining code has been heavily obfuscated or encrypted. We build our own browser extension and show that it outperforms other plug-ins. The proposed design has negligible overhead on the user's machine and works on all standard off-the-shelf CPUs.
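The detection idea above pairs hardware-assisted profiling with a classifier. The sketch below illustrates only the classification step over hypothetical micro-architectural features; the feature names, distributions, and the random-forest choice are assumptions, not the paper's actual profiler or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-interval features: [instructions, cache_misses, branch_misses, llc_loads].
benign = rng.normal(loc=[1.0, 0.05, 0.02, 0.3], scale=0.05, size=(500, 4))
mining = rng.normal(loc=[1.4, 0.30, 0.01, 0.9], scale=0.05, size=(500, 4))

X = np.vstack([benign, mining])
y = np.array([0] * 500 + [1] * 500)        # 1 = page runs an in-browser miner
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```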
In order to protect individuals' privacy, data have to be "well-sanitized" before being shared, i.e. any personal information must be removed. However, it is not always clear when data should be deemed well-sanitized. In this paper, we argue that the evaluation of sanitized data should be based on whether the data allow the inference of sensitive information specific to an individual, instead of being centered on the concept of re-identification. We propose a framework to evaluate the effectiveness of different sanitization techniques on a given dataset by measuring how much an individual's record in the sanitized dataset influences the inference of his/her own sensitive attribute. Our intent is not to accurately predict any sensitive attribute but rather to measure the impact of a single record on the inference of sensitive information. We demonstrate our approach by sanitizing two real datasets under different privacy models and evaluating/comparing each sanitized dataset in our framework.
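One way to read the proposed measure is as the change in how well a record's own sensitive attribute can be inferred when that record is included versus excluded from the sanitized data. The sketch below is a simplified stand-in for that idea, using a synthetic dataset and logistic regression rather than the paper's actual framework.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical sanitized dataset: quasi-identifiers X and a binary sensitive attribute s.
X = rng.normal(size=(200, 5))
s = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

def influence_on_own_attribute(X, s, idx):
    """How much does record `idx` change the inferred probability of its own sensitive value?"""
    with_rec = LogisticRegression().fit(X, s)
    mask = np.arange(len(s)) != idx
    without_rec = LogisticRegression().fit(X[mask], s[mask])
    p_with = with_rec.predict_proba(X[idx:idx + 1])[0, s[idx]]
    p_without = without_rec.predict_proba(X[idx:idx + 1])[0, s[idx]]
    return p_with - p_without

print("influence of record 0:", influence_on_own_attribute(X, s, idx=0))
```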
A growing body of research on IoT and its implications for security, privacy, the economy, and society has been carried out to inform policy and design. However, ordinary people, who are the citizens and users of these emerging technologies, have rarely been involved in the processes that inform these policies, governance mechanisms, and designs, owing to institutionalised processes that prioritise objective knowledge over subjective knowledge. People's subjective experiences are often discarded. This priority is likely to further widen the gap between people, technology policy, and design as technologies advance towards delegated human agency, which decreases human interfaces in technology-mediated relationships with objects, systems, services, trade, and other (often) unknown third-party beneficiaries. Such a disconnection can have serious implications for policy implementation, especially when it involves human limitations. To address this disconnection, we argue that a space needs to be constructed for people to meaningfully contribute their subjective knowledge, i.e. their experience, to complex technology policies that, in turn, shape their experience and well-being. To this end, our paper contributes the design and pilot implementation of a method to reconnect and involve people in IoT security policymaking and development.
False data injection is an ongoing concern facing power system state estimation. In this work, a neural network is trained to detect the existence of false data in measurements. The proposed approach can make use of historical data, if available, by including them in the training sets of the proposed neural network model. However, the inputs of the perceptron model in this work are the residual elements from state estimation, which are highly correlated. Therefore, their dimension can be reduced while preserving the most informative features of the inputs. To this end, principal component analysis, a data preprocessing technique, is used. This technique is especially efficient for highly correlated data sets, which is the case for power system measurements. The results of the different perceptron models proposed for detection are compared to a simple perceptron that produces results identical to the outlier detection scheme. To generate the training sets, state estimation was run with different false data injected into different measurements of the IEEE 13-bus test system, and the residuals were saved as training inputs. The testing results of the trained network show its good performance in detecting false data in measurements.
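A compact illustration of the preprocessing-plus-detector pipeline described above, using synthetic correlated residuals in place of real state-estimation output; the dimensions, noise levels, and injected bias are assumptions made for the example.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Perceptron
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

# Stand-in for state-estimation residual vectors (highly correlated by construction).
base = rng.normal(size=(1000, 5))
residuals = base @ rng.normal(size=(5, 40))      # 40 correlated residual elements
labels = rng.integers(0, 2, size=1000)           # 1 = false data injected
residuals[labels == 1] += 0.5                    # injected data shifts the residuals

# Reduce the correlated residuals with PCA, then train a simple perceptron detector.
model = make_pipeline(PCA(n_components=5), Perceptron(max_iter=1000))
model.fit(residuals, labels)
print("training accuracy:", model.score(residuals, labels))
```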
An emerging Internet business is residential proxy (RESIP) as a service, in which a provider utilizes hosts within residential networks (in contrast to those running in a datacenter) to relay its customers' traffic, in an attempt to avoid server-side blocking and detection. Despite the prominent roles these services could play in the underground business world, little has been done to understand whether they are indeed involved in cybercrimes and how they operate, due to the challenges in identifying their RESIPs, not to mention any in-depth analysis of them. In this paper, we report the first study on RESIPs, which sheds light on the behaviors and the ecosystem of these elusive gray services. Our research employed an infiltration framework, including our own clients for RESIP services and the servers they visited, to detect 6 million RESIP IPs across 230+ countries and 52K+ ISPs. The observed addresses were analyzed and the hosts behind them were further fingerprinted using a new profiling system. Our effort led to several previously unknown findings about RESIP services. Surprisingly, despite the providers' claim that the proxy hosts join willingly, many proxies run on likely compromised hosts, including IoT devices. By cross-matching the hosts we discovered with labeled PUP (potentially unwanted program) logs provided by a leading IT company, we uncovered various illicit operations performed by RESIP hosts, including illegal promotion, fast fluxing, phishing, malware hosting, and others. We also reverse engineered the RESIP services' internal infrastructures and uncovered their potential rebranding and reselling behaviors. Our research takes the first step toward understanding this new Internet service, contributing to the effective control of its security risks.
Routing on the Internet is defined among autonomous systems (ASes) based on a weak trust model in which ASes are assumed to be honest. While this trust model strengthens connectivity among ASes, it results in an attack surface that malicious entities exploit to hijack routing paths. One such attack is BGP prefix hijacking, in which a malicious AS broadcasts IP prefixes that belong to a target AS, thereby hijacking its traffic. In this paper, we propose RouteChain: a blockchain-based secure BGP routing system that counters BGP hijacking and maintains a consistent view of Internet routing paths. Towards that goal, we leverage the provenance-assurance and tamper-proof properties of blockchains to augment trust among ASes. We group ASes based on their geographical (network) proximity and construct a bihierarchical blockchain model that detects false prefixes before they spread over the Internet. We validate the strengths of our design through simulations and show its effectiveness by drawing a case study with the YouTube hijacking of 2008. Our proposed scheme is a standalone service that can be incrementally deployed without the need for a central authority.
Much recent work focuses on finding bugs and security vulnerabilities in smart contracts written in existing languages. Although this approach may be helpful, it does not address flaws in the underlying programming language, which can facilitate writing buggy code in the first place. We advocate a re-thinking of the blockchain software engineering tool set, starting with the programming language in which smart contracts are written. In this paper, we propose and justify requirements for a new generation of blockchain software development tools. New tools should (1) consider users' needs as a primary concern; (2) seek to facilitate safe development by detecting relevant classes of serious bugs at compile time; and (3) be as blockchain-agnostic as possible, given the wide variety of blockchain platforms available, while leveraging the properties common to blockchain environments to improve safety and developer effectiveness.
Security vulnerabilities and software defects are prevalent in software systems, threatening every aspect of cyberspace. The complexity of modern software makes systems hard to secure. Security vulnerabilities and software defects have become a major target of cyberattacks, which can lead to significant consequences. Manual identification of vulnerabilities and defects in software systems is very time-consuming and tedious. Many tools have been designed to help analyze software systems and discover vulnerabilities and defects; however, these tools tend to miss various types of bugs. The bugs not caught by these tools usually include vulnerabilities and defects that are too complicated to find or that do not fall within an existing rule set for identification. It was hypothesized that these undiscovered vulnerabilities and defects do not occur randomly; rather, they share certain common characteristics. A methodology was proposed to estimate the probability of a bug existing in a given code structure. We used a comprehensive experimental evaluation to assess the methodology and report our findings.
The era of information technology has, unfortunately, contributed to a tremendous rise in the number of criminal activities. However, digital artifacts can be utilized to convict cybercriminals and expose their activities. Digital forensics is the science concerned with all aspects of cybercrime; it seeks digital evidence, following standard methodologies, so that the evidence can be admitted in courtrooms. This paper focuses on memory forensics because of the unique artifacts memory holds. Memory contains information about the current state of systems and applications; moreover, an application's data explains how a criminal has been interacting with the application just before the memory is acquired. Memory forensics at the application level is currently ad hoc and cumbersome, and targeted analysis of specific applications is what forensic researchers and practitioners are currently striving to provide. This paper suggests a general solution for investigating any application. Our solution utilizes an application's data structures and variable information in the investigation process, because an application's data has to be stored and retrieved by means of variables. Data structures and variable information can be generated by compilers for debugging purposes. We show that an application's information is a valuable resource for the investigator.
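As a rough illustration of using data-structure and variable information during a memory investigation, the sketch below carves a hypothetical struct layout (as it might be recovered from compiler debug information) out of raw dump bytes; the layout, field names, and offsets are invented for the example and are not from the paper.

```python
import struct

# Hypothetical layout recovered from debug information:
#   struct session { uint32_t id; uint32_t flags; char user[16]; double last_active; };
SESSION_FMT = "<II16sd"
SESSION_SIZE = struct.calcsize(SESSION_FMT)

def carve_sessions(dump, offsets):
    """Interpret raw memory-dump bytes at known offsets using the recovered layout."""
    for off in offsets:
        raw = dump[off:off + SESSION_SIZE]
        sid, flags, user, last_active = struct.unpack(SESSION_FMT, raw)
        yield {"id": sid, "flags": flags,
               "user": user.rstrip(b"\x00").decode(errors="replace"),
               "last_active": last_active}

# Toy dump containing a single fabricated record at offset 0.
dump = struct.pack(SESSION_FMT, 7, 1, b"alice", 1700000000.0)
print(list(carve_sessions(dump, offsets=[0])))
```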
Continued advances in IoT technology have prompted new investigation into its usage for military operations, both to augment and complement existing military sensing assets and support next-generation artificial intelligence and machine learning systems. Under the emerging Internet of Battlefield Things (IoBT) paradigm, current operational conditions necessitate the development of novel security techniques, centered on establishment of trust for individual assets and supporting resilience of broader systems. To advance current IoBT efforts, a collection of prior-developed cybersecurity techniques is reviewed for applicability to conditions presented by IoBT operational environments (e.g., diverse asset ownership, degraded networking infrastructure, adversary activities) through use of supporting case study examples. The research techniques covered focus on two themes: (1) Supporting trust assessment for known/unknown IoT assets; (2) ensuring continued trust of known IoT assets and IoBT systems.
The Scala programming language combines object-oriented and functional programming in one concise, high-level language, and it supports static types that help avoid bugs in complex programs. This paper proposes a dynamic taint analyzer called ScalaTaint for Scala applications. The analyzer traces the propagation of malicious inputs from untrusted sources to sensitive sink methods in programs that can be exploited by adversaries. In this work, we evaluated the accuracy of ScalaTaint with a security benchmark suite including 7 Scala projects. Our analyzer reported 49 vulnerabilities within 753,372 lines of code. Moreover, our performance measurement on ScalaBench shows a 67% runtime overhead, which demonstrates the usefulness and efficiency of our technique in comparison with similar tools.
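ScalaTaint itself targets Scala, but the core source-to-sink idea can be sketched in a few lines: mark data coming from untrusted sources, propagate the mark through operations, and reject marked data at sensitive sinks. The toy example below (in Python, purely for illustration) is not the ScalaTaint implementation.

```python
class Tainted(str):
    """A string subclass used to mark data from untrusted sources."""

def taint(value):
    return Tainted(value)

def concat(a, b):
    """Concatenation keeps the taint if either operand is tainted."""
    result = a + b
    return Tainted(result) if isinstance(a, Tainted) or isinstance(b, Tainted) else result

def sensitive_sink(query):
    if isinstance(query, Tainted):
        raise ValueError("tainted data reached a sensitive sink: " + query)
    print("executing:", query)

user_input = taint("1 OR 1=1")                          # untrusted source
query = concat("SELECT * FROM t WHERE id=", user_input)
sensitive_sink(query)                                   # raises: taint reached the sink
```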
Military technology is ever-evolving to increase the safety and security of soldiers in the field while integrating Internet-of-Things solutions to improve operational efficiency in mission-oriented tasks on the battlefield. Centralized communication, the traditional network model used on battlefields, is vulnerable to denial-of-service attacks and therefore suffers performance hazards. It also introduces a central point of failure, which is why a flexible model that is mobile, resilient, and effective across different scenarios must be proposed. Blockchain offers a distributed platform that allows multiple nodes to update a distributed ledger in a tamper-resistant manner. The decentralized nature of this system suggests that it can be an effective tool for securing data communication among Internet-of-Battlefield Things (IoBT). In this paper, we integrate a permissioned blockchain, namely Hyperledger Sawtooth, into an IoBT context and evaluate its performance with the goal of determining whether it can serve the performance needs of an IoBT environment. Using different testing parameters, the collected metrics help suggest the best parameter set, network configuration, and blockchain usability views in the IoBT context. We show that a blockchain-integrated IoBT platform depends heavily on the characteristics of the underlying network, such as topology, link bandwidth, jitter, and other communication configurations, which can be tuned to achieve optimal performance.
The need for data exchange and storage is currently increasing, and with it the need for data exchange devices and media. One of the most commonly used media for data exchange and storage is the USB flash drive. USB flash drives are widely used because they are easy to carry and offer fairly large storage capacity. Unfortunately, this increased need is not matched by an increase in awareness of device security, both for USB flash drives and for the computers that serve as primary storage devices. This research demonstrates the threats that can arise from the use of USB flash drive devices. The threat used in this research is a fork bomb implemented on an Arduino Pro Micro device presented as a USB flash drive. The purpose of the fork bomb is to degrade the memory performance of the affected device, so that process execution slows down. Using a USB flash drive as an attack vector with the fork bomb method prevents users from accessing the attacked operating system. The results obtained indicate that the USB flash drive can be used as a medium for a fork bomb attack on the Windows operating system.
Humans often use non-verbal gestures (e.g. facial expressions) to express information or emotions, and countless facial gestures are expressed throughout the day. These expressions/emotions can be conveyed through channels such as activities, postures, behaviors, and facial expressions. Extensive research has revealed a strong relationship between these channels and emotions, which merits further investigation. An Automatic Facial Expression Recognition (AFER) framework is proposed in this work that can predict or anticipate the seven universal expressions. To evaluate the proposed approach, the Japanese Female Facial Expression (JAFFE) frontal-face image database is used as input. The images are processed with a frequency-domain technique known as the Discrete Cosine Transform (DCT) and then classified using Artificial Neural Networks (ANN). To check the robustness of this strategy, random trials of K-fold cross-validation, leave-one-out, and person-independent methods are repeated many times to provide an overview of recognition rates. The experimental results demonstrate the promising performance of this application.
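The DCT-plus-ANN pipeline in the abstract can be sketched as follows: keep a low-frequency block of 2D DCT coefficients as a compact feature vector and feed it to a small neural network. The JAFFE images are not bundled here, so the sketch trains on synthetic stand-ins; image size, coefficient block size, and network width are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

def dct_features(image, k=8):
    """Keep the top-left k x k block of 2D DCT coefficients as a compact descriptor."""
    coeffs = dctn(image, norm="ortho")
    return coeffs[:k, :k].ravel()

# Synthetic stand-ins for face images; real experiments would load JAFFE here.
images = rng.normal(size=(210, 64, 64))
labels = rng.integers(0, 7, size=210)          # seven universal expressions

X = np.array([dct_features(img) for img in images])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```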



