Bibliography
This paper reviews the definitions and characteristics of military effects, the Internet of Battlefield Things (IoBT), and their impact on decision processes in a Multi-Domain Operations (MDO) environment. Key aspects of contemporary military decision processes are illustrated, and an MDO Effect Loop decision process is introduced. We examine the concept of IoBT effects and their implications in MDO. These implications suggest that, when MDO is considered as a doctrine, the technological advances of IoBTs enable enhancements to decision frameworks and increase the viability of novel operational approaches and options for military effects.
Continued advances in IoT technology have prompted new investigations into its use for military operations, both to augment and complement existing military sensing assets and to support next-generation artificial intelligence and machine learning systems. Under the emerging Internet of Battlefield Things (IoBT) paradigm, a multitude of operational conditions (e.g., diverse asset ownership, degraded networking infrastructure, adversary activities) necessitates the development of novel security techniques, centered on establishing trust for individual assets and supporting the resilience of broader systems. To advance current IoBT efforts, a set of research directions is proposed that aims to fundamentally address the issues of trust and trustworthiness in contested battlefield environments, building on prior research in the cybersecurity domain. These research directions focus on two themes: (1) supporting trust assessment for known/unknown IoT assets; (2) ensuring continued trust of known IoBT assets and systems.
Federated learning is a novel distributed learning framework in which a deep learning model is trained collaboratively among thousands of participants. Only model parameters are shared between the server and the participants, which prevents the server from directly accessing the private training data. However, we observe that the federated learning architecture is vulnerable to an active attack from insider participants, called a poisoning attack, in which an attacker posing as a benign participant uploads poisoned updates to the server and thereby degrades the performance of the global model. In this work, we study and evaluate a poisoning attack on federated learning systems based on generative adversarial nets (GANs). The attacker first acts as a benign participant and stealthily trains a GAN to mimic prototypical samples from the other participants' training sets, which the attacker does not possess. The attacker then uses these generated samples to produce poisoning updates and compromises the global model by uploading scaled poisoning updates to the server. In our evaluation, we show that the attacker can successfully generate samples resembling those of other benign participants using the GAN, while the global model maintains more than 80% accuracy on both the poisoning tasks and the main tasks.
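To make the attack concrete, here is a minimal numpy sketch of the scaled poisoning-update step; all names are illustrative and a FedAvg-style server is an assumption, not the paper's exact setup. The GAN that synthesizes and mislabels samples mimicking the other participants' data is abstracted into the attacker's local training routine.

```python
import numpy as np

def poisoned_update(global_weights, train_on_poisoned_data, scale):
    """Train on GAN-generated, mislabeled samples, then scale the delta."""
    local_weights = train_on_poisoned_data(global_weights)
    delta = local_weights - global_weights
    # FedAvg averages client deltas over n participants, so scaling by
    # roughly n lets a single attacker's update dominate the round.
    return global_weights + scale * delta

def fedavg(global_weights, client_models):
    """Server side: average the clients' deltas into the global model."""
    deltas = [m - global_weights for m in client_models]
    return global_weights + np.mean(deltas, axis=0)

w = np.zeros(10)
attacker = poisoned_update(w, lambda g: g + 1.0, scale=10)  # toy stand-in for training
print(fedavg(w, [w + 0.1, w + 0.2, attacker])[:3])          # attacker dominates
```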
This Research Work in Progress paper presents a study on improving student learning performance in a virtual hands-on lab system in cybersecurity education. As the demand for cybersecurity-trained professionals rapidly increases, virtual hands-on lab systems have been introduced into cybersecurity education as a tool to enhance student learning. To improve learning in such a system, instructors need to understand: Which learning activities are associated with students' learning performance? What relationships exist between different learning activities? What can instructors do to improve learning outcomes? However, few of these questions have been studied for virtual hands-on labs in cybersecurity education. In this research, we present our recent findings, identifying two learning activities that are positively associated with students' learning performance. Notably, reading lab materials (p < 0.01) plays a more significant role in hands-on learning than working on lab tasks (p < 0.05) in cybersecurity education. In addition, a student who spends more time reading lab materials tends to work longer on lab tasks (p < 0.01).
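The reported relationships can be reproduced in spirit with a short correlation analysis; the sketch below uses synthetic data and hypothetical variable names, not the study's dataset.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
reading = rng.gamma(4.0, 15.0, size=40)             # minutes reading lab materials
tasks = 0.8 * reading + rng.normal(0, 10, size=40)  # minutes working on lab tasks
score = 0.4 * reading + 0.2 * tasks + rng.normal(0, 8, size=40)

for name, activity in [("reading", reading), ("tasks", tasks)]:
    r, p = stats.pearsonr(activity, score)
    print(f"{name} vs. score: r={r:.2f}, p={p:.4f}")
r, p = stats.pearsonr(reading, tasks)
print(f"reading vs. tasks: r={r:.2f}, p={p:.4f}")
```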
The spotlight is on cybersecurity education programs to develop a qualified cybersecurity workforce that meets the demands of the professional field. The ACM CCECC (Committee for Computing Education in Community Colleges) is leading the creation of a set of guidelines for associate-degree cybersecurity programs called Cyber2yr, formerly known as CSEC2Y. A task force of community college educators has created a student-competency-focused curriculum that will serve as a global cybersecurity guide for applied (AAS) and transfer (AS) degree programs, aimed at developing a knowledgeable and capable associate-level cybersecurity workforce. Reflecting the importance of the Cyber2yr work, ABET, a nonprofit, non-governmental agency that accredits computing programs, has created accreditation criteria for two-year cybersecurity programs.
Cybersecurity competitions have been shown to be an effective approach for promoting student engagement through active learning in cybersecurity. Players can gain hands-on experience with puzzle-based or capture-the-flag tasks that promote learning. However, novice players with limited prior knowledge of cybersecurity often struggle to find a starting point for a problem and become frustrated at an early stage. To enhance student engagement, it is important to study the experiences of novices to better understand their learning needs. To this end, we conducted a four-month longitudinal case study involving 11 undergraduate students participating in a college-level cybersecurity competition, the National Cyber League (NCL). The competition includes two individual games and one team game. Questionnaires and in-person interviews were conducted before and after each game to collect the players' feedback on their experience, learning challenges, and needs, as well as information about their motivation, interests, and confidence levels. The collected data demonstrate that players' primary concern going into these competitions stemmed from a lack of knowledge of cybersecurity concepts and tools, and that players' interest and confidence can be increased through systematic training.
In recent years, more and more testing criteria for deep learning systems have been proposed to ensure system robustness and reliability. These criteria were defined based on different perspectives on diversity. However, there has been no comprehensive investigation of which diversities are most essential for a testing criterion for deep learning systems. Therefore, in this paper, we conduct an empirical study to investigate the relation between test diversity and the erroneous behaviors of deep learning models. We define five metrics to reflect diversity in neuron activities, leverage metamorphic testing to detect erroneous behaviors, and investigate the correlation between the metrics and erroneous behaviors. We go a step further and measure the quality of test suites under the guidance of the defined metrics. Our results provide comprehensive insights into the diversities that are essential for a testing criterion to exhibit good fault-detection ability.
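Since the abstract does not spell out the five metrics, the sketch below shows one representative way to quantify diversity in neuron activities (neuron coverage plus the number of distinct binary activation patterns); it illustrates the general idea, not the paper's definitions.

```python
import numpy as np

def activation_diversity(activations, threshold=0.5):
    """activations: array of shape (n_inputs, n_neurons)."""
    fired = activations > threshold                   # binarized patterns
    coverage = fired.any(axis=0).mean()               # fraction of neurons ever fired
    distinct = len({row.tobytes() for row in fired})  # unique activation patterns
    return coverage, distinct / len(fired)

acts = np.random.rand(100, 64)   # stand-in for one layer's activations on a test suite
cov, diversity = activation_diversity(acts)
print(f"coverage={cov:.2f}, pattern diversity={diversity:.2f}")
```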
The high penetration of third-party intellectual property (3PIP) brings a high risk of malicious inclusions and data leakage in products due to planted hardware Trojans, and system-level security constraints have recently been proposed to protect MPSoCs against hardware Trojans. However, secret communication can still be established in the context of the proposed security constraints, so another type of security constraint is introduced to fully prevent such malicious inclusions. In addition, fulfilling the security constraints incurs serious schedule-length overhead, so a two-stage performance-constrained task-scheduling algorithm is proposed to maintain most of the security constraints. In the first stage, the schedule length is iteratively reduced by assigning sets of adjacent tasks to the same core after calculating the maximum-weight independent set of a graph consisting of all timing-critical paths. In the second stage, tasks are assigned to proper IP vendors and scheduled into time periods while minimizing the number of cores required. The experimental results show that our work reduces the schedule length of a task graph while violating only a small number of security constraints.
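The first-stage selection of non-conflicting task merges can be illustrated with a greedy maximum-weight independent-set heuristic; node names, weights, and the greedy rule here are illustrative, and the paper may compute the set exactly. Each node is a candidate merge of two adjacent tasks on a timing-critical path, its weight is the schedule-length reduction from mapping the pair onto one core, and edges connect conflicting merges.

```python
import networkx as nx

def greedy_mwis(g):
    chosen, banned = [], set()
    # Weight over (degree + 1) favors heavy, loosely connected candidates.
    for node in sorted(g.nodes, key=lambda n: g.nodes[n]["w"] / (g.degree[n] + 1), reverse=True):
        if node not in banned:
            chosen.append(node)
            banned.add(node)
            banned.update(g.neighbors(node))
    return chosen

g = nx.Graph()
g.add_nodes_from([("t1t2", {"w": 5}), ("t2t3", {"w": 3}), ("t4t5", {"w": 4})])
g.add_edge("t1t2", "t2t3")   # both merges involve task t2, so they conflict
print(greedy_mwis(g))        # -> ['t4t5', 't1t2']
```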
The extensive increase in the number of IoT devices and the massive data they generate and send to the cloud strain the cloud's ability to handle them. Further, some IoT devices are latency-sensitive, which makes it harder for distant clouds to serve IoT needs in a timely manner. A new technology named "fog computing" has emerged as a solution to these problems. Fog computing relies on nearby computational devices to handle part of the conventional cloud load. However, fog computing introduces additional problems related to the trustworthiness and safety of those devices, and previously suggested architectures have not considered this problem. In this paper we present a novel self-configuring fog architecture designed to support IoT networks with security and trust in mind. We realize the concept of moving-target defense by mobilizing the applications inside the fog using live migration. Performance evaluations using a benchmark for mobilized applications show that the added overhead of live migration is very small, making it deployable in real scenarios. Finally, we present a mathematical model to estimate the survival probabilities of both static and mobile applications within the fog. This work can also be extended to other systems such as mobile ad hoc networks (MANETs) or vehicular cloud computing (VCC).
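A minimal Monte Carlo version of the static-versus-mobile survival comparison is sketched below; the intrusion model and every parameter are assumptions for illustration, not the paper's mathematical model. Each step the attacker advances its intrusion into the application's current host with probability p, and k accumulated advances compromise the host; live migration every `period` steps resets the attacker's progress.

```python
import random

def survival_probability(steps=50, p=0.3, k=4, period=5, mobile=True, trials=20_000):
    survived = 0
    for _ in range(trials):
        progress = 0
        for t in range(1, steps + 1):
            progress += random.random() < p    # did the attacker advance this step?
            if progress >= k:
                break                          # host compromised, app lost
            if mobile and t % period == 0:
                progress = 0                   # migration resets the attack
        else:                                  # no break: app survived all steps
            survived += 1
    return survived / trials

print("static:", survival_probability(mobile=False))  # close to 0
print("mobile:", survival_probability(mobile=True))   # roughly 0.7 here
```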
Cyber-physical systems are an integral component of weapons, sensors, and autonomous vehicles, as well as of cyber assets directly supporting tactical forces. The mission resilience of tactical networks affects command and control, which is essential for successful military operations. Traditional engineering methods for mission assurance will not scale during battlefield operations. Commanders need useful mission-resilience metrics to help them evaluate the ability of cyber assets to recover from incidents and fulfill mission-essential functions. We develop six cyber-resilience metrics for tactical network architectures. We also illuminate how psychometric modeling is necessary for future research to identify resilience metrics that are both applicable to the dynamic mission state and meaningful to commanders and planners.
Hardware implementations of many of today's applications, such as those in the automotive, telecommunication, bio, and security domains, require heavy, repeated computations and concurrency in their execution. These requirements are not easily satisfied by existing embedded systems. This paper proposes an embedded system architecture that is enhanced by an array of accelerators and a bussing system that enables the accelerators to operate concurrently. The architecture can be statically configured for a specific application. The embedded system architecture and the architecture of the configurable accelerators are discussed, and a case study examines an automotive application running on the proposed system.
Mixed-criticality scheduling theory (MCSh) was developed to allow more resource-efficient implementation of systems comprising components whose correctness must be validated at different levels of assurance. As originally defined, MCSh deals exclusively with pre-runtime verification of such systems; hence many mixed-criticality scheduling algorithms exhibit rather poor survivability characteristics during run-time. (For example, MCSh allows less-important ("LO-criticality") workloads to be discarded entirely if run-time behavior violates the assumptions under which the correctness of the LO-criticality workload was verified.) Here we seek to extend MCSh to incorporate survivability considerations by proposing quantitative metrics for the robustness and resilience of mixed-criticality scheduling algorithms. Such metrics allow us to make quantitative assertions regarding the survivability characteristics of these algorithms and to compare different algorithms from the perspective of their survivability. We propose that MCSh seek scheduling algorithms with superior survivability characteristics, improving on current algorithms, which were developed within a survivability-agnostic framework that focuses exclusively on pre-runtime verification and ignores survivability entirely.
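As a toy illustration of what a quantitative robustness metric might look like (a deliberately simplified proxy, not the paper's definitions), one can ask by what factor HI-criticality execution times may inflate before the LO workload becomes unschedulable under uniprocessor EDF, where total utilization at most 1 is an exact test for implicit-deadline tasks. Task parameters below are hypothetical.

```python
def robustness(tasks):
    """tasks: list of (wcet, period, criticality); max tolerable HI inflation factor."""
    u_lo = sum(c / t for c, t, crit in tasks if crit == "LO")
    u_hi = sum(c / t for c, t, crit in tasks if crit == "HI")
    if u_hi == 0:
        return float("inf")
    return (1.0 - u_lo) / u_hi   # solve u_lo + f * u_hi = 1 for f

tasks = [(1, 4, "LO"), (2, 10, "LO"), (1, 5, "HI")]
print(f"HI execution times may inflate by x{robustness(tasks):.2f}")  # x2.75
```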
We propose a serverless computing mechanism for distributed computation based on polar codes. Serverless computing is an emerging cloud-based computation model that lets users run their functions on the cloud without provisioning or managing servers. Our proposed approach is a hybrid computing framework that carries out computationally expensive tasks, such as linear-algebraic operations on large-scale data, using serverless computing and performs the rest of the processing locally. We address the limitations and reliability issues of serverless platforms, such as straggling workers, using coding theory, drawing on recent literature on coded computation. The proposed mechanism uses polar codes to ensure straggler resilience in a computationally effective manner, and we provide extensive evidence that polar codes outperform other coding methods in this setting. We design a sequential decoder specifically for polar codes over erasure channels with full-precision inputs and outputs, and we extend the proposed method to the matrix-multiplication case in which both matrices being multiplied are coded. The proposed coded computation scheme is implemented for AWS Lambda, and experimental results are presented in which its performance is tested on optimization via gradient descent. Finally, we introduce the idea of partial polarization, which reduces the computational burden of encoding and decoding at the expense of straggler resilience.
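The coded-computation idea can be sketched with a real-valued polar transform: row-blocks of A are encoded, each serverless worker multiplies one coded block by x, and A @ x is recovered from the workers that respond. The dimensions, frozen positions, and least-squares decoding shortcut below are illustrative; the paper designs a dedicated sequential erasure decoder instead.

```python
import numpy as np

F = np.array([[1.0, 0.0], [1.0, 1.0]])
G = np.kron(F, F)                    # 4x4 real-valued polar transform (N=4)
data_idx = [2, 3]                    # "reliable" positions carry the data
A = np.random.randn(8, 5)            # split into k=2 row-blocks of 4 rows each
x = np.random.randn(5)

blocks = np.zeros((4, 4, 5))         # u-vector of blocks; frozen blocks stay 0
blocks[data_idx] = A.reshape(2, 4, 5)
coded = np.einsum("ij,jrc->irc", G.T, blocks)   # one coded block per worker

# Each worker returns its coded block times x; suppose worker 1 straggles.
results = {i: coded[i] @ x for i in [0, 2, 3]}
rows = sorted(results)
sub = G.T[rows][:, data_idx]         # linear system seen by the decoder
y, *_ = np.linalg.lstsq(sub, np.stack([results[i] for i in rows]), rcond=None)
assert np.allclose(y.reshape(-1), A @ x)   # A @ x recovered despite a straggler
```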
Matrix factorization (MF) has proved to be an effective approach for building successful recommender systems. However, most current MF-based recommenders cannot achieve high prediction accuracy due to the sparseness of the user-item matrix, and they suffer from scalability issues when applied to large-scale real-world tasks. To tackle these issues, this paper proposes a social regularization method called TrustRSNMF that incorporates users' social trust information into the nonnegative matrix factorization framework. The proposed method integrates trust statements, along with user-item ratings, as an additional information source in the recommendation model to deal with data sparsity and cold-start issues. To evaluate the effectiveness of the proposed method, a number of experiments are performed on two real-world datasets. The obtained results demonstrate significant improvements of the proposed method over state-of-the-art recommendation methods.
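A minimal sketch of trust-regularized nonnegative matrix factorization is given below; the objective, update rule, and all names are illustrative rather than TrustRSNMF's exact formulation. Observed ratings R are factorized as U @ V.T, and a social term pulls each user's latent vector toward the trust-weighted average of the users they trust.

```python
import numpy as np

def trust_nmf(R, T, k=2, lam=0.1, lr=0.01, iters=2000):
    mask = R > 0
    rng = np.random.default_rng(0)
    U = rng.random((R.shape[0], k))
    V = rng.random((R.shape[1], k))
    T = T / np.maximum(T.sum(axis=1, keepdims=True), 1e-12)  # row-normalize trust
    for _ in range(iters):
        E = mask * (U @ V.T - R)     # prediction error on observed cells only
        S = U - T @ U                # deviation from trusted neighbors
        U -= lr * (E @ V + lam * (S - T.T @ S))    # gradient of 0.5*||(I-T)U||^2
        V -= lr * (E.T @ U)
        U, V = np.maximum(U, 0), np.maximum(V, 0)  # projection keeps factors nonnegative
    return U, V

R = np.array([[5, 0, 3], [4, 2, 0], [0, 1, 5.]])   # 0 marks a missing rating
T = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0.]])   # who trusts whom
U, V = trust_nmf(R, T)
print(np.round(U @ V.T, 1))                         # filled-in rating estimates
```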
We introduce the strictly in-order core (SIC), a timing-predictable pipelined processor core. SIC is provably timing compositional and free of timing anomalies. This enables precise and efficient worst-case execution time (WCET) and multi-core timing analysis. SIC's key underlying property is the monotonicity of its transition relation w.r.t. a natural partial order on its microarchitectural states. This monotonicity is achieved by carefully eliminating some of the dependencies between consecutive instructions from a standard in-order pipeline design. SIC preserves most of the benefits of pipelining: it is only about 6-7% slower than a conventional pipelined processor. Its timing predictability enables orders-of-magnitude faster WCET and multi-core timing analysis than conventional designs.
Answer selection is one of the most important tasks in automatic question answering systems; it aims to automatically find accurate answers to questions. Traditional methods for this task use manually engineered features based on tf-idf and n-gram models to represent texts, and then select the right answers according to the similarity between the representations of questions and candidate answers. Many modern question answering systems instead adopt deep neural networks, such as convolutional neural networks (CNNs), to generate text features automatically, obtaining better performance than traditional methods. A CNN extracts consecutive n-gram features of fixed length by sliding fixed-length convolutional kernels over the whole word sequence. However, due to the complex semantic compositionality of natural language, many phrases have variable lengths and are composed of non-consecutive words, i.e., phrases whose constituents are separated by other words within the same sentence. Traditional CNNs cannot extract such variable-length or non-consecutive n-gram features. In this paper, we propose a multi-scale deformable convolutional neural network that captures non-consecutive n-gram features by adding offsets to the convolutional kernels, and we stack multiple deformable convolutional layers to mine multi-scale n-gram features, with higher layers generating longer n-grams. We apply the proposed model to the answer selection task, and experimental results on a public dataset demonstrate its effectiveness.
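The core mechanism, a one-dimensional deformable convolution that gathers kernel taps at learned, possibly non-consecutive positions, can be sketched in PyTorch as follows; this is a simplified illustration under assumed shapes, not the paper's exact architecture. A small convolution predicts a real-valued offset for each kernel tap at each position, and taps are read at the shifted positions with linear interpolation, so the "n-gram" need not be consecutive.

```python
import torch
import torch.nn as nn

class DeformableConv1d(nn.Module):
    def __init__(self, dim, k=3):
        super().__init__()
        self.k = k
        self.offset = nn.Conv1d(dim, k, kernel_size=k, padding=k // 2)  # per-tap offsets
        self.weight = nn.Conv1d(dim * k, dim, kernel_size=1)            # combine taps

    def forward(self, x):                      # x: (batch, dim, seq_len)
        b, d, n = x.shape
        off = self.offset(x)                   # (b, k, n) learned offsets
        base = torch.arange(n, device=x.device).float()
        taps = []
        for i in range(self.k):
            pos = (base + (i - self.k // 2) + off[:, i]).clamp(0, n - 1)  # (b, n)
            lo, hi = pos.floor().long(), pos.ceil().long()
            frac = (pos - lo.float()).unsqueeze(1)                        # (b, 1, n)
            gather = lambda idx: x.gather(2, idx.unsqueeze(1).expand(b, d, n))
            taps.append((1 - frac) * gather(lo) + frac * gather(hi))      # interpolate
        return torch.relu(self.weight(torch.cat(taps, dim=1)))           # (b, dim, n)

emb = torch.randn(2, 64, 20)   # batch of 2 sentences, 20 token embeddings each
layer = DeformableConv1d(64)
print(layer(emb).shape)        # stacking such layers yields multi-scale n-grams
```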