Biblio

Found 773 results

Filters: Keyword is Training
2022-07-05
Parizad, Ali, Hatziadoniu, Constantine.  2021.  Semi-Supervised False Data Detection Using Gated Recurrent Units and Threshold Scoring Algorithm. 2021 IEEE Power & Energy Society General Meeting (PESGM). :01–05.
In recent years, cyber attackers have targeted the power system, inflicting damage on the national economy and public safety. False Data Injection Attack (FDIA) is one of the main types of cyber-physical attacks, in which adversaries manipulate power system measurements and modify system data. Consequently, it may result in incorrect decision-making and control operations and lead to devastating effects. In this paper, we propose a two-stage detection method. In the first step, a Gated Recurrent Unit (GRU), as a deep learning algorithm, is employed to forecast the data for the future horizon. Meanwhile, hyperparameter optimization is implemented to find the optimum parameters (i.e., number of layers, epochs, batch size, β1, β2, etc.) in the supervised learning process. In the second step, an unsupervised scoring algorithm is employed to find the sequences of false data. Furthermore, two penalty factors are defined to prevent the objective function from greedy behavior. We assess the capability of the proposed false data detection method through simulation studies on a real-world data set (ComEd dataset, Northern Illinois, USA). The results demonstrate that the proposed method can detect different types of attacks, i.e., scaling, simple ramp, professional ramp, and random attacks, with good performance metrics (i.e., recall, precision, and F1 score). Furthermore, the proposed deep learning method can mitigate false data by substituting the estimated true values.
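As a rough illustration of the second stage, the sketch below scores forecast residuals against a robust threshold and keeps only sustained runs of anomalous points. It assumes the forecasts come from a separately trained model (the paper uses a GRU), and the threshold, minimum run length, and the paper's two penalty factors are simplified away.

```python
import numpy as np

def threshold_scoring(actual, forecast, k=3.0, min_run=3):
    """Flag runs of measurements whose forecast residual exceeds k robust
    standard deviations -- a simplified stand-in for the paper's
    unsupervised scoring stage (penalty factors omitted)."""
    residual = np.abs(actual - forecast)
    # Robust scale estimate via the median absolute deviation (MAD).
    mad = np.median(np.abs(residual - np.median(residual))) + 1e-9
    score = residual / (1.4826 * mad)
    flagged = score > k
    # Keep only sequences of at least `min_run` consecutive anomalies,
    # mimicking the search for *sequences* of false data.
    runs, start = [], None
    for i, f in enumerate(flagged):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start >= min_run:
                runs.append((start, i))
            start = None
    if start is not None and len(flagged) - start >= min_run:
        runs.append((start, len(flagged)))
    return runs

# Toy usage: a simple ramp attack injected into otherwise accurate forecasts.
rng = np.random.default_rng(0)
true_load = 100 + rng.normal(0, 1, 200)
measured = true_load.copy()
measured[120:140] += np.linspace(0, 15, 20)   # simple ramp attack
print(threshold_scoring(measured, true_load))
```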
2022-04-12
Lavi, Bahram, Nascimento, José, Rocha, Anderson.  2021.  Semi-Supervised Feature Embedding for Data Sanitization in Real-World Events. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2495–2499.
With the rapid growth of data sharing through social media networks, determining which data items are relevant to a particular subject becomes paramount. We address the issue of establishing which images represent an event of interest through a semi-supervised learning technique. The method learns consistent and shared features related to an event (from a small set of examples) and propagates them to an unlabeled set. We investigate the behavior of five image feature representations, considering low- and high-level features and their combinations. We evaluate the effectiveness of the feature embedding approach on five datasets collected from real-world events.
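The propagation idea can be sketched with a generic semi-supervised learner; the snippet below uses scikit-learn's LabelSpreading on synthetic feature vectors as a stand-in for the paper's learned image embeddings, which it does not reproduce.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.semi_supervised import LabelSpreading

# A few labeled event/non-event examples (here, synthetic feature
# vectors) spread their labels to the unlabeled pool.
X, y = make_blobs(n_samples=300, centers=2, random_state=0)
labels = np.full(300, -1)           # -1 marks unlabeled samples
labels[:10] = y[:10]                # only ten labeled examples

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, labels)
print((model.transduction_ == y).mean())   # fraction consistent with truth
```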
2022-08-12
Killedar, Vinayak, Pokala, Praveen Kumar, Sekhar Seelamantula, Chandra.  2021.  Sparsity Driven Latent Space Sampling for Generative Prior Based Compressive Sensing. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2895–2899.
We address the problem of recovering signals from compressed measurements based on generative priors. Recently, generative-model based compressive sensing (GMCS) methods have shown superior performance over traditional compressive sensing (CS) techniques in recovering signals from fewer measurements. However, it is possible to further improve the performance of GMCS by introducing controlled sparsity in the latent space. We propose a proximal meta-learning (PML) algorithm to enforce sparsity in the latent space while training the generator. Enforcing sparsity naturally leads to a union-of-submanifolds model in the solution space. The overall framework is named sparsity driven latent space sampling (SDLSS). In addition, we derive the sample complexity bounds for the proposed model. Furthermore, we demonstrate the efficacy of the proposed framework over the state-of-the-art techniques with application to CS on standard datasets such as MNIST and CIFAR-10. In particular, we evaluate the performance of the proposed method as a function of the number of measurements and the sparsity factor in the latent space using standard objective measures. Our findings show that the sparsity driven latent space sampling approach improves the accuracy and aids in faster recovery of the signal in GMCS.
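Enforcing latent sparsity with a proximal step typically reduces to soft-thresholding. The sketch below shows that operator in isolation, on the assumption that it would be applied to latent codes during generator training; the paper's full proximal meta-learning (PML) loop is considerably more involved.

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of the l1 norm: prox_{lam*||.||_1}(z).
    Applying it to latent codes is one standard way to enforce the kind
    of controlled latent sparsity the SDLSS framework relies on."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

z = np.array([0.9, -0.05, 0.02, -1.3, 0.4])
print(soft_threshold(z, lam=0.1))   # small entries are zeroed out
```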
2022-05-06
Chen, Liiie, Guan, Qihan, Chen, Ning, YiHang, Zhou.  2021.  A StackNet Based Model for Fraud Detection. 2021 2nd International Conference on Education, Knowledge and Information Management (ICEKIM). :328–331.
With the rapid development of e-commerce and the increasing popularity of credit cards, online transactions have become increasingly smooth and convenient. However, many online transactions suffer from credit card fraud, resulting in huge losses every year. Many financial organizations and e-commerce companies are devoted to developing advanced fraud detection algorithms. This paper presents an approach to detecting fraudulent transactions using the IEEE-CIS Fraud Detection dataset provided by Kaggle. Our stacked model is based on Gradient Boosting, LightGBM, CatBoost, and Random Forest. Moreover, implementing StackNet improves the classification accuracy significantly and makes the network architecture expandable. Our final model achieved an AUC of 0.9578 on the training set and 0.9325 on the validation set, demonstrating excellent performance in classifying different transaction types.
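A two-level stack in this spirit can be assembled with scikit-learn alone; the sketch below stacks gradient boosting and a random forest under a logistic-regression meta-learner on synthetic imbalanced data. LightGBM and CatBoost, which the paper also stacks, are omitted here to keep the dependencies standard.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data as a fraud-like stand-in for IEEE-CIS.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.97], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[("gbm", GradientBoostingClassifier(random_state=0)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba", cv=5)
stack.fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_val, stack.predict_proba(X_val)[:, 1]))
```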
2022-09-09
Khan, Aazar Imran, Jain, Samyak, Sharma, Purushottam, Deep, Vikas, Mehrotra, Deepti.  2021.  Stylometric Analysis of Writing Patterns Using Artificial Neural Networks. 2021 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT). :29–35.
Plagiarism checkers have been widely used to verify the authenticity of dissertation and project submissions. However, when non-verbatim plagiarism or online examinations are considered, this practice is not the best solution. In this work, we propose a better authentication system for online examinations that analyzes the submitted text's stylometry for a match with the writing pattern of the author who submitted it. The writing pattern is analyzed over many indicators (i.e., features of one's writing style). This model extracts 27 such features and stores them as the writing pattern of an individual. Stylometric analysis is a better approach to verifying a document's authorship because it does not check for plagiarism but verifies whether the document was written by a particular individual, and hence it completely shuts down the possibility of using text converters or translators. This paper also includes a brief comparative analysis of some simpler algorithms for the same problem statement. These algorithms yield results that vary in precision and accuracy, and the comparison shows that artificial neural networks are the best bet for tackling this problem.
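A few representative indicators of this kind are easy to compute directly from text. The sketch below extracts five illustrative features (the paper uses 27, whose exact definitions are not reproduced here) that could feed an ANN-based verifier.

```python
import re
from collections import Counter

def stylometric_features(text):
    """A handful of illustrative stylometric indicators of a writing
    pattern; the paper extracts 27 such features for its ANN."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = max(len(words), 1)
    counts = Counter(w.lower() for w in words)
    return {
        "avg_word_len": sum(map(len, words)) / n_words,
        "avg_sentence_len": n_words / max(len(sentences), 1),
        "type_token_ratio": len(counts) / n_words,
        "comma_rate": text.count(",") / n_words,
        "function_word_rate": sum(
            counts[w] for w in ("the", "of", "and", "to", "in", "a")) / n_words,
    }

print(stylometric_features("The quick brown fox, as ever, jumps over the lazy dog."))
```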
2022-06-09
Hoarau, Kevin, Tournoux, Pierre Ugo, Razafindralambo, Tahiry.  2021.  Suitability of Graph Representation for BGP Anomaly Detection. 2021 IEEE 46th Conference on Local Computer Networks (LCN). :305–310.
The Border Gateway Protocol (BGP) is in charge of route exchange at the Internet scale. Anomalies in BGP can have several causes (misconfiguration, outages, and attacks) and are classified as either large-scale or small-scale anomalies. Machine learning models are used to analyze and detect anomalies in the complex data extracted from BGP behavior. Two types of data representation can be used inside the machine learning models: a graph representation of the network (graph features) or statistical computations on the data (statistical features). In this paper, we evaluate and compare the accuracy of machine learning models using graph features and statistical features on both large- and small-scale BGP anomalies. We show that statistical features yield better accuracy for large-scale anomalies, whereas graph features increase detection accuracy by 15% for small-scale anomalies and are thus well suited for small-scale BGP anomaly detection.
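Graph features of the kind compared in the paper can be derived from the AS-level topology observed in BGP updates. The snippet below computes a few standard topology statistics with networkx over a toy adjacency list; the paper's exact feature set may differ.

```python
import networkx as nx

def graph_features(as_edges):
    """Topology statistics of the AS-level graph built from BGP updates,
    illustrative of 'graph features' for anomaly detection."""
    g = nx.Graph()
    g.add_edges_from(as_edges)
    return {
        "nodes": g.number_of_nodes(),
        "edges": g.number_of_edges(),
        "density": nx.density(g),
        "avg_clustering": nx.average_clustering(g),
        "max_degree": max(dict(g.degree()).values()),
    }

# Toy AS adjacencies extracted from a window of BGP AS paths.
print(graph_features([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5)]))
```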
2022-08-10
Sooraksa, Nanta.  2021.  A Survey of using Computational Intelligence (CI) and Artificial Intelligence (AI) in Human Resource (HR) Analytics. 2021 7th International Conference on Engineering, Applied Sciences and Technology (ICEAST). :129–132.
Human Resource (HR) analytics has attracted increasing attention over the past decade. This is because the field has adopted data-driven approaches for processing and interpreting human resource data to obtain meaningful insights. The field supports HR decision making, helping to understand why people, organizations, or other business units perform the way they do. Bringing the available tools for decision making and learning from the fields of computational intelligence (CI) and artificial intelligence (AI) to HR creates tremendous opportunities for HR analytics in practical settings. However, applications in this area are still inadequate. This paper serves as a survey of these tools and their applications in HR, covering recruitment, retention, reward, and retirement. An example of using CI and AI for career development and training in the era of disruption is conceptually proposed.
2022-03-23
Slevi, S. Tamil, Visalakshi, P..  2021.  A survey on Deep Learning based Intrusion Detection Systems on Internet of Things. 2021 Fifth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC). :1488–1496.
The integration of intrusion detection systems (IDS) and the Internet of Things (IoT) with deep learning plays a significant role in safety, and security has a strong role to play. Applying detection at the IoT network level decreases time complexity and resource usage. Building on traditional intrusion detection systems (IDS), this research work implements cutting-edge methodologies in the IoT environment. The research covers analysis, design, testing, and execution. Detection of intrusions can be performed by using an advanced deep learning system together with multi-agent techniques. The NSL-KDD dataset is used to test the IoT system. To detect attacks from intruders at the transport layer, the efficiency results rely on advanced deep learning ideas. To increase system performance, multi-agent algorithms can be employed to train communicating agents and to optimize the feedback training process. Advanced deep learning techniques such as CNNs will be researched to boost system performance. The testing part of the work includes a data simulator used to continuously generate data, supporting the search for a suitable deep-learning-based IDS in current IoT network environments without excessive time complexity.
2022-04-13
Deepika, P., Kaliraj, S..  2021.  A Survey on Pest and Disease Monitoring of Crops. 2021 3rd International Conference on Signal Processing and Communication (ICPSC). :156–160.
Maintaining crop health is essential for successful farming, for both yield and product quality. Pests and diseases in crops are a serious problem to be monitored, and they occur at different stages or phases of crop development. Due to the introduction of genetically modified seeds, the natural resistance of crops to pests and diseases is reduced. Major crop losses are due to pest and disease attacks, which damage the leaves, buds, flowers, and fruits of the crops. The affected areas and damage levels of pest and disease attacks are growing rapidly with global climate change, and weather conditions play a major role in such attacks. Naked-eye inspection for pests and diseases is complex and difficult over wide field areas, and taking lab samples to detect disease is likewise an inefficient and time-consuming process. Early identification of diseases is important for taking the actions necessary to prevent crop loss and avoid disease spread. Timely and effective monitoring of crop health is therefore important. Several technologies have been developed to detect pests and diseases in crops. In this paper, we discuss the various technologies implemented using AI and deep learning for pest and disease detection, and we briefly discuss their advantages and limitations for crop monitoring.
2022-02-22
Martin, Peter, Fan, Jian, Kim, Taejin, Vesey, Konrad, Greenwald, Lloyd.  2021.  Toward Effective Moving Target Defense Against Adversarial AI. MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM). :993–998.
Deep learning (DL) models have been shown to be vulnerable to adversarial attacks. DL model security against adversarial attacks is critical to using DL-trained models in forward-deployed systems, e.g., facial recognition, document characterization, or object detection. We provide results and lessons learned from applying a moving target defense (MTD) strategy against iterative, gradient-based adversarial attacks. Our strategy involves (1) training a diverse ensemble of DL models, (2) applying randomized affine transformations to inputs, and (3) randomizing output decisions. We report a primary lesson that this strategy is ineffective against a white-box adversary, which could completely circumvent output randomization using a deterministic surrogate. We reveal how our ensemble models lacked the diversity necessary for effective MTD. We also evaluate our MTD strategy against a black-box adversary employing an ensemble surrogate model. We conclude that an MTD strategy against black-box adversarial attacks crucially depends on a lack of transferability between models.
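Step (2) of the strategy can be illustrated with a small randomized input transform. The sketch below applies a random rotation and translation before inference; the angle and shift ranges are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import rotate

def randomized_affine(image, rng, max_angle=15, max_shift=2):
    """One MTD tactic: a randomized affine input transformation (small
    random rotation plus translation) so a gradient-based attacker
    cannot rely on a fixed preprocessing path."""
    angle = rng.uniform(-max_angle, max_angle)
    out = rotate(image, angle, reshape=False, mode="nearest")
    dx, dy = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(out, shift=(dy, dx), axis=(0, 1))

rng = np.random.default_rng(0)
img = rng.random((28, 28))               # stand-in for a model input
print(randomized_affine(img, rng).shape)
```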
2022-07-05
Schoneveld, Liam, Othmani, Alice.  2021.  Towards a General Deep Feature Extractor for Facial Expression Recognition. 2021 IEEE International Conference on Image Processing (ICIP). :2339–2342.
The human face conveys a significant amount of information. Through facial expressions, the face is able to communicate numerous sentiments without the need for verbalisation. Visual emotion recognition has been extensively studied. Recently, several end-to-end trained deep neural networks have been proposed for this task. However, such models often lack generalisation ability across datasets. In this paper, we propose the Deep Facial Expression Vector ExtractoR (DeepFEVER), a new deep learning-based approach that learns a visual feature extractor general enough to be applied to any other facial emotion recognition task or dataset. DeepFEVER outperforms state-of-the-art results on the AffectNet and Google Facial Expression Comparison datasets. DeepFEVER's extracted features also generalise extremely well to other datasets – even those unseen during training – namely, the Real-World Affective Faces (RAF) dataset.
2022-04-19
Shafique, Muhammad, Marchisio, Alberto, Wicaksana Putra, Rachmad Vidya, Hanif, Muhammad Abdullah.  2021.  Towards Energy-Efficient and Secure Edge AI: A Cross-Layer Framework ICCAD Special Session Paper. 2021 IEEE/ACM International Conference On Computer Aided Design (ICCAD). :1–9.
The security and privacy concerns, along with the amount of data that must be processed on a regular basis, have pushed processing to the edge of computing systems. Deploying advanced Neural Networks (NN), such as deep neural networks (DNNs) and spiking neural networks (SNNs), that offer state-of-the-art results on resource-constrained edge devices is challenging due to stringent memory and power/energy constraints. Moreover, these systems are required to maintain correct functionality under diverse security and reliability threats. This paper first discusses existing approaches to addressing energy efficiency, reliability, and security issues at different system layers, i.e., hardware (HW) and software (SW). Afterward, we discuss how to further improve the performance (latency) and energy efficiency of Edge AI systems through HW/SW-level optimizations, such as pruning, quantization, and approximation. To address reliability threats (like permanent and transient faults), we highlight cost-effective mitigation techniques, like fault-aware training and mapping. Moreover, we briefly discuss effective detection and protection techniques to address security threats (like model and data corruption). Towards the end, we discuss how these techniques can be combined in an integrated cross-layer framework for realizing robust and energy-efficient Edge AI systems.
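Of the optimizations listed, quantization is the most self-contained to sketch. The snippet below performs per-tensor symmetric int8 quantization of a weight matrix; this common scheme is an illustrative choice, not a method the paper specifically proposes.

```python
import numpy as np

def quantize_int8(w):
    """Uniform symmetric post-training quantization of a weight tensor
    to int8, one of the HW/SW-level optimizations surveyed."""
    scale = np.max(np.abs(w)) / 127.0 + 1e-12
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(0, 0.1, (4, 4)).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.max(np.abs(w - dequantize(q, s))))
```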
2022-10-12
Lim, Jaewan, Zhou, Lina, Zhang, Dongsong.  2021.  Verbal Deception Cue Training for the Detection of Phishing Emails. 2021 IEEE International Conference on Intelligence and Security Informatics (ISI). :1–3.
Training on cues to deception is one of the promising ways of addressing humans' poor performance in deception detection. However, the effect of training may be subject to the context of deception and the design of the training. This study aims to investigate the effect of verbal cue training on the performance of phishing email detection by comparing different designs of training and examining the effect of topic familiarity. Based on the results of a lab experiment, we not only confirm the effect of training but also provide suggestions on how to design training to better facilitate the detection of phishing emails. In addition, our results also reveal an effect of topic familiarity on phishing detection. The findings of this study have significant implications for the mitigation of and intervention against online deception.
2022-04-19
Luo, Jing, Xu, Guoqing.  2021.  XSS Attack Detection Methods Based on XLNet and GRU. 2021 4th International Conference on Robotics, Control and Automation Engineering (RCAE). :171–175.
With the progress of science and technology and the development of Internet technology, the Internet has penetrated various industries in today's society. However, this explosive growth also troubles information security. Among the threats, XSS (cross-site scripting) is one of the most influential vulnerabilities in Internet applications in recent years. Traditional network security detection technology is becoming weaker in the new network environment, and deep learning methods such as CNNs and RNNs can learn only the spatial or temporal characteristics of data samples in a single way. In this paper, an XSS attack detection method based on the generalized autoregressive pretraining model XLNet and GRU is proposed: the autoregressive pretrained model XLNet is introduced and combined with GRU to learn the temporal and spatial characteristics of the data, and the generalization capability of the model is improved by using dropout. Faced with increasingly complex and ever-changing XSS payloads, this paper draws on character-level convolution to establish a dictionary for encoding the data samples, thus preserving the characteristics of the original data and improving overall efficiency, and then transforms the samples into a two-dimensional spatial matrix to meet XLNet's input requirements. Experimental results on the GitHub data set show that the accuracy of this method is 99.92 percent and the false positive rate is 0.02 percent; the accuracy is 11.09 percentage points higher than that of the DNN method, the false positive rate is 3.95 percentage points lower, and the other evaluation indicators are better than those of GRU, CNN, and other comparative methods, improving the detection accuracy and stability of the whole detection system. This multi-model fusion method can make full use of the advantages of each model to improve system detection accuracy, while also enhancing the stability of the system.
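The dictionary-based character encoding can be sketched as follows: each payload character is looked up in a fixed alphabet and one-hot encoded into a two-dimensional matrix. The alphabet and maximum length below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Character-level dictionary encoding: each payload character is looked
# up in a fixed alphabet and one-hot encoded, yielding a 2-D matrix
# suitable as model input. Alphabet and max_len are illustrative.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789<>/=\"'();&#%+-_. "
CHAR2IDX = {c: i for i, c in enumerate(ALPHABET)}

def encode_payload(payload, max_len=100):
    mat = np.zeros((max_len, len(ALPHABET)), dtype=np.float32)
    for pos, ch in enumerate(payload.lower()[:max_len]):
        idx = CHAR2IDX.get(ch)
        if idx is not None:          # unknown characters stay all-zero
            mat[pos, idx] = 1.0
    return mat

m = encode_payload('<script>alert("xss")</script>')
print(m.shape, int(m.sum()))
```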
2022-02-22
Wink, Tobias, Nochta, Zoltan.  2021.  An Approach for Peer-to-Peer Federated Learning. 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :150–157.
We present a novel approach for the collaborative training of neural network models in decentralized federated environments. In an iterative process, a group of autonomous peers runs multiple training rounds to train a common model. Participants perform all model training steps locally, such as stochastic gradient descent optimization, using their private, e.g., mission-critical, training datasets. Based on the locally updated models, participants can jointly determine a common model by averaging all associated model weights without sharing the actual weight values. For this purpose we introduce a simple n-out-of-n secret sharing scheme and an algorithm to calculate average values in a peer-to-peer manner. Our experimental results with deep neural networks on well-known sample datasets prove the generic applicability of the approach with regard to model quality parameters. Since there is no need to involve a central service provider in model training, the approach can help establish trustworthy collaboration platforms for businesses with high security and data protection requirements.
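The averaging step can be illustrated with plain additive n-out-of-n secret sharing: each peer splits a weight into n random shares that sum to its value, distributes one share per peer, and only share sums are ever published. The sketch below is a single-weight toy version of this idea, not the paper's exact protocol.

```python
import random

def make_shares(value, n, rng):
    """Additive n-out-of-n secret sharing: n random shares that sum to
    `value`; any n-1 of them reveal nothing about it."""
    shares = [rng.uniform(-1e6, 1e6) for _ in range(n - 1)]
    shares.append(value - sum(shares))
    return shares

def p2p_average(weights):
    """Average one model weight across peers without any peer revealing
    its own value."""
    n = len(weights)
    rng = random.Random(0)
    # shares[i][j] is the share peer i sends to peer j.
    shares = [make_shares(w, n, rng) for w in weights]
    # Each peer j publishes only the sum of the shares it received.
    partial_sums = [sum(shares[i][j] for i in range(n)) for j in range(n)]
    return sum(partial_sums) / n

weights = [0.42, 0.38, 0.47]        # the same weight from three peers
print(p2p_average(weights), sum(weights) / len(weights))
```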
2022-07-15
Zhang, Dayin, Chen, Xiaojun, Shi, Jinqiao, Wang, Dakui, Zeng, Shuai.  2021.  A Differential Privacy Collaborative Deep Learning Algorithm in Pervasive Edge Computing Environment. 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :347–354.

With the development of 5G technology and intelligent terminals, the future direction of Industrial Internet of Things (IIoT) evolution is Pervasive Edge Computing (PEC). In the pervasive edge computing environment, intelligent terminals can perform calculations and data processing. By migrating part of the original cloud computing model's calculations to intelligent terminals, an intelligent terminal can complete model training without uploading local data to a remote server. Pervasive edge computing solves the problem of data islands and has also been successfully applied in scenarios such as vehicle interconnection and video surveillance. However, pervasive edge computing faces great security problems. Even if the remote server is honest but curious, it can still design algorithms for the intelligent terminal to execute and infer sensitive content, such as identity data and private pictures, from the information the terminal returns. In this paper, we study the problem of honest-but-curious remote servers infringing on intelligent terminal privacy and propose a differential privacy collaborative deep learning algorithm for the pervasive edge computing environment. We use a Gaussian mechanism that meets the differential privacy guarantee to add noise on the first layer of the neural network to protect the data of the intelligent terminal, and we use analytical moments accountant technology to track the cumulative privacy loss. Experiments show that, with the Gaussian mechanism, the training data of intelligent terminals can be protected with only a small reduction in accuracy.
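The core differential privacy building block here is gradient clipping followed by calibrated Gaussian noise. The sketch below shows that step in isolation (the clip norm and noise multiplier are illustrative); the paper applies the noise at the first network layer and tracks cumulative loss with a moments accountant, which is not shown.

```python
import numpy as np

def gaussian_mechanism(grad, clip_norm=1.0, sigma=1.1, rng=None):
    """Clip a gradient to bounded l2 norm and add Gaussian noise -- the
    standard DP building block; privacy accounting is a separate step."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, sigma * clip_norm, size=grad.shape)
    return clipped + noise

g = np.array([0.8, -2.4, 1.1])
print(gaussian_mechanism(g, rng=np.random.default_rng(0)))
```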

2022-05-10
Agarkhed, Jayashree, Pawar, Geetha.  2021.  Efficient Security Model for Pervasive Computing Using Multi-Layer Neural Network. 2021 Fourth International Conference on Electrical, Computer and Communication Technologies (ICECCT). :1–6.

In the new technological world, pervasive computing plays an important role in data computing and communication. Pervasive computing provides a mobile environment for decentralized computational services anywhere, anytime, and in any context and location. Pervasive computing is flexible and makes portable devices and the computing that surrounds us part of our daily life. Devices like laptops, smartphones, PDAs, and other portable devices can constitute the pervasive environment. The devices in pervasive environments are deployed worldwide and can receive various communications, including audio-visual services. The users and systems in this pervasive environment face the challenges of user trust, data privacy, and user and device node identity. To give a feasible resolution to these challenges, this paper proposes dynamic learning in the pervasive computing environment and an efficient security model (ESM) for handling trustworthy and untrustworthy attackers. The ESM is also compared with existing generic models and provides a better accuracy rate than the existing models.

2022-06-08
Imtiaz, Sayem Mohammad, Sultana, Kazi Zakia, Varde, Aparna S..  2021.  Mining Learner-friendly Security Patterns from Huge Published Histories of Software Applications for an Intelligent Tutoring System in Secure Coding. 2021 IEEE International Conference on Big Data (Big Data). :4869–4876.

Security patterns are proven solutions to recurring problems in software development. The growing importance of secure software development has motivated diverse research efforts on security patterns, mostly focused on classification schemes and the evolution and evaluation of the patterns. Despite a long, mature history of research and popularity among researchers, security patterns have not fully penetrated software development practices. Moreover, software security education has not benefited from these patterns, even though a commonly stated motivation is the dissemination of expert knowledge and experience. This is because the patterns lack a simple embodiment to help students learn about vulnerable code and to guide new developers in secure coding. In order to address this problem, we propose to conduct intelligent data mining in the context of software engineering to discover learner-friendly software security patterns. Our proposed model entails knowledge discovery from large-scale published real-world vulnerability histories in software applications. We harness association rule mining for frequent pattern discovery to mine easily comprehensible and explainable learner-friendly rules, mainly of the type "flaw implies fix" and "attack type implies flaw", so as to enhance training in secure coding, which in turn would augment secure software development. We propose to build a learner-friendly intelligent tutoring system (ITS) based on the newly discovered security patterns and rules. We present our proposed model based on association rule mining in secure software development with the goal of building this ITS. Our proposed model and prototype experiments are discussed in this paper along with challenges and ongoing work.
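Rules of the "flaw implies fix" form reduce to support and confidence computations over itemized vulnerability records. The sketch below computes both for one rule on made-up records; real mining would run an algorithm such as Apriori over the full published history.

```python
# Minimal support/confidence computation for a "flaw implies fix" rule
# over hypothetical vulnerability records (items are invented labels).
records = [
    {"flaw:sql_injection", "fix:parameterized_query"},
    {"flaw:sql_injection", "fix:parameterized_query"},
    {"flaw:xss", "fix:output_encoding"},
    {"flaw:sql_injection", "fix:input_validation"},
]

def rule_stats(records, antecedent, consequent):
    n = len(records)
    n_a = sum(antecedent in r for r in records)
    n_both = sum(antecedent in r and consequent in r for r in records)
    return {"support": n_both / n,
            "confidence": n_both / n_a if n_a else 0.0}

print(rule_stats(records, "flaw:sql_injection", "fix:parameterized_query"))
```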

2022-02-07
Or-Meir, Ori, Cohen, Aviad, Elovici, Yuval, Rokach, Lior, Nissim, Nir.  2021.  Pay Attention: Improving Classification of PE Malware Using Attention Mechanisms Based on System Call Analysis. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Malware poses a threat to computing systems worldwide, and security experts work tirelessly to detect and classify malware as accurately and quickly as possible. Since malware can use evasion techniques to bypass static analysis and security mechanisms, dynamic analysis methods are more useful for accurately analyzing the behavioral patterns of malware. Previous studies showed that malware behavior can be represented by sequences of executed system calls and that machine learning algorithms can leverage such sequences for the task of malware classification (a.k.a. malware categorization). Accurate malware classification is helpful for malware signature generation and is thus beneficial to antivirus vendors; this capability is also valuable to organizational security experts, enabling them to mitigate malware attacks and respond to security incidents. In this paper, we propose an improved methodology for malware classification, based on analyzing sequences of system calls invoked by malware in a dynamic analysis environment. We show that adding an attention mechanism to an LSTM model improves accuracy for the task of malware classification, thus outperforming the state-of-the-art algorithm by up to 6%. We also show that the transformer architecture can be used to analyze very long sequences with significantly lower time complexity for training and prediction. Our proposed method can serve as the basis for a decision support system for security experts, for the task of malware categorization.
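The attention-over-LSTM idea can be sketched compactly in PyTorch: attention weights over the LSTM's per-call hidden states produce a weighted context vector that feeds the family classifier. Vocabulary size, dimensions, and the simple attention form below are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AttentionLSTMClassifier(nn.Module):
    """LSTM over system-call sequences with attention pooling -- a
    sketch of an attention-augmented malware-family classifier."""
    def __init__(self, n_syscalls=400, emb=64, hidden=128, n_families=10):
        super().__init__()
        self.embed = nn.Embedding(n_syscalls, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, n_families)

    def forward(self, calls):                  # calls: (batch, seq_len) int ids
        h, _ = self.lstm(self.embed(calls))    # (batch, seq_len, hidden)
        w = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # attention weights
        context = (w.unsqueeze(-1) * h).sum(dim=1)          # weighted sum
        return self.head(context)              # per-family logits

model = AttentionLSTMClassifier()
logits = model(torch.randint(0, 400, (8, 200)))  # 8 sequences of 200 calls
print(logits.shape)                              # torch.Size([8, 10])
```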
2022-01-31
Liu, Yong, Zhu, Xinghua, Wang, Jianzong, Xiao, Jing.  2021.  A Quantitative Metric for Privacy Leakage in Federated Learning. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :3065–3069.
In the federated learning system, parameter gradients are shared among participants and the central modulator, while the original data never leave their protected source domain. However, the gradient itself might carry enough information for precise inference of the original data. By reporting their parameter gradients to the central server, client datasets are exposed to inference attacks from adversaries. In this paper, we propose a quantitative metric based on mutual information for clients to evaluate the potential risk of information leakage in their gradients. Mutual information has received increasing attention in the machine learning and data mining community over the past few years. However, existing mutual information estimation methods cannot handle high-dimensional variables. In this paper, we propose a novel method to approximate the mutual information between the high-dimensional gradients and batched input data. Experimental results show that the proposed metric reliably reflects the extent of information leakage in federated learning. In addition, using the proposed metric, we investigate the influential factors of risk level. It is shown that the risk of information leakage is related to the status of the task model, as well as the inherent data distribution.
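For intuition, the leakage quantity is the mutual information between shared gradients and the input batch. The snippet below estimates it with a naive histogram approach on scalar stand-ins; this simple estimator is exactly what breaks down in high dimensions, which is the gap the paper's approximation addresses.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

# Naive binned MI estimate between a scalar summary of the input batch
# and a scalar summary of the shared gradient (both synthetic here).
rng = np.random.default_rng(0)
x = rng.normal(size=5000)                 # stand-in for batched input summary
g = 0.8 * x + rng.normal(0, 0.5, 5000)    # stand-in for a leaked gradient summary
bins_x = np.digitize(x, np.histogram_bin_edges(x, bins=30))
bins_g = np.digitize(g, np.histogram_bin_edges(g, bins=30))
print(mutual_info_score(bins_x, bins_g))  # in nats; grows with dependence
```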
2022-08-26
VanYe, Christopher M., Li, Beatrice E., Koch, Andrew T., Luu, Mai N., Adekunle, Rahman O., Moghadasi, Negin, Collier, Zachary A., Polmateer, Thomas L., Barnes, David, Slutzky, David, et al.  2021.  Trust and Security of Embedded Smart Devices in Advanced Logistics Systems. 2021 Systems and Information Engineering Design Symposium (SIEDS). :1–6.

This paper addresses security and risk management of hardware and embedded systems across several applications. Three companies are involved in the research. The first is an energy technology company that aims to leverage electric-vehicle batteries through vehicle-to-grid (V2G) services in order to provide energy storage for electric grids. The second is a defense contracting company that provides acquisition support for the DoD's Conventional Prompt Global Strike (CPGS) program. These systems need protections in their production and supply chains, as well as throughout their system life cycles. The third is a company that deals with trust and security in advanced logistics systems generally. The rise of interconnected devices has led to growth in systems security issues such as privacy, authentication, and secure storage of data. A risk analysis via scenario-based preferences is aided by a literature review and industry experts. The analysis is divided into sections on Criteria, Initiatives, C-I Assessment, Emergent Conditions (EC), Criteria-Scenario (C-S) Relevance, and EC Grouping. System success criteria, research initiatives, and risks to the system are compiled. In the C-I Assessment, a rating is assigned to signify the degree to which criteria are addressed by initiatives, including research and development, government programs, industry resources, security countermeasures, education and training, etc. To understand the risks of emergent conditions, a list of Potential Scenarios is developed across innovations, environments, missions, populations and workforce behaviors, obsolescence, adversaries, etc. The C-S Relevance rates how the scenarios affect the relevance of the success criteria, including cost, schedule, security, return on investment, and cascading effects. The Emergent Condition Grouping (ECG) collates the emergent conditions with the scenarios. The generated results focus on ranking Initiatives based on their ability to negate the effects of Emergent Conditions, as well as producing a disruption score to compare a Potential Scenario's impacts to the ranking of Initiatives. The results presented in this paper are applicable to the testing and evaluation of security and risk for a variety of embedded smart devices and should be of interest to developers, owners, and operators of critical infrastructure systems.

2022-07-13
Angelogianni, Anna, Politis, Ilias, Polvanesi, Pier Luigi, Pastor, Antonio, Xenakis, Christos.  2021.  Unveiling the user requirements of a cyber range for 5G security testing and training. 2021 IEEE 26th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD). :1–6.

Cyber ranges have proven effective for cyber security training. Nevertheless, the existing literature on cyber ranges does not, to the best of our knowledge, cover the field of 5G security training. 5G networks, though, represent a significant field for modern cyber security, introducing a novel threat landscape. In parallel, the demand for skilled cyber security specialists is high and still rising. Therefore, it is of utmost importance to provide all possible means to experts aiming to increase their preparedness for an unwanted event. The EU-funded SPIDER project proposes an innovative Cyber Range as a Service (CRaaS) platform for 5G cyber security testing and training. This paper presents the evaluation framework followed by SPIDER for the extraction of the user requirements. To validate the defined user requirements, SPIDER leveraged questionnaires that included both closed- and open-format questions and were circulated among the personnel of telecommunication providers, vendors, security service providers, managers, engineers, cyber security personnel, and researchers. Here, we present a selected set of the most critical questions and the responses received. From the conducted analysis we reach some important conclusions regarding the 5G testing and training capabilities that a cyber range should offer, in addition to analyzing the different perceptions of cyber security and 5G experts.

2022-01-12
Li, Nianyu, Cámara, Javier, Garlan, David, Schmerl, Bradley, Jin, Zhi.  2021.  Hey! Preparing Humans to do Tasks in Self-adaptive Systems. Proceedings of the 16th Symposium on Software Engineering for Adaptive and Self-Managing Systems, Virtual.
Many self-adaptive systems benefit from human involvement, where human operators can complement the capabilities of systems (e.g., by supervising decisions, or performing adaptations and tasks involving physical changes that cannot be automated). However, insufficient preparation (e.g., lack of task context comprehension) may hinder the effectiveness of human involvement, especially when operators are unexpectedly interrupted to perform a new task. Preparatory notification of a task provided in advance can sometimes help human operators focus their attention on the forthcoming task and understand its context before task execution, hence improving effectiveness. Nevertheless, deciding when to use preparatory notification as a tactic is not obvious and entails considering different factors that include uncertainties induced by human operator behavior (who might ignore the notice message), human attributes (e.g., operator training level), and other information that refers to the state of the system and its environment. In this paper, informed by work in cognitive science on human attention and context management, we introduce a formal framework to reason about the usage of preparatory notifications in self-adaptive systems involving human operators. Our framework characterizes the effects of managing attention via task notification in terms of task context comprehension. We also build on our framework to develop an automated probabilistic reasoning technique able to determine when and in what form a preparatory notification tactic should be used to optimize system goals. We illustrate our approach in a representative scenario of human-robot collaborative goods delivery.
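As a back-of-the-envelope illustration of the decision problem, the snippet below notifies only when the expected comprehension gain, discounted by the chance the operator ignores the notice, exceeds the interruption cost. All quantities are hypothetical; the paper instead uses a formal probabilistic framework over system and environment state.

```python
# Single-step expected-utility version of the notification decision.
# All parameters are hypothetical placeholders, not values from the paper.
def should_notify(p_ignore, gain_if_prepared, interruption_cost):
    """Notify when the expected gain from improved task-context
    comprehension outweighs the cost of interrupting the operator."""
    expected_gain = (1.0 - p_ignore) * gain_if_prepared
    return expected_gain > interruption_cost

print(should_notify(p_ignore=0.3, gain_if_prepared=5.0, interruption_cost=1.0))
```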