2022-12-01
Kandaperumal, Gowtham, Pandey, Shikhar, Srivastava, Anurag.  2022.  AWR: Anticipate, Withstand, and Recover Resilience Metric for Operational and Planning Decision Support in Electric Distribution System. IEEE Transactions on Smart Grid. 13:179–190.

With the increasing number of catastrophic weather events and the resulting disruption of energy supply to essential loads, distribution grid operators' focus has shifted from reliability to resiliency against high-impact, low-frequency events. Given the enhanced automation enabling the smarter grid, electric utilities have several assets/resources at their disposal to enhance resiliency. However, lacking comprehensive resilience tools for informed operational decisions and planning, utilities face a challenge in investing in and prioritizing operational control actions for resiliency. Distribution system resilience is also highly dependent on system attributes, including the network, control, generating resources, the location of loads and resources, as well as the progression of an extreme event. In this work, we present a novel multi-stage resilience measure called the Anticipate-Withstand-Recover (AWR) metric. The AWR metric integrates relevant 'system-characteristics-based factors' before, during, and after the extreme event. The developed methodology takes a pragmatic and flexible approach, adopting concepts from the national emergency preparedness paradigm, proactive and reactive controls of grid assets, graph theory with system and component constraints, and a multi-criteria decision-making process. The proposed metrics are applied to provide decision support for (a) operational resilience and (b) planning investments, and are validated on a real system in Alaska over the entirety of the event progression.

2022-11-08
Mode, Gautam Raj, Calyam, Prasad, Hoque, Khaza Anuarul.  2020.  Impact of False Data Injection Attacks on Deep Learning Enabled Predictive Analytics. NOMS 2020 - 2020 IEEE/IFIP Network Operations and Management Symposium. :1–7.
Industry 4.0 is the latest industrial revolution, primarily merging automation with advanced manufacturing to reduce direct human effort and resources. Predictive maintenance (PdM) is an Industry 4.0 solution that facilitates predicting faults in a component or a system, powered by state-of-the-art machine learning (ML) algorithms (especially deep learning algorithms) and Internet-of-Things (IoT) sensors. However, both IoT sensors and deep learning (DL) algorithms are known for their vulnerability to cyber-attacks. In the context of PdM systems, such attacks can have catastrophic consequences, as they are hard to detect due to the nature of the attack. To date, the majority of the published literature focuses on the accuracy of DL-enabled PdM systems and often ignores the effect of such attacks. In this paper, we demonstrate the effect of IoT sensor attacks (in the form of false data injection attacks) on a PdM system. First, we use three state-of-the-art DL algorithms, specifically Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Convolutional Neural Network (CNN), to predict the Remaining Useful Life (RUL) of a turbofan engine using NASA's C-MAPSS dataset. The obtained results show that the GRU-based PdM model outperforms some of the recent literature on RUL prediction using the C-MAPSS dataset. Afterward, we model and apply two different types of false data injection attacks (FDIA), specifically continuous and interim FDIAs, on turbofan engine sensor data and evaluate their impact on CNN-, LSTM-, and GRU-based PdM systems. The obtained results demonstrate that FDI attacks on even a few IoT sensors can strongly degrade the RUL prediction in all cases. However, the GRU-based PdM model performs better in terms of accuracy and resiliency to FDIA. Lastly, we perform a study on the GRU-based PdM model using four different GRU networks with different sequence lengths. Our experiments reveal an interesting relationship between the accuracy, resiliency, and sequence length for GRU-based PdM models.
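The two attack variants described in the abstract can be sketched on a toy sensor stream. The function names, bias magnitude, and window bounds below are illustrative choices, not the paper's actual attack parameters:

```python
def continuous_fdia(readings, start, bias=0.5):
    # Continuous FDIA: bias every sample from index `start` until the end.
    return [x + bias if t >= start else x for t, x in enumerate(readings)]

def interim_fdia(readings, start, end, bias=0.5):
    # Interim FDIA: bias only the samples inside the window [start, end).
    return [x + bias if start <= t < end else x for t, x in enumerate(readings)]

clean = [0.0] * 10
print(continuous_fdia(clean, start=4))      # biased from index 4 onward
print(interim_fdia(clean, start=4, end=7))  # biased only on indices 4-6
```

Either attacked stream would then be fed to the RUL model in place of the clean readings to measure the prediction degradation.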
2022-11-02
Song, Xiaozhuang, Zhang, Chenhan, Yu, James J. Q.  2021.  Learn Travel Time Distribution with Graph Deep Learning and Generative Adversarial Network. 2021 IEEE International Intelligent Transportation Systems Conference (ITSC). :1385–1390.
Obtaining accurate travel time predictions is among the most critical problems in Intelligent Transportation Systems (ITS). Recent literature has shown the effectiveness of machine learning models on travel time forecasting problems. However, most of these models predict travel time as a point estimate, which is not suitable for real scenarios: the travel time within a future time period is a distribution rather than a single determined value. Besides, they all use grid-structured data to capture spatial dependency, which does not reflect the traffic network's actual topology. Hence, we propose GCGTTE to estimate travel time in distribution form with Graph Deep Learning and a Generative Adversarial Network (GAN). We convert the data into a graph structure and use a Graph Neural Network (GNN) to build its spatial dependency. Furthermore, GCGTTE adopts a GAN to approximate the real travel time distribution. We test the effectiveness of GCGTTE against other models on a real-world dataset. Thanks to the fine-grained spatial dependency modeling, GCGTTE significantly outperforms models built on grid-structured data. Besides, we also compare the distribution approximation performance with DeepGTT, a variational-inference-based model with state-of-the-art performance on travel time estimation. The results show that GCGTTE outperforms DeepGTT on the evaluation metrics and that the distribution generated by GCGTTE is much closer to the original distribution.
2022-10-16
Trautsch, Alexander, Herbold, Steffen, Grabowski, Jens.  2020.  Static source code metrics and static analysis warnings for fine-grained just-in-time defect prediction. 2020 IEEE International Conference on Software Maintenance and Evolution (ICSME). :127–138.
Software quality evolution and predictive models that support decisions about resource distribution in software quality assurance tasks are an important part of software engineering research. Recently, a fine-grained just-in-time defect prediction approach was proposed which is able to find bug-inducing files within changes instead of only complete changes. In this work, we utilize this approach and improve it in multiple places: data collection, labeling, and features. We include manually validated issue types and an improved SZZ algorithm that discards comments, whitespace, and refactorings. Additionally, we include static source code metrics as well as static analysis warnings and warning-density-derived metrics as features. To assess whether we can save cost, we incorporate a specialized defect prediction cost model. To evaluate our proposed improvements of the fine-grained just-in-time defect prediction approach, we conduct a case study that encompasses 38 Java projects, 492,241 file changes in 73,598 commits, and spans 15 years. We find that static source code metrics and static analysis warnings are correlated with bugs and that they can improve the quality and cost-saving potential of just-in-time defect prediction models.
2022-09-29
Duman, Atahan, Sogukpinar, Ibrahim.  2021.  Deep Learning Based Event Correlation Analysis in Information Systems. 2021 6th International Conference on Computer Science and Engineering (UBMK). :209–214.
Information systems and applications provide indispensable services at every stage of life, enabling us to carry out our activities more effectively and efficiently. Today, information technology systems produce a large number of alarm and event records. These records often have relationships with each other, and when these relationships are captured correctly, many interruptions that would harm institutions can be prevented before they occur. For example, an increase in the disk I/O latency of a server may cause the business software running on that server to slow down, and this slowness can in turn produce further problems. Accurate analysis and management of all event records, together with rule-based analysis of the resulting records over certain time periods, will therefore allow an institution to manage millions of alarms efficiently and effectively, and uncovering the relationships between events makes it possible to prevent potential problems. Events that occur in IT systems are a kind of footprint, and it is vital to keep a record of them; when necessary, these event records can be analyzed to assess the efficiency of the systems, harmful interference, the tendency toward system failure, and so on. By understanding such undesirable situations and taking the necessary precautions, possible losses can be prevented. In this study, the model developed for fault prediction in systems by performing event log analysis in information systems is explained and the experimental results obtained are given.
Yu, Zaifu, Shang, Wenqian, Lin, Weiguo, Huang, Wei.  2021.  A Collaborative Filtering Model for Link Prediction of Fusion Knowledge Graph. 2021 21st ACIS International Winter Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD-Winter). :33–38.
To address the problem that the collaborative filtering recommendation algorithm depends entirely on users' interaction behavior while ignoring the correlation information between items, this paper introduces a link prediction algorithm based on a knowledge graph to complement the ItemCF algorithm. Through linear weighted fusion of the item similarity matrix obtained by the ItemCF algorithm and the item similarity matrix obtained by the link prediction algorithm, a new fusion matrix is produced and then fed back into the ItemCF algorithm. The MovieLens-1M dataset is used to verify the KGLP-ItemCF model proposed in this paper, and the experimental results show that the KGLP-ItemCF model effectively improves precision, recall, and F1 score. The KGLP-ItemCF model effectively addresses the problems of sparse data and over-reliance on user interaction information by introducing the knowledge graph into the ItemCF algorithm.
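The linear weighted fusion step described above can be sketched as follows. The weight `alpha` and the toy 2x2 matrices are illustrative; the paper does not state its fusion weights here:

```python
def fuse_similarity(sim_cf, sim_kg, alpha=0.5):
    # Element-wise linear weighted fusion of the ItemCF similarity matrix
    # and the knowledge-graph link-prediction similarity matrix.
    n = len(sim_cf)
    return [[alpha * sim_cf[i][j] + (1 - alpha) * sim_kg[i][j]
             for j in range(n)] for i in range(n)]

sim_cf = [[1.0, 0.2], [0.2, 1.0]]   # from user-item interactions
sim_kg = [[1.0, 0.8], [0.8, 1.0]]   # from knowledge-graph link prediction
print(fuse_similarity(sim_cf, sim_kg))
```

The fused matrix then replaces the original item-item similarity inside the standard ItemCF scoring loop.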
2022-09-20
Shaomei, Lv, Xiangyan, Zeng, Long, Huang, Lan, Wu, Wei, Jiang.  2021.  Passenger Volume Interval Prediction based on MTIGM (1,1) and BP Neural Network. 2021 33rd Chinese Control and Decision Conference (CCDC). :6013–6018.
A ternary interval number contains more comprehensive information than an exact number, and predicting ternary interval numbers is more conducive to intelligent decision-making. To reduce the overfitting problem of the neural network model, this paper proposes a combined prediction method for ternary interval number sequences based on a BP neural network and the matrix GM(1,1) model, and applies it to predicting passenger volume. The matrix grey model for ternary interval number sequences (MTIGM(1,1)) can stably predict the overall development trend of a time series. Considering the integrity of interval numbers, the BP neural network model is established by combining the lower, middle, and upper boundary points of the ternary interval numbers. The combined weights of MTIGM(1,1) and the BP neural network are determined based on the grey relational degree. The combined method is used to predict the total passenger volume and railway passenger volume of China, and its prediction performance is better than that of MTIGM(1,1) or the BP neural network alone.
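The final combination step can be sketched as a weighted average of the two models' forecasts for each boundary of the ternary interval. The fixed weights below are illustrative; in the paper they are determined by the grey relational degree:

```python
def combine_interval_forecast(grey_pred, bp_pred, w_grey=0.4, w_bp=0.6):
    # Combine the (lower, middle, upper) interval forecasts of MTIGM(1,1)
    # and the BP neural network with fixed weights; w_grey + w_bp == 1.
    return tuple(w_grey * g + w_bp * b for g, b in zip(grey_pred, bp_pred))

grey = (90.0, 100.0, 110.0)   # MTIGM(1,1) lower/middle/upper forecast
bp = (95.0, 105.0, 115.0)     # BP network lower/middle/upper forecast
print(combine_interval_forecast(grey, bp))
```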
Wood, Adrian, Johnstone, Michael N.  2021.  Detection of Induced False Negatives in Malware Samples. 2021 18th International Conference on Privacy, Security and Trust (PST). :1–6.
Malware detection is an important area of cyber security. Computer systems rely on malware detection applications to prevent malware attacks from succeeding. Malware detection is not a straightforward task, as new variants of malware are generated at an increasing rate. Machine learning (ML) has been utilised to generate predictive classification models to identify new malware variants which conventional malware detection methods may not detect. Machine learning has, however, been found to be vulnerable to different types of adversarial attacks, in which an attacker is able to negatively affect the classification ability of the ML model. Several defensive measures to prevent adversarial poisoning attacks have been developed, but they often rely on the use of a trusted clean dataset to help identify and remove adversarial examples from the training dataset. The defence in this paper does not require a trusted clean dataset; instead, it identifies intentional false negatives (zero-day malware classified as benign) at the testing stage by examining the activation weights of the ML model. The defence was able to identify 94.07% of the successful targeted poisoning attacks.
Chen, Lei, Yuan, Yuyu, Jiang, Hongpu, Guo, Ting, Zhao, Pengqian, Shi, Jinsheng.  2021.  A Novel Trust-based Model for Collaborative Filtering Recommendation Systems using Entropy. 2021 8th International Conference on Dependable Systems and Their Applications (DSA). :184–188.
With the proliferation of false and redundant information on various e-commerce platforms, ineffective recommendations and other untrustworthy behaviors have seriously hindered the healthy development of e-commerce platforms. Modern recommendation systems often use side information to alleviate these problems and also increase prediction accuracy. One such piece of side information, which has been widely investigated, is trust. However, it is difficult to obtain explicit trust relationship data, so researchers infer trust values by other means, such as the user-to-item relationship. In this paper, addressing these problems, we propose a novel trust-based recommender model called UITrust, which uses the user-item relationship value to improve prediction accuracy. It improves traditional similarity measures by employing the entropies of user and item rating histories to reflect global rating behavior on both sides. We evaluate the proposed model using two real-world datasets. The proposed model performs significantly better than the baseline methods. We can also use UITrust to alleviate the sparsity problem associated with correlation-based similarity. In addition, the proposed model has better computational complexity for making predictions than the k-nearest neighbor (kNN) method.
Chen, Tong, Xiang, Yingxiao, Li, Yike, Tian, Yunzhe, Tong, Endong, Niu, Wenjia, Liu, Jiqiang, Li, Gang, Alfred Chen, Qi.  2021.  Protecting Reward Function of Reinforcement Learning via Minimal and Non-catastrophic Adversarial Trajectory. 2021 40th International Symposium on Reliable Distributed Systems (SRDS). :299–309.
Reward functions are critical hyperparameters with commercial value for individual or distributed reinforcement learning (RL), as slightly different reward functions result in significantly different performance. However, existing inverse reinforcement learning (IRL) methods can approximate reward functions from nothing more than collected expert trajectories obtained through observation. Thus, in a real RL process, how to generate a polluted trajectory and perform an adversarial attack on IRL to protect the reward function has become a key issue. Meanwhile, considering the actual RL cost, generated adversarial trajectories should be minimal and non-catastrophic to ensure normal RL performance. In this work, we propose a novel approach to craft adversarial trajectories disguised as expert ones, decreasing IRL performance and realizing anti-IRL capability. First, we design a reward-clustering-based metric that integrates the advantages of both fine- and coarse-grained IRL assessment, including expected value difference (EVD) and mean reward loss (MRL). Further, based on this metric, we explore an adversarial attack based on agglomerative nesting algorithm (AGNES) clustering and determine targeted states as starting states for reward perturbation. We then employ an intrinsic fear model to predict the probability of imminent catastrophe, supporting the generation of non-catastrophic adversarial trajectories. Extensive experiments with 7 state-of-the-art IRL algorithms on the Object World benchmark demonstrate the capability of our proposed approach in (a) decreasing IRL performance and (b) producing minimal and non-catastrophic adversarial trajectories.
2022-09-16
Kozlov, Aleksandr, Noga, Nikolai.  2021.  Applying the Methods of Regression Analysis and Fuzzy Logic for Assessing the Information Security Risk of Complex Systems. 2021 14th International Conference Management of large-scale system development (MLSD). :1–5.
The proposed method allows us to determine the predicted value of a complex system's information security risk and its confidence interval using regression analysis and fuzzy logic, in terms of the risk's dependence on various factors: the value of resources, the level of threats, potential damage, the level of costs for creating and operating the system, and the information resources control level.
2022-09-09
Liu, Pengcheng, Han, Zhen, Shi, Zhixin, Liu, Meichen.  2021.  Recognition of Overlapped Frequency Hopping Signals Based on Fully Convolutional Networks. 2021 28th International Conference on Telecommunications (ICT). :1–5.
Previous research on frequency hopping (FH) signal recognition using deep learning focuses only on single-label signals and cannot deal with overlapped FH signals, which have multiple labels. To solve this problem, we propose a new FH signal recognition method based on fully convolutional networks (FCN). First, we perform the short-time Fourier transform (STFT) on the collected FH signal to obtain a two-dimensional time-frequency pattern with time, frequency, and intensity information. Then, the pattern is fed into an improved FCN model, named FH-FCN, to make a pixel-level prediction. Finally, through statistics over the output pixels, we obtain the final classification results. We also design an algorithm that can automatically generate a dataset for model training. The experimental results show that, for an overlapped FH signal containing up to four different types of signals, our method can recognize them correctly. In addition, the separation of multiple FH signals can be achieved by a slight improvement of our method.
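The time-frequency front end described above can be sketched with a plain NumPy short-time Fourier transform. The window length, hop size, Hann window, and test tone are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def stft_magnitude(signal, win_len=64, hop=32):
    # Slide a Hann window over the signal and take the magnitude of the
    # real FFT of each frame: rows are time frames, columns frequency bins.
    window = np.hanning(win_len)
    frames = [np.abs(np.fft.rfft(signal[s:s + win_len] * window))
              for s in range(0, len(signal) - win_len + 1, hop)]
    return np.array(frames)

fs = 1024                             # sample rate in Hz
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 96.0 * t)   # a single 96 Hz tone (bin width is 16 Hz)
spec = stft_magnitude(tone)
print(spec.shape)                     # (frames, win_len // 2 + 1)
```

In the paper's pipeline, an image like `spec` is the input on which FH-FCN makes its pixel-level predictions.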
2022-08-26
Zhao, Yue, Shen, Yang, Qi, Yuanbo.  2021.  A Security Analysis of Chinese Robot Supply Chain Based on Open-Source Intelligence. 2021 IEEE 1st International Conference on Digital Twins and Parallel Intelligence (DTPI). :219–222.

This paper argues that the security management of the robot supply chain should focus on Sino-US relations and technical bottlenecks, based on a comprehensive security analysis through open-source intelligence and data mining of associated discourses. Through the lens of the newsboy model and game theory, this study reconstructs the risk appraisal model of the robot supply chain and rebalances the process of the Sino-US competition game, leading to a prediction of China's strategic movements under supply risks. Ultimately, this paper offers a threefold suggestion: increasing overall revenue through cost control and scaled expansion; enhancing resilience and risk prevention; and reaching out to third parties for cooperation to reinforce confrontation capabilities.

Yuan, Quan, Ye, Yujian, Tang, Yi, Liu, Xuefei, Tian, Qidong.  2021.  Optimal Load Scheduling in Coupled Power and Transportation Networks. 2021 IEEE/IAS Industrial and Commercial Power System Asia (I&CPS Asia). :1512–1517.
As part of the global decarbonization agenda, the electrification of the transport sector, involving the large-scale integration of electric vehicles (EVs), constitutes one of the key initiatives. However, the introduction of EV loads results in more variable electrical demand profiles and higher demand peaks, challenging power system balancing, voltage control, and network congestion management. In this paper, a novel optimal load scheduling approach for a coupled power and transportation network is proposed. It employs an EV charging demand forecasting model to generate the temporal-spatial distribution of the aggregate EV loads, taking into account the uncertainties stemming from traffic conditions. An AC optimal power flow (ACOPF) problem is formulated and solved to determine the scheduling decisions for the EVs, energy storage units, and other types of flexible loads, taking into account their operational characteristics. Convex relaxation is performed to convert the original non-convex ACOPF problem into a second-order cone program. Case studies demonstrate the effectiveness of the proposed scheduling strategy in accurately forecasting the EV load distribution as well as effectively alleviating voltage deviation and network congestion in the distribution network through optimal load scheduling control decisions.
Chawla, Kushal, Clever, Rene, Ramirez, Jaysa, Lucas, Gale, Gratch, Jonathan.  2021.  Towards Emotion-Aware Agents For Negotiation Dialogues. 2021 9th International Conference on Affective Computing and Intelligent Interaction (ACII). :1–8.
Negotiation is a complex social interaction that encapsulates emotional encounters in human decision-making. Virtual agents that can negotiate with humans are useful in pedagogy and conversational AI. To advance the development of such agents, we explore the prediction of two important subjective goals in a negotiation – outcome satisfaction and partner perception. Specifically, we analyze the extent to which emotion attributes extracted from the negotiation help in the prediction, above and beyond the individual difference variables. We focus on a recent dataset of chat-based negotiations grounded in a realistic camping scenario. We study three degrees of emotion dimensions – emoticons, lexical, and contextual – by leveraging affective lexicons and a state-of-the-art deep learning architecture. Our insights will be helpful in designing adaptive negotiation agents that interact through realistic communication interfaces.
2022-08-12
Berman, Maxwell, Adams, Stephen, Sherburne, Tim, Fleming, Cody, Beling, Peter.  2019.  Active Learning to Improve Static Analysis. 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). :1322–1327.
Static analysis tools are programs that run on source code prior to its compilation to binary executables and attempt to find flaws or defects in the code during the early stages of development. If left unresolved, these flaws could pose security risks. While numerous static analysis tools exist, no single tool is optimal; therefore, many static analysis tools are often used together to analyze code. Further, some of the alerts generated by static analysis tools are low-priority or false alarms. Machine learning algorithms have been developed to distinguish between true alerts and false alarms; however, significant man-hours need to be dedicated to labeling data sets for training. This study investigates the use of active learning to reduce the number of labeled alerts needed to adequately train a classifier. The numerical experiments demonstrate that a query-by-committee active learning algorithm can significantly reduce the number of labeled alerts needed to achieve performance similar to a classifier trained on a data set of nearly 60,000 labeled alerts.
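A query-by-committee round can be sketched as follows: rank the unlabeled alerts by how much a committee of classifiers disagrees on them, then ask the human to label the most contentious ones. The vote-entropy disagreement measure and the toy threshold classifiers are illustrative; the study's actual committee members are trained alert classifiers:

```python
import math

def vote_entropy(votes):
    # Disagreement measure: entropy of the committee's label votes.
    total = len(votes)
    entropy = 0.0
    for label in set(votes):
        p = votes.count(label) / total
        entropy -= p * math.log(p)
    return entropy

def query_by_committee(pool, committee, budget):
    # Rank unlabeled alerts by committee disagreement and pick the top `budget`.
    scored = [(vote_entropy([clf(x) for clf in committee]), i)
              for i, x in enumerate(pool)]
    scored.sort(reverse=True)
    return [i for _, i in scored[:budget]]

# Toy committee: three threshold classifiers over a single alert feature.
committee = [lambda x, t=t: int(x > t) for t in (0.3, 0.5, 0.7)]
pool = [0.1, 0.4, 0.6, 0.9]
print(query_by_committee(pool, committee, budget=2))
```

Alerts on which the committee unanimously agrees (indices 0 and 3 here) are never queried, which is where the labeling savings come from.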
Bendre, Nihar, Desai, Kevin, Najafirad, Peyman.  2021.  Show Why the Answer is Correct! Towards Explainable AI using Compositional Temporal Attention. 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :3006–3012.
Visual Question Answering (VQA) models have achieved significant success in recent times. Despite this success, VQA models are mostly black-box models providing no reasoning about the predicted answer, raising questions about their applicability in safety-critical domains such as autonomous systems and cyber-security. Current state-of-the-art models fail to handle complex questions well and are thus unable to exploit compositionality. To minimize the black-box effect of these models and to make them better exploit compositionality, we propose a Dynamic Neural Network (DMN), which can understand a particular question and then dynamically assemble various relatively shallow deep learning modules from a pool of modules to form a network. We incorporate compositional temporal attention into these deep-learning-based modules to increase compositionality exploitation. This results in a better understanding of complex questions and also provides reasoning as to why a module predicts a particular answer. Experimental analysis on the two benchmark datasets, VQA2.0 and CLEVR, shows that our model outperforms previous approaches on the Visual Question Answering task and provides better reasoning, making it reliable for mission-critical applications like safety and security.
2022-08-10
Usman, Ali, Rafiq, Muhammad, Saeed, Muhammad, Nauman, Ali, Almqvist, Andreas, Liwicki, Marcus.  2021.  Machine Learning Computational Fluid Dynamics. 2021 Swedish Artificial Intelligence Society Workshop (SAIS). :1–4.
Numerical simulation of fluid flow is a significant research concern during the design process of a machine component that experiences fluid-structure interaction (FSI). The state of the art in traditional computational fluid dynamics (CFD) has brought CFD to a relative level of perfection during the last couple of decades. However, the accuracy of CFD is highly dependent on mesh size; the computational cost therefore depends on resolving the smallest features. The computational complexity grows even further when multiple physics and scales are involved, making the approach time-consuming. In contrast, machine learning (ML) has shown a highly encouraging capacity to forecast solutions of partial differential equations: a trained neural network can make accurate approximations instantaneously compared with conventional simulation procedures. This study presents transient fluid flow prediction past a fully immersed body as an integral part of the ML-CFD project. ML-CFD is a hybrid approach that initialises the CFD simulation domain with a solution forecasted by an ML model to achieve fast convergence in traditional CFD. Initial results are highly encouraging, and the entire time series of fluid patterns past the immersed structure is forecasted using a deep learning algorithm. The results show strong agreement with fluid flow simulations performed using CFD.
2022-08-01
Pappu, Shiburaj, Kangane, Dhanashree, Shah, Varsha, Mandwiwala, Junaid.  2021.  AI-Assisted Risk Based Two Factor Authentication Method (AIA-RB-2FA). 2021 International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES). :1–5.
Authentication forms an important step in any security system, allowing access to resources that are to be restricted. In this paper, we propose a novel artificial-intelligence-assisted risk-based two-factor authentication method. We begin with the details of existing systems in use, compare two of them, Two-Factor Authentication (2FA) and Risk-Based Two-Factor Authentication (RB-2FA), with each other, and then present our proposed AIA-RB-2FA method. The proposed method starts by recording the user's features every time the user logs in and learns from the user's behavior. Once sufficient data is recorded to train the AI model, the system starts monitoring each login attempt and predicts whether the user is the owner of the account they are trying to access. If they are not, we fall back to 2FA.
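The fallback logic described above can be sketched as a small decision function. The thresholds, feature names, and toy model below are illustrative assumptions, not the paper's actual design:

```python
MIN_HISTORY = 50      # logins needed before trusting the model (illustrative)
RISK_THRESHOLD = 0.8  # minimum "is-owner" probability to skip the second factor

def authenticate(login_features, history, owner_model):
    # Decide the authentication path for one login attempt.
    if len(history) < MIN_HISTORY:
        return "2fa"                       # not enough data to trust the AI model yet
    p_owner = owner_model(login_features)  # model's probability this is the owner
    return "password_only" if p_owner >= RISK_THRESHOLD else "2fa"

# Toy model: trusts logins coming from a device seen before.
owner_model = lambda features: 0.95 if features.get("known_device") else 0.2
print(authenticate({"known_device": True}, history=[None] * 100,
                   owner_model=owner_model))
```

The key property is that 2FA is the default path whenever the model is untrained or suspicious, so the AI component can only relax security when it is confident, never tighten-proof it away.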
2022-07-28
Wang, Jingjing, Huang, Minhuan, Nie, Yuanping, Li, Jin.  2021.  Static Analysis of Source Code Vulnerability Using Machine Learning Techniques: A Survey. 2021 4th International Conference on Artificial Intelligence and Big Data (ICAIBD). :76–86.

With the rapid increase in practical problem complexity and code scale, the threat to software security is increasingly serious. Consequently, it is crucial to pay attention to the analysis of software source code vulnerabilities in the development stage and take efficient measures to detect them as soon as possible. Machine learning techniques have made remarkable achievements in various fields. However, the application of machine learning in the domain of static vulnerability analysis is still in its infancy, and the characteristics and performance of the diverse methods differ considerably. In this survey, we focus on source-code-oriented static vulnerability analysis methods that use machine learning techniques. We review the studies on source code vulnerability analysis based on machine learning over the past decade. We systematically summarize the development trends and different technical characteristics in this field from the perspectives of the intermediate representation of source code and the vulnerability prediction model, and put forward several feasible future research directions based on the limitations of current approaches.

2022-07-15
Tao, Jing, Chen, A, Liu, Kai, Chen, Kailiang, Li, Fengyuan, Fu, Peng.  2021.  Recommendation Method of Honeynet Trapping Component Based on LSTM. 2021 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :952–957.
With the advancement of network physical social systems (NPSS), large amounts of private data have become targets of hacker attacks. Due to the complex and changeable attack methods of hackers, network security threats are becoming increasingly severe. As an important type of active defense, honeypots use the NPSS as a carrier to ensure its security. However, traditional honeynet structures are relatively fixed, and it is difficult to trap hackers in a targeted manner. To bridge this gap, this paper proposes a recommendation method for trapping components based on LSTM prediction with an attention mechanism. Its characteristic lies in the ability to predict hackers' attack interests, which increases the active trapping ability of honeynets. The experimental results show that the proposed prediction method can quickly and effectively predict the attacking behavior of hackers and promptly provide the trapping components that hackers are interested in.
Yuan, Rui, Wang, Xinna, Xu, Jiangmin, Meng, Shunmei.  2021.  A Differential-Privacy-based hybrid collaborative recommendation method with factorization and regression. 2021 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :389–396.
Recommender systems have proved to be effective techniques for providing users with better experiences. However, when a recommender knows a user's preference characteristics or obtains their sensitive information, a series of privacy concerns is raised. A number of solutions in the literature have been proposed to enhance the degree of privacy protection in recommender systems. Although the existing solutions have enhanced protection, they simultaneously lead to a decrease in recommendation accuracy. In this paper, we propose a security-aware hybrid recommendation method that combines factorization and regression techniques. Specifically, a differential privacy mechanism is integrated into data pre-processing: data are first perturbed to satisfy differential privacy and transported to the recommender, which then computes on the aggregated data. However, applying differential privacy raises a utility issue of low recommendation accuracy, and the use of a single model may cause overfitting. To tackle this challenge, we adopt a fusion prediction model combining linear regression (LR) and matrix factorization (MF) for collaborative recommendation. On the MovieLens dataset, we evaluate the recommendation accuracy and regression of our recommender system and demonstrate that it performs better than the existing recommender systems under the privacy requirement.
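The pre-processing step can be sketched with the standard Laplace mechanism: each rating is perturbed on the user's side before it reaches the recommender. The sensitivity and epsilon values below are illustrative; the paper's exact perturbation scheme may differ:

```python
import math
import random

def perturb_ratings(ratings, epsilon, sensitivity=1.0):
    # Add Laplace(0, sensitivity / epsilon) noise to each rating before it
    # leaves the user's device, via inverse-CDF sampling of the Laplace law.
    scale = sensitivity / epsilon
    noisy = []
    for r in ratings:
        u = random.random() - 0.5
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        noisy.append(r + noise)
    return noisy

random.seed(7)
print(perturb_ratings([3.0, 4.5, 2.0], epsilon=1.0))
```

Smaller epsilon means more noise and stronger privacy; the recommender only ever aggregates these perturbed values, which is the source of the accuracy loss the fusion LR + MF model is meant to offset.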
McDonnell, Serena, Nada, Omar, Abid, Muhammad Rizwan, Amjadian, Ehsan.  2021.  CyberBERT: A Deep Dynamic-State Session-Based Recommender System for Cyber Threat Recognition. 2021 IEEE Aerospace Conference (50100). :1–12.
Session-based recommendation is the task of predicting user actions during short online sessions. The user is considered anonymous in this setting, with no past behavior history available. Predicting anonymous users' next actions and preferences in the absence of historical behavior information is valuable from a cybersecurity and aerospace perspective, as cybersecurity measures rely on the prompt classification of novel threats. Our solution builds upon previous representation learning work from natural language processing, namely BERT, which stands for Bidirectional Encoder Representations from Transformers (Devlin et al., 2018). In this paper we propose CyberBERT, the first deep session-based recommender system to employ bidirectional transformers to model the intent of anonymous users within a session. The session-based setting lends itself to applications in threat recognition through monitoring of real-time user behavior with the CyberBERT architecture. We evaluate the efficiency of this dynamic-state method using the Windows PE Malware API sequence dataset (Catak and Yazi, 2019), which contains behavior for 7107 API call sequences executed by 8 classes of malware. We compare the proposed CyberBERT solution to two high-performing benchmark algorithms on the malware dataset: LSTM (Long Short-Term Memory) and a transformer encoder (Vaswani et al., 2017). We also evaluate the method using the YOOCHOOSE 1/64 dataset, a session-based recommendation dataset that contains 37,483 items, 719,470 sessions, and 31,637,239 clicks. Our experiments demonstrate the advantage of a bidirectional architecture over the unidirectional approach, as well as the flexibility of the CyberBERT solution in modeling the intent of anonymous users in a session. Our system achieves state-of-the-art performance as measured by F1 score on the Windows PE Malware API sequence dataset, and state-of-the-art P@20 and MRR@20 on YOOCHOOSE 1/64.
As CyberBERT allows for user behavior monitoring in the absence of behavior history, it acts as a robust malware classification system that can recognize threats in aerospace systems, where malicious actors may be interacting with a system for the first time. This work provides the backbone for systems that aim to protect aviation and aerospace applications from prospective third-party applications and malware.
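The bidirectional-versus-unidirectional distinction the experiments highlight comes down to the attention mask: a BERT-style encoder lets every position in a session attend to every other position, while a causal model only sees the past. A minimal NumPy sketch (identity projections and no learned weights, so purely illustrative of the masking, not of CyberBERT itself):

```python
import numpy as np

def attention_mask(seq_len, bidirectional=True):
    """BERT-style encoders attend over the whole session (full mask);
    unidirectional models attend only to earlier positions (causal mask)."""
    if bidirectional:
        return np.ones((seq_len, seq_len), dtype=bool)
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def masked_self_attention(x, mask):
    """Single-head self-attention with identity Q/K/V projections."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    scores = np.where(mask, scores, -np.inf)        # hide masked positions
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ x

x = np.eye(4)  # 4 session events as one-hot "embeddings"
full = masked_self_attention(x, attention_mask(4))
causal = masked_self_attention(x, attention_mask(4, bidirectional=False))
```

Under the causal mask the first event can only attend to itself, whereas the full mask mixes information from the entire session into every position, which is the extra context a bidirectional model exploits.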
2022-07-14
Taylor, Michael A., Larson, Eric C., Thornton, Mitchell A..  2021.  Rapid Ransomware Detection through Side Channel Exploitation. 2021 IEEE International Conference on Cyber Security and Resilience (CSR). :47–54.
A new method for the detection of ransomware in an infected host is described and evaluated. The method utilizes data streams from on-board sensors to fingerprint the initiation of a ransomware infection. These sensor streams, which are common in modern computing systems, are used as a side channel for understanding the state of the system. It is shown that ransomware detection can be achieved rapidly, and that using slight yet distinguishable changes in the physical state of a system, as derived from a machine learning predictive model, is an effective technique. A feature vector consisting of various sensor outputs is coupled with a detection criterion to predict the binary state of ransomware present versus normal operation. An advantage of this approach is that previously unknown, or zero-day, versions of ransomware are vulnerable to this detection method, since no a priori knowledge of the malware characteristics is required. Experiments are carried out with a variety of system loads and with different encryption methods used during a ransomware attack. Two test systems were utilized, one having a relatively low amount of available sensor data and the other a relatively high amount. The average time for attack detection in the "sensor-rich" system was 7.79 seconds, with an average Matthews correlation coefficient of 0.8905 for binary system-state predictions regardless of encryption method and system load. The model flagged all attacks tested.
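The binary detection criterion and the Matthews correlation coefficient (MCC) used to score it can be sketched in plain Python (the weighted-sum detector below is a hypothetical stand-in for the paper's actual predictive model; the MCC formula is standard):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary labels:
    +1 is perfect prediction, 0 is chance, -1 is total disagreement."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def classify(sensor_features, threshold, weights):
    """Toy linear detector: flag ransomware (1) when the weighted
    sensor feature vector exceeds the detection threshold."""
    score = sum(w * x for w, x in zip(weights, sensor_features))
    return 1 if score > threshold else 0
```

MCC is a sensible choice here because attack windows are rare relative to normal operation, and unlike accuracy it is not inflated by the majority "normal" class.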
2022-07-05
Mukherjee, Debottam, Chakraborty, Samrat, Banerjee, Ramashis, Bhunia, Joydeep.  2021.  A Novel Real-Time False Data Detection Strategy for Smart Grid. 2021 IEEE 9th Region 10 Humanitarian Technology Conference (R10-HTC). :1–6.
State estimation algorithms ensure effective real-time monitoring of the modern smart grid, leading to an accurate determination of its current operating states. Recently, a new genre of data integrity attacks, namely the false data injection attack (FDIA), has shown its deleterious effects by bypassing traditional bad data detection techniques. Modern grid operators must detect the presence of such attacks in the raw field measurements to guarantee safe and reliable operation of the grid. State-forecasting-based FDIA identification schemes have recently shown their efficacy by determining the deviation of the estimated states due to an attack. This work presents a scalable deep learning state forecasting model which can accurately determine the presence of an FDIA in real time. An optimal set of hyper-parameters of the proposed architecture leads to effective forecasting of the operating states with minimal error. A diligent comparison with other state-of-the-art forecasting strategies demonstrates the effectiveness of the proposed neural network. A comprehensive analysis on the IEEE 14-bus test bench validates the proposed real-time attack identification strategy.
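The core of a forecasting-based FDIA detector is a residual test: compare the states the model forecasts against the states estimated from the incoming measurements, and flag an attack when the deviation exceeds a threshold. A minimal NumPy sketch (the threshold `tau` and the Euclidean norm are assumptions; the paper's exact detection statistic is not reproduced here):

```python
import numpy as np

def fdia_flag(forecast_states, estimated_states, tau):
    """Flag a suspected false data injection attack when the
    forecast residual (norm of the state deviation) exceeds tau."""
    residual = np.linalg.norm(forecast_states - estimated_states)
    return bool(residual > tau)
```

In operation, `forecast_states` would come from the trained deep forecasting model one step ahead, while `estimated_states` come from the conventional state estimator running on the (possibly compromised) raw field measurements; `tau` would be tuned on attack-free data to bound the false alarm rate.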