
Filters: Keyword is prediction
2023-06-22
Kivalov, Serhii, Strelkovskaya, Irina.  2022.  Detection and prediction of DDoS cyber attacks using spline functions. 2022 IEEE 16th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering (TCSET). :710–713.
The issues of development and legal regulation of cybersecurity in Ukraine are considered. The expediency of further improvement of the regulatory framework, its implementation and development of cybersecurity systems is substantiated. Further development of the theoretical base of cyber defense using spline functions is proposed. The characteristics of network traffic are considered from the point of view of detecting DDoS cyber attacks (SYN-Flood, ICMP-Flood, UDP-Flood) and predicting DDoS cyber-attacks using spline functions. The spline extrapolation method makes it possible to predict DDoS cyber attacks with great accuracy.
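As a rough illustration of the spline-extrapolation idea described in this abstract, the sketch below fits a smoothing spline to synthetic per-interval SYN packet counts and extrapolates it a few intervals ahead; the traffic values, interval length, and alarm threshold are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch: spline extrapolation of per-interval SYN packet counts.
# All numbers below are synthetic placeholders for illustration only.
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.arange(12)  # interval index (e.g. 10-second windows, assumed)
syn_counts = np.array([120, 130, 128, 150, 170, 210, 260, 330, 420, 540, 700, 900])

# Fit a smoothing cubic spline to the observed series.
spline = UnivariateSpline(t, syn_counts, k=3, s=len(t))

# Extrapolate three intervals ahead (UnivariateSpline extrapolates by default).
future_t = np.arange(12, 15)
forecast = spline(future_t)

ALARM_THRESHOLD = 1000  # assumed threshold for flagging a possible SYN-Flood
for ti, yi in zip(future_t, forecast):
    status = "possible SYN-Flood" if yi > ALARM_THRESHOLD else "normal"
    print(f"interval {ti}: predicted {yi:.0f} SYN packets -> {status}")
```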
2023-03-31
Ankita, D, Khilar, Rashmita, Kumar, M. Naveen.  2022.  Accuracy Analysis for Predicting Human Behaviour Using Deep Belief Network in Comparison with Support Vector Machine Algorithm. 2022 14th International Conference on Mathematics, Actuarial Science, Computer Science and Statistics (MACS). :1–5.
To detect human behaviour and measure the accuracy of the classification rate. Materials and Methods: A novel deep belief network with a sample size of 10 and a support vector machine with a sample size of 10 were each iterated at different times to predict the accuracy percentage of human behaviour. Results: Human behaviour detection utilizing the novel deep belief network achieved 87.9% accuracy compared with 87.0% accuracy for the support vector machine. Deep belief networks seem to perform essentially better compared to support vector machines (p = 0.55; p > 0.05). The deep belief algorithm in computer vision appears to perform significantly better than the support vector machine algorithm. Conclusion: Within this human behaviour detection task, the novel deep belief network has more precision than the support vector machine.
2022-08-26
LaMar, Suzanna, Gosselin, Jordan J, Caceres, Ivan, Kapple, Sarah, Jayasumana, Anura.  2021.  Congestion Aware Intent-Based Routing using Graph Neural Networks for Improved Quality of Experience in Heterogeneous Networks. MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM). :477–481.
Making use of spectrally diverse communications links to re-route traffic in response to dynamic environments to manage network bottlenecks has become essential in order to guarantee message delivery across heterogeneous networks. We propose an innovative, proactive Congestion Aware Intent-Based Routing (CONAIR) architecture that can select among available communication link resources based on quality of service (QoS) metrics to support continuous information exchange between networked participants. The CONAIR architecture utilizes a Network Controller (NC) and artificial intelligence (AI) to re-route traffic based on traffic priority, fundamental to increasing end user quality of experience (QoE) and mission effectiveness. The CONAIR architecture provides network behavior prediction, and can mitigate congestion prior to its occurrence unlike traditional static routing techniques, e.g. Open Shortest Path First (OSPF), which are prone to congestion due to infrequent routing table updates. Modeling and simulation (M&S) was performed on a multi-hop network in order to characterize the resiliency and scalability benefits of CONAIR over OSPF routing-based frameworks. Results demonstrate that for varying traffic profiles, packet loss and end-to-end latency is minimized.
2022-07-29
Ganesh, Sundarakrishnan, Ohlsson, Tobias, Palma, Francis.  2021.  Predicting Security Vulnerabilities using Source Code Metrics. 2021 Swedish Workshop on Data Science (SweDS). :1–7.
Large open-source systems generate and operate on a plethora of sensitive enterprise data. Thus, security threats or vulnerabilities must not be present in open-source systems and must be resolved as early as possible in the development phases to avoid catastrophic consequences. One way to recognize security vulnerabilities is to predict them while developers write code to minimize costs and resources. This study examines the effectiveness of machine learning algorithms to predict potential security vulnerabilities by analyzing the source code of a system. We obtained the security vulnerabilities dataset from Apache Tomcat security reports for version 4.x to 10.x. We also collected the source code of Apache Tomcat 4.x to 10.x to compute 43 object-oriented metrics. We assessed four traditional supervised learning algorithms, i.e., Naive Bayes (NB), Decision Tree (DT), K-Nearest Neighbors (KNN), and Logistic Regression (LR), to understand their efficacy in predicting security vulnerabilities. We obtained the highest accuracy of 80.6% using the KNN. Thus, the KNN classifier was demonstrated to be the most effective of all the models we built. The DT classifier also performed well but under-performed when it came to multi-class classification.
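A minimal sketch of the kind of pipeline this study describes - a KNN classifier over object-oriented source-code metrics - is given below; the file name, label column, and k value are assumptions, not the paper's exact setup.

```python
# Sketch only: KNN over OO metrics with scaling; dataset layout is assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("tomcat_metrics.csv")  # assumed file: 43 OO metrics + "vulnerable" label
X = df.drop(columns=["vulnerable"])
y = df["vulnerable"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# Scale the metrics before the distance-based KNN classifier.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```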
2022-02-07
Acharya, Jatin, Chuadhary, Anshul, Chhabria, Anish, Jangale, Smita.  2021.  Detecting Malware, Malicious URLs and Virus Using Machine Learning and Signature Matching. 2021 2nd International Conference for Emerging Technology (INCET). :1–5.
Nowadays most of our data is stored on an electronic device. The risk of that device getting infected by Viruses, Malware, Worms, Trojans, Ransomware, or any unwanted invader has increased a lot these days. This is mainly because of easy access to the internet. Viruses and malware have evolved over time, so identification of these files has become difficult. A device can be attacked not only by viruses and malware but also by a click on a forged URL. Our proposed solution for this problem uses machine learning techniques and signature matching techniques. The main aim of our solution is to identify the malicious programs/URLs and act upon them. The core idea in identifying the malware is selecting the key features from the Portable Executable file headers; using these features, we trained a random forest model. This RF model will be used for scanning a file and determining if that file is malicious or not. For identification of the virus, we are using the signature matching technique, which matches the MD5 hash of the file against a virus signature database containing the MD5 hashes of the identified viruses and their families. To distinguish between benign and illegitimate URLs, a logistic regression model is used. The regression model uses a tokenizer for feature extraction from the URL that is to be classified. The tokenizer separates all the domains and sub-domains and splits the URL on every `/'. Then a TfidfVectorizer (Term Frequency - Inverse Document Frequency) is used to convert the text into a weighted value. These values are used to predict if the URL is safe to visit or not. On the integration of all three modules, the final application will provide full system protection against malicious software.
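The URL-classification module is concrete enough to sketch: a custom tokenizer that splits a URL into domain, sub-domain, and path segments, TF-IDF weighting, and logistic regression. The tokenizer rules, toy URLs, and labels below are assumptions for illustration, not the paper's dataset.

```python
# Sketch of the URL module: custom tokenizer -> TF-IDF -> logistic regression.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def url_tokenizer(url):
    # Split into domain, sub-domain and path tokens on common URL delimiters.
    tokens = re.split(r"[/.\-?=&]", url.lower())
    return [t for t in tokens if t and t not in ("http:", "https:", "www", "com")]

urls = [
    "http://example.com/index.html",
    "https://secure-login.example.com/account/verify",
    "http://paypal.com.phishy-site.ru/login.php",
    "https://github.com/user/repo",
]
labels = [0, 1, 1, 0]  # 0 = benign, 1 = malicious (toy labels)

model = make_pipeline(
    TfidfVectorizer(tokenizer=url_tokenizer, token_pattern=None),
    LogisticRegression(max_iter=1000),
)
model.fit(urls, labels)
print(model.predict(["http://free-gift.cards.win/claim.php"]))
```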
2021-09-21
Zhao, Quanling, Sun, Jiawei, Ren, Hongjia, Sun, Guodong.  2020.  Machine-Learning Based TCP Security Action Prediction. 2020 5th International Conference on Mechanical, Control and Computer Engineering (ICMCCE). :1329–1333.
With the continuous growth of Internet technology and the increasingly broad applications of the Internet, network security incidents as well as cyber-attacks are also showing a growing trend. Consequently, computer network security is becoming increasingly important. A TCP firewall is a computer network security system that allows or denies the transmission of data according to specific rules to provide security for the computer network. Traditional firewalls rely on network administrators to set security rules for them, and network administrators sometimes need to choose whether to allow or deny packets to keep computer networks secure. However, due to the huge amount of data on the Internet, network administrators face a huge task. Therefore, it is particularly important to solve this problem using machine learning methods. This study aims to predict TCP security actions based on the TCP transmission characteristics dataset provided by the UCI machine learning repository by implementing machine learning models such as neural networks, support vector machines (SVM), AdaBoost, and logistic regression. The process includes evaluating the various models and performing interpretability analysis. By utilizing the idea of ensemble learning, the final result achieves an accuracy score of over 98%.
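A hedged sketch of the ensemble idea follows: individual classifiers (SVM, logistic regression, AdaBoost) over the TCP/firewall session features, combined by voting. The CSV name and the "Action" label column are assumptions about how a local copy of the UCI-style dataset might be laid out, not a verified loader for it.

```python
# Sketch only: voting ensemble over TCP/firewall features; dataset layout assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("firewall_log.csv")  # assumed local copy of the dataset
X = df.drop(columns=["Action"])       # TCP transmission characteristics
y = df["Action"]                      # security action to predict (e.g. allow/deny/drop)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

ensemble = VotingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ("ada", AdaBoostClassifier(n_estimators=100)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))
```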
2021-09-07
Tirupathi, Chittibabu, Hamdaoui, Bechir, Rayes, Ammar.  2020.  HybridCache: AI-Assisted Cloud-RAN Caching with Reduced In-Network Content Redundancy. GLOBECOM 2020 - 2020 IEEE Global Communications Conference. :1–6.
The ever-increasing growth of urban populations coupled with recent mobile data usage trends has led to an unprecedented increase in wireless devices, services and applications, with varying quality of service needs in terms of latency, data rate, and connectivity. To cope with these rising demands and challenges, next-generation wireless networks have resorted to cloud radio access network (Cloud-RAN) technology as a way of reducing latency and network traffic. A concrete example of this is New York City's LinkNYC network infrastructure, which replaces the city's payphones with kiosk-like structures, called Links, to provide fast and free public Wi-Fi access to city users. When enabled with data storage capability, these Links can, for example, play the role of edge cloud devices to allow in-network content caching so that access latency and network traffic are reduced. In this paper, we propose HybridCache, a hybrid proactive and reactive in-network caching scheme that reduces content access latency and network traffic congestion substantially. It does so by first grouping edge cloud devices in clusters to minimize intra-cluster content access latency and then enabling cooperative caching, both proactive and reactive, using LSTM-based prediction to minimize in-network content redundancy. Using the LinkNYC network as the backbone infrastructure for evaluation, we show that HybridCache reduces the number of hops that content needs to traverse and increases cache hit rates, thereby reducing both network traffic and content access latency.
2021-06-30
Asyaev, G. D., Antyasov, I. S..  2020.  Model for Providing Information Security of APCS Based on Predictive Maintenance Technology. 2020 Global Smart Industry Conference (GloSIC). :287–290.
The article considers the basic quality criteria for the operation of an automated process control system (APCS) and analyzes critical points and the level of APCS information security. The expediency of further improving the regulatory framework, its implementation, and the development of cybersecurity systems is substantiated. A model for providing APCS information security based on predictive maintenance technology with intelligent data processing methods is proposed. The model makes it possible to generate a list of actions when new kinds of threats involving destructive influences on the object are detected, proceeding from the acceptability of the predicted consequences of APCS operation. Using system analysis, a complex model of the technical automation object is developed, which makes it possible to estimate the consequences of realizing information security threats at various system levels of the APCS.
2021-03-09
Rojas-Dueñas, G., Riba, J., Kahalerras, K., Moreno-Eguilaz, M., Kadechkar, A., Gomez-Pau, A..  2020.  Black-Box Modelling of a DC-DC Buck Converter Based on a Recurrent Neural Network. 2020 IEEE International Conference on Industrial Technology (ICIT). :456–461.
Artificial neural networks allow the identification of black-box models. This paper proposes a method aimed at replicating the static and dynamic behavior of a DC-DC power converter based on a recurrent nonlinear autoregressive exogenous neural network. The method proposed in this work applies an algorithm that trains a neural network based on the inputs and outputs (currents and voltages) of a Buck converter. The approach is validated by means of simulated data of a realistic nonsynchronous Buck converter model programmed in Simulink and by means of experimental results. The predictions made by the neural network are compared to the actual outputs of the system, to determine the accuracy of the method, thus validating the proposed approach. Both simulation and experimental results show the feasibility and accuracy of the proposed black-box approach.
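A NARX model of this kind can be approximated in its series-parallel (open-loop) form as a regression from lagged exogenous inputs and past outputs to the next output. The sketch below illustrates that structure with a small neural network on a toy first-order plant; the signal names, lag depth, and network size are assumptions, not the converter model or configuration used in the paper.

```python
# Sketch: open-loop NARX-style regression on lagged inputs/outputs (toy data).
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged_features(u, y, n_lags=3):
    """Stack n_lags past values of the exogenous input u and the output y."""
    X, target = [], []
    for k in range(n_lags, len(y)):
        X.append(np.concatenate([u[k - n_lags:k], y[k - n_lags:k]]))
        target.append(y[k])
    return np.array(X), np.array(target)

rng = np.random.default_rng(0)
u = rng.uniform(0.3, 0.7, 2000)   # stand-in for measured input samples (e.g. duty cycle)
y = np.zeros_like(u)
for k in range(1, len(u)):        # toy first-order plant standing in for the converter
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1]

X, target = make_lagged_features(u, y, n_lags=3)
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:1500], target[:1500])
print("one-step-ahead mean abs error:",
      np.mean(np.abs(model.predict(X[1500:]) - target[1500:])))
```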
2020-05-08
Guan, Chengli, Yang, Yue.  2019.  Research of Computer Network Security Evaluation Based on Backpropagation Neural Network. 2019 IEEE International Conference on Power, Intelligent Computing and Systems (ICPICS). :181–184.
In recent years, due to virus intrusions and security loopholes, attacks on computer networks in colleges and universities have caused great adverse effects on schools, teachers and students. In order to improve the accuracy of computer network security evaluation, a Back Propagation (BP) neural network was built and trained. The evaluation indices and target expectations were determined based on an expert system, with 15 secondary evaluation index values taken as input layer parameters and the computer network security evaluation level value taken as the output layer parameter. All data were divided into learning sample sets and forecasting sample sets. The results showed that the designed BP neural network exhibited a fast convergence speed and a system error of 0.000999654. Furthermore, the predictive values of the network were in good agreement with the experimental results, and the correlation coefficient was 0.98723. These results indicated that the network had excellent training accuracy and generalization ability, which effectively reflected the performance of the system for computer network security evaluation.
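The evaluation setup - 15 secondary index values mapped to a single security evaluation level by a back-propagation network - can be sketched as below; the synthetic data, network size, and training settings are illustrative assumptions, not the paper's expert-system indices.

```python
# Sketch: BP (MLP) regressor with 15 inputs and one evaluation-level output.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(200, 15))  # 15 secondary index values per sample (synthetic)
y = X.mean(axis=1)                         # toy stand-in for the expert evaluation level

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)

bp_net = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                      solver="lbfgs", max_iter=5000, random_state=1)
bp_net.fit(X_train, y_train)

pred = bp_net.predict(X_test)
print("mean squared error:", np.mean((pred - y_test) ** 2))
print("correlation coefficient:", np.corrcoef(pred, y_test)[0, 1])
```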
2020-04-03
Bhamidipati, Venkata Siva Vijayendra, Chan, Michael, Jain, Arpit, Murthy, Ashok Srinivasa, Chamorro, Derek, Muralidhar, Aniruddh Kamalapuram.  2019.  Predictive Proof of Metrics – a New Blockchain Consensus Protocol. 2019 Sixth International Conference on Internet of Things: Systems, Management and Security (IOTSMS). :498–505.
We present a new consensus protocol for Blockchain ecosystems - PPoM - Predictive Proof of Metrics. First, we describe the motivation for PPoM - why we need it. Then, we outline its architecture, components, and operation. As part of this, we detail our reputation and reward based approach to bring about consensus in the Blockchain. We also address security and scalability for a PPoM based Blockchain, and discuss potential improvements for future work. Finally, we present measurements for our short term Provider Prediction engine.
2019-06-10
Farooq, H. M., Otaibi, N. M..  2018.  Optimal Machine Learning Algorithms for Cyber Threat Detection. 2018 UKSim-AMSS 20th International Conference on Computer Modelling and Simulation (UKSim). :32–37.

With the exponential hike in cyber threats, organizations are now striving for better data mining techniques in order to analyze security logs received from their IT infrastructures to ensure effective and automated cyber threat detection. Machine Learning (ML) based analytics for security machine data is the next emerging trend in cyber security, aimed at mining security data to uncover advanced targeted cyber threat actors and minimizing the operational overheads of maintaining static correlation rules. However, selection of the optimal machine learning algorithm for security log analytics still remains an impeding factor against the success of data science in cyber security due to the risk of a large number of false-positive detections, especially in the case of large-scale or global Security Operations Center (SOC) environments. This fact brings a dire need for an efficient machine learning based cyber threat detection model, capable of minimizing the false detection rates. In this paper, we propose optimal machine learning algorithms with their implementation framework based on analytical and empirical evaluations of gathered results, while using various prediction, classification and forecasting algorithms.

2019-02-25
Khediri, Abderrazak, Laouar, Mohamed Ridda.  2018.  Deep-Belief Network Based Prediction Model for Power Outage in Smart Grid. Proceedings of the 4th ACM International Conference of Computing for Engineering and Sciences. :4:1–4:6.

The power outages of the last couple of years around the world demonstrate the indispensability of technological development to improve traditional power grids. Early warnings of imminent failures represent one of the major required improvements. Costly blackouts throughout the world caused by different severe incidents in traditional power grids have motivated researchers to diagnose and investigate previous blackouts and propose a prediction model that makes it possible to prevent power outages. Although the new generation of power grid, the smart grid (SG), can use real-time data from smart meters (SMs) and phasor measurement unit sensors (PMUs) to prevent blackouts, it demands high reliability and stability against power outages. This paper implements a proactive prediction model based on deep-belief networks that can predict an imminent blackout. The proposed model is evaluated on a real smart grid dataset. Promising results are reported in the case study.

2018-02-02
Rotella, P., Chulani, S..  2017.  Predicting Release Reliability. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :39–46.

Customers need to know how reliable a new release is, and whether or not the new release has substantially different, either better or worse, reliability than the one currently in production. Customers are demanding quantitative evidence, based on pre-release metrics, to help them decide whether or not to upgrade (and thereby offer new features and capabilities to their customers). Finding ways to estimate future reliability performance is not easy - we have evaluated many pre-release development and test metrics in search of reliability predictors that are sufficiently accurate and also apply to a broad range of software products. This paper describes a successful model that has resulted from these efforts, and also presents both a functional extension and a further conceptual simplification of the extended model that enables us to better communicate key release information to internal stakeholders and customers, without sacrificing predictive accuracy or generalizability. Work remains to be done, but the results of the original model, the extended model, and the simplified version are encouraging and are currently being applied across a range of products and releases. To evaluate whether or not these early predictions are accurate, and also to compare releases that are available to customers, we use a field software reliability assessment mechanism that incorporates two types of customer experience metrics: field bug encounters normalized by usage, and field bug counts, also normalized by usage. Our 'release-over-release' strategy combines the 'maturity assessment' component (i.e., estimating reliability prior to release to the field) and the 'reliability assessment' component (i.e., gauging actual reliability after release to the field). This overall approach enables us to both predict reliability and compare reliability results for recent releases for a product.

2017-12-28
Stanić, B., Afzal, W..  2017.  Process Metrics Are Not Bad Predictors of Fault Proneness. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :493–499.

The correct prediction of faulty modules or classes has a number of advantages, such as improving the quality of software and assigning capable development resources to fix such faults. Different kinds of fault/defect prediction models have been proposed in the literature, but a great majority of them make use of static code metrics as independent variables for making predictions. Recently, process metrics have gained considerable attention as alternative metrics to use for making trustworthy predictions. The objective of this paper is to investigate different combinations of static code and process metrics for evaluating fault prediction performance. We have used publicly available data sets, along with a frequently used classifier, Naive Bayes, to run our experiments. We have, both statistically and visually, analyzed our experimental results. The statistical analysis showed evidence against any significant difference in fault prediction performances for a variety of different combinations of metrics. This reinforced earlier research results that process metrics are as good predictors of fault proneness as static code metrics. Furthermore, the visual inspection of box plots revealed that the best set of metrics for fault prediction is a mix of both static code and process metrics. We also presented evidence in support of some process metrics being more discriminating than others, thus making them good predictors to use.
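The core experiment - the same Naive Bayes classifier evaluated on static code metrics only, process metrics only, and the combined set - can be sketched as follows; the file name and the metric column names are assumptions, not the paper's public data sets.

```python
# Sketch: comparing metric combinations with cross-validated Naive Bayes.
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

df = pd.read_csv("defect_data.csv")  # assumed file with metrics and a "faulty" label
static_cols = ["loc", "cyclomatic_complexity", "num_methods"]   # assumed static code metrics
process_cols = ["num_revisions", "num_authors", "lines_added"]  # assumed process metrics
y = df["faulty"]

for name, cols in [("static", static_cols),
                   ("process", process_cols),
                   ("combined", static_cols + process_cols)]:
    scores = cross_val_score(GaussianNB(), df[cols], y, cv=10, scoring="roc_auc")
    print(f"{name:8s} mean AUC = {scores.mean():.3f}")
```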

2017-09-06
Rahman, Akond, Pradhan, Priysha, Partho, Asif, Williams, Laurie.  2017.  Predicting Android Application Security and Privacy Risk with Static Code Metrics. Proceedings of the 4th International Conference on Mobile Software Engineering and Systems. :149–153.

Android applications pose security and privacy risks for end-users. These risks are often quantified by performing dynamic analysis and permission analysis of the Android applications after release. Prediction of security and privacy risks associated with Android applications at early stages of application development, e.g. when the developer(s) are writing the code of the application, might help Android application developers in releasing applications to end-users that have less security and privacy risk. The goal of this paper is to aid Android application developers in assessing the security and privacy risk associated with Android applications by using static code metrics as predictors. In our paper, we consider the security and privacy risk of an Android application as how susceptible the application is to leaking private information of end-users and to releasing vulnerabilities. We investigate how effectively static code metrics that are extracted from the source code of Android applications can be used to predict the security and privacy risk of Android applications. We collected 21 static code metrics of 1,407 Android applications, and used the collected static code metrics to predict the security and privacy risk of the applications. As the oracle of security and privacy risk, we used Androrisk, a tool that quantifies the amount of security and privacy risk of an Android application using analysis of Android permissions and dynamic analysis. To accomplish our goal, we used statistical learners such as the radial-based support vector machine (r-SVM). For r-SVM, we observe a precision of 0.83. Findings from our paper suggest that with proper selection of static code metrics, r-SVM can be used effectively to predict the security and privacy risk of Android applications.
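The prediction step - a radial-basis-kernel SVM over static code metrics, with a risk label derived from an external oracle such as Androrisk - might look roughly like the sketch below; the file name, the metric columns, and the binarisation of the risk score are assumptions for illustration.

```python
# Sketch: r-SVM on static code metrics; dataset layout and risk threshold assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import precision_score

df = pd.read_csv("android_metrics.csv")  # assumed: 21 static code metrics + "risk_score"
X = df.drop(columns=["risk_score"])
y = (df["risk_score"] >= df["risk_score"].median()).astype(int)  # assumed risk binarisation

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=7, stratify=y)

r_svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
r_svm.fit(X_train, y_train)
print("precision:", precision_score(y_test, r_svm.predict(X_test)))
```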

2017-05-17
Dutt, Nikil, Jantsch, Axel, Sarma, Santanu.  2016.  Toward Smart Embedded Systems: A Self-aware System-on-Chip (SoC) Perspective. ACM Trans. Embed. Comput. Syst.. 15:22:1–22:27.

Embedded systems must address a multitude of potentially conflicting design constraints such as resiliency, energy, heat, cost, performance, security, etc., all in the face of highly dynamic operational behaviors and environmental conditions. By incorporating elements of intelligence, the hope is that the resulting “smart” embedded systems will function correctly and within desired constraints in spite of highly dynamic changes in the applications and the environment, as well as in the underlying software/hardware platforms. Since terms related to “smartness” (e.g., self-awareness, self-adaptivity, and autonomy) have been used loosely in many software and hardware computing contexts, we first present a taxonomy of “self-x” terms and use this taxonomy to relate major “smart” software and hardware computing efforts. A major attribute for smart embedded systems is the notion of self-awareness that enables an embedded system to monitor its own state and behavior, as well as the external environment, so as to adapt intelligently. Toward this end, we use a System-on-Chip perspective to show how the CyberPhysical System-on-Chip (CPSoC) exemplar platform achieves self-awareness through a combination of cross-layer sensing, actuation, self-aware adaptations, and online learning. We conclude with some thoughts on open challenges and research directions.

2017-02-27
Geng, J., Ye, D., Luo, P..  2015.  Forecasting severity of software vulnerability using grey model GM(1,1). 2015 IEEE Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). :344–348.

Vulnerabilities usually represent the risk level of software, and it is of high value to forecast vulnerabilities so as to evaluate the security level of software. Current research mainly focuses on predicting the number of vulnerabilities or the occurrence time of vulnerabilities; however, to the best of our knowledge, there is no other research focusing on the prediction of vulnerabilities' severity, which we think is an important aspect reflecting vulnerabilities and software security. To compensate for this deficiency, we borrow the grey model GM(1,1) from grey system theory to forecast the severity of vulnerabilities. The experiment is carried out on real data collected from CVE and proves the feasibility of our predicting method.
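A standard GM(1,1) grey forecasting model of the kind applied here can be sketched in a few lines: accumulate the series, estimate the develop coefficient and grey input by least squares, and forecast from the whitening solution. The severity series below is synthetic; the paper's data come from CVE.

```python
# Sketch: textbook GM(1,1) fit and forecast on a synthetic severity series.
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit GM(1,1) to series x0 and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                           # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background (mean) sequence
    B = np.column_stack((-z1, np.ones(n - 1)))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # develop coefficient a, grey input b

    def x1_hat(k):                               # k = 0 corresponds to the first sample
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    return np.array([x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)])

severity = [6.1, 6.4, 6.8, 7.0, 7.3, 7.9]        # illustrative average-severity values
print("forecast:", gm11_forecast(severity, steps=2))
```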

2017-02-21
S. R. Islam, S. P. Maity, A. K. Ray.  2015.  "On compressed sensing image reconstruction using linear prediction in adaptive filtering". 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI). :2317–2323.

Compressed sensing (CS) or compressive sampling deals with reconstruction of signals from limited observations/measurements far below the Nyquist rate requirement. This is essential in many practical imaging systems, as sampling at the Nyquist rate may not always be possible due to limited storage facility, slow sampling rate, or measurements that are extremely expensive, e.g. magnetic resonance imaging (MRI). Mathematically, CS addresses the problem of finding the root of an unknown distribution comprising unknown as well as known observations. Robbins-Monro (RM) stochastic approximation, a non-parametric approach, is explored here as a solution to the CS reconstruction problem. A distance based linear prediction using the observed measurements is done to obtain the unobserved samples, followed by random noise addition to act as the residual (prediction error). A spatial domain adaptive Wiener filter is then used to diminish the noise and to reveal the new features from the degraded observations. Extensive simulation results highlight the relative performance gain over the existing work.

2014-09-17
Subramani, Shweta, Vouk, Mladen, Williams, Laurie.  2014.  An Analysis of Fedora Security Profile. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :35:1–35:2.

This paper examines security faults/vulnerabilities reported for Fedora. Results indicate that, at least in some situations, the roughly constant fault discovery rate may be used to guide estimation of residual vulnerabilities in an already released product, as well as possibly guide testing of the next version of the product.