Biblio

Found 560 results

Filters: Keyword is Monitoring
2017-02-27
Li-xiong, Z., Xiao-lin, X., Jia, L., Lu, Z., Xuan-chen, P., Zhi-yuan, M., Li-hong, Z..  2015.  Malicious URL prediction based on community detection. 2015 International Conference on Cyber Security of Smart Cities, Industrial Control System and Communications (SSIC). :1–7.

Traditional anti-virus technology is primarily based on static analysis and dynamic monitoring. However, both techniques depend heavily on application files, which increases the risk of being attacked and wastes time and network bandwidth. In this study, we propose a new graph-based method through which we can preliminarily detect malicious URLs without the application file. First, relationships between URLs are found through the relationships between people and URLs. Association rules are then mined, with a confidence value for each frequent URL. Second, a network of URLs is built from the association rules. Once the network is built, we cluster the data by modularity to detect communities, where each community represents a different type of URL. We suppose that if a URL is associated with a malicious community, the URL is probably malicious. In our experiments, we successfully captured 82% of malicious samples, a higher capture rate than that of traditional methods.
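
As a rough illustration of the pipeline described above, the sketch below builds a URL co-occurrence graph from hypothetical user visit records, detects communities by modularity, and flags URLs that share a community with known-malicious ones; the data, networkx usage, and flagging rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: URL community detection over hypothetical visit records.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical input: each record lists the URLs one user visited.
user_visits = [
    {"a.com", "b.com", "evil.net"},
    {"a.com", "b.com"},
    {"evil.net", "bad.org"},
    {"c.com", "d.com"},
]
known_malicious = {"bad.org"}

G = nx.Graph()
for urls in user_visits:
    urls = sorted(urls)
    for i in range(len(urls)):
        for j in range(i + 1, len(urls)):
            # Edge weight counts how many users visited both URLs.
            w = G.get_edge_data(urls[i], urls[j], {"weight": 0})["weight"]
            G.add_edge(urls[i], urls[j], weight=w + 1)

# Modularity-based communities group URLs that tend to be visited together.
for community in greedy_modularity_communities(G, weight="weight"):
    if community & known_malicious:
        print("suspect URLs:", community - known_malicious)
```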

Mulcahy, J. J., Huang, S..  2015.  An autonomic approach to extend the business value of a legacy order fulfillment system. 2015 Annual IEEE Systems Conference (SysCon) Proceedings. :595–600.

In the modern retailing industry, many enterprise resource planning (ERP) systems are considered legacy software systems that have become too expensive to replace and too costly to re-engineer. Countering the need to maintain and extend the business value of these systems is the need to do so in the simplest, cheapest, and least risky manner available. There are a number of approaches used by software engineers to mitigate the negative impact of evolving a legacy system, including leveraging service-oriented architecture to automate manual tasks previously performed by humans. A relatively recent approach in software engineering focuses on implementing self-managing attributes, or “autonomic” behavior, in software applications and systems of applications in order to reduce or eliminate the need for human monitoring and intervention. Entire systems can be autonomic, or they can be hybrid systems that implement one or more autonomic components to communicate with external systems. In this paper, we describe a commercial development project in which a legacy multi-channel commerce enterprise resource planning system was extended with a service-oriented architecture and an autonomic control-loop design to communicate with an external third-party security screening provider. The goal was to reduce the cost of the human labor necessary to screen an ever-increasing volume of orders and to reduce the potential for human error in the screening process. The solution automated what was previously an inefficient, incomplete, and potentially error-prone manual process by inserting a new autonomic software component into the existing order fulfillment workflow.
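
Autonomic components of the kind described above generally follow a monitor-analyze-plan-execute pattern. Below is a minimal sketch of such a control loop for order screening; the function names, stubs, and polling design are hypothetical stand-ins, not the project's actual interfaces.

```python
# Minimal monitor-analyze-plan-execute (MAPE) style loop; all names and
# stubs are hypothetical assumptions for illustration.
import time

def fetch_unscreened_orders():
    # Monitor: poll the order fulfillment workflow (hypothetical stub).
    return [{"id": 1, "customer": "x"}]

def screen(order):
    # Analyze: ask the external screening provider (hypothetical stub).
    return "clear"  # or "hold"

def route(order, verdict):
    # Plan/execute: move the order to the next workflow step.
    print(f"order {order['id']} -> {verdict}")

def autonomic_loop(poll_seconds=5.0, max_cycles=3):
    for _ in range(max_cycles):  # bounded so the sketch terminates
        for order in fetch_unscreened_orders():
            route(order, screen(order))
        time.sleep(poll_seconds)

autonomic_loop(poll_seconds=0.0)
```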

2017-02-23
K. Xiangying, C. Yanhui.  2015.  "Dynamic Remote Attestation Based on Concerns". 2015 8th International Symposium on Computational Intelligence and Design (ISCID). 1:76-80.

Based on an analysis of the relationships between challenger and attester in the remote attestation process, we propose a dynamic remote attestation model based on concerns. By combining the trusted root with a dynamic credible monitoring module, the measurement of all loaded modules in the integrity measurement architecture is converted into attestation of the basic computing environment, the dynamic credible monitoring module, and the requested service software module. We discuss the rationality of the model. The model uses a Merkle hash tree to store application software integrity metrics, which both protects the privacy of the other party's application software and improves the efficiency of remote attestation. An experimental prototype system shows that the model can verify the dynamic behavior of software, making up for the shortcomings of static measurement.
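
The Merkle hash tree mentioned above lets a verifier attest many software modules against a single root hash. The sketch below is a minimal Merkle-root computation assuming SHA-256 and hypothetical module measurements; it is not the paper's prototype.

```python
# Minimal Merkle hash tree sketch over hypothetical integrity measurements.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

measurements = [b"module-A v1.0", b"module-B v2.3", b"module-C v0.9"]
print(merkle_root(measurements).hex())  # one root attests all modules
```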

2017-02-21
M. Clark, L. Lampe.  2015.  "Single-channel compressive sampling of electrical data for non-intrusive load monitoring". 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP). :790-794.

Non-intrusive load monitoring (NILM) extracts information about how energy is being used in a building from electricity measurements collected at a single location. Obtaining measurements at only one location is attractive because it is inexpensive and convenient, but it can result in large amounts of data from high frequency electrical measurements. Different ways to compress or selectively measure this data are therefore required for practical implementations of NILM. We explore the use of random filtering and random demodulation, techniques that are closely related to compressed sensing, to offer a computationally simple way of compressing the electrical data. We show how these techniques can allow one to reduce the sampling rate of the electricity measurements, while requiring only one sampling channel and allowing accurate NILM performance. Our tests are performed using real measurements of electrical signals from a public data set, thus demonstrating their effectiveness on real appliances and allowing for reproducibility and comparison with other data management strategies for NILM.
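
For intuition, the following sketch implements the basic random-demodulation idea with NumPy: mix a high-rate signal with a pseudorandom +/-1 chipping sequence, then integrate and dump to obtain low-rate compressive samples. The signal, rates, and compression factor are hypothetical, not the paper's measurement setup.

```python
# Minimal random demodulator sketch on a synthetic electrical-style signal.
import numpy as np

rng = np.random.default_rng(0)
fs, duration = 8000, 1.0                       # "high" sampling rate, seconds
t = np.arange(int(fs * duration)) / fs
x = np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 180 * t)

downsample = 16                                # hypothetical compression factor
chips = rng.choice([-1.0, 1.0], size=x.size)   # pseudorandom +/-1 mixing
mixed = x * chips
# Integrate-and-dump: sum each block of `downsample` mixed samples,
# yielding one low-rate measurement per block (a single channel).
y = mixed.reshape(-1, downsample).sum(axis=1)
print(x.size, "->", y.size, "compressive samples")
```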

A. Bekan, M. Mohorcic, J. Cinkelj, C. Fortuna.  2015.  "An Architecture for Fully Reconfigurable Plug-and-Play Wireless Sensor Network Testbed". 2015 IEEE Global Communications Conference (GLOBECOM). :1-7.

In this paper we propose an architecture for a fully reconfigurable, plug-and-play wireless sensor network testbed. The proposed architecture supports easy reconfiguration, experimentation, and testing of standard protocol stacks (i.e. uIPv4 and uIPv6) as well as non-standardized clean-slate protocol stacks (e.g. configured using RIME). The parameters of the protocol stacks can be remotely reconfigured through an easy-to-use RESTful API. Additionally, we are able to fully reconfigure clean-slate protocol stacks at run-time. The architecture enables easy set-up of the network - plug - by using a protocol that automatically sets up a multi-hop network (i.e. the RPL protocol), and it enables reconfiguration and experimentation - play - through simple, RESTful interaction with each node individually. The reference implementation of the architecture uses a dual-stack Contiki OS with the ProtoStack tool for dynamic composition of services.

2017-02-14
J. J. Li, P. Abbate, B. Vega.  2015.  "Detecting Security Threats Using Mobile Devices". 2015 IEEE International Conference on Software Quality, Reliability and Security - Companion. :40-45.

In our previous work [1], we presented a study of using performance escalation to automatically detect Distributed Denial of Service (DDoS) attacks. We propose to enhance this security threat detection work by using mobile phones as detectors to identify outliers from normal traffic patterns as threats. The mobile solution makes detection portable to any service. This paper also shows that the same detection method works for advanced persistent threats.

C. H. Hsieh, C. M. Lai, C. H. Mao, T. C. Kao, K. C. Lee.  2015.  "AD2: Anomaly detection on active directory log data for insider threat monitoring". 2015 International Carnahan Conference on Security Technology (ICCST). :287-292.

Cases where what you see is not what you should believe are not rare in cyber security monitoring. Due to various camouflage tricks, such as packing or virtual private networks (VPNs), detecting advanced persistent threats (APTs) with only a signature-based malware detection system becomes more and more intractable. On the other hand, by carefully modeling users' subsequent behaviors in their daily routines, the probability that an account generates certain operations can be estimated and used for anomaly detection. To the best of our knowledge, a novel behavioral analytic framework dedicated to analyzing Active Directory domain service logs and monitoring potential insider threats is first proposed in this work. Experiments on a real dataset not only show that the proposed idea explores a new feasible direction for cyber security monitoring, but also give a guideline on how to deploy this framework in various environments.
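
One simple way to realize the behavioral-modeling idea sketched above is to score an account's recent operations by their average surprise under a per-account probability model. The snippet below is a minimal illustration with hypothetical event names and smoothing; the paper's AD2 framework is considerably richer.

```python
# Minimal per-account behavioral anomaly score over hypothetical AD events.
import math
from collections import Counter

history = ["logon", "read_share", "logon", "read_share", "logoff"] * 20
model = Counter(history)
total = sum(model.values())

def avg_surprise(events, smoothing=1e-3):
    # Additive smoothing keeps unseen events from zeroing the probability.
    score = 0.0
    for e in events:
        p = (model.get(e, 0) + smoothing) / (total + smoothing)
        score -= math.log(p)
    return score / len(events)  # higher means more anomalous

print(avg_surprise(["logon", "read_share", "logoff"]))      # routine: low
print(avg_surprise(["add_admin", "dump_creds", "logoff"]))  # unusual: high
```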

J. Vukalović, D. Delija.  2015.  "Advanced Persistent Threats - detection and defense". 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). :1324-1330.

The term “Advanced Persistent Threat” refers to a well-organized, malicious group of people who launch stealthy attacks against computer systems of specific targets, such as governments, companies, or the military. The attacks themselves are long-lasting, difficult to expose, and often use very advanced hacking techniques. Since they are advanced in nature, prolonged, and persistent, the organizations behind them have to possess a high level of knowledge, advanced tools, and competent personnel to execute them. The attacks are usually performed in several phases - reconnaissance, preparation, execution, gaining access, information gathering, and connection maintenance. In each of the phases, attacks can be detected with different probabilities. There are several ways to increase the level of security of an organization in order to counter these incidents. First and foremost, it is necessary to educate users and system administrators on different attack vectors and provide them with knowledge and protection so that the attacks are unsuccessful. Second, strict security policies must be implemented, including access control and restrictions (to information or the network), protecting information by encrypting it, and installing the latest security upgrades. Finally, it is possible to use IDS software tools to detect such anomalies (e.g. Snort, OSSEC, Sguil).

K. P. B. Anushka, Chamantha, A. P. Karunaweera, P. R. Priyashantha, H. D. R. Wickramasinghe, W. A. V. M. G. Wijethunge.  2015.  "Case study on exploitation, detection and prevention of user account DoS through Advanced Persistent Threats". 2015 Fifteenth International Conference on Advances in ICT for Emerging Regions (ICTer). :190-194.

Security analysts implement various security mechanisms to protect systems from attackers. Even though these mechanisms try to secure systems, a talented attacker may use these same techniques to launch a sophisticated attack. This paper discusses such an attack, user account Denial of Service (DoS), in which an attacker uses the user account lockout features of an application to lock out all user accounts, causing an enterprise-wide DoS. The attack was simulated using a stealthy attack mechanism known as an Advanced Persistent Threat (APT), implemented with an XMPP-based botnet. Through the simulation, the researchers discuss the patterns associated with the attack, which can be used to detect it in real time, and how the attack can be prevented from the perspective of developers, system engineers, and security analysts.
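
One detectable pattern such an attack produces is a burst of lockout events across many distinct accounts in a short window. The sketch below illustrates a sliding-window detector for that pattern; the window size, threshold, and event format are hypothetical.

```python
# Minimal sliding-window detector for an account-lockout DoS burst.
from collections import deque

WINDOW, THRESHOLD = 60.0, 50       # seconds, distinct locked-out accounts

recent = deque()                   # (timestamp, account) pairs

def on_lockout(ts, account):
    recent.append((ts, account))
    while recent and ts - recent[0][0] > WINDOW:
        recent.popleft()           # drop events outside the window
    accounts = {a for _, a in recent}
    if len(accounts) >= THRESHOLD:
        print(f"possible account DoS: {len(accounts)} accounts in {WINDOW}s")

for i in range(60):                # simulated attack burst
    on_lockout(ts=float(i), account=f"user{i}")
```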

2016-11-15
Keywhan Chung, University of Illinois at Urbana-Champaign, Charles A. Kamhoua, Air Force Research Laboratory, Kevin A. Kwiat, Air Force Research Laboratory, Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign, Ravishankar K. Iyer, University of Illinois at Urbana-Champaign.  2016.  Game Theory with Learning for Cyber Security Monitoring. IEEE High Assurance Systems Engineering Symposium (HASE 2016).

Recent attacks show that threats to cyber infrastructure are not only increasing in volume, but are also getting more sophisticated. The attacks may comprise multiple actions that are hard to differentiate from benign activity, so common detection techniques have to deal with high false positive rates. Because of the imperfect performance of automated detection techniques, responses to such attacks are highly dependent on human-driven decision-making processes. While game theory has been applied to many problems that require rational decision-making, we find limitations in applying such methods to security games. In this work, we propose Q-Learning to react automatically to the adversarial behavior of a suspicious user and secure the system. This work compares variations of Q-Learning with a traditional stochastic game. Simulation results show the feasibility of naive Q-Learning despite restricted information on opponents.
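
For concreteness, the following is a minimal tabular Q-Learning sketch for a toy security game: the defender chooses a response in a "suspicious" monitoring state and learns from a reward signal. The states, actions, rewards, and dynamics are hypothetical toys, not the paper's game model.

```python
# Minimal tabular Q-Learning sketch for a toy defender-vs-attacker game.
import random
from collections import defaultdict

actions = ["ignore", "rate_limit", "block"]
Q = defaultdict(float)                     # Q[(state, action)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):                   # hypothetical toy dynamics
    if state == "suspicious" and action == "block":
        return "safe", 1.0
    if state == "suspicious" and action == "ignore":
        return "compromised", -1.0
    return "suspicious", 0.0

state = "suspicious"
for _ in range(5000):
    action = (random.choice(actions) if random.random() < epsilon
              else max(actions, key=lambda a: Q[(state, a)]))
    nxt, reward = step(state, action)
    best_next = max(Q[(nxt, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = "suspicious"                   # reset the toy episode each step

print("learned response:", max(actions, key=lambda a: Q[("suspicious", a)]))
```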

2016-04-07
Aron Laszka, Yevgeniy Vorobeychik, Xenofon Koutsoukos.  2015.  Resilient Observation Selection in Adversarial Settings. 54th IEEE Conference on Decision and Control (CDC).

Monitoring large areas using sensors is fundamental in a number of applications, including electric power grids, traffic networks, and sensor-based pollution control systems. However, the number of sensors that can be deployed is often limited by financial or technological constraints. This problem is further complicated by the presence of strategic adversaries, who may disable some of the deployed sensors in order to impair the operator's ability to make predictions. Assuming that the operator employs a Gaussian-process-based regression model, we formulate the problem of attack-resilient sensor placement as the problem of selecting a subset from a set of possible observations, with the goal of minimizing the uncertainty of predictions. We show that both finding an optimal resilient subset and finding an optimal attack against a given subset are NP-hard problems. Since both the design and the attack problems are computationally complex, we propose efficient heuristic algorithms for solving them and present theoretical approximability results. Finally, we show that the proposed algorithms perform exceptionally well in practice using numerical results based on real-world datasets.
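
A standard greedy heuristic for this kind of observation selection picks, one at a time, the observation that most reduces the total Gaussian-process posterior variance. The sketch below illustrates that step alone, without the paper's adversarial/resilience component; the locations and kernel parameters are hypothetical.

```python
# Minimal greedy GP observation selection on hypothetical sensor locations.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(30, 2))          # candidate sensor locations

def rbf(A, B, ls=2.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

K = rbf(X, X) + 1e-6 * np.eye(len(X))         # jitter for numerical stability

def total_posterior_variance(S):
    # Trace of the noiseless GP posterior covariance given observations S.
    if not S:
        return float(np.trace(K))
    K_SS = K[np.ix_(S, S)]
    K_xS = K[:, S]
    return float(np.trace(K - K_xS @ np.linalg.solve(K_SS, K_xS.T)))

chosen = []
for _ in range(5):                            # greedily pick 5 observations
    best = min((i for i in range(len(X)) if i not in chosen),
               key=lambda i: total_posterior_variance(chosen + [i]))
    chosen.append(best)
print("selected sensor indices:", chosen)
```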

2015-11-16
Cuong Pham, University of Illinois at Urbana-Champaign, Zachary J. Estrada, University of Illinois at Urbana-Champaign, Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign, Ravishankar K. Iyer, University of Illinois at Urbana-Champaign.  2014.  Reliability and Security Monitoring of Virtual Machines using Hardware Architectural Invariants. 44th International Conference on Dependable Systems and Networks.

This paper presents a solution that simultaneously addresses both reliability and security (RnS) in a monitoring framework. We identify the commonalities between reliability and security to guide the design of HyperTap, a hypervisor-level framework that efficiently supports both types of monitoring in virtualization environments. In HyperTap, the logging of system events and states is common across monitors and constitutes the core of the framework. The audit phase of each monitor is implemented and operated independently. In addition, HyperTap relies on hardware invariants to provide a strongly isolated root of trust. HyperTap uses active monitoring, which can be adapted to enforce a wide spectrum of RnS policies. We validate HyperTap by introducing three example monitors: Guest OS Hang Detection (GOSHD), Hidden RootKit Detection (HRKD), and Privilege Escalation Detection (PED). Our experiments with fault injection and real rootkits/exploits demonstrate that HyperTap provides robust monitoring with low performance overhead.

Winner of the William C. Carter Award for Best Paper based on PhD work and Best Paper Award voted by conference participants.

Cuong Pham, University of Illinois at Urbana-Champaign, Zachary J. Estrada, University of Illinois at Urbana-Champaign, Phuong Cao, University of Illinois at Urbana-Champaign, Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign, Ravishankar K. Iyer, University of Illinois at Urbana-Champaign.  2014.  Building Reliable and Secure Virtual Machines using Architectural Invariants. IEEE Security and Privacy. 12(5):82-85.

Reliability and security tend to be treated separately because they appear orthogonal: reliability focuses on accidental failures, security on intentional attacks. Because of the apparent dissimilarity between the two, tools to detect and recover from different classes of failures and attacks are usually designed and implemented differently. So, integrating support for reliability and security in a single framework is a significant challenge.

Here, we discuss how to address this challenge in the context of cloud computing, for which reliability and security are growing concerns. Because cloud deployments usually consist of commodity hardware and software, efficient monitoring is key to achieving resiliency. Although reliability and security monitoring might use different types of analytics, the same sensing infrastructure can provide inputs to monitoring modules.

We split monitoring into two phases: logging and auditing. Logging captures data or events; it constitutes the framework’s core and is common to all monitors. Auditing analyzes data or events; it’s implemented and operated independently by each monitor. To support a range of auditing policies, logging must capture a complete view, including both actions and states of target systems. It must also provide useful, trustworthy information regarding the captured view.

We applied these principles when designing HyperTap, a hypervisor-level monitoring framework for virtual machines (VMs). Unlike most VM-monitoring techniques, HyperTap employs hardware architectural invariants (hardware invariants, for short) to establish the root of trust for logging. Hardware invariants are properties defined and enforced by a hardware platform (for example, the x86 instruction set architecture). Additionally, HyperTap supports continuous, event-driven VM monitoring, which enables both capturing the system state and responding rapidly to actions of interest.
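
The logging/auditing split described above can be pictured as one shared logging core fanning events out to independently implemented auditors. The sketch below is an illustrative skeleton in that spirit; the event fields and the two toy monitors are hypothetical, not HyperTap's actual mechanisms.

```python
# Illustrative skeleton of a shared logging core with independent auditors.
class LoggingCore:
    def __init__(self):
        self.auditors = []

    def register(self, auditor):
        self.auditors.append(auditor)

    def log(self, event):
        # Common core: capture the event once, fan out to every monitor.
        for auditor in self.auditors:
            auditor.audit(event)

class HangDetector:
    # Independent audit phase #1 (hypothetical heuristic).
    def audit(self, event):
        if event.get("type") == "timer" and event.get("guest_idle_ticks", 0) > 1000:
            print("possible guest OS hang")

class PrivilegeAuditor:
    # Independent audit phase #2 (hypothetical heuristic).
    def audit(self, event):
        if (event.get("type") == "syscall" and event.get("cpl_before") == 3
                and event.get("cpl_after") == 0 and not event.get("via_gate")):
            print("possible privilege escalation")

core = LoggingCore()
core.register(HangDetector())
core.register(PrivilegeAuditor())
core.log({"type": "syscall", "cpl_before": 3, "cpl_after": 0, "via_gate": False})
```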

2015-05-06
Kaur, R., Singh, M..  2014.  A Survey on Zero-Day Polymorphic Worm Detection Techniques. Communications Surveys Tutorials, IEEE. 16:1520-1549.

Zero-day polymorphic worms pose a serious threat to Internet security. With their ability to rapidly propagate, these worms increasingly threaten Internet hosts and services. Not only can they exploit unknown vulnerabilities, but they can also change their own representations on each new infection or encrypt their payloads using a different key per infection. The same worm thus has many signature variations, making fingerprinting very difficult. Therefore, signature-based defenses and traditional security layers miss these stealthy and persistent threats. This paper provides a detailed survey outlining the research efforts related to the detection of modern zero-day malware in the form of zero-day polymorphic worms.

Pandey, S.K., Mehtre, B.M..  2014.  A Lifecycle Based Approach for Malware Analysis. Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on. :767-771.

Most detection approaches, such as signature-based, anomaly-based, and specification-based, are not able to analyze and detect all types of malware. The signature-based approach to malware detection has one major drawback: it cannot detect zero-day attacks. The fundamental limitation of the anomaly-based approach is its high false alarm rate, and specification-based detection often has difficulty specifying completely and accurately the entire set of valid behaviors a malware should exhibit. Modern malware developers try to avoid detection using several techniques, such as polymorphism, metamorphism, and various hiding techniques. In order to overcome these issues, we propose a new approach for malware analysis and detection that consists of the following twelve stages: Inbound Scan, Inbound Attack, Spontaneous Attack, Client-Side Exploit, Egg Download, Device Infection, Local Reconnaissance, Network Surveillance, Communications, Peer Coordination, Attack Preparation, and Malicious Outbound Propagation. All of these stages are integrated as an interrelated process in our proposed approach. This approach addresses the limitations of the three earlier approaches by monitoring the behavioral activity of malware at each and every stage of its lifecycle and finally reporting on the maliciousness of the files or software.

Chunhui Zhao.  2014.  Fault subspace selection and analysis of relative changes based reconstruction modeling for multi-fault diagnosis. Control and Decision Conference (2014 CCDC), The 26th Chinese. :235-240.

Online fault diagnosis has been a crucial task for industrial processes. Reconstruction-based fault diagnosis has been drawing special attention as a good alternative to the traditional contribution plot. It identifies the fault cause by finding the specific fault subspace that can well eliminate alarming signals from a bunch of alternatives that have been prepared based on historical fault data. However, in practice, the abnormality may result from the joint effects of multiple faults, which thus cannot be well corrected by a single fault subspace archived in the historical fault library. In the present work, an aggregative reconstruction-based fault diagnosis strategy is proposed to handle the case where multiple fault causes jointly contribute to the abnormal process behaviors. First, fault subspaces are extracted based on historical fault data in two different monitoring subspaces, where analysis of relative changes is used to enclose the major fault effects that are responsible for different alarming monitoring statistics. Then, a fault subspace selection strategy is developed to analyze the combinatorial fault nature, which sorts and selects the informative fault subspaces that are most likely to be responsible for the concerned abnormalities. Finally, an aggregative fault subspace is calculated by combining the selected fault subspaces; it represents the joint effects of multiple faults and works as the final reconstruction model for online fault diagnosis. Theoretical support is framed and the related statistical characteristics are analyzed. The method's feasibility and performance are illustrated with simulated multiple faults using data from the Tennessee Eastman (TE) benchmark process.
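
The core reconstruction step can be illustrated simply: for each candidate fault subspace, subtract the least-squares fault contribution from the alarming sample and see whether the monitoring statistic returns to normal. The sketch below uses synthetic data, single-direction fault subspaces, and a plain squared-deviation statistic standing in for the paper's monitoring statistics.

```python
# Minimal reconstruction-based diagnosis sketch on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
normal = rng.normal(size=(500, 6))            # historical normal operation
mean, std = normal.mean(0), normal.std(0)

fault_subspaces = {                           # hypothetical fault directions
    "fault_A": np.eye(6)[:, [0]],
    "fault_B": np.eye(6)[:, [3]],
}

x = rng.normal(size=6)
x[3] += 8.0                                   # inject "fault B"

def stat(v):
    # Squared standardized deviation from the normal model.
    z = (v - mean) / std
    return float(z @ z)

for name, Xi in fault_subspaces.items():
    # Reconstruct: remove the least-squares fault magnitude along Xi.
    f, *_ = np.linalg.lstsq(Xi, x - mean, rcond=None)
    x_rec = x - Xi @ f
    # The subspace that restores the statistic to normal is the likely cause.
    print(name, "residual statistic:", round(stat(x_rec), 2))
```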

Dong-Hoon Shin, Shibo He, Junshan Zhang.  2014.  Robust, Secure, and Cost-Effective Design for Cyber-Physical Systems. Intelligent Systems, IEEE. 29:66-69.

Cyber-physical systems (CPS) can potentially benefit a wide array of applications and areas. Here, the authors look at some of the challenges surrounding CPS, and consider a feasible solution for creating a robust, secure, and cost-effective architecture.

Sanandaji, B.M., Bitar, E., Poolla, K., Vincent, T.L..  2014.  An abrupt change detection heuristic with applications to cyber data attacks on power systems. American Control Conference (ACC), 2014. :5056-5061.

We present an analysis of a heuristic for abrupt change detection of systems with bounded state variations. The proposed analysis is based on the Singular Value Decomposition (SVD) of a history matrix built from system observations. We show that monitoring the largest singular value of the history matrix can be used as a heuristic for detecting abrupt changes in the system outputs. We provide sufficient detectability conditions for the proposed heuristic. As an application, we consider detecting malicious cyber data attacks on power systems and test our proposed heuristic on the IEEE 39-bus testbed.
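
A minimal version of the heuristic is easy to state: slide a window over the output stream, stack it into a history matrix, and flag when the largest singular value jumps. The sketch below uses a synthetic signal and a hypothetical threshold, not the paper's power-system data.

```python
# Minimal sketch: largest singular value of a sliding history matrix.
import numpy as np

rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(0, 0.1, 200),
                    rng.normal(0, 0.1, 200) + 2.0])  # abrupt shift at t=200

rows, cols = 10, 20                                  # history matrix shape
for t in range(rows * cols, len(y)):
    H = y[t - rows * cols:t].reshape(rows, cols)     # history matrix
    sigma_max = np.linalg.svd(H, compute_uv=False)[0]
    if sigma_max > 10.0:                             # hypothetical threshold
        print("abrupt change flagged at t =", t)
        break
```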

Sung-Hwan Ahn, Nam-Uk Kim, Tai-Myoung Chung.  2014.  Big data analysis system concept for detecting unknown attacks. Advanced Communication Technology (ICACT), 2014 16th International Conference on. :269-272.

Recently, the threat of previously unknown cyber-attacks has been increasing because existing security systems are not able to detect them. Past cyber-attacks had simple purposes, such as leaking personal information by attacking a PC or destroying a system. However, the goal of recent hacking attacks has changed from leaking information and destroying services to attacking large-scale systems such as critical infrastructures and state agencies. In other words, existing defence technologies to counter these attacks are based on pattern-matching methods, which are very limited. Because of this, in the event of new and previously unknown attacks, the detection rate becomes very low and false negatives increase. To defend against these unknown attacks, which cannot be detected with existing technology, we propose a new model based on big data analysis techniques that can extract information from a variety of sources to detect future attacks. We expect our model to be the basis of future Advanced Persistent Threat (APT) detection and prevention system implementations.

Fachkha, C., Bou-Harb, E., Debbabi, M..  2014.  Fingerprinting Internet DNS Amplification DDoS Activities. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1-5.

This work proposes a novel approach to infer and characterize Internet-scale DNS amplification DDoS attacks by leveraging the darknet space. Complementary to the pioneering work on inferring Distributed Denial of Service (DDoS) activity using darknets, this work shows that we can extract DDoS activities without relying on backscatter analysis. The aim of this work is to extract cyber security intelligence related to DNS amplification DDoS activities, such as detection period, attack duration, intensity, packet size, rate, and geolocation, in addition to various network-layer and flow-based insights. To achieve this task, the proposed approach exploits certain DDoS parameters to detect the attacks. We empirically evaluate the proposed approach using 720 GB of real darknet data collected from a /13 address space during a recent three-month period. Our analysis reveals that the approach was successful in inferring significant DNS amplification DDoS activities, including the recent prominent attack that targeted one of the largest anti-spam organizations. Moreover, the analysis disclosed the mechanism of such DNS amplification DDoS attacks. Further, the results uncover high-speed and stealthy attempts that were never previously documented. The case study of the largest DDoS attack in history led to a better understanding of the nature and scale of this threat and can generate inferences that could contribute to detecting, preventing, assessing, mitigating, and even attributing DNS amplification DDoS activities.
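
The parameter-based detection idea can be illustrated by aggregating darknet packets into flows and flagging those whose properties match DNS amplification traffic (UDP sourced from port 53, large payloads, high rate). The records and thresholds below are hypothetical, not the paper's tuned values.

```python
# Minimal flow aggregation and DNS-amplification flagging on toy records.
from collections import defaultdict

packets = [  # (timestamp, src_ip, src_port, proto, payload_bytes)
    (0.00, "198.51.100.7", 53, "udp", 3000),
    (0.01, "198.51.100.7", 53, "udp", 3100),
    (0.02, "198.51.100.7", 53, "udp", 2900),
    (5.00, "203.0.113.9", 80, "tcp", 60),
]

flows = defaultdict(list)
for ts, ip, port, proto, size in packets:
    flows[(ip, port, proto)].append((ts, size))

for (ip, port, proto), pkts in flows.items():
    duration = max(0.001, pkts[-1][0] - pkts[0][0])   # avoid divide-by-zero
    rate = len(pkts) / duration                        # packets per second
    avg_size = sum(s for _, s in pkts) / len(pkts)
    if proto == "udp" and port == 53 and avg_size > 1000 and rate > 10:
        print(f"possible DNS amplification activity from {ip}")
```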

Hammi, B., Khatoun, R., Doyen, G..  2014.  A Factorial Space for a System-Based Detection of Botcloud Activity. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1-5.

Today, beyond legitimate usage, the numerous advantages of cloud computing are being exploited by attackers, and botnets supporting DDoS attacks are among the greatest beneficiaries of this malicious use. Such a phenomenon is a major issue since it strongly increases the power of distributed massive attacks while involving the responsibility of cloud service providers that do not possess appropriate solutions. In this paper, we present an original approach that enables source-based detection of UDP-flood DDoS attacks based on a distributed system behavior analysis. Based on a principal component analysis, our contribution consists of: (1) defining the involvement of system metrics in a botcloud's behavior, (2) showing the invariability of the factorial space that defines a botcloud activity, and (3) among several legitimate activities, using this factorial space to enable botcloud detection.
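
The factorial-space idea can be sketched as follows: fit a PCA model on system metrics from legitimate activity, then flag hosts whose metrics leave a large residual outside the retained subspace. The metrics, data, and control limit below are synthetic illustrations, not the paper's experimental setup.

```python
# Minimal PCA residual sketch over hypothetical per-host system metrics.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
# Hypothetical metrics per host: [cpu %, mem %, pkts_out/s, distinct_dst/s]
legit = rng.normal([30, 40, 200, 20], [5, 5, 30, 4], size=(300, 4))
botcloud = rng.normal([35, 42, 5000, 900], [5, 5, 300, 50], size=(5, 4))

pca = PCA(n_components=2).fit(legit)       # "factorial space" of legit use

def residual(X):
    # Distance from each sample to the legitimate-activity subspace.
    recon = pca.inverse_transform(pca.transform(X))
    return np.linalg.norm(X - recon, axis=1)

threshold = residual(legit).max()          # hypothetical control limit
flagged = (residual(botcloud) > threshold).sum()
print("flagged hosts:", flagged, "of", len(botcloud))
```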

Schaefer, J..  2014.  A semantic self-management approach for service platforms. Network Operations and Management Symposium (NOMS), 2014 IEEE. :1-4.

Future personal living environments feature an increasing number of convenience-, health-, and security-related applications provided by distributed services, which not only support users but also require tasks such as installation, configuration, and continuous administration. These tasks are becoming tiresome, complex, and error-prone. One way to escape this situation is to enable service platforms to configure and manage themselves. The approach presented here extends services with semantic descriptions to enable platform-independent autonomous service-level management using model-driven architecture and autonomic computing concepts. It has been implemented as an OSGi-based semantic autonomic manager, whose concept, prototypical implementation, and evaluation are presented.

Barrere, M., Badonnel, R., Festor, O..  2014.  Vulnerability Assessment in Autonomic Networks and Services: A Survey. Communications Surveys Tutorials, IEEE. 16:988-1004.

Autonomic networks and services are exposed to a large variety of security risks. The vulnerability management process plays a crucial role in ensuring their safe configuration and preventing security attacks. In this survey, we focus on the assessment of vulnerabilities in autonomic environments. In particular, we analyze current methods and techniques contributing to the discovery, description, and detection of these vulnerabilities. We also point out important challenges that should be faced in order to fully integrate this process into the autonomic management plane.

2015-05-05
Okathe, T., Heydari, S.S., Sood, V., El-khatib, K..  2014.  Unified multi-critical infrastructure communication architecture. Communications (QBSC), 2014 27th Biennial Symposium on. :178-183.

Recent events have brought to light the increasingly intertwined nature of modern infrastructures. As a result, much effort is being put towards protecting these vital infrastructures, without which modern society suffers dire consequences. These infrastructures, due to their intricate nature, behave in complex ways. Improving their resilience and understanding their behavior requires a collaborative effort between the private sector that operates these infrastructures and the government sector that regulates them. This collaboration, in the form of information sharing, requires a new type of information network whose goal is twofold: to enable infrastructure operators to share status information among interdependent infrastructure nodes, and to allow the sharing of vital information concerning threats and other contingencies in the form of alerts. A communication model that meets these requirements while maintaining flexibility and scalability is presented in this paper.

Kaci, A., Kamwa, I., Dessaint, L.-A., Guillon, S..  2014.  Phase angles as predictors of network dynamic security limits and further implications. PES General Meeting | Conference Exposition, 2014 IEEE. :1-6.

In the United States, the number of Phasor Measurement Units (PMU) will increase from 166 networked devices in 2010 to 1043 in 2014. According to the Department of Energy, they are being installed in order to “evaluate and visualize reliability margin (which describes how close the system is to the edge of its stability boundary).” However, there is still a lot of debate in academia and industry around the usefulness of phase angles as unambiguous predictors of dynamic stability. In this paper, using 4-year of actual data from Hydro-Québec EMS, it is shown that phase angles enable satisfactory predictions of power transfer and dynamic security margins across critical interface using random forest models, with both explanation level and R-squares accuracy exceeding 99%. A generalized linear model (GLM) is next implemented to predict phase angles from day-ahead to hour-ahead time frames, using historical phase angles values and load forecast. Combining GLM based angles forecast with random forest mapping of phase angles to power transfers result in a new data-driven approach for dynamic security monitoring.