
Found 570 results

Filters: Keyword is Data models
2020-09-28
Oya, Simon, Troncoso, Carmela, Pérez-González, Fernando.  2019.  Rethinking Location Privacy for Unknown Mobility Behaviors. 2019 IEEE European Symposium on Security and Privacy (EuroS&P). :416–431.
Location Privacy-Preserving Mechanisms (LPPMs) in the literature largely consider that users' data available for training wholly characterizes their mobility patterns. Thus, they hardwire this information in their designs and evaluate their privacy properties with these same data. In this paper, we aim to understand the impact of this decision on the level of privacy these LPPMs may offer in real life when the users' mobility data may be different from the data used in the design phase. Our results show that, in many cases, training data does not capture users' behavior accurately and, thus, the level of privacy provided by the LPPM is often overestimated. To address this gap between theory and practice, we propose to use blank-slate models for LPPM design. Contrary to the hardwired approach, which assumes known users' behavior, blank-slate models learn the users' behavior from the queries to the service provider. We leverage this blank-slate approach to develop a new family of LPPMs, which we call Profile Estimation-Based LPPMs. Using real data, we empirically show that our proposal outperforms optimal state-of-the-art mechanisms designed on sporadic hardwired models. In non-sporadic location privacy scenarios, our method is only better when use of the location privacy service is not continuous. It is our hope that eliminating the need to bootstrap the mechanisms with training data, and ensuring that the mechanisms are lightweight and easy to compute, helps foster the integration of location privacy protections in deployed systems.
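As a concrete (if toy) reading of the blank-slate approach, the sketch below keeps a Laplace-smoothed empirical profile that is updated from the queries themselves, with no offline training data, and draws obfuscated locations from a distance-weighted distribution. The 1-D grid of 10 locations and the weighting scheme are illustrative assumptions, not the paper's actual Profile Estimation-Based mechanism.

    import numpy as np

    # Sketch of a blank-slate profile estimate: counts are updated from the
    # queries themselves, with no offline training data. The 1-D grid of 10
    # locations and the distance-weighted obfuscation are illustrative.

    N_LOCATIONS = 10
    rng = np.random.default_rng(0)
    counts = np.ones(N_LOCATIONS)            # Laplace-smoothed (uniform prior)

    def update_profile(reported_location):
        """Refine the empirical profile after each query to the provider."""
        counts[reported_location] += 1
        return counts / counts.sum()

    def obfuscate(true_location, profile, epsilon=1.0):
        """Report a noisy location, favoring places near the true one."""
        dist = np.abs(np.arange(N_LOCATIONS) - true_location)
        weights = profile * np.exp(-epsilon * dist)
        return rng.choice(N_LOCATIONS, p=weights / weights.sum())

    profile = counts / counts.sum()
    for true_loc in rng.integers(0, N_LOCATIONS, size=100):  # simulated trace
        profile = update_profile(obfuscate(true_loc, profile))

    print("estimated profile:", np.round(profile, 2))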
Chertchom, Prajak, Tanimoto, Shigeaki, Konosu, Tsutomu, Iwashita, Motoi, Kobayashi, Toru, Sato, Hiroyuki, Kanai, Atsushi.  2019.  Data Management Portfolio for Improvement of Privacy in Fog-to-cloud Computing Systems. 2019 8th International Congress on Advanced Applied Informatics (IIAI-AAI). :884–889.
With the challenge of the vast amount of data generated by devices at the edge of networks, new architectures need a well-established data service model that accounts for privacy concerns. This paper presents an architecture of data transmission and a data portfolio with privacy for fog-to-cloud (DPPforF2C). We propose a practical data model with privacy from a digitalized information perspective at fog nodes. In addition, we also propose an architecture for implementing the privacy of DPPforF2C used in fog computing. Technically, we design a data portfolio based on the Message Queuing Telemetry Transport (MQTT) and the Advanced Message Queuing Protocol (AMQP). We aim to propose sample data models with a privacy architecture because there are some differences in the data obtained from IoT devices and sensors. Thus, we propose an architecture with the privacy of DPPforF2C for publishing data from edge devices to fog and cloud servers that could be applied to fog architectures in the future.
2020-09-21
Akbay, Abdullah Basar, Wang, Weina, Zhang, Junshan.  2019.  Data Collection from Privacy-Aware Users in the Presence of Social Learning. 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton). :679–686.
We study a model where a data collector obtains data from users through a payment mechanism to learn the underlying state from the elicited data. The private signal of each user represents her individual knowledge about the state. Through social interactions, each user can also learn noisy versions of her friends' signals, which are called group signals. Based on both her private signal and group signals, each user makes strategic decisions to report a privacy-preserved version of her data to the data collector. We develop a Bayesian game theoretic framework to study the impact of social learning on users' data reporting strategies and devise the payment mechanism for the data collector accordingly. Our findings reveal that the Bayesian-Nash equilibrium can be in the form of either a symmetric randomized response (SR) strategy or an informative non-disclosive (ND) strategy. A generalized majority voting rule is applied by each user to her noisy group signals to determine which strategy to follow. When a user plays the ND strategy, she reports privacy-preserving data based entirely on her group signals, independent of her private signal, which indicates that her privacy cost is zero. Both the data collector and the users can benefit from social learning, which drives down the privacy costs and helps improve the state estimation at a given payment budget. We derive bounds on the minimum total payment required to achieve a given level of state estimation accuracy.
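A minimal sketch of the two equilibrium strategies, assuming binary signals: the SR strategy is a classic randomized response over the private signal, while the ND strategy reports a generalized majority vote over group signals only. The truth probability and the agreement threshold used to switch between them are invented for illustration.

    import random

    # Sketch of the two equilibrium strategies for binary signals. The
    # truth probability and the agreement threshold are invented here.

    def majority_vote(group_signals):
        """Generalized majority vote over noisy binary group signals."""
        return int(sum(group_signals) > len(group_signals) / 2)

    def symmetric_randomized_response(private_signal, p_truth=0.75):
        """SR strategy: report the private signal only with probability p_truth."""
        return private_signal if random.random() < p_truth else 1 - private_signal

    def non_disclosive(group_signals):
        """ND strategy: the report depends only on group signals, so the
        private signal is never disclosed and the privacy cost is zero."""
        return majority_vote(group_signals)

    private, group = 1, [1, 0, 1, 1, 0]
    agreement = abs(sum(group) / len(group) - 0.5)   # how decisive the group is
    report = (non_disclosive(group) if agreement > 0.3
              else symmetric_randomized_response(private))
    print("reported:", report)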
Zhang, Xianzhen, Chen, Zhanfang, Gong, Yue, Liu, Wen.  2019.  A Access Control Model of Associated Data Sets Based on Game Theory. 2019 International Conference on Machine Learning, Big Data and Business Intelligence (MLBDBI). :1–4.
With the popularity of Internet applications and their rapid development, data use and sharing may lead to the divulgence of sensitive information. To deal with the privacy protection issue more effectively, in this paper we propose an access control model for associated data sets based on game theory. When an access to private data is about to happen, the model quantifies the extent to which the visitor would gain sensitive information, compares that gain with the tolerance of the sensitive information's owner, and finally decides whether to allow the visitor's access request.
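The decision rule can be sketched as a simple threshold test, under the assumption that the visitor's gain is measured as entropy reduction about the sensitive attribute; the measure, tolerance, and numbers below are illustrative, not the paper's game-theoretic formulation.

    import math

    # Toy version of the decision rule: quantify the information a visitor
    # would gain about a sensitive attribute and grant access only if the
    # gain stays within the owner's tolerance. Entropy reduction (in bits)
    # is used here as an illustrative measure.

    def information_gain(prior, posterior):
        h = lambda p: -sum(x * math.log2(x) for x in p if x > 0)
        return h(prior) - h(posterior)

    owner_tolerance = 0.5                  # bits of leakage the owner accepts
    prior = [0.5, 0.5]                     # visitor's belief before access
    posterior = [0.9, 0.1]                 # belief after linking the data sets

    gain = information_gain(prior, posterior)
    print("gain: %.2f bits ->" % gain,
          "allow" if gain <= owner_tolerance else "deny")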
2020-09-18
Zhang, Fan, Kodituwakku, Hansaka Angel Dias Edirisinghe, Hines, J. Wesley, Coble, Jamie.  2019.  Multilayer Data-Driven Cyber-Attack Detection System for Industrial Control Systems Based on Network, System, and Process Data. IEEE Transactions on Industrial Informatics. 15:4362–4369.
The growing number of attacks against cyber-physical systems in recent years elevates the concern for cybersecurity of industrial control systems (ICSs). The current efforts of ICS cybersecurity are mainly based on firewalls, data diodes, and other methods of intrusion prevention, which may not be sufficient for growing cyber threats from motivated attackers. To enhance the cybersecurity of ICS, a cyber-attack detection system built on the concept of defense-in-depth is developed utilizing network traffic data, host system data, and measured process parameters. This attack detection system provides multiple layers of defense in order to give defenders precious time before unrecoverable consequences occur in the physical system. The data used for demonstrating the proposed detection system are from a real-time ICS testbed. Five attacks, including man-in-the-middle (MITM), denial of service (DoS), data exfiltration, data tampering, and false data injection, are carried out to simulate the consequences of cyber attacks and generate data for building data-driven detection models. Four classical classification models based on network data and host system data are studied, including k-nearest neighbor (KNN), decision tree, bootstrap aggregating (bagging), and random forest (RF), to provide a secondary line of defense of cyber-attack detection in the event that the intrusion prevention layer fails. Intrusion detection results suggest that KNN, bagging, and RF have low missed alarm and false alarm rates for MITM and DoS attacks, providing accurate and reliable detection of these cyber attacks. Cyber attacks that may not be detectable by monitoring network and host system data, such as command tampering and false data injection attacks by an insider, are monitored for by traditional process monitoring protocols. In the proposed detection system, an auto-associative kernel regression model is studied to strengthen early attack detection. The results show that this approach detects physically impactful cyber attacks before significant consequences occur. The proposed multiple-layer data-driven cyber-attack detection system utilizing network, system, and process data is a promising solution for safeguarding an ICS.
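The classification layer of such a system can be sketched with off-the-shelf models; the snippet below trains three of the four studied classifiers on synthetic stand-in data and reports the missed-alarm and false-alarm rates the abstract refers to. Feature values, class imbalance, and hyperparameters are placeholders for the real ICS testbed data.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix

    # Synthetic stand-in for network/host features; label 1 = attack traffic.
    X, y = make_classification(n_samples=2000, n_features=20,
                               weights=[0.9, 0.1], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    for name, clf in [("KNN", KNeighborsClassifier()),
                      ("Bagging", BaggingClassifier(random_state=0)),
                      ("RF", RandomForestClassifier(random_state=0))]:
        clf.fit(X_tr, y_tr)
        tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
        print(name, "missed-alarm rate:", fn / (fn + tp),
              "false-alarm rate:", fp / (fp + tn))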
2020-09-11
Shukla, Ankur, Katt, Basel, Nweke, Livinus Obiora.  2019.  Vulnerability Discovery Modelling With Vulnerability Severity. 2019 IEEE Conference on Information and Communication Technology. :1–6.
Web browsers are primary targets of attacks because of their extensive use and the fact that they interact with sensitive data. Vulnerabilities present in a web browser can pose serious risk to millions of users. Thus, it is pertinent to address these vulnerabilities to provide adequate protection for personally identifiable information. Research done in the past has shown that few vulnerability discovery models (VDMs) highlight the characterization of the vulnerability discovery process. In these models, severity, which is one of the most crucial properties, has not been considered. Vulnerabilities can be categorized into different levels based on their severity, and the discovery process of each kind of vulnerability differs from the others. Hence, it is essential to incorporate the severity of the vulnerabilities when modelling the vulnerability discovery process. This paper proposes a model to quantitatively assess the vulnerabilities present in software, with consideration for the severity of the vulnerabilities. The proposed model can be applied to approximate the number of vulnerabilities, along with the vulnerability discovery rate, future occurrence of vulnerabilities, risk analysis, etc. Vulnerability data obtained from one of the major web browsers (Google Chrome) is used to examine the goodness-of-fit and predictive capability of the proposed model. Experimental results justify the fact that the model proposed herein can estimate the required information better than the existing VDMs.
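One way to realize severity-aware discovery modelling is to fit a discovery curve per severity class. The sketch below fits the Alhazmi-Malaiya logistic form separately to made-up cumulative counts for two severity levels; the paper's proposed model may differ, and the data here are synthetic.

    import numpy as np
    from scipy.optimize import curve_fit

    # Sketch: fit an Alhazmi-Malaiya-style logistic vulnerability discovery
    # curve separately for each severity class. The cumulative counts below
    # are synthetic; real inputs would be dated CVE counts per severity.

    def vdm(t, B, A, C):
        """Cumulative vulnerabilities discovered by time t (logistic form)."""
        return B / (B * C * np.exp(-A * B * t) + 1)

    rng = np.random.default_rng(0)
    t = np.arange(1, 25, dtype=float)                 # months since release
    observed = {"high":   np.cumsum(rng.poisson(1.0, 24)),
                "medium": np.cumsum(rng.poisson(2.5, 24))}

    for severity, y in observed.items():
        (B, A, C), _ = curve_fit(vdm, t, y.astype(float),
                                 p0=[y[-1] * 1.5, 0.01, 1.0], maxfev=20000)
        print(severity, "estimated total vulnerabilities ~", round(B))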
2020-09-08
Chen, Yu-Cheng, Mooney, Vincent, Grijalva, Santiago.  2019.  A Survey of Attack Models for Cyber-Physical Security Assessment in Electricity Grid. 2019 IFIP/IEEE 27th International Conference on Very Large Scale Integration (VLSI-SoC). :242–243.
This paper surveys some prior work regarding attack models in cyber-physical systems and discusses their potential benefits. For comparison, the full paper will model a bad data injection attack scenario in a power grid using the surveyed prior work.
Chen, Yu-Cheng, Gieseking, Tim, Campbell, Dustin, Mooney, Vincent, Grijalva, Santiago.  2019.  A Hybrid Attack Model for Cyber-Physical Security Assessment in Electricity Grid. 2019 IEEE Texas Power and Energy Conference (TPEC). :1–6.
A detailed model of an attack on the power grid involves both a preparation stage and an execution stage of the attack. This paper introduces a novel Hybrid Attack Model (HAM) that combines the Probabilistic Learning Attacker, Dynamic Defender (PLADD) model and a Markov Chain model to simulate the planning and execution stages of a bad data injection attack in the power grid. We discuss the advantages and limitations of the prior work models and of our proposed Hybrid Attack Model, and show that HAM is more effective compared to the individual PLADD or Markov Chain models.
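The execution stage can be illustrated as a small Markov chain simulation; the states and transition probabilities below are invented for this sketch, and the PLADD preparation stage is not modeled.

    import random

    # Toy Markov chain for the execution stage of a bad data injection
    # attack; states and transition probabilities are invented, and the
    # PLADD preparation stage is not modeled here.

    P = {"recon":     [("recon", 0.4), ("foothold", 0.5), ("detected", 0.1)],
         "foothold":  [("foothold", 0.3), ("escalate", 0.6), ("detected", 0.1)],
         "escalate":  [("escalate", 0.2), ("inject", 0.7), ("detected", 0.1)]}

    def simulate(max_steps=50):
        state = "recon"
        for _ in range(max_steps):
            if state in ("inject", "detected"):       # absorbing states
                break
            r, acc = random.random(), 0.0
            for nxt, p in P[state]:
                acc += p
                if r < acc:
                    state = nxt
                    break
        return state

    runs = [simulate() for _ in range(10000)]
    print("bad data injection success rate:", runs.count("inject") / len(runs))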
Isnan Imran, Muh. Ikhdar, Putrada, Aji Gautama, Abdurohman, Maman.  2019.  Detection of Near Field Communication (NFC) Relay Attack Anomalies in Electronic Payment Cases using Markov Chain. 2019 Fourth International Conference on Informatics and Computing (ICIC). :1–4.
Near Field Communication (NFC) is a short-range wireless communication technology that supports several features, one of which is electronic payment. NFC works at a limited distance to exchange information. In terms of security, NFC technology has a gap that allows attackers to forward information illegally over the target's NFC connection. One such attack is the relay attack, in which an attacker exploits NFC's close-range communication to steal data. Relay attacks can cause considerable material loss, so countermeasures are needed to protect electronic payments built on NFC technology. Detecting anomalous data is one way to do so: during an attack, several abnormalities arise that can be detected and used to prevent the attack. The Markov Chain is one method that can be used to detect relay attacks on electronic payments using NFC. The results show that a Markov chain can detect the anomalies of relay attacks in electronic payment cases.
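A minimal version of the detection idea: learn transition probabilities from normal NFC event sequences, then flag sessions whose average per-transition log-likelihood falls below a threshold. Event names, the training sequences, and the threshold are illustrative assumptions.

    import math
    from collections import defaultdict

    # Learn transition counts from normal NFC transaction event sequences,
    # then score new sequences; unusually low likelihood suggests a relayed
    # session. All event names and the threshold are invented.

    normal = [["select", "auth", "pay", "ack"]] * 50 + \
             [["select", "auth", "retry", "pay", "ack"]] * 5

    counts = defaultdict(lambda: defaultdict(int))
    for seq in normal:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1

    def log_likelihood(seq, smoothing=1e-3):
        ll = 0.0
        for a, b in zip(seq, seq[1:]):
            total = sum(counts[a].values())
            ll += math.log((counts[a][b] + smoothing) / (total + smoothing))
        return ll / max(len(seq) - 1, 1)       # per-transition score

    threshold = -2.0
    suspect = ["select", "auth", "pay", "pay", "ack"]   # duplicated relay step
    print("anomalous" if log_likelihood(suspect) < threshold else "normal")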
2020-09-04
Usama, Muhammad, Qayyum, Adnan, Qadir, Junaid, Al-Fuqaha, Ala.  2019.  Black-box Adversarial Machine Learning Attack on Network Traffic Classification. 2019 15th International Wireless Communications & Mobile Computing Conference (IWCMC). :84–89.

Deep machine learning techniques have shown promising results in network traffic classification; however, the robustness of these techniques under adversarial threats is still in question. Deep machine learning models are found vulnerable to small, carefully crafted adversarial perturbations, posing a major question on the performance of deep machine learning techniques. In this paper, we propose a black-box adversarial attack on network traffic classification. The proposed attack successfully evades deep machine learning-based classifiers, which highlights the potential security threat of using deep machine learning techniques to realize autonomous networks.
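In the black-box setting the attacker can only query the classifier's outputs. The sketch below is a generic mutation-based stand-in for such an attack, not the paper's exact procedure: it perturbs flow features at random until the victim's predicted label flips or a query budget runs out.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Generic black-box attack stand-in: the attacker can only call
    # predict(), and randomly perturbs flow features until the label flips.

    X, y = make_classification(n_samples=1000, n_features=12, random_state=1)
    victim = RandomForestClassifier(random_state=1).fit(X, y)  # opaque to attacker

    def black_box_attack(x, budget=500, step=0.3, seed=1):
        rng = np.random.default_rng(seed)
        original = victim.predict(x.reshape(1, -1))[0]
        for _ in range(budget):                    # query-only access
            candidate = x + step * rng.standard_normal(x.shape)
            if victim.predict(candidate.reshape(1, -1))[0] != original:
                return candidate                   # evasion succeeded
        return None

    adv = black_box_attack(X[0].copy())
    print("evaded" if adv is not None else "failed within query budget")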

2020-08-28
Mulinka, Pavol, Casas, Pedro, Vanerio, Juan.  2019.  Continuous and Adaptive Learning over Big Streaming Data for Network Security. 2019 IEEE 8th International Conference on Cloud Networking (CloudNet). :1–4.

Continuous and adaptive learning is an effective learning approach when dealing with highly dynamic and changing scenarios, where concept drift often happens. In a continuous, stream, or adaptive learning setup, new measurements arrive continuously and there are no boundaries for learning, meaning that the learning model has to decide how and when to (re)learn from these new data constantly. We address the problem of adaptive and continual learning for network security, building dynamic models to detect network attacks in real network traffic. The combination of fast and big network measurement data with the re-training paradigm of adaptive learning imposes complex challenges in terms of data processing speed, which we tackle by relying on big data platforms for parallel stream processing. We build and benchmark different adaptive learning models on top of a novel big data analytics platform for network traffic monitoring and analysis tasks, and show that high speed-up computations (as high as 6×) can be achieved by parallelizing off-the-shelf stream learning approaches.
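A single-worker version of the test-then-train (prequential) loop can be sketched with partial_fit; the parallel big-data layer that produces the reported speed-ups is beyond this snippet, and the data, batch size, and model choice are placeholders.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier

    # Stream-learning sketch: test each arriving mini-batch first
    # (prequential evaluation), then incrementally (re)learn from it.

    X, y = make_classification(n_samples=20000, n_features=20, random_state=2)
    model = SGDClassifier(random_state=2)
    classes = np.unique(y)

    batch, correct, seen = 500, 0, 0
    for i in range(0, len(X), batch):
        Xb, yb = X[i:i + batch], y[i:i + batch]
        if seen:                                     # test-then-train
            correct += int((model.predict(Xb) == yb).sum())
        model.partial_fit(Xb, yb, classes=classes)   # incremental update
        seen += len(yb)

    print("prequential accuracy: %.3f" % (correct / (seen - batch)))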

Parafita, Álvaro, Vitrià, Jordi.  2019.  Explaining Visual Models by Causal Attribution. 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). :4167–4175.

Model explanations based on pure observational data cannot compute the effects of features reliably, due to their inability to estimate how each factor alteration could affect the rest. We argue that explanations should be based on the causal model of the data and the derived intervened causal models, that represent the data distribution subject to interventions. With these models, we can compute counterfactuals, new samples that will inform us how the model reacts to feature changes on our input. We propose a novel explanation methodology based on Causal Counterfactuals and identify the limitations of current Image Generative Models in their application to counterfactual creation.
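The core move can be sketched in a few lines: given a structural causal model, a counterfactual is produced by intervening on one factor while reusing the same exogenous noise, so downstream features change through the causal mechanism rather than arbitrarily. The toy SCM (age -> income) and stand-in predictor below are invented; the paper applies this idea to image generative models.

    import numpy as np

    # Tiny structural causal model (age -> income) plus a stand-in
    # predictor. The counterfactual reuses the same exogenous noise while
    # intervening on age, so income changes through the causal mechanism.

    rng = np.random.default_rng(3)

    def generate(age, noise):
        income = 0.8 * age + noise     # structural equation: income := f(age, U)
        return np.array([age, income])

    def model(x):                      # model under explanation
        return int(0.1 * x[0] + 0.05 * x[1] > 5)

    age, noise = 45.0, rng.normal(0, 5)
    factual = generate(age, noise)
    counterfactual = generate(25.0, noise)   # do(age = 25), same noise

    print("factual:", model(factual), " counterfactual:", model(counterfactual))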

Yee, George O.M..  2019.  Modeling and Reducing the Attack Surface in Software Systems. 2019 IEEE/ACM 11th International Workshop on Modelling in Software Engineering (MiSE). :55–62.

In today's world, software is ubiquitous and relied upon to perform many important and critical functions. Unfortunately, software is riddled with security vulnerabilities that invite exploitation. Attackers are particularly attracted to software systems that hold sensitive data with the goal of compromising the data. For such systems, this paper proposes a modeling method applied at design time to identify and reduce the attack surface, which arises due to the locations containing sensitive data within the software system and the accessibility of those locations to attackers. The method reduces the attack surface by changing the design so that the number of such locations is reduced. The method performs these changes on a graphical model of the software system. The changes are then considered for application to the design of the actual system to improve its security.
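A graphical model of this kind is straightforward to sketch: components become nodes, data flows become edges, and the attack surface is the set of sensitive-data locations reachable from attacker entry points. The toy system and the design change below are illustrative, assuming networkx for the graph.

    import networkx as nx

    # Components as nodes, data flows as directed edges; the attack surface
    # is counted as sensitive-data locations reachable from entry points.

    def attack_surface(g):
        entry = [n for n, d in g.nodes(data=True) if d.get("entry")]
        reachable = set().union(*(nx.descendants(g, e) | {e} for e in entry))
        return [n for n in reachable if g.nodes[n].get("sensitive")]

    g = nx.DiGraph()
    g.add_node("web", entry=True)
    g.add_node("cache", sensitive=True)    # sensitive copy kept in cache
    g.add_node("db", sensitive=True)
    g.add_edges_from([("web", "cache"), ("cache", "db")])
    print("before:", attack_surface(g))

    # Design change: stop keeping a cached copy of the sensitive data.
    g.nodes["cache"]["sensitive"] = False
    print("after: ", attack_surface(g))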

Yee, George O. M..  2019.  Attack Surface Identification and Reduction Model Applied in Scrum. 2019 International Conference on Cyber Security and Protection of Digital Services (Cyber Security). :1–8.

Today's software is full of security vulnerabilities that invite attack. Attackers are especially drawn to software systems containing sensitive data. For such systems, this paper presents a modeling approach, especially suited for Scrum or other forms of agile development, to identify and reduce the attack surface. The latter arises due to the locations containing sensitive data within the software system that are reachable by attackers. The approach reduces the attack surface by changing the design so that the number of such locations is reduced. The approach performs these changes on a visual model of the software system. The changes are then considered for application to the actual system to improve its security.

2020-08-24
Jeon, Joohyung, Kim, Junhui, Kim, Joongheon, Kim, Kwangsoo, Mohaisen, Aziz, Kim, Jong-Kook.  2019.  Privacy-Preserving Deep Learning Computation for Geo-Distributed Medical Big-Data Platforms. 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks – Supplemental Volume (DSN-S). :3–4.
This paper proposes a distributed deep learning framework for privacy-preserving medical data training. In order to avoid leakage of patients' data from medical platforms, the hidden layers in the deep learning framework are separated: the first layer is kept in the platform, while the other layers are kept in a centralized server. Keeping the original patients' data in local platforms maintains their privacy, while utilizing the server for the subsequent layers improves learning performance by using all data from each platform during training.
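The layer split can be sketched as two forward-pass halves; the weights, sizes, and ReLU choice below are stand-ins, and training (with gradients flowing back across the split) is omitted.

    import numpy as np

    # Sketch of the split: the first layer runs on the medical platform, so
    # raw patient records never leave it; only its activations travel to
    # the server holding the remaining layers. Weights are random stand-ins.

    rng = np.random.default_rng(4)
    relu = lambda z: np.maximum(z, 0)

    W1 = rng.standard_normal((64, 32))          # first layer, kept ON the platform
    W2 = rng.standard_normal((32, 16))          # later layers, kept on the server
    W3 = rng.standard_normal((16, 2))

    def platform_forward(patient_batch):
        return relu(patient_batch @ W1)         # only these activations are sent

    def server_forward(activations):
        return relu(activations @ W2) @ W3      # server never sees raw records

    raw_records = rng.standard_normal((8, 64))  # stays on the medical platform
    logits = server_forward(platform_forward(raw_records))
    print(logits.shape)                         # (8, 2)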
Yuan, Xu, Zhang, Jianing, Chen, Zhikui, Gao, Jing, Li, Peng.  2019.  Privacy-Preserving Deep Learning Models for Law Big Data Feature Learning. 2019 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :128–134.
Nowadays, a massive amount of data, referred to as big data, is being collected from social networks and the Internet of Things (IoT), and is of tremendous value. Many deep learning-based methods have made great progress in extracting knowledge from those data. However, knowledge extraction from law data poses vast challenges for deep learning, since law data usually contain private information. In addition, the amount of law data held by an institution is not large enough to train a deep model well. To solve these challenges, some privacy-preserving deep learning methods have been proposed to capture knowledge from private data. In this paper, we review the emerging topics of deep learning for feature learning of private data. Then, we discuss the problems and future trends in deep learning for privacy-preserving feature learning on law data.
Cuzzocrea, Alfredo, Damiani, Ernesto.  2019.  Making the Pedigree to Your Big Data Repository: Innovative Methods, Solutions, and Algorithms for Supporting Big Data Privacy in Distributed Settings via Data-Driven Paradigms. 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC). 2:508–516.
Starting from our previous research where we introduced a general framework for supporting data-driven privacy-preserving big data management in distributed environments, such as emerging Cloud settings, in this paper we further and significantly extend our past research contributions, and provide several novel contributions that complement our previous work in the investigated research field. Our proposed framework can be viewed as an alternative to classical approaches where the privacy of big data is ensured via security-inspired protocols that check several (protocol) layers in order to achieve the desired privacy. Unfortunately, this injects considerable computational overheads in the overall process, thus introducing relevant challenges to be considered. Our approach instead tries to recognize the “pedigree” of suitable summary data representatives computed on top of the target big data repositories, hence avoiding computational overheads due to protocol checking. We also provide a relevant realization of the framework above, the so-called Data-dRIven aggregate-PROvenance privacy-preserving big Multidimensional data (DRIPROM) framework, which specifically considers multidimensional data as the case of interest. Extensions and discussion on the main motivations and principles of our proposed research, two relevant case studies that clearly state the need for and covered (related) properties of supporting privacy-preserving management and analytics of big data in modern distributed systems, and an experimental assessment and analysis of our proposed DRIPROM framework are the major results of this paper.
Liu, Hongling.  2019.  Research on Feasibility Path of Technology Supervision and Technology Protection in Big Data Environment. 2019 International Conference on Intelligent Transportation, Big Data Smart City (ICITBS). :293–296.
Big data will bring revolutionary changes, from daily life to ways of thinking, for society as a whole. At the same time, the massive data and potential value of big data are subject to many security risks. To address these problems, a data privacy protection model for big data platforms is proposed. First, the data privacy protection model of big data for data owners is introduced in detail, including protocol design, logic design, complexity analysis, and security analysis. Then, the query privacy protection model of big data for ordinary users is introduced in detail, including query protocol design and query mode design, and complexity and security analyses are performed. Finally, a stand-alone simulation experiment is built for the proposed privacy protection model, and experimental data is obtained and analyzed. The feasibility of the privacy protection model is verified.
Raghavan, Pradheepan, Gayar, Neamat El.  2019.  Fraud Detection using Machine Learning and Deep Learning. 2019 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE). :334–339.
Frauds are known to be dynamic and to have no patterns; hence they are not easy to identify. Fraudsters use recent technological advancements to their advantage. They somehow bypass security checks, leading to the loss of millions of dollars. Analyzing and detecting unusual activities using data mining techniques is one way of tracing fraudulent transactions. This paper aims to benchmark multiple machine learning methods such as k-nearest neighbor (KNN), random forest, and support vector machines (SVM), as well as deep learning methods such as autoencoders, convolutional neural networks (CNN), restricted Boltzmann machines (RBM), and deep belief networks (DBN). The datasets used are the European (EU), Australian, and German datasets. The Area Under the ROC Curve (AUC), Matthews Correlation Coefficient (MCC), and cost of failure are the three evaluation metrics used.
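Two of the three metrics can be computed directly with scikit-learn; the sketch below does so for a random forest on synthetic, heavily imbalanced data (the cost-of-failure metric needs domain-specific cost figures, so it is left out).

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score, matthews_corrcoef
    from sklearn.model_selection import train_test_split

    # AUC and MCC suit imbalanced fraud data better than plain accuracy.
    X, y = make_classification(n_samples=5000, weights=[0.98, 0.02],
                               random_state=5)        # ~2% fraud
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=5)

    clf = RandomForestClassifier(random_state=5).fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    print("MCC:", matthews_corrcoef(y_te, clf.predict(X_te)))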
Gupta, Nitika, Traore, Issa, de Quinan, Paulo Magella Faria.  2019.  Automated Event Prioritization for Security Operation Center using Deep Learning. 2019 IEEE International Conference on Big Data (Big Data). :5864–5872.
Despite their popularity, Security Operation Centers (SOCs) are facing increasing challenges and pressure due to the growing volume, velocity, and variety of the IT infrastructure and security data observed on a daily basis. Due to the mixed performance of current technological solutions, e.g. IDS and SIEM, there is an over-reliance on manual analysis of events by human security analysts. This creates huge backlogs and considerably slows down the resolution of critical security events. Obvious solutions include increasing accuracy and efficiency in the automation of crucial aspects of the SOC workflow, such as event classification and prioritization. In the current paper, we present a new approach for SOC event classification by identifying a set of new features using graphical analysis and classifying with a deep neural network model. Experimental evaluation using real SOC event log data yields very encouraging results in terms of classification accuracy.
2020-08-13
Razaque, Abdul, Frej, Mohamed Ben Haj, Yiming, Huang, Shilin, Yan.  2019.  Analytical Evaluation of k-Anonymity Algorithm and Epsilon-Differential Privacy Mechanism in Cloud Computing Environment. 2019 IEEE Cloud Summit. :103–109.

Expected and unexpected risks in cloud computing, which include data security, data segregation, and the lack of control and knowledge, have led to dilemmas in several fields. Among all of these dilemmas, the privacy problem is even more paramount, and it has largely constrained the prevalence and development of cloud computing. Several privacy protection algorithms have been proposed, generally falling into two categories: anonymity algorithms and differential privacy mechanisms. Since much research has already focused on the efficiency of these algorithms, few works have emphasized the differing orientations and demerits of the two categories. Motivated by this emerging research challenge, we have conducted a comprehensive survey of the two popular privacy protection algorithms, namely the k-Anonymity algorithm and the Differential Privacy algorithm. Based on their principles, implementations, and algorithm orientations, we have evaluated these two algorithms. Several expectations and comparisons are also presented based on the current cloud computing privacy environment and its future requirements.
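For the differential-privacy side of the comparison, a minimal illustration is the Laplace mechanism on a count query of sensitivity 1, with noise scale 1/epsilon; the data and epsilon values below are arbitrary. k-anonymity, by contrast, works by generalizing or suppressing quasi-identifiers rather than adding calibrated noise.

    import numpy as np

    # Laplace mechanism for a count query: sensitivity 1, noise scale 1/eps.
    rng = np.random.default_rng(6)

    def laplace_count(data, predicate, epsilon):
        true_count = sum(predicate(x) for x in data)
        return true_count + rng.laplace(scale=1.0 / epsilon)

    ages = [23, 35, 45, 52, 29, 61, 44, 38]
    for eps in (0.1, 1.0, 10.0):        # smaller eps -> more noise, more privacy
        print("eps=%4.1f -> noisy count over 40: %.1f"
              % (eps, laplace_count(ages, lambda a: a > 40, eps)))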

Augusto, Cristian, Morán, Jesús, De La Riva, Claudio, Tuya, Javier.  2019.  Test-Driven Anonymization for Artificial Intelligence. 2019 IEEE International Conference On Artificial Intelligence Testing (AITest). :103–110.
In recent years, data published and shared with third parties to develop artificial intelligence (AI) tools and services has significantly increased. When there are regulatory or internal requirements regarding the privacy of data, anonymization techniques are used to maintain privacy by transforming the data. The side effect is that the anonymization may render the data useless for training and testing the AI, which is highly dependent on data quality. To overcome this problem, we propose a test-driven anonymization approach for artificial intelligence tools. The approach tests different anonymization efforts to achieve a trade-off in terms of privacy (non-functional quality) and functional suitability of the artificial intelligence technique (functional quality). The approach has been validated by means of two real-life datasets in the domains of healthcare and health insurance. Each of these datasets is anonymized with several privacy protections and then used to train classification AIs. The results show how we can anonymize the data to achieve an adequate functional suitability in the AI context while keeping the privacy of the anonymized data as high as possible.
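The test-driven loop can be sketched as: anonymize at increasing strength, retrain, and measure functional suitability. Below, coarse value rounding stands in for a real anonymization operator and cross-validated accuracy stands in for functional quality; both are assumptions for illustration.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=2000, n_features=10, random_state=7)

    def generalize(X, level):
        """Crude anonymization stand-in: round features to coarser bins."""
        return np.round(X / level) * level if level > 0 else X

    for level in (0, 0.5, 1.0, 2.0, 4.0):          # 0 = raw data
        acc = cross_val_score(RandomForestClassifier(random_state=7),
                              generalize(X, level), y, cv=3).mean()
        print("generalization level %.1f -> accuracy %.3f" % (level, acc))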
2020-08-10
Wasi, Sarwar, Shams, Sarmad, Nasim, Shahzad, Shafiq, Arham.  2019.  Intrusion Detection Using Deep Learning and Statistical Data Analysis. 2019 4th International Conference on Emerging Trends in Engineering, Sciences and Technology (ICEEST). :1–5.
Innovation and creativity have played an important role in the development of every field of life; to a lesser extent, they have also created several problems. Intrusion detection is one of those problems, and it has become more difficult with the advancement of computer networks; multiple researchers with multiple techniques have come forward to solve this crucial issue, but network security is still a challenge. In our research, we detect intrusions using a deep learning algorithm in combination with statistical analysis of the KDD Cup 99 dataset. First, we applied statistical analysis to the given dataset to generate a simplified form of the data, so that a less complex binary classification model based on an artificial neural network could be applied for data classification. Our approach decreases the complexity of the system and improves its response time.
Kwon, Hyun, Yoon, Hyunsoo, Park, Ki-Woong.  2019.  Selective Poisoning Attack on Deep Neural Network to Induce Fine-Grained Recognition Error. 2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE). :136–139.

Deep neural networks (DNNs) provide good performance for image recognition, speech recognition, and pattern recognition. However, a poisoning attack is a serious threat to a DNN's security. A poisoning attack reduces the accuracy of a DNN by adding malicious training data during the training process. In some situations, such as military applications, it may be necessary to degrade the accuracy of only a chosen class in the model. For example, if an attacker wants only nuclear facilities to go unrecognized, it may be necessary to intentionally prevent a UAV from correctly recognizing nuclear-related facilities. In this paper, we propose a selective poisoning attack that reduces the accuracy of only the chosen class in the model. The proposed method reduces the accuracy of the chosen class by training malicious data corresponding to that class, while maintaining the accuracy of the remaining classes. For the experiment, we used TensorFlow as the machine learning library and MNIST and CIFAR10 as datasets. Experimental results show that the proposed method can reduce the accuracy of the chosen class to 43.2% and 55.3% on MNIST and CIFAR10, respectively, while maintaining the accuracy of the remaining classes.
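The data-manipulation step behind such an attack can be sketched as a class-restricted label flip; the poisoning fraction and relabeling rule below are illustrative, and the actual DNN training on the poisoned set is omitted.

    import numpy as np

    # Mislabel a fraction of one chosen class's training samples so only
    # that class's accuracy degrades; other classes are left untouched.

    def selectively_poison(y_train, chosen_class, n_classes, frac=0.3, seed=8):
        rng = np.random.default_rng(seed)
        y = y_train.copy()
        idx = np.flatnonzero(y == chosen_class)
        victims = rng.choice(idx, size=int(frac * len(idx)), replace=False)
        # Relabel to random *other* classes (offset 1..n_classes-1 mod n).
        y[victims] = (y[victims] + rng.integers(1, n_classes, len(victims))) % n_classes
        return y

    y = np.repeat(np.arange(10), 100)          # balanced toy labels, classes 0-9
    y_poisoned = selectively_poison(y, chosen_class=3, n_classes=10)
    print("class-3 labels corrupted:",
          int((y_poisoned[y == 3] != 3).sum()), "of", int((y == 3).sum()))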

2020-08-07
Berady, Aimad, Viet Triem Tong, Valérie, Guette, Gilles, Bidan, Christophe, Carat, Guillaume.  2019.  Modeling the Operational Phases of APT Campaigns. 2019 International Conference on Computational Science and Computational Intelligence (CSCI). :96–101.
In the context of Advanced Persistent Threat (APT) attacks, this paper introduces a model, called Nuke, which tries to provide a more operational reading of the attackers' lifecycle in a compromised network. It makes it possible to consider the notions of regression and of repetitiveness in the achievement of final objectives. By confronting this model with examples of recent attacks (the Equifax data breach and the TV5Monde sabotage), we emphasize the importance of the attack chronology in Cyber Threat Intelligence (CTI) reports, as well as the Tactics, Techniques and Procedures (TTP) used by the attacker during his progression.