Biblio
Filters: Keyword is Metrics
Enhancing Microgrid Resiliency Against Cyber Vulnerabilities. 2018 IEEE Industry Applications Society Annual Meeting (IAS). :1–8.
2018. Recent cyber attacks on the power grid have been of increasing complexity and sophistication. In order to understand the impact of cyber-attacks on power system resiliency, it is important to consider a holistic cyber-physical system, especially with increasing industrial automation. In this work, the device-level resilience properties of various controllers and their impact on microgrid resiliency are studied. In addition, a cyber-physical resiliency metric considering vulnerabilities, the system model, and device-level properties is proposed. A use case inspired by the recent Ukraine cyber-attack is presented to demonstrate the application of the developed cyber-physical resiliency metric to enhance the situational awareness of the operator and enable better control actions to improve resiliency.
Ensuring Deception Consistency for FTP Services Hardened Against Advanced Persistent Threats. Proceedings of the 5th ACM Workshop on Moving Target Defense. :69–79.
2018. As evidenced by numerous high-profile security incidents such as the Target data breach and the Equifax hack, APTs (Advanced Persistent Threats) can significantly compromise the trustworthiness of cyber space. This work explores how to improve the effectiveness of cyber deception in hardening FTP (File Transfer Protocol) services against APTs. The main objective of our work is to ensure deception consistency: when the attackers are trapped, they can only make observations that are consistent with what they have seen already so that they cannot recognize the deceptive environment. To achieve deception consistency, we use logic constraints to characterize an attacker's best knowledge (either positive, negative, or uncertain). When migrating the attacker's FTP connection into a contained environment, we use these logic constraints to instantiate a new FTP file system that is guaranteed free of inconsistency. We performed deception experiments with student participants who just completed a computer security course. Following the design of Turing tests, we find that the participants' chances of recognizing deceptive environments are close to random guesses. Our experiments also confirm the importance of observation consistency in identifying deception.
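The abstract's notion of deception consistency can be illustrated with a minimal sketch: the attacker's best knowledge is a set of positive observations (paths seen to exist) and negative ones (paths probed and found missing), and a candidate decoy FTP file system is acceptable only if it contradicts neither. The names and structure below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a consistency check for a deceptive FTP environment (illustrative
# only; names and structure are assumptions, not the authors' implementation).

def consistent(decoy_files, seen_present, seen_absent):
    """A decoy file system is consistent with the attacker's knowledge iff
    every path the attacker has observed still exists, and every path the
    attacker has probed and found missing is still missing."""
    decoy = set(decoy_files)
    return seen_present <= decoy and not (seen_absent & decoy)

# Attacker's best knowledge gathered before migration:
seen_present = {"/pub/readme.txt", "/etc/vsftpd.conf"}
seen_absent = {"/root/.ssh/id_rsa"}

good_decoy = ["/pub/readme.txt", "/etc/vsftpd.conf", "/pub/decoy.db"]
bad_decoy = ["/pub/readme.txt", "/root/.ssh/id_rsa"]  # contradicts knowledge

print(consistent(good_decoy, seen_present, seen_absent))  # True
print(consistent(bad_decoy, seen_present, seen_absent))   # False
```

The paper's actual approach instantiates a fresh file system from logic constraints; this set-based check only conveys the consistency criterion itself.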
Enterprise WiFi Hotspot Authentication with Hybrid Encryption on NFC-Enabled Smartphones. 2018 8th International Conference on Electronics Information and Emergency Communication (ICEIEC). :247–250.
2018. Nowadays, some workplaces have adopted a BYOD (bring your own device) policy that permits employees to bring personally owned devices and use them to access company information and applications. Small devices like smartphones in particular are widely used due to their greater mobility and connectivity. A majority of organizations provide the wireless local area network that small devices and business data transmission require. Resource access through the organization's Wi-Fi network needs strict restriction. WPA2 Enterprise with the 802.1X standard is typically introduced to handle user authentication on the network using the EAP framework. However, credential management for all users is a hassle for administrators. Strong authentication provides higher security, but the difficulty of deployment remains an open issue. This research proposes using Near Field Communication to securely transmit certificate data, relying on a hybrid cryptosystem. The approach supports enterprise Wi-Fi hotspot authentication based on the WPA2-802.1X model with the EAP-TLS method. It also applies multi-factor authentication to enhance the security of networks and users. A security analysis and an experiment on connection establishment time were conducted to evaluate the presented approach.
Evaluating an ECA with a Cognitive-Affective Architecture. Proceedings of the XIX International Conference on Human Computer Interaction. :22:1–22:8.
2018. In this paper, we present an embodied conversational agent (ECA) with a cognitive-affective architecture based on the Soar cognitive architecture, which integrates an emotion model based on ALMA that uses a three-layered model of emotions, mood and personality from the points of view of both the user and the agent. These features allow the behavior and personality of the agent to be modified to achieve a more realistic and believable interaction with the user. This ECA works as a virtual assistant that searches for information on Wikipedia and shows personalized results to the user. It is only a prototype, but it can be used to show some of the possibilities of the system. A first evaluation was conducted to prove these possibilities, with satisfactory results that also give guidance for future work that can be done with this ECA.
Evaluating Social Spammer Detection Systems. Proceedings of the Australasian Computer Science Week Multiconference. :18:1–18:7.
2018. The rising popularity of social network services, such as Twitter, has attracted many spammers and created a large number of fake accounts, overwhelming legitimate users with advertising, malware, and unwanted and disruptive information. This not only inconveniences users' social activities but also causes financial loss and privacy issues. Identifying social spammers is challenging because spammers continually change their strategies to fool existing anti-spamming systems. Thus, many researchers have proposed new classification systems using various types of features extracted from content and user information. However, no comprehensive comparative study has been done to compare the effectiveness and efficiency of the existing systems. At this stage, it is hard to know what the best anti-spamming system is and why. This paper proposes a unified evaluation workbench that allows researchers to access various user- and content-based features, implement new features, and evaluate and compare the performance of their systems against existing systems. Through our analysis, we can identify the most effective and efficient social spammer detection features and help develop a faster and more accurate classifier model with higher true positives and lower false positives.
Evaluation of Channels Blacklists in TSCH Networks with Star and Tree Topologies. Proceedings of the 14th ACM International Symposium on QoS and Security for Wireless and Mobile Networks. :116–123.
2018. The Time-Slotted Channel Hopping (TSCH) mode, defined by the IEEE 802.15.4e protocol, aims to reduce the effects of narrowband interference and multipath fading on some channels through frequency hopping. To work satisfactorily, this method must be based on an evaluation of the quality of the channels over which packets will be transmitted, to avoid packet losses. In addition to this estimation, it is necessary to manage channel blacklists, which prevent the sensors from hopping to bad-quality channels. Blacklists can be applied locally or globally, and this paper evaluates the use of a local blacklist through simulation of a TSCH network in a simulated harsh industrial environment. This work evaluates two approaches, both using a developed protocol based on TSCH, called Adaptive Blacklist TSCH (AB-TSCH), that considers beacon packets and includes link quality estimation with blacklists. The first approach uses the protocol to compare a simple version of TSCH to configurations with different blacklist sizes in a star topology. In this approach, it is possible to analyze the channel adaptation method that occurs when the blacklist has 15 channels. The second approach uses the protocol to evaluate blacklists in a tree topology and discusses the inherent problems of this topology. The results show that, when the estimation is performed continuously, a larger blacklist leads to better performance in the star topology. In the tree topology, due to simultaneous transmissions among some nodes, a smaller blacklist showed better performance.
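The blacklist mechanism described above can be sketched in a few lines: estimate a packet delivery ratio (PDR) per channel, blacklist the worst channels, and hop only over the remainder. This is a hypothetical simplification of the AB-TSCH idea, not the paper's code; the PDR values are made up.

```python
# Minimal sketch of local blacklist management from link-quality estimates
# (hypothetical simplification of the paper's AB-TSCH idea, not its code).

def build_blacklist(pdr, size):
    """Blacklist the `size` channels with the lowest packet delivery ratio."""
    ranked = sorted(pdr, key=pdr.get)          # worst channels first
    return set(ranked[:size])

def hopping_sequence(channels, blacklist):
    """Hop only over whitelisted channels, in channel-offset order."""
    return [c for c in channels if c not in blacklist]

# Estimated PDR for the 16 IEEE 802.15.4 channels (11..26), made-up values:
pdr = {ch: 0.9 for ch in range(11, 27)}
pdr[13] = 0.2   # narrowband interference
pdr[18] = 0.4   # multipath fading

bl = build_blacklist(pdr, 2)
print(sorted(bl))                                   # [13, 18]
print(len(hopping_sequence(range(11, 27), bl)))     # 14
```

In the paper the interesting questions are how the PDR is estimated continuously and how large the blacklist should be per topology; this sketch only shows the bookkeeping.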
Evolution of Network Enumeration Strategies in Emulated Computer Networks. Proceedings of the Genetic and Evolutionary Computation Conference Companion. :1640–1647.
2018. Successful attacks on computer networks today do not often owe their victory to directly overcoming strong security measures set up by the defender. Rather, most attacks succeed because the number of possible vulnerabilities is too large for humans to fully protect without making a mistake. Regardless of the security elsewhere, a skilled attacker can exploit a single vulnerability in a defensive system and negate the benefits of those security measures. This paper presents an evolutionary framework for evolving attacker agents in a real, emulated network environment using genetic programming, as a foundation for coevolutionary systems which can automatically discover and mitigate network security flaws. We examine network enumeration, an initial network reconnaissance step, through our framework and present results demonstrating its success, indicating a broader applicability to further cyber-security tasks.
Evolving AL-FEC Application Towards 5G NGMN. 2018 9th IFIP International Conference on New Technologies, Mobility and Security (NTMS). :1–5.
2018. The fifth generation of mobile technology (5G) is positioned to address the demands and business contexts of 2020 and beyond. Therefore, in 5G, there is a need to push the envelope of performance to provide, where needed, much greater throughput, much lower latency, ultra-high reliability, much higher connectivity density, and a higher mobility range. A crucial point in the effective provisioning of 5G Next Generation Mobile Networks (NGMN) lies in efficient error control and, in more detail, in the utilization of Forward Error Correction (FEC) codes on the application layer. FEC is a method for error control of data transmission adopted in several mobile multicast standards. FEC is a feedback-free error recovery method in which the sender introduces redundant data in advance alongside the source data, enabling the recipient to recover from arbitrary packet losses. Recently, the adoption of the FEC error control method has been boosted by the introduction of powerful Application Layer FEC (AL-FEC) codes. Furthermore, several works have emerged aiming to address the efficient application of AL-FEC protection by introducing deterministic or randomized online algorithms. In this work we propose a novel AL-FEC scheme based on online algorithms driven by the well-stated AL-FEC policy online problem. We present an algorithm which exploits the feedback capabilities of mobile users regarding the outcome of a transmission and adapts the introduced protection accordingly. Moreover, we provide an extensive analysis of the proposed AL-FEC algorithm, accompanied by a performance evaluation against common error protection schemes.
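The feedback-free recovery principle behind AL-FEC can be shown with the simplest possible code: one XOR repair packet over k source packets lets the receiver rebuild any single lost packet without a retransmission request. Real AL-FEC codes (e.g. Raptor) are far more capable; this toy is only to illustrate the principle named in the abstract.

```python
# Toy feedback-free FEC sketch: one XOR repair packet over k source packets
# lets the receiver recover any single loss (far simpler than real AL-FEC
# codes such as Raptor, but it illustrates the principle).

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_repair(packets):
    """Sender: redundant packet = XOR of all equal-length source packets."""
    repair = bytes(len(packets[0]))
    for p in packets:
        repair = xor_bytes(repair, p)
    return repair

def recover(received, repair):
    """Receiver: XOR the repair packet with the survivors to rebuild the
    single missing packet, with no retransmission request."""
    missing = repair
    for p in received:
        missing = xor_bytes(missing, p)
    return missing

src = [b"pkt0", b"pkt1", b"pkt2"]
repair = make_repair(src)
print(recover([src[0], src[2]], repair))  # b'pkt1'
```

The paper's contribution is the online policy deciding *how much* such protection to add per transmission based on user feedback, which this sketch does not model.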
An Experience Sampling Study of User Reactions to Browser Warnings in the Field. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. :512:1–512:13.
2018. Web browser warnings should help protect people from malware, phishing, and network attacks. Adhering to warnings keeps people safer online. Recent improvements in warning design have raised adherence rates, but they could still be higher. And prior work suggests many people still do not understand them. Thus, two challenges remain: increasing both comprehension and adherence rates. To dig deeper into user decision making and comprehension of warnings, we performed an experience sampling study of web browser security warnings, which involved surveying over 6,000 Chrome and Firefox users in situ to gather reasons for adhering or not to real warnings. We find these reasons are many and vary with context. Contrary to older prior work, we do not find a single dominant failure in modern warning design—like habituation—that prevents effective decisions. We conclude that further improvements to warnings will require solving a range of smaller contextual misunderstandings.
Explanation Mining: Post Hoc Interpretability of Latent Factor Models for Recommendation Systems. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. :2060–2069.
2018. The wide-scale use of machine learning algorithms to drive decision-making has highlighted the critical importance of ensuring the interpretability of such models in order to engender trust in their output. State-of-the-art recommendation systems use black-box latent factor models that provide no explanation of why a recommendation has been made, as they abstract their decision processes to a high-dimensional latent space which is beyond the direct comprehension of humans. We propose a novel approach for extracting explanations from latent factor recommendation systems by training association rules on the output of a matrix factorisation black-box model. By taking advantage of the interpretable structure of association rules, we demonstrate that the predictive accuracy of the recommendation model can be maintained whilst yielding explanations with high fidelity to the black-box model on a unique industry dataset. Our approach mitigates the accuracy-interpretability trade-off whilst avoiding the need to sacrifice flexibility or use external data sources. We also contribute to the ill-defined problem of evaluating interpretability.
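The post-hoc idea can be sketched simply: mine "liked X ⇒ recommended Y" rules from the black-box recommender's outputs and keep only high-confidence ones as explanations. The data and the 0.8 confidence threshold below are illustrative assumptions, not the paper's dataset or rule miner.

```python
# Hedged sketch of the post-hoc idea: mine "liked X => recommended Y" rules
# from a black-box recommender's outputs and keep high-confidence ones.
# The data and the 0.8 confidence threshold are illustrative assumptions.

from collections import defaultdict

def mine_rules(histories, recommendations, min_conf=0.8):
    """For each item X a user liked and item Y the black box recommended,
    confidence(X => Y) = P(Y recommended | X liked)."""
    liked = defaultdict(int)           # times X was liked
    pair = defaultdict(int)            # times (X liked, Y recommended)
    for hist, recs in zip(histories, recommendations):
        for x in hist:
            liked[x] += 1
            for y in recs:
                pair[(x, y)] += 1
    return {xy: pair[xy] / liked[xy[0]]
            for xy in pair if pair[xy] / liked[xy[0]] >= min_conf}

histories = [["matrix"], ["matrix"], ["matrix", "alien"], ["alien"]]
recommendations = [["inception"], ["inception"], ["inception"], ["blade"]]

rules = mine_rules(histories, recommendations)
print(rules[("matrix", "inception")])  # 1.0
```

A surviving rule then serves as the human-readable explanation for why the latent factor model recommended Y to a user who liked X.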
Exploring Ensemble Classifiers for Detecting Attacks in the Smart Grids. Proceedings of the Fifth Cybersecurity Symposium. :13:1–13:4.
2018. The advent of machine learning has made it a popular tool in various areas, and it has also been applied in network intrusion detection. However, machine learning hasn't been sufficiently explored in cyber-physical domains such as smart grids, because many factors weigh in while using these tools. This paper is about intrusion detection in smart grids and how some machine learning techniques can help achieve this goal. It considers the problems of feature and classifier selection along with other data ambiguities. The goal is to apply machine learning ensemble classifiers to smart grid traffic and evaluate whether these methods can detect anomalies in the system.
Facial Expression Recognition Enhanced by Thermal Images Through Adversarial Learning. Proceedings of the 26th ACM International Conference on Multimedia. :1346–1353.
2018. Currently, fusing visible and thermal images for facial expression recognition requires two modalities during both training and testing. Visible cameras are commonly used in real-life applications, and thermal cameras are typically only available in lab situations due to their high price. Thermal imaging for facial expression recognition is not frequently used in real-world situations. To address this, we propose a novel thermally enhanced facial expression recognition method which uses thermal images as privileged information to construct better visible feature representation and improved classifiers by incorporating adversarial learning and similarity constraints during training. Specifically, we train two deep neural networks from visible images and thermal images. We impose adversarial loss to enforce statistical similarity between the learned representations of two modalities, and a similarity constraint to regulate the mapping functions from visible and thermal representation to expressions. Thus, thermal images are leveraged to simultaneously improve visible feature representation and classification during training. To mimic real-world scenarios, only visible images are available during testing. We further extend the proposed expression recognition method for partially unpaired data to explore thermal images' supplementary role in visible facial expression recognition when visible images and thermal images are not synchronously recorded. Experimental results on the MAHNOB Laughter database demonstrate that our proposed method can effectively regularize visible representation and expression classifiers with the help of thermal images, achieving state-of-the-art recognition performance.
Facial Expression Recognition with Deep Learning. Proceedings of the 10th International Conference on Internet Multimedia Computing and Service. :10:1–10:4.
2018. Automatic recognition of facial expression images is a challenge for computers due to variation of expression, background, position and label noise. This paper proposes a new method for static facial expression recognition. The main process is to perform experiments on the FER-2013 dataset; the primary mission is to use our CNN model to classify a set of static images into 7 basic emotions and thus achieve effective automatic classification. Two preprocessing steps enhance the facial images for recognition. First, the FER datasets are preprocessed with standard histogram equalization. Then we employ ImageDataGenerator to shift and rotate the facial images to enhance model robustness. Finally, the result of the softmax activation function (also known as multinomial logistic regression) is stacked with an SVM. The result of the softmax activation function + SVM is better than the softmax activation function alone. The accuracy of facial expression recognition achieves 68.79% on the test set.
Facial-based Intrusion Detection System with Deep Learning in Embedded Devices. Proceedings of the 2018 International Conference on Sensors, Signal and Image Processing. :64–68.
2018. With the advent of deep learning based methods, facial recognition algorithms have become more effective and efficient. However, these algorithms usually have the disadvantage of requiring dedicated hardware, such as graphical processing units (GPUs), which poses restrictions on their usage on embedded devices with limited computational power. In this paper, we present an approach that allows building an intrusion detection system, based on face recognition, running on embedded devices. It relies on deep learning techniques and does not exploit GPUs. Face recognition is performed using a k-NN classifier on features extracted from a 50-layer Residual Network (ResNet-50) trained on the VGGFace2 dataset. In our experiment, we determined the optimal confidence threshold that allows distinguishing legitimate users from intruders. In order to validate the proposed system, we created a ground truth composed of 15,393 images of faces and 44 identities, captured by two smart cameras placed in two different offices over a test period of six months. We show that the obtained results are good from both the efficiency and effectiveness perspectives.
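The decision step the abstract describes, k-NN over face embeddings with a confidence (distance) threshold separating legitimate users from intruders, can be sketched as below. The embeddings are tiny made-up vectors, not real ResNet-50 features, and the k and threshold values are assumptions.

```python
# Sketch of the decision step: k-NN over face embeddings with a confidence
# (distance) threshold separating legitimate users from intruders. The
# embeddings here are tiny made-up vectors, not real ResNet-50 features.

import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(embedding, gallery, k=3, threshold=0.5):
    """Majority vote among the k nearest enrolled embeddings; if the mean
    distance to them exceeds the threshold, flag an intruder."""
    neighbours = sorted(gallery, key=lambda e: euclidean(embedding, e[0]))[:k]
    if sum(euclidean(embedding, e[0]) for e in neighbours) / k > threshold:
        return "intruder"
    votes = [identity for _, identity in neighbours]
    return max(set(votes), key=votes.count)

gallery = [([0.1, 0.1], "alice"), ([0.12, 0.09], "alice"),
           ([0.11, 0.1], "alice"), ([0.9, 0.9], "bob")]

print(classify([0.1, 0.11], gallery))   # alice
print(classify([0.5, 0.5], gallery))    # intruder
```

In the paper the threshold is tuned empirically on the six-month ground truth; here it is fixed purely for illustration.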
A Fast MPEG’s CDVS Implementation for GPU Featured in Mobile Devices. IEEE Access. 6:52027–52046.
2018. The Moving Picture Experts Group's Compact Descriptors for Visual Search (MPEG's CDVS) intends to standardize technologies in order to enable an interoperable, efficient, and cross-platform solution for internet-scale visual search applications and services. Among the key technologies within CDVS, we recall the format of visual descriptors, the descriptor extraction process, and the algorithms for indexing and matching. Unfortunately, these steps require precision and computation accuracy. Moreover, they are very time-consuming, as they need running times in the order of seconds when implemented on the central processing unit (CPU) of modern mobile devices. In this paper, to reduce computation times and maintain precision and accuracy, we re-design, for many-cores embedded graphical processor units (GPUs), all main local descriptor extraction pipeline phases of the MPEG's CDVS standard. To reach this goal, we introduce new techniques to adapt the standard algorithm to parallel processing. Furthermore, to reduce memory accesses and efficiently distribute the kernel workload, we use new approaches to store and retrieve CDVS information on proper GPU data structures. We present a complete experimental analysis on a large and standard test set. Our experiments show that our GPU-based approach is remarkably faster than the CPU-based reference implementation of the standard, and it maintains a comparable precision in terms of true and false positive rates.
Feather Forking As a Positive Force: Incentivising Green Energy Production in a Blockchain-based Smart Grid. Proceedings of the 1st Workshop on Cryptocurrencies and Blockchains for Distributed Systems. :99–104.
2018. Climate change represents a serious threat to the health of our planet and has imposed a discussion upon energy waste and production. In this paper we propose a smart grid architecture relying on blockchain technology aimed at discouraging the production and distribution of non-renewable energy, such as that derived from fossil fuels. Our model relies on a reverse application of a recently introduced attack on the blockchain based on chain forking. Our system involves both a central authority and a number of distributed peers representing the stakeholders of the energy grid. This system preserves the advantages derived from the blockchain and also addresses some limitations, such as the energy wasted on mining operations. In addition, the reverse attack we rely on allows mitigating the behavior of a classic blockchain, which is intrinsically self-regulated, and triggering a sort of ethical action which penalizes non-renewable energy producers. Blacklisted stakeholders will be induced to provide their transactions with higher fees in order to preserve their selling rate.
Feature Based Image Registration using Heuristic Nearest Neighbour Search. 2018 22nd International Computer Science and Engineering Conference (ICSEC). :1–3.
2018. Image registration is the process of aligning images of the same scene taken at different instances, from different viewpoints, or by heterogeneous sensors. This can be achieved either by area-based or by feature-based image matching techniques. Feature-based image registration focuses on detecting relevant features in the input images and attaching descriptors to these features. Matching the visual descriptions of two images is a major task in image registration. This feature matching is currently done using Exhaustive Search (or Brute-Force) and Nearest Neighbour Search. The traditional method used for nearest neighbour search is to represent the data as k-d trees. Nearest neighbour search can also be performed using combinatorial optimization algorithms such as Simulated Annealing. This work proposes a method to perform image feature matching by nearest neighbour search based on Threshold Accepting, a faster version of Simulated Annealing. The experiments performed suggest that the proposed algorithm can produce better results within a minimum number of iterations than many existing algorithms.
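Threshold Accepting differs from Simulated Annealing in that moves are accepted deterministically whenever they worsen the cost by less than the current threshold, so no acceptance probability needs to be computed. A simplified sketch of such a search for a nearest neighbour in descriptor space (not the paper's implementation; the threshold schedule is an assumption):

```python
# Illustrative Threshold Accepting search for a nearest neighbour in
# descriptor space (a simplified sketch, not the paper's implementation).
# Unlike Simulated Annealing, moves are accepted deterministically whenever
# they worsen the cost by less than the current threshold.

import random

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def ta_nearest(query, points, thresholds=(0.5, 0.1, 0.0), seed=1):
    rng = random.Random(seed)
    current = rng.randrange(len(points))
    best = current
    for t in thresholds:                     # decreasing threshold schedule
        order = list(range(len(points)))
        rng.shuffle(order)
        for cand in order:                   # propose candidate neighbours
            delta = dist(query, points[cand]) - dist(query, points[current])
            if delta < t:                    # accept small deteriorations too
                current = cand
            if dist(query, points[current]) < dist(query, points[best]):
                best = current
    return points[best]

pts = [(i * 0.1, i * 0.1) for i in range(20)]
print(ta_nearest((0.33, 0.33), pts))  # the nearest grid point
```

Accepting small deteriorations lets the search escape poor starting candidates; tracking the best candidate seen guarantees the answer never worsens as the threshold shrinks to zero.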
Flexible and Efficient Authentication of IoT Cloud Scheme Using Crypto Hash Function. Proceedings of the 2018 2nd International Conference on Computer Science and Artificial Intelligence. :487–494.
2018. The Internet of Things and cloud computing (IoT Cloud) have a wide resonance in the Internet and modern communication technology, allowing laptops, phones, sensors, embedded devices, and other things to connect and exchange information via the Internet. IoT Cloud thus offers several facilities, such as resources, storage, sharing, exchange, and communication. However, IoT Cloud suffers from security problems, which are a vital issue in the information technology world. All embedded devices in the IoT Cloud need to be supported by strong authentication and preservation of private data during information exchange in the IoT Cloud environment. Malicious attacks (such as replay, man-in-the-middle [MITM], and impersonation attacks) aim to obtain important device information. In this study, we propose a scheme that overcomes these issues by resisting well-known attacks, such as MITM, insider, offline password guessing, dictionary, replay, and eavesdropping attacks. Our work achieves device anonymity, forward secrecy, confidentiality, and mutual authentication. Security and performance analyses show that our proposed scheme is more efficient, flexible, and secure with respect to several known attacks compared with related schemes.
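The general style of scheme the abstract describes, hash-based challenge-response with fresh nonces on both sides, can be sketched as below. This is a hedged illustration of how nonces defeat replay and give mutual authentication, not the authors' exact protocol; the message format is an assumption.

```python
# Hedged sketch of nonce-based challenge-response with a crypto hash, the
# general style of scheme the abstract describes (not the authors' exact
# protocol). Fresh nonces defeat straightforward replay of old messages.

import hashlib, secrets

def h(*parts):
    return hashlib.sha256(b"|".join(parts)).hexdigest()

class Party:
    def __init__(self, shared_key):
        self.key = shared_key

    def prove(self, my_nonce, their_nonce):
        # Proof depends on both fresh nonces, so an old transcript
        # cannot be replayed in a new session.
        return h(self.key, my_nonce, their_nonce)

key = b"pre-shared-device-key"
device, cloud = Party(key), Party(key)

n_dev = secrets.token_bytes(16)      # device's fresh challenge
n_cld = secrets.token_bytes(16)      # cloud's fresh challenge

# Mutual authentication: each side recomputes and verifies the other's proof.
print(cloud.prove(n_cld, n_dev) == device.prove(n_cld, n_dev))  # True
```

Properties such as device anonymity and forward secrecy require additional mechanisms (pseudonyms, ephemeral keys) that this sketch deliberately omits.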
Floating Genesis Block Enhancement for Blockchain Based Routing Between Connected Vehicles and Software-defined VANET Security Services. Proceedings of the 11th International Conference on Security of Information and Networks. :24:1–24:2.
2018. The paper reviews the issue of secure routing in unmanned vehicle ad-hoc networks. Application of the Blockchain technology for routing and authentication information storage and distribution is proposed. A blockchain with the floating genesis block is introduced to solve problems associated with blockchain size growth in the systems using transactions with limited lifetime.
A Flow-Level Architecture for Balancing Accountability and Privacy. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :984–989.
2018. With the rapid development of the Internet, the flow-based approach has attracted more and more attention. To this end, this paper presents a new and efficient architecture to balance accountability and privacy based on network flows. A self-certifying identifier is proposed to efficiently identify a flow. In addition, a delegate-registry cooperation scheme and a multi-delegate mechanism are developed to ensure users' privacy. The effectiveness and overhead of the proposed architecture are evaluated using a real trace collected from an Internet service provider. The experimental results show that our architecture can achieve better network performance in terms of lower resource consumption, lower response time, and higher stability.
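A self-certifying identifier is typically derived by hashing the object it names together with its owner's public key, so the binding can be verified by recomputation alone, without a trusted lookup. The field choices and truncation below are assumptions for illustration, not the paper's exact construction.

```python
# Sketch of a self-certifying flow identifier: the ID is a hash of the flow
# tuple and the sender's public key, so the binding between flow and owner
# can be verified by recomputation alone. Field choices are assumptions.

import hashlib

def flow_id(src, dst, sport, dport, proto, pubkey):
    tup = f"{src}|{dst}|{sport}|{dport}|{proto}".encode()
    return hashlib.sha256(tup + pubkey).hexdigest()[:16]

def verify(fid, src, dst, sport, dport, proto, pubkey):
    # Anyone can recompute the ID to hold the flow's owner accountable.
    return fid == flow_id(src, dst, sport, dport, proto, pubkey)

pk = b"sender-public-key-bytes"
fid = flow_id("10.0.0.1", "10.0.0.2", 5000, 80, "tcp", pk)
print(verify(fid, "10.0.0.1", "10.0.0.2", 5000, 80, "tcp", pk))        # True
print(verify(fid, "10.0.0.1", "10.0.0.2", 5000, 80, "tcp", b"other"))  # False
```

The privacy side of the paper's design (delegates and registries that shield the user's identity) sits on top of such identifiers and is not modeled here.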
Fooling End-To-End Speaker Verification With Adversarial Examples. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :1962–1966.
2018. Automatic speaker verification systems are increasingly used as the primary means to authenticate customers. Recently, it has been proposed to train speaker verification systems using end-to-end deep neural models. In this paper, we show that such systems are vulnerable to adversarial example attacks. Adversarial examples are generated by adding a peculiar noise to original speaker examples, in such a way that they are almost indistinguishable by a human listener. Yet, the generated waveforms, which sound like speaker A, can be used to fool such a system into believing the waveforms were uttered by speaker B. We present white-box attacks on a deep end-to-end network that was trained on either YOHO or NTIMIT. We also present two black-box attacks. In the first, we generate adversarial examples with a system trained on NTIMIT and perform the attack on a system trained on YOHO. In the second, we generate the adversarial examples with a system trained using Mel-spectrum features and perform the attack on a system trained using MFCCs. Our results show that one can significantly decrease the accuracy of a target system even when the adversarial examples are generated with a different system potentially using different features.
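The white-box attack idea, nudging each input sample by a small amount in the direction that most increases the target score, can be illustrated with a toy FGSM-style example against a linear "speaker score". Real end-to-end attacks perturb waveforms against a deep network; this linear stand-in, with made-up weights and threshold, is purely illustrative.

```python
# Toy FGSM-style sketch against a linear "speaker score" to illustrate the
# attack idea (real end-to-end attacks perturb waveforms against a deep
# network; this linear stand-in is purely illustrative).

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm(w, x, eps):
    """Move each sample by +/- eps in the direction that raises the score,
    i.e. x' = x + eps * sign(dscore/dx) = x + eps * sign(w)."""
    return [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.6, -0.4, 0.2]          # verifier weights for target speaker B
x = [0.1, 0.3, -0.2]          # utterance by speaker A, rejected: score < 0.5
adv = fgsm(w, x, eps=0.6)

print(round(score(w, x), 2))    # original score, below the 0.5 threshold
print(round(score(w, adv), 2))  # raised above the 0.5 acceptance threshold
```

The per-sample perturbation is bounded by eps, which is why the adversarial waveform can remain almost indistinguishable to a human listener while flipping the verifier's decision.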
Forensic Analysis and Anonymisation of Printed Documents. Proceedings of the 6th ACM Workshop on Information Hiding and Multimedia Security. :127–138.
2018. Contrary to popular belief, the paperless office has not yet established itself. Printer forensics is therefore still an important field today for protecting the reliability of printed documents and tracking criminals. An important task here is to identify the source device of a printed document. Many forensic approaches try to determine the source device automatically with commercially available recording devices. However, it is difficult to find intrinsic signatures that are robust against the variety of influences of the printing process and at the same time can identify the specific source device; in most cases, the identification rate only reaches the printer model. For this reason we reviewed document colour tracking dots, an extrinsic signature embedded in nearly all modern colour laser printers. We developed a refined and generic extraction algorithm, found a new tracking dot pattern, and decoded pattern information. Throughout, we propose reusing document colour tracking dots in combination with passive printer forensic methods. From a privacy perspective, we additionally investigated anonymisation approaches to defeat arbitrary tracking. Finally, we present our toolkit deda, which implements the entire workflow of extracting, analysing, and anonymising a tracking dot pattern.
Formal Modeling and Security Analysis for OpenFlow-Based Networks. 2018 23rd International Conference on Engineering of Complex Computer Systems (ICECCS). :201–204.
2018. We present a formal OpenFlow-based network programming language (OF) including various flow rules, which can not only describe the behaviors of an individual switch but also model a network of switches connected in a point-to-point topology. Besides, a topology-oriented operational semantics of the proposed language is explored to specify how packets are processed and delivered in OpenFlow-based networks. Based on this formal framework, we also propose an approach to detect potential security threats caused by conflicts among the dynamic flow rules imposed by dynamic OpenFlow applications.
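The flow-rule conflict problem the abstract targets can be shown minimally: two rules conflict when their match sets overlap but their actions differ. The rule representation below (wildcard = None) is a simplified assumption, not the paper's formal language.

```python
# Minimal sketch of detecting conflicting OpenFlow rules: two rules whose
# match sets overlap but whose actions differ. Rule fields are simplified
# assumptions (wildcard = None), not the paper's formal language.

def matches_overlap(m1, m2):
    """Two matches overlap if every shared field is equal or wildcarded."""
    return all(m1.get(f) is None or m2.get(f) is None or m1[f] == m2[f]
               for f in set(m1) | set(m2))

def conflicts(rules):
    return [(a["id"], b["id"])
            for i, a in enumerate(rules) for b in rules[i + 1:]
            if matches_overlap(a["match"], b["match"])
            and a["action"] != b["action"]]

rules = [
    {"id": 1, "match": {"ip_dst": "10.0.0.2", "tcp_dst": None}, "action": "fwd:2"},
    {"id": 2, "match": {"ip_dst": "10.0.0.2", "tcp_dst": 80},   "action": "drop"},
    {"id": 3, "match": {"ip_dst": "10.0.0.3", "tcp_dst": None}, "action": "fwd:3"},
]

print(conflicts(rules))  # [(1, 2)]
```

Rule 1 forwards all traffic to 10.0.0.2 while rule 2 drops its port-80 subset, the kind of dynamic-rule conflict the paper's semantics is designed to surface (priorities, which real switches use to resolve such overlaps, are omitted here).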
FP-TESTER: Automated Testing of Browser Fingerprint Resilience. 2018 IEEE European Symposium on Security and Privacy Workshops (EuroS PW). :103–107.
2018. Despite recent regulations and growing user awareness, undesired browser tracking is increasing. In addition to cookies, browser fingerprinting is a stateless technique that exploits a device's configuration for tracking purposes. In particular, browser fingerprinting builds on attributes made available from Javascript and HTTP headers to create a unique and stable fingerprint. For example, browser plugins have been heavily exploited by state-of-the-art browser fingerprinters as a rich source of entropy. However, as browser vendors abandon plugins in favor of extensions, fingerprinters will adapt. We present FP-TESTER, an approach to automatically test the effectiveness of browser fingerprinting countermeasure extensions. We implement a testing toolkit to be used by developers to reduce browser fingerprintability. While countermeasures aim to hinder tracking by changing or blocking attributes, they may easily introduce subtle side-effects that make browsers more identifiable, rendering the extensions counterproductive. FP-TESTER reports on the side-effects introduced by the countermeasure, as well as how they impact tracking duration from a fingerprinter's point-of-view. To the best of our knowledge, FP-TESTER is the first tool to assist developers in fighting browser fingerprinting and reducing the exposure of end-users to such privacy leaks.
FP-STALKER: Tracking Browser Fingerprint Evolutions. 2018 IEEE Symposium on Security and Privacy (SP). :728–741.
2018. Browser fingerprinting has emerged as a technique to track users without their consent. Unlike cookies, fingerprinting is a stateless technique that does not store any information on devices, but instead exploits unique combinations of attributes handed over freely by browsers. The uniqueness of fingerprints allows them to be used for identification. However, browser fingerprints change over time, and the effectiveness of tracking users over longer durations has not been properly addressed. In this paper, we show that browser fingerprints tend to change frequently, from every few hours to days, due to, for example, software updates or configuration changes. Yet, despite these frequent changes, we show that browser fingerprints can still be linked, thus enabling long-term tracking. FP-STALKER is an approach to link browser fingerprint evolutions. It compares fingerprints to determine if they originate from the same browser. We created two variants of FP-STALKER: a rule-based variant that is faster, and a hybrid variant that exploits machine learning to boost accuracy. To evaluate FP-STALKER, we conducted an empirical study using 98,598 fingerprints collected from 1,905 distinct browser instances. We compare our algorithm with the state of the art and show that, on average, we can track browsers for 54.48 days, and 26% of browsers can be tracked for more than 100 days.
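The rule-based linking idea can be sketched in the spirit of FP-STALKER: attributes that should never change for a given browser must match exactly, while a bounded number of mutable attributes (user agent, fonts, canvas, plugins) may have evolved between observations. The attribute split and threshold below are assumptions, not the paper's actual rules.

```python
# Simplified sketch of rule-based fingerprint linking in the spirit of
# FP-STALKER: immutable attributes must match exactly, and at most a small
# number of mutable attributes may have evolved. Thresholds are assumptions.

IMMUTABLE = ("os", "platform", "timezone")
MUTABLE = ("user_agent", "fonts", "canvas", "plugins")

def same_browser(fp1, fp2, max_changes=1):
    if any(fp1[a] != fp2[a] for a in IMMUTABLE):
        return False
    changes = sum(fp1[a] != fp2[a] for a in MUTABLE)
    return changes <= max_changes

old = {"os": "Linux", "platform": "x86_64", "timezone": "UTC+1",
       "user_agent": "Firefox/58", "fonts": "f1", "canvas": "c1", "plugins": "p1"}
new = dict(old, user_agent="Firefox/59")        # a software update
other = dict(old, os="Windows")                 # a different device

print(same_browser(old, new))    # True
print(same_browser(old, other))  # False
```

The paper's hybrid variant replaces this fixed rule with a learned model that weighs which attribute changes are plausible evolutions, trading speed for accuracy.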