Biblio

Filters: Keyword is heuristics
2023-01-13
Al Rahbani, Rani, Khalife, Jawad.  2022.  IoT DDoS Traffic Detection Using Adaptive Heuristics Assisted With Machine Learning. 2022 10th International Symposium on Digital Forensics and Security (ISDFS). :1–6.
DDoS is a major issue in network security and a threat to service providers that renders a service inaccessible for a period of time. The number of Internet of Things (IoT) devices has grown rapidly; nevertheless, security on these devices is frequently disregarded. Many detection methods exist, mostly focused on machine learning. However, the best method has not yet been defined. The aim of this paper is to find the optimal volumetric DDoS attack detection method by first comparing different existing machine learning methods and, second, by building an adaptive lightweight heuristics model relying on a few traffic attributes and simple DDoS detection rules. With this new, simple model, our goal is to decrease the classification time. Finally, we compare machine learning methods with our new adaptive heuristics method, which shows promising results on both the accuracy and performance levels.
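
For flavor, here is a minimal Python sketch of what such an adaptive lightweight heuristic could look like. The chosen traffic attributes (packet rate, unique source count), the EWMA threshold rule, and the `AdaptiveDDoSHeuristic` class are illustrative assumptions, not the model from the paper:

```python
class AdaptiveDDoSHeuristic:
    """Sketch of an adaptive volumetric-DDoS heuristic (illustrative only)."""

    def __init__(self, alpha=0.1, k=3.0):
        self.alpha = alpha      # EWMA smoothing factor
        self.k = k              # sensitivity multiplier
        self.mean = None        # running mean of the packet rate
        self.var = 0.0          # running variance estimate

    def observe(self, pkts_per_sec, unique_srcs):
        if self.mean is None:          # bootstrap on the first window
            self.mean = pkts_per_sec
            return False
        # Flag if the rate exceeds mean + k * std AND many sources are involved.
        std = self.var ** 0.5
        is_attack = pkts_per_sec > self.mean + self.k * std and unique_srcs > 100
        if not is_attack:              # only learn from benign windows
            diff = pkts_per_sec - self.mean
            self.mean += self.alpha * diff
            self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return is_attack

detector = AdaptiveDDoSHeuristic()
for rate, srcs in [(900, 12), (950, 15), (20000, 4000)]:
    print(rate, srcs, "DDoS" if detector.observe(rate, srcs) else "normal")
```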
2021-04-27
Junosza-Szaniawski, K., Nogalski, D., Wójcik, A..  2020.  Exact and approximation algorithms for sensor placement against DDoS attacks. 2020 15th Conference on Computer Science and Information Systems (FedCSIS). :295–301.
In a DDoS (Distributed Denial of Service) attack, an attacker gains control of many network users through a virus. The controlled users then send many requests to a victim, exhausting its resources. DDoS attacks are hard to defend against because of their distributed nature, large scale, and varied attack techniques. One possible line of defense is to place sensors in the network that can detect and stop unwanted requests. However, such sensors are expensive, so a natural question arises about the minimum number of sensors and their optimal placement needed to achieve the required level of safety. We present two mixed-integer models for optimal sensor placement against DDoS attacks. Both models lead to a trade-off between the number of deployed sensors and the volume of uncontrolled flow. Since the above placement problems are NP-hard, two efficient heuristics are designed, implemented, and compared experimentally with exact linear programming solvers.
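
The trade-off can be pictured with a small mixed-integer sketch in PuLP, given below. The toy network, attack paths, volumes, and `sensor_cost` weight are assumptions for illustration, not the paper's formulations or data:

```python
import pulp

paths = {"p1": ["a", "b", "c"], "p2": ["a", "d"], "p3": ["e", "c"]}
volume = {"p1": 10, "p2": 4, "p3": 7}     # flow volume per attack path
nodes = sorted({v for p in paths.values() for v in p})
sensor_cost = 5                            # relative cost of one sensor

prob = pulp.LpProblem("sensor_placement", pulp.LpMinimize)
x = {v: pulp.LpVariable(f"x_{v}", cat="Binary") for v in nodes}   # sensor at v?
u = {p: pulp.LpVariable(f"u_{p}", cat="Binary") for p in paths}   # p uncontrolled?

# Objective: cost of deployed sensors plus volume of uncontrolled flow.
prob += sensor_cost * pulp.lpSum(x.values()) + pulp.lpSum(
    volume[p] * u[p] for p in paths)

# A path is controlled if at least one of its nodes hosts a sensor.
for p, nodes_on_p in paths.items():
    prob += u[p] >= 1 - pulp.lpSum(x[v] for v in nodes_on_p)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({v: int(x[v].value()) for v in nodes})
```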
Harada, T., Tanaka, K., Ogasawara, R., Mikawa, K..  2020.  A Rule Reordering Method via Pairing Dependent Rules. 2020 IEEE Conference on Communications and Network Security (CNS). :1–9.
Packet classification is used to determine the behavior of incoming packets at network devices. Because it is achieved using a linear search over a classification rule list, a larger number of rules leads to longer communication latency. To decrease this latency, the problem is generalized as Optimal Rule Ordering (ORO), which aims to identify the order of rules that minimizes the classification latency caused by packet classification while preserving the classification policy. Because ORO is known to be NP-complete by Hamed and Al-Shaer [Dynamic rule-ordering optimization for high-speed firewall filtering, ASIACCS (2006) 332-342], various heuristics for ORO have been proposed. Sub-graph merging (SGM) by Tapdiya and Fulp [Towards optimal firewall rule ordering utilizing directed acyclical graphs, ICCCN (2009) 1-6] is the state-of-the-art heuristic algorithm for ORO. In this paper, we propose a novel heuristic method for ORO. Whereas most heuristics recursively determine the maximum-weight rule and move it as far up the list as possible, our algorithm pairs rules that would cause policy violations until no such rules remain, and then simply sorts the rules by their weights. Our algorithm markedly decreases both the classification latency and the reordering time compared with SGM in experiments. Rule sets consisting of thousands of rules that require one or more hours to reorder with SGM can be reordered by the proposed method within one minute.
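
As a rough sketch of sorting rules by weight while preserving the order of dependent pairs, consider the following simplified Python; the `overlaps` test and the greedy weighted topological sort are stand-ins, not the paper's pairing algorithm:

```python
def overlaps(r1, r2):
    """Two rules overlap if some packet could match both (per-field ranges)."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(r1["match"], r2["match"]))

def reorder(rules):
    # must_precede[j] = indices of earlier rules that rule j must stay behind
    must_precede = {j: set() for j in range(len(rules))}
    for i in range(len(rules)):
        for j in range(i + 1, len(rules)):
            if overlaps(rules[i], rules[j]) and rules[i]["action"] != rules[j]["action"]:
                must_precede[j].add(i)
    placed, order = set(), []
    while len(order) < len(rules):
        ready = [j for j in range(len(rules))
                 if j not in placed and must_precede[j] <= placed]
        best = max(ready, key=lambda j: rules[j]["weight"])  # most-hit rule first
        placed.add(best)
        order.append(best)
    return [rules[j] for j in order]

rules = [
    {"match": [(0, 10)], "action": "deny",   "weight": 1},
    {"match": [(5, 20)], "action": "permit", "weight": 9},
    {"match": [(30, 40)], "action": "permit", "weight": 5},
]
print([r["weight"] for r in reorder(rules)])   # -> [5, 1, 9]
```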
2021-02-22
Abdelaal, M., Karadeniz, M., Dürr, F., Rothermel, K..  2020.  liteNDN: QoS-Aware Packet Forwarding and Caching for Named Data Networks. 2020 IEEE 17th Annual Consumer Communications Networking Conference (CCNC). :1–9.
Recently, named data networking (NDN) has been introduced to connect the world of computing devices by naming data instead of its containers. Through this strategic change, NDN brings several new features to network communication, including in-network caching, multipath forwarding, built-in multicast, and data security. Despite these unique features, there remain plenty of opportunities for further development, especially in packet forwarding and caching. In this context, we introduce liteNDN, a novel forwarding and caching strategy for NDN networks. liteNDN comprises a cooperative forwarding strategy through which NDN routers share their knowledge, i.e., data names and interfaces, to optimize their packet forwarding decisions. liteNDN then leverages that knowledge to estimate the probability that each downstream path will swiftly retrieve the requested data. Additionally, liteNDN exploits heuristics, such as routing costs and data significance, to make proper decisions about caching normal as well as segmented packets. The proposed approach has been extensively evaluated in terms of data retrieval latency, network utilization, and cache hit rate. The results show that liteNDN, compared to conventional NDN forwarding and caching strategies, achieves much lower latency while reducing unnecessary traffic and caching activity.
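
Below is a minimal sketch of what probability-based forwarding in this spirit might look like; the per-prefix success counters, the optimistic prior, and the `ProbabilisticFIB` class are assumptions about one possible realization, not liteNDN's actual design:

```python
from collections import defaultdict

class ProbabilisticFIB:
    def __init__(self):
        # stats[prefix][face] = [successes, attempts]
        self.stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))

    def record(self, prefix, face, success):
        s = self.stats[prefix][face]
        s[0] += int(success)
        s[1] += 1

    def best_face(self, name, faces):
        # The longest known prefix of the requested name wins.
        prefix = max((p for p in self.stats if name.startswith(p)),
                     key=len, default=None)
        if prefix is None:
            return faces[0]                      # fall back to a default route
        def p_success(face):
            ok, tries = self.stats[prefix][face]
            return ok / tries if tries else 0.5  # optimistic prior for new faces
        return max(faces, key=p_success)

fib = ProbabilisticFIB()
fib.record("/video", "eth0", True)
fib.record("/video", "eth1", False)
print(fib.best_face("/video/clip42", ["eth0", "eth1"]))   # -> eth0
```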
2020-03-09
Nathezhtha, T., Sangeetha, D., Vaidehi, V..  2019.  WC-PAD: Web Crawling based Phishing Attack Detection. 2019 International Carnahan Conference on Security Technology (ICCST). :1–6.
Phishing is a criminal offense that involves the theft of users' sensitive data. Phishing websites target individuals, organizations, cloud storage hosting sites, and government websites. Currently, hardware-based approaches to anti-phishing are widely used, but due to cost and operational factors, software-based approaches are preferred. Existing phishing detection approaches fail to address problems such as zero-day phishing website attacks. To overcome these issues and precisely detect phishing, a three-phase attack detector named Web Crawler-based Phishing Attack Detector (WC-PAD) is proposed. It takes web traffic, web content, and the Uniform Resource Locator (URL) as input features; based on these features, it classifies websites as phishing or non-phishing. The experimental analysis of the proposed WC-PAD was done with datasets collected from real phishing cases. The experimental results show that the proposed WC-PAD achieves 98.9% accuracy in detecting both phishing and zero-day phishing attacks.
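
To give a sense of such phased heuristic filtering, here is a small self-contained sketch; the individual URL and content tests, thresholds, and the `classify` scoring are illustrative assumptions, not WC-PAD's actual features or classifier:

```python
import re
from urllib.parse import urlparse

def url_phase(url):
    host = urlparse(url).hostname or ""
    return (re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host) is not None  # raw IP
            or "@" in url                    # credential-embedding trick
            or host.count(".") > 4           # deeply nested subdomains
            or len(url) > 120)               # unusually long URL

def content_phase(html):
    # A login form posting to a different host is a classic phishing sign.
    return bool(re.search(r"<form[^>]+action=[\"']https?://", html, re.I))

def classify(url, html):
    score = int(url_phase(url)) + int(content_phase(html))
    return "phishing" if score >= 1 else "legitimate"

print(classify("http://192.168.10.5/paypal/login",
               "<form action='http://evil.example/p'>"))   # -> phishing
```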
2019-07-01
Napoli, Daniela.  2018.  Developing Accessible and Usable Security (ACCUS) Heuristics. Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. :SRC16:1-SRC16:6.

Currently, usable security and web accessibility design principles exist separately. Although literature at the intersection of accessibility and security is developing, it offers a limited understanding of how users with vision loss operate the web securely. In this paper, we propose heuristics that fuse the nuances of both fields. With these heuristics, we evaluate 10 websites and uncover several issues that can impede users' ability to follow common security advice.

Ferreyra, N. E. Díaz, Meis, R., Heisel, M..  2018.  At Your Own Risk: Shaping Privacy Heuristics for Online Self-Disclosure. 2018 16th Annual Conference on Privacy, Security and Trust (PST). :1–10.

Revealing private and sensitive information on Social Network Sites (SNSs) like Facebook is a common practice that sometimes results in unwanted incidents for the users. One approach to helping users avoid regrettable scenarios is awareness mechanisms that inform them a priori about the potential privacy risks of a self-disclosure act. Privacy heuristics are instruments that describe recurrent regrettable scenarios and can support the generation of privacy awareness. One important component of a heuristic is the group of people who should not access specific private information under a certain privacy risk. However, specifying an exhaustive list of unwanted recipients for a given regrettable scenario can be a tedious task that necessarily demands the user's intervention. In this paper, we introduce an approach based on decision trees to instantiate the audience component of privacy heuristics with minor intervention from the users. We introduce Disclosure-Acceptance Trees, a data structure representing the audience component of a heuristic, and describe a method for generating them from user-centred privacy preferences.
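
As an illustration of instantiating an audience component from preferences with an off-the-shelf decision tree, consider the sketch below; the feature encoding, toy preference data, and use of scikit-learn are assumptions, not the paper's Disclosure-Acceptance Tree construction:

```python
from sklearn.tree import DecisionTreeClassifier

INFO = {"location": 0, "health": 1, "politics": 2}
GROUP = {"friends": 0, "colleagues": 1, "strangers": 2}

# (information type, recipient group) -> user accepted disclosure (1) or not (0)
prefs = [("location", "friends", 1), ("location", "strangers", 0),
         ("health", "friends", 1), ("health", "colleagues", 0),
         ("politics", "colleagues", 0), ("politics", "friends", 1)]

X = [[INFO[i], GROUP[g]] for i, g, _ in prefs]
y = [a for _, _, a in prefs]
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Unwanted audience for "health": every group the tree predicts as "reject".
unwanted = [g for g, code in GROUP.items()
            if tree.predict([[INFO["health"], code]])[0] == 0]
print(unwanted)   # e.g. ['colleagues', 'strangers']
```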

Rosa, F. De Franco, Jino, M., Bueno, P. Marcos Siqueira, Bonacin, R..  2018.  Coverage-Based Heuristics for Selecting Assessment Items from Security Standards: A Core Set Proposal. 2018 Workshop on Metrology for Industry 4.0 and IoT. :192-197.

In the realm of the Internet of Things (IoT), information security is a critical issue. Security standards, including their assessment items, are essential instruments in the evaluation of systems security. However, a key question remains open: "Which test cases are most effective for security assessment?" To create security assessment designs with suitable assessment items, we need to know the security properties and assessment dimensions covered by a standard. We propose an approach for selecting and analyzing security assessment items; its foundations come from a set of assessment heuristics, and it aims to increase the coverage of assessment dimensions and security characteristics in assessment designs. The main contribution of this paper is the definition of a core set of security assessment heuristics. We systematize the security assessment process by means of a conceptual formalization of the security assessment area. Our approach can be applied to security standards to select or prioritize assessment items with respect to 11 security properties and 6 assessment dimensions. The approach is flexible, allowing the inclusion of new dimensions and properties. Our proposal was applied to a well-known security standard (ISO/IEC 27001) and its assessment items were analyzed. The proposal is meant to support: (i) the generation of high-coverage assessment designs, which include security assessment items with assured coverage of the main security characteristics, and (ii) the evaluation of security standards with respect to the coverage of security aspects.
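
The coverage-driven selection can be pictured as a greedy pass over (property, dimension) pairs, as sketched below; the item identifiers and their coverage sets are invented for illustration, not taken from the paper's ISO/IEC 27001 analysis:

```python
def select_items(items, budget):
    """Greedily pick items that cover the most still-uncovered pairs."""
    covered, chosen = set(), []
    for _ in range(budget):
        best = max(items, key=lambda it: len(items[it] - covered))
        if not items[best] - covered:
            break                      # nothing new left to cover
        chosen.append(best)
        covered |= items[best]
    return chosen, covered

items = {
    "A.9.2.1": {("access-control", "organizational"), ("authenticity", "technical")},
    "A.12.4.1": {("accountability", "technical"), ("traceability", "technical")},
    "A.18.1.3": {("confidentiality", "legal"), ("access-control", "organizational")},
}
chosen, covered = select_items(items, budget=2)
print(chosen)   # -> ['A.9.2.1', 'A.12.4.1']
```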

2018-08-23
Abbas, W., Laszka, A., Vorobeychik, Y., Koutsoukos, X..  2017.  Improving network connectivity using trusted nodes and edges. 2017 American Control Conference (ACC). :328–333.

Network connectivity is a primary attribute and a characteristic phenomenon of any networked system. High connectivity is often desired within networks, for instance, to increase robustness to failures and resilience against attacks. A typical approach to increasing network connectivity is to strategically add links; however, adding links is not always the most suitable option. In this paper, we propose an alternative approach to improving network connectivity, namely making a small subset of nodes and edges “trusted,” which means that such nodes and edges remain intact at all times and are insusceptible to failures. We then show that by controlling the number of trusted nodes and edges, any desired level of network connectivity can be obtained. Along with characterizing network connectivity with trusted nodes and edges, we present heuristics to compute a small number of such nodes and edges. Finally, we illustrate our results on various networks.
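
One way to picture the trusted-node idea: a trusted node never fails, so only failure sets made entirely of untrusted nodes can disconnect the network. The brute-force cut search and greedy selection below are illustrative assumptions suitable only for small graphs, not the paper's heuristics:

```python
from itertools import combinations
import networkx as nx

def untrusted_connectivity(G, trusted):
    """Size of the smallest set of untrusted nodes whose failure disconnects G."""
    candidates = [v for v in G if v not in trusted]
    for k in range(len(candidates) + 1):
        for cut in combinations(candidates, k):
            H = G.copy()
            H.remove_nodes_from(cut)
            if len(H) > 0 and not nx.is_connected(H):
                return k
    return float("inf")                # no untrusted cut can break the network

def greedy_trust(G, target):
    trusted = set()
    while untrusted_connectivity(G, trusted) < target:
        # Trust the node whose protection raises connectivity the most,
        # breaking ties in favor of high-degree nodes.
        best = max((v for v in G if v not in trusted),
                   key=lambda v: (untrusted_connectivity(G, trusted | {v}),
                                  G.degree(v)))
        trusted.add(best)
    return trusted

G = nx.path_graph(5)                   # 0-1-2-3-4, ordinary connectivity is 1
print(greedy_trust(G, target=3))       # -> {1, 2, 3}: the middle nodes
```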

2018-03-26
Srinivasa Rao, Routhu, Pais, Alwyn R..  2017.  Detecting Phishing Websites Using Automation of Human Behavior. Proceedings of the 3rd ACM Workshop on Cyber-Physical System Security. :33–42.

In this paper, we propose a technique to detect phishing attacks based on the behavior of humans when exposed to a fake website. Some online users submit fake credentials to a login page before submitting their actual credentials, then observe the login status of the resulting page to check whether the website is fake or legitimate. We automate this behavior with our application (FeedPhish), which feeds fake values into the login page. If the page logs in successfully, the site is classified as phishing; otherwise it undergoes further heuristic filtering. If the suspicious site passes through all heuristic filters, the website is classified as legitimate. In our experiments, the application achieved a true positive rate of 97.61%, a true negative rate of 94.37%, and an overall accuracy of 96.38%. Our application demands neither third-party services nor prior knowledge such as web history or whitelists/blacklists of URLs. It is able to detect not only zero-day phishing attacks but also phishing sites hosted on compromised domains.
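
A minimal sketch of the fake-credential step follows; the form-field names, the textual login-failure test, and the use of the requests library are assumptions, and FeedPhish itself combines this step with further heuristic filters:

```python
import secrets
import requests

def fake_login_looks_successful(login_url, user_field="username",
                                pass_field="password"):
    """Submit random bogus credentials and guess whether they were accepted."""
    fake_user = "u" + secrets.token_hex(6)
    fake_pass = secrets.token_hex(10)
    resp = requests.post(login_url,
                         data={user_field: fake_user, pass_field: fake_pass},
                         timeout=10, allow_redirects=True)
    body = resp.text.lower()
    # A legitimate site should reject unknown credentials; if no error message
    # appears, the site has likely accepted them, which suggests phishing.
    rejected = any(m in body for m in ("invalid", "incorrect", "failed",
                                       "try again"))
    return resp.ok and not rejected

def classify(login_url):
    if fake_login_looks_successful(login_url):
        return "phishing"
    return "needs further heuristic filtering"   # not conclusive on its own

# print(classify("https://login.example.test/"))  # hypothetical URL
```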

2018-01-10
Thaler, S., Menkovski, V., Petkovic, M..  2017.  Towards a neural language model for signature extraction from forensic logs. 2017 5th International Symposium on Digital Forensic and Security (ISDFS). :1–6.
Signature extraction is a critical preprocessing step in forensic log analysis because it enables sophisticated analysis techniques to be applied to logs. Currently, most signature extraction frameworks use either rule-based approaches or hand-crafted algorithms. Rule-based systems are error-prone and require high maintenance effort. Hand-crafted algorithms use heuristics and tend to work well only for specialized use cases. In this paper, we present a novel approach to extracting signatures from forensic logs that is based on a neural language model. This language model learns to identify mutable and non-mutable parts in a log message, and we use this information to extract signatures. Neural language models have been shown to work extremely well for learning complex relationships in natural language text. We experimentally demonstrate that our model can detect which parts are mutable with an accuracy of 86.4%. We also show how extracted signatures can be used for clustering log lines.
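
The signature-derivation step can be sketched as follows, with a simple token-frequency rule standing in for the neural model's mutability predictions; this stand-in and the sample log lines are assumptions for illustration only:

```python
from collections import Counter

def extract_signature(lines):
    """Replace tokens that vary across aligned log lines with a wildcard."""
    token_rows = [line.split() for line in lines]
    length = min(len(row) for row in token_rows)
    signature = []
    for i in range(length):
        column = Counter(row[i] for row in token_rows)
        most_common, count = column.most_common(1)[0]
        # Constant across all lines -> non-mutable; otherwise a wildcard.
        signature.append(most_common if count == len(token_rows) else "*")
    return " ".join(signature)

logs = ["sshd accepted password for alice from 10.0.0.7",
        "sshd accepted password for bob from 10.0.0.9",
        "sshd accepted password for carol from 10.0.0.12"]
print(extract_signature(logs))
# -> sshd accepted password for * from *
```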
2017-10-27
Przybylek, Michal Roman, Wierzbicki, Adam, Michalewicz, Zbigniew.  2016.  Multi-hard Problems in Uncertain Environment. Proceedings of the Genetic and Evolutionary Computation Conference 2016. :381–388.
Real-world problems are usually composed of two or more (potentially NP-hard) problems that are interdependent. Such problems have recently been identified as "multi-hard problems", and various strategies for solving them have been proposed. One of the most successful strategies is based on a decomposition approach, where each component of a multi-hard problem is solved separately (by a state-of-the-art solver) and then a negotiation protocol between the sub-solutions is applied to mediate a global solution. Multi-hardness is, however, not the only crucial aspect of real-world problems. Many real-world problems operate in a dynamically changing, uncertain environment. Special approaches such as risk analysis and minimization may be applied when we know the possible variants of constraints and criteria, as well as their probabilities. On the other hand, adaptive algorithms may be used in the case of uncertainty about criteria variants or probabilities. While such approaches are not new, their application to multi-hard problems has not yet been studied systematically. In this paper, we extend the benchmark problem for multi-hardness with the aspect of uncertainty. We adapt the decomposition-based approach to this new setting and compare it against another promising heuristic (Monte Carlo Tree Search) on a large publicly available dataset. Our comparisons show that the decomposition-based approach outperforms the other heuristic in most cases.
2017-03-08
Bottazzi, G., Italiano, G. F..  2015.  Fast Mining of Large-Scale Logs for Botnet Detection: A Field Study. 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing. :1989–1996.

Botnets are considered one of the most dangerous kinds of network-based attack today because they involve the use of very large coordinated groups of hosts simultaneously. Behavioral analysis of computer networks is at the basis of modern botnet detection methods, which aim to intercept traffic generated by malware for which signatures do not yet exist. Defining a pattern of features to serve as the basis of behavioral analysis puts the emphasis on the quantity and quality of information to be captured and used to mark data streams as normal or abnormal. The problem is even more evident if we consider extensive computer networks or clouds. With the present paper we intend to show how heuristics applied to large-scale proxy logs, targeting a typical phase of the botnet life cycle such as the search for C&C servers through AGDs (Algorithmically Generated Domains), can provide effective and extremely rapid results. The present work introduces some novel paradigms. The first is that some elements of the botnet supply chain could be completed without any interaction with the Internet, mostly in the presence of large computer networks and/or clouds. The second is that behind a large number of workstations there are usually "human beings", and it is unlikely that their behavior will cause marked changes in the interaction with the Internet within a fairly narrow time frame. Finally, AGDs currently exhibit common lexical features that are detectable quickly and without using any black/white list.
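
A flavor of such fast lexical screening over proxy-log domains is sketched below; the particular features (character entropy, longest consonant run) and thresholds are illustrative assumptions, not the heuristics from the paper:

```python
import math
from collections import Counter

def entropy(s):
    """Shannon entropy of the character distribution, in bits."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def longest_consonant_run(s):
    run = best = 0
    for ch in s:
        run = run + 1 if ch.isalpha() and ch not in "aeiou" else 0
        best = max(best, run)
    return best

def looks_generated(domain):
    label = domain.split(".")[0].lower()
    return entropy(label) > 3.5 or longest_consonant_run(label) >= 5

for d in ["google.com", "xkqjzvbtw.net", "wikipedia.org"]:
    print(d, looks_generated(d))   # only the middle domain is flagged
```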