Bibliography
Alongside the rapid growth of the Internet and computer networks, the amount of malware has been increasing every day. Today, ransomware is one of the newest attacks and among the biggest threats in cybersecurity. The effectiveness of applying machine learning techniques to malware detection has been explored in much scientific research; however, few studies focus on machine learning-based ransomware detection. In this paper, the effectiveness of ransomware detection using machine learning methods applied to the CICAndMal2017 dataset is examined in two experiments. First, the classifiers are trained on a single dataset containing different types of ransomware. Second, separate classifiers are trained on the datasets of 10 ransomware families individually. Our findings indicate that random forest outperforms the other tested classifiers in both experiments and that classifier performance does not change significantly when training on each family individually. Therefore, the random forest classification method is highly effective for ransomware detection.
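As a rough illustration of the kind of pipeline described above (not the paper's actual code), the following sketch trains and evaluates a random forest on a labeled table of flow features using scikit-learn; the file name, the "label" column, and the train/test split are assumptions.

    # Minimal sketch: ransomware-vs-benign classification with a random forest.
    # The CSV path and column names are placeholders, not the paper's CICAndMal2017 pipeline.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    df = pd.read_csv("cicandmal2017_features.csv")   # hypothetical feature table
    X = df.drop(columns=["label"])
    y = df["label"]                                   # 1 = ransomware, 0 = benign

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=42
    )

    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))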
Analyzing clustering results may lead to privacy disclosure in big data mining. In this paper, we put forward a differential privacy-based data preprocessing method for protecting distance-based clustering. First, differential privacy, a data distortion technique, is used to prevent the distances computed in distance-based clustering from disclosing relationships between records. Because differential privacy may degrade the clustering results while protecting privacy, an adaptive privacy budget parameter adjustment mechanism is then applied to balance privacy protection against clustering quality. By solving the corresponding maximization and minimization problems, the differential privacy budget parameter can be obtained for different clustering algorithms. Finally, we conduct extensive experiments to evaluate the performance of the proposed method. The results demonstrate that our method can provide privacy protection while preserving precise clustering results.
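To make the distance-perturbation idea concrete, the sketch below adds Laplace noise, scaled by an assumed sensitivity and privacy budget, to a pairwise-distance matrix before running a distance-based clustering algorithm; it does not reproduce the paper's adaptive calibration of the budget parameter, and the epsilon and sensitivity values are illustrative.

    # Minimal sketch: Laplace noise on pairwise distances before distance-based clustering.
    # epsilon and sensitivity are assumed values, not the paper's adaptively chosen budget.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from sklearn.cluster import AgglomerativeClustering

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                  # placeholder data

    epsilon = 1.0                                  # assumed privacy budget
    sensitivity = 1.0                              # assumed sensitivity of a distance query
    D = squareform(pdist(X))                       # true pairwise distances
    noise = rng.laplace(scale=sensitivity / epsilon, size=D.shape)
    noise = (noise + noise.T) / 2                  # keep the matrix symmetric
    D_noisy = np.clip(D + noise, 0, None)          # distances must stay non-negative
    np.fill_diagonal(D_noisy, 0)

    labels = AgglomerativeClustering(
        n_clusters=3, metric="precomputed", linkage="average"
    ).fit_predict(D_noisy)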
The core operation of a Software-Defined Network (SDN) depends on the centralized controller that implements the control plane. With the help of this controller, security threats such as Distributed Denial of Service (DDoS) attacks can be identified easily. A DDoS attack is usually instigated against servers by sending a huge amount of unwanted traffic that exhausts their resources, denying their services to genuine users. Earlier research has addressed mitigating DDoS attacks at the switch and host levels. Mitigation at the switch level involves identifying the switch that injects large amounts of unwanted traffic into the network and blocking it. This solution is not practical, however, as it also blocks genuine hosts connected to that switch. Mitigation at the host level was later introduced, in which compromised hosts are identified and blocked, thereby allowing genuine hosts to send their traffic into the network. Though this solution is feasible, it also blocks traffic from the genuine applications of a compromised host. In this paper, we propose a new way to identify and mitigate DDoS attacks at the application level, so that only the application generating the DDoS traffic is blocked and other genuine applications continue to send traffic into the network normally.
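The application-level distinction can be illustrated with a toy rate-limiting sketch that keys traffic counters on (host, source port) rather than on the host alone; the threshold, the flow key, and the in-memory blocking set are assumptions, since a real deployment would install drop rules on switches through the SDN controller.

    # Toy sketch of application-level blocking: only the offending (host, port) flow is dropped,
    # so other applications on the same host keep their network access.
    from collections import defaultdict

    PACKET_RATE_THRESHOLD = 1000          # packets per monitoring interval (assumed)
    packet_counts = defaultdict(int)      # (src_ip, src_port) -> packets seen this interval
    blocked_apps = set()                  # flows flagged as DDoS sources

    def handle_packet(src_ip, src_port):
        key = (src_ip, src_port)
        if key in blocked_apps:
            return "drop"                 # only this application's traffic is dropped
        packet_counts[key] += 1
        if packet_counts[key] > PACKET_RATE_THRESHOLD:
            blocked_apps.add(key)         # other applications on src_ip are unaffected
            return "drop"
        return "forward"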
Software security is a major concern for developers who intend to deliver reliable software. Although there is research that focuses on vulnerability prediction and discovery, there is still a need for security-specific metrics that measure software security and vulnerability-proneness quantitatively. Existing methods are based either on software metrics (defined on the physical characteristics of code, e.g. complexity or lines of code), which are not security-specific, or on generic patterns known as nano-patterns (method-level traceable patterns that characterize a Java method or function). Other methods predict vulnerabilities using text mining approaches or graph algorithms, which perform poorly in cross-project validation and fail to generalize to arbitrary systems. In this paper, we envision an automated framework that will assist developers in assessing the security level of their code and guide them toward developing secure code. To accomplish this goal, we aim to refine and redefine the existing nano-patterns and software metrics to make them more security-centric, so that they can be used to measure the security level of source code (either a file or a function) with higher accuracy. We present our visionary approach through a series of three consecutive studies in which we (1) will study the challenges of current software metrics and nano-patterns in vulnerability prediction, (2) will redefine and characterize the nano-patterns and software metrics so that they capture security-specific properties of code and measure the security level quantitatively, and (3) will implement an automated framework that extracts the values of all the patterns and metrics for a given code segment and flags the estimated security level as feedback based on our research results. We conducted preliminary experiments and present results indicating that our vision can be practically implemented and will have valuable implications for the software security community.
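As a small, hedged illustration of what automated metric extraction might look like (the paper targets Java methods and security-specific nano-patterns, which this toy example does not implement), the sketch below collects two simple per-function measures from Python source using the standard ast module.

    # Rough illustration: per-function lines of code and branch counts via the ast module.
    # These are generic software metrics, stand-ins for the security-centric metrics envisioned above.
    import ast

    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

    def function_metrics(source):
        tree = ast.parse(source)
        metrics = {}
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                loc = (node.end_lineno or node.lineno) - node.lineno + 1
                branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
                metrics[node.name] = {"loc": loc, "branches": branches}
        return metrics

    print(function_metrics("def f(x):\n    if x > 0:\n        return x\n    return -x\n"))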
In AI Matters Volume 4, Issues 2 and 4, we raised the possibility of an AI Cosmology, in part in response to the "AI Hype Cycle" we are currently experiencing. We posited that our current machine learning and big data era represents only one of several peaks in AI research, each with its accompanying hype cycle. We associated each peak with an epoch in a possible AI Cosmology and briefly explored the logic machines, cybernetics, and expert system epochs. One objective of identifying these epochs was to establish that we have been here before: in territory where some application of AI research finds substantial commercial success, closely followed by AI fever and hype. The public's expectations are heightened, only to end in disillusionment when the applications fall short. Whereas it is sometimes a challenge even for AI researchers, educators, and practitioners to know where the reality ends and the hype begins, the layperson is often in an impossible position, at the mercy of pop culture and marketing and advertising campaigns. We suggested that an AI Cosmology might help us identify a single standard model for AI that could be the foundation for a common shared understanding of what AI is and what it is not, a tool to help the layperson understand where AI has been, where it is going, and where it cannot go, and a basic road map to help the general public navigate the pitfalls of AI hype.