Bibliography
Phishing, a cyber attack that leverages social engineering and other sophisticated techniques to steal sensitive information from users, has long been a critical threat to cyber security. Although researchers have proposed numerous countermeasures, phishing criminals eventually find ways to circumvent them, since such countermeasures require substantial manual feature engineering and cannot reliably detect newly emerging phishing attacks; this makes developing an efficient and effective phishing detection method an urgent need. In this work, we propose a novel phishing website detection approach that examines the Uniform Resource Locator (URL) of a website, which proves to be both effective and efficient. Specifically, our capsule-based neural network consists of several parallel branches in which one convolutional layer extracts shallow features from URLs and two subsequent capsule layers generate accurate feature representations of the URLs from those shallow features and discriminate their legitimacy. The final output of our approach is obtained by averaging the outputs of all branches. Extensive experiments on a validated dataset collected from the Internet demonstrate that our approach achieves competitive performance against other state-of-the-art detection methods while maintaining a tolerable time overhead.
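As a rough structural illustration of the parallel-branch design described above, the following sketch builds a small character-level URL classifier whose branch outputs are averaged. The capsule layers are replaced by plain fully connected layers purely to keep the sketch short, and all layer sizes, kernel widths, and the vocabulary are illustrative assumptions rather than the paper's configuration.

```python
# Minimal structural sketch (PyTorch) of a parallel-branch URL classifier.
import torch
import torch.nn as nn

class Branch(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=32, kernel_size=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)               # char-level URL embedding
        self.conv = nn.Conv1d(embed_dim, 64, kernel_size, padding=1)   # shallow feature extractor
        self.head = nn.Sequential(                                     # stand-in for the two capsule layers
            nn.AdaptiveMaxPool1d(1), nn.Flatten(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 2),                                          # legitimate vs. phishing
        )

    def forward(self, url_ids):
        x = self.embed(url_ids).transpose(1, 2)      # (batch, embed_dim, url_len)
        return self.head(torch.relu(self.conv(x)))

class ParallelBranchDetector(nn.Module):
    def __init__(self, num_branches=3):
        super().__init__()
        self.branches = nn.ModuleList(Branch(kernel_size=k) for k in (3, 5, 7)[:num_branches])

    def forward(self, url_ids):
        # Final prediction is the average of all branch outputs, as in the abstract.
        return torch.stack([b(url_ids) for b in self.branches]).mean(dim=0)

logits = ParallelBranchDetector()(torch.randint(0, 128, (4, 200)))  # 4 URLs, 200 characters each
```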
In this research paper, the author surveys the need for data protection from intelligent systems in the private and public sectors. She identifies Smart Information Security Intel processes as the suggested key policy for both sectors of governance, public and private. Information is highly sensitive for any organization. Where government offices are concerned, information needs to be abstracted and encapsulated so that it cannot be stolen, and for this purpose new skill sets and optimized technology need to be deployed. The author proposes airport-style security with digital bar-coded conveyor belts and bar-coded conveyor boxes to scan switched-on articles such as Internet-of-Things devices; otherwise data, articles, or information could be stolen from operational sites where access is unauthorized, and such activities need to be scrutinized minutely. Biometric patterns such as fingerprints, iris, voice, and face recognition must be updated in the virtual data tables to keep the entry-exit log current. The information technicians of the sentinel systems must help catch anomalies during professional working hours in the private and public sectors whenever a red-flag indicator is raised. The paper discusses in detail what should be deployed, how it should be deployed, and what measures may need to be undertaken to safeguard sensitive information against theft from organizations such as administration buildings, government buildings, schools, hospitals, courts, private buildings, banks, and all other offices nationwide. The proposed TO-BE processes are intended to make the AS-IS office system stronger in information security, data protection, and personnel security.
Network-on-Chip (NoC) is the communication platform for data among the processing cores in a Multiprocessor System-on-Chip (MPSoC). The NoC has become a target for security attacks and, because its design is often outsourced, it can be infected with a malicious Hardware Trojan (HT) that degrades system performance or leaves a back door for leaking sensitive information. In this paper, we propose an HT model that mounts a denial-of-service attack by deliberately discarding data packets passing through the infected node, creating a black hole in the NoC; this is known as the Black Hole Router (BHR) attack. We study the effect of the BHR attack on the NoC and analyze the power and area overhead of the BHR. We also study the effect of the locations of BHRs and their distribution in the network. The malicious node has very small area and power overheads, 1.98% and 0.74% respectively, while mounting a very aggressive attack.
As more and more devices are connected to the Internet, personal and sensitive information travels across the network more than ever. Security and privacy of IoT communications, devices, and data are therefore a concern, given the diversity of the devices and protocols used. Since traditional security mechanisms are not always adequate because of the heterogeneity and resource limitations of IoT devices, there are still several improvements to be made to second-line-of-defense mechanisms such as Intrusion Detection Systems. Using a collection of IP flows, we can monitor the network and identify properties of the data that goes in and out. Because flow collection has a smaller footprint than packet capturing, it is a better fit for Internet of Things networks. This paper studies the IP flow properties of certain network attacks, with the goal of identifying an attack signature solely by observing those properties.
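As a hedged illustration of working with flow properties, the sketch below derives a few per-source statistics from exported flow records and applies a crude scan-like signature. The record field names and thresholds are assumptions for the example, not those used in the paper.

```python
# Sketch: summarising per-source behaviour from a list of flow records.
from collections import Counter, defaultdict

def flow_properties(flows):
    per_src = defaultdict(lambda: {"flows": 0, "bytes": 0, "dst_ports": Counter()})
    for f in flows:
        s = per_src[f["src_ip"]]
        s["flows"] += 1
        s["bytes"] += f["bytes"]
        s["dst_ports"][f["dst_port"]] += 1
    # Example signature-style check: many tiny flows fanning out over many ports
    # is a rough indicator of scanning behaviour.
    suspects = {
        ip: s for ip, s in per_src.items()
        if s["flows"] > 100 and len(s["dst_ports"]) > 50 and s["bytes"] / s["flows"] < 100
    }
    return per_src, suspects

flows = [{"src_ip": "10.0.0.5", "dst_port": p, "bytes": 60, "packets": 1} for p in range(1, 200)]
_, suspects = flow_properties(flows)
print(list(suspects))   # ['10.0.0.5'] flagged by the crude scan signature
```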
Hardware Trojan Horses and active fault attacks are a threat to the safety and security of electronic systems. Through such manipulations, an attacker can extract sensitive information or disturb the functionality of a device. Therefore, several protections against malicious inclusions have been devised in recent years. A prominent technique to detect abnormal behavior in the field is run-time verification, which relies on dedicated monitoring circuits and on verification rules generated from a set of temporal properties. An important question when dealing with such protections is their effectiveness against unknown attacks. In this paper, we present a methodology, based on automatic generation of monitors and formal verification techniques, that can be used to validate and analyze the quality of a set of temporal properties when used as protection against generic attackers of variable strengths.
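To make the run-time verification idea concrete, the following software sketch emulates a monitor generated from a simple bounded-response temporal property; the signal names, the bound, and the trace format are assumptions for the example, and the paper's actual monitors are synthesized as circuits rather than written this way.

```python
# Monitor for the property "every req must be acknowledged within `bound` cycles".
def monitor(trace, bound=3):
    pending = None                      # cycle at which an unanswered req was seen
    for cycle, signals in enumerate(trace):
        if pending is not None and signals.get("ack"):
            pending = None              # the pending request was acknowledged
        if pending is not None and cycle - pending > bound:
            return f"violation: req at cycle {pending} not acknowledged within {bound} cycles"
        if signals.get("req") and pending is None:
            pending = cycle             # start tracking a new request
    return "trace satisfies the property"

trace = [{"req": 1}, {}, {}, {}, {}, {"ack": 1}]
print(monitor(trace))   # the ack arrives too late, so the monitor reports a violation
```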
Revealing private and sensitive information on Social Network Sites (SNSs) like Facebook is a common practice that sometimes results in unwanted incidents for the users. One approach to helping users avoid regrettable scenarios is through awareness mechanisms that inform a priori about the potential privacy risks of a self-disclosure act. Privacy heuristics are instruments that describe recurrent regrettable scenarios and can support the generation of privacy awareness. One important component of a heuristic is the group of people who should not access specific private information under a certain privacy risk. However, specifying an exhaustive list of unwanted recipients for a given regrettable scenario can be a tedious task that necessarily demands the user's intervention. In this paper, we introduce an approach based on decision trees to instantiate the audience component of privacy heuristics with minor intervention from the users. We introduce Disclosure-Acceptance Trees, a data structure representing the audience component of a heuristic, and describe a method for generating them from user-centred privacy preferences.
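The sketch below is only a generic decision-tree illustration, not the paper's Disclosure-Acceptance Trees: it shows how labelled disclosure preferences could be used to predict whether a recipient group should see a given type of information. The features, labels, and example data are invented for the sketch.

```python
# Learn disclosure preferences with a plain scikit-learn decision tree.
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

# (information type, relationship of recipient) -> would the user disclose?
samples = [
    ("location", "close_friend", 1),
    ("location", "coworker",     0),
    ("health",   "close_friend", 0),
    ("health",   "family",       1),
    ("vacation", "coworker",     1),
    ("vacation", "acquaintance", 0),
]
X_raw = [[s[0], s[1]] for s in samples]
y = [s[2] for s in samples]

enc = OneHotEncoder(handle_unknown="ignore")
X = enc.fit_transform(X_raw)
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Query: should 'health' information reach an 'acquaintance'?
print(tree.predict(enc.transform([["health", "acquaintance"]])))
```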
An air-gapped network is a type of IT network that is physically separated from the Internet because of the sensitive information it stores. Even if such a network is compromised with malware, the hermetic isolation from the Internet prevents an attacker from leaking out any data, thanks to the lack of connectivity. In this paper we show how attackers can covertly leak sensitive data from air-gapped networks via the row of status LEDs on networking equipment such as LAN switches and routers. Although it is known that some network equipment emanates optical signals correlated with the information being processed by the device (a 'side channel'), malware controlling the status LEDs to carry arbitrary data (a 'covert channel') has never been studied before. Sensitive data can be covertly encoded in the blinking of the LEDs and received by remote cameras and optical sensors. Malicious code executed in a compromised LAN switch or router gives the attacker direct, low-level control of the LEDs. We provide the technical background on the internal architecture of switches and routers, at both the hardware and software level, that enables these attacks. We present different modulation and encoding schemes along with a transmission protocol, implement prototypes of the malware, and discuss its design and implementation. We tested various receivers, including remote cameras, security cameras, smartphone cameras, and optical sensors, and discuss detection and prevention countermeasures. Our experiments show that sensitive data can be covertly leaked via the status LEDs of switches and routers at bit rates of 1 bit/s to more than 2000 bit/s per LED.
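As a minimal sketch of one plausible modulation, the code below encodes bytes with simple on-off keying, one bit per time slot, which is consistent with the per-LED bit rates quoted above; the LED control call is a print placeholder, and the timing value is an assumption rather than the paper's protocol.

```python
# On-off keying sketch: LED on = 1, LED off = 0, one bit per time slot.
import time

def set_led(on: bool) -> None:
    # Placeholder for low-level LED control on the device.
    print("LED ON " if on else "LED OFF")

def transmit(data: bytes, bit_time: float = 0.001) -> None:
    """Blink out the bytes, most significant bit first (~1000 bit/s at this bit_time)."""
    for byte in data:
        for i in range(7, -1, -1):
            set_led(bool((byte >> i) & 1))
            time.sleep(bit_time)
    set_led(False)   # return the LED to an idle state

transmit(b"42")
```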
Malware has become sophisticated, and organizations don't have a Plan B when standard lines of defense fail. These failures have devastating consequences, such as sensitive information being exfiltrated. A promising avenue for improving the effectiveness of behavioral-based malware detectors is to combine fast (usually not highly accurate) traditional machine learning (ML) detectors with high-accuracy but time-consuming deep learning (DL) models. The main idea is to place software receiving borderline classifications from traditional ML methods in an environment where uncertainty is added while the software is analyzed by the time-consuming DL models. The goal of the uncertainty is to rate-limit the actions of potential malware during deep analysis. In this paper, we describe Chameleon, a Linux-based framework that implements this uncertain environment. Chameleon offers two environments for its OS processes: standard, for software identified as benign by the traditional ML detectors, and uncertain, for software that received borderline classifications from those detectors. The uncertain environment obstructs software execution through random perturbations applied probabilistically to selected system calls. We evaluated Chameleon with 113 applications from common benchmarks and 100 malware samples for Linux. Our results show that at a threshold of 10%, intrusive and non-intrusive strategies caused approximately 65% of the malware to fail to accomplish its tasks, while approximately 30% of the analyzed benign software experienced various levels of disruption (crashed or hampered). We also found that I/O-bound software was three times more affected by uncertainty than CPU-bound software.
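A rough sketch of the perturbation idea follows: a wrapped operation is delayed or made to fail with a given probability while deep analysis runs. The wrapper, the two strategies shown, and the threshold value are illustrative assumptions, not Chameleon's actual system-call interception mechanism.

```python
# Probabilistic perturbation of selected operations for borderline software.
import random
import time

PERTURBATION_THRESHOLD = 0.10     # probability of perturbing a selected call (the abstract's 10%)

def uncertain_call(real_call, *args, threshold=PERTURBATION_THRESHOLD):
    """Execute real_call, but occasionally delay it or make it fail."""
    if random.random() < threshold:
        if random.choice(["delay", "fail"]) == "delay":
            time.sleep(0.05)                 # non-intrusive strategy: slow the call down
        else:
            raise OSError("perturbed call")  # intrusive strategy: make the call fail
    return real_call(*args)

# Example: a borderline process writing to disk through the uncertain environment.
try:
    with open("/tmp/demo.txt", "w") as fh:
        uncertain_call(fh.write, "some data\n")
except OSError as err:
    print("write disrupted:", err)
```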
Botnets are a growing threat to the security of data and services on a global level. They exploit vulnerabilities in networks and host machines to harvest sensitive information, or make use of network resources such as memory or bandwidth in cyber-crime campaigns. Bot programs are by nature largely automated and systematic, and this is often used to detect them. In this paper, we extend existing work in this area by proposing a network event correlation method to produce graphs of flows generated by botnets, outlining the implementation and functionality of this approach. We also show how this method can be combined with statistical flow-based analysis to provide a descriptive chain of events, and test it on public datasets with an overall success rate of 94.1%.
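As a hedged illustration of the flow-graph idea, the sketch below correlates a few flow records into a directed graph and walks a path through it to describe a chain of events; the library, addresses, and attributes are assumptions for the example and not the paper's correlation method.

```python
# Correlating flows into a graph and reading off a chain of events.
import networkx as nx

flows = [
    ("192.0.2.10", "198.51.100.5", {"dst_port": 80,   "bytes": 900_000}),  # payload download
    ("198.51.100.5", "192.0.2.10", {"dst_port": 6667, "bytes": 4_000}),    # command-and-control channel
    ("192.0.2.10", "203.0.113.7",  {"dst_port": 25,   "bytes": 120_000}),  # spam run
]

g = nx.DiGraph()
for src, dst, attrs in flows:
    g.add_edge(src, dst, **attrs)

# A descriptive chain of events is then a path through the correlated graph.
for path in nx.all_simple_paths(g, "198.51.100.5", "203.0.113.7"):
    print(" -> ".join(path))
```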
SQL injection attacks (SQLIA) pose a serious security threat to database-driven web applications. This kind of attack gives attackers easy access to the application's underlying database and to the potentially sensitive information these databases contain. Through specially crafted input, a hacker can access database content that would otherwise be inaccessible, usually by altering the SQL statements used within the web application. Given the importance of web application security, researchers have studied SQLIA detection and prevention extensively and have developed various methods. In this research, after reviewing the existing work in this field, we present a new hybrid method to reduce the vulnerability of web applications. Our method is specifically designed to detect and prevent SQLIA. The proposed method consists of three phases: database design, implementation, and the common gateway interface (CGI). Details of our approach, along with its pros and cons, are discussed.
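To illustrate the attack mechanism described above (altering a SQL statement via crafted input) and the standard defence of parameterised queries, here is a small self-contained sketch; it uses sqlite3 purely for the demonstration and does not reproduce the paper's hybrid method.

```python
# String concatenation lets input rewrite the query; parameter binding keeps it as data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

malicious = "' OR '1'='1"

# Vulnerable: the injected quote rewrites the WHERE clause and matches every row.
rows = conn.execute(f"SELECT * FROM users WHERE name = '{malicious}'").fetchall()
print("concatenated query returned", len(rows), "row(s)")

# Safe: the same input is bound as a value and matches nothing.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
print("parameterised query returned", len(rows), "row(s)")
```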
It is a well-known fact that access to sensitive information is nowadays performed through a three-tier architecture, with web applications serving as a handy interface between users and data. As database-driven web applications are used more and more every day, they are seen as a good target for attackers aiming to access sensitive data. If an organization fails to deploy effective data protection systems, it may be open to various attacks. Governmental organizations, in particular, should think beyond traditional security policies in order to achieve proper data protection. It is therefore imperative to perform security testing and make sure that there are no holes in the system before an attack happens. One of the most common web application attacks is the insertion of an SQL query from the client side of the application, called SQL Injection. Since an SQL Injection vulnerability can affect any website or web application that uses an SQL-based database, it is one of the oldest, most prevalent, and most dangerous web application vulnerabilities. To overcome SQL injection problems, different security systems need to be used. In this paper, we use three different scenarios for testing security systems and, using penetration testing techniques, try to find out which is the best solution for protecting sensitive data within the government network of Kosovo.
The online portion of modern life is growing at an astonishing rate, with the consequence that more of the user's critical information is stored online. This poses an immediate threat to the privacy and security of the user's data. This work covers the increasing dangers and security risks of adware, adware injection, and malware injection. These programs increase in direct proportion to the number of users on the Internet, and each presents an imminent threat to a user's privacy and sensitive information whenever they use the Internet. We discuss how current ad blockers are not the actual solution to these threats, but rather a premise for our work. Current ad-blocking tools can be detected by web servers, which often demand that the tool be disabled. Disabling the tool creates vulnerabilities in a user's system, but even when the tool is active the system is still susceptible to peril: it is possible, even when an ad-blocking tool is functioning, for adware content to get through. Our solution to these contemporary threats is our tool, MalFire.
NoSQL databases have gained a lot of popularity over the last few years. They are now used in many new system implementations that work with vast amounts of data, which typically also include sensitive information that needs to be secured. NoSQL databases also underlie a number of cloud implementations that are increasingly being used by various organisations to store sensitive information. This has made NoSQL databases a new target for hackers and state-sponsored actors. Forensic examinations of compromised systems need to be conducted to determine what exactly transpired and who was responsible. This paper examines whether NoSQL databases have security features that leave relevant traces, so that accurate forensic attribution can be conducted. The seeming lack of default security measures such as access control and logging prompted this examination. A survey of the top-ranked NoSQL databases was conducted to establish what authentication and authorisation features are available. The provided logging mechanisms were also examined, since access control without any auditing would not aid forensic attribution much. Some of the surveyed NoSQL databases do not provide adequate access control mechanisms or logging features that leave relevant traces for forensic attribution. The others do provide adequate mechanisms and logging traces, but these are not enabled or configured by default, which means that in many cases they will not be available, leaving insufficient information to perform accurate forensic attribution even on those databases.
Computer security has become an increasingly important topic in the computer and communication industry, since it is essential to support critical business processes and to protect personal and sensitive information. Computer security means preserving the security attributes (confidentiality, integrity, and availability) of computer systems, which face threats such as denial-of-service (DoS) attacks, viruses, and intrusions. To ensure high computer security, intrusion tolerance techniques based on fault-tolerant schemes have been widely applied. This paper presents a quantitative performance evaluation of a virtual machine (VM) based intrusion tolerant system. Concretely, two security measures are derived: MTTSF (mean time to security failure) and the effective traffic intensity. The mathematical analysis is carried out using Laplace-Stieltjes transforms based on an M/G/1 queueing system.
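For orientation only, the standard M/G/1 ingredients that such an analysis builds on are recalled below in the usual queueing notation (arrival rate lambda, service time S); the paper's own MTTSF expression is not reproduced here.

```latex
% Standard M/G/1 quantities (not the paper's derivation):
% Laplace-Stieltjes transform of the service-time distribution F_S
\[
  F_S^{*}(s) = \int_{0}^{\infty} e^{-st}\, dF_S(t)
\]
% Effective traffic intensity for arrival rate \lambda and mean service time E[S]
\[
  \rho = \lambda\, E[S], \qquad \rho < 1 \text{ for stability}
\]
% Pollaczek-Khinchine formula for the mean waiting time in the queue
\[
  E[W] = \frac{\lambda\, E[S^{2}]}{2\,(1-\rho)}
\]
```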
The unauthorized access or theft of sensitive, personal information is becoming a weekly news item. The illegal dissemination of proprietary information to media outlets or competitors costs industry untold millions in remediation costs and losses every year. The 2013 data breach at Target, Inc. that impacted 70 million customers is estimated to cost upwards of 1 billion dollars. Stolen information is also being used to damage political figures and adversely influence foreign and domestic policy. In this paper, we offer techniques for better understanding the health and security of our networks. This understanding will help professionals identify network behavior, anomalies, and other latent, systematic issues in their networks. Software-Defined Networks (SDN) enable the collection of network operation and configuration metrics that are not readily available, if available at all, in traditional networks. SDN also enables the development of software protocols and tools that increase visibility into the network. By accumulating and analyzing a time series data repository (TSDR) of SDN and traditional metrics, along with data gathered from our tools, we can establish behavior and security patterns for SDN and SDN hybrid networks. Our research provides a framework for a range of techniques that give administrators and automated system protection services insight into the health and security of the network. To narrow the scope, this paper focuses on a subset of those techniques as they apply to the confidence analysis of a specific network path at the time of use or inspection. This confidence analysis allows users, administrators, and autonomous systems to decide whether a network path is secure enough for sending their sensitive information. Our testing shows that malicious activity can be identified quickly from a single metric indicator and consistently within a multi-factor indicator analysis. Our research includes the implementation of these techniques in a network path confidence analysis service, called Confidence Assessment as a Service. Using our behavior and security patterns, this service evaluates a specific network path and provides a confidence score for that path before, during, and after the transmission of sensitive data. Our research and tools give administrators and autonomous systems a much better understanding of the internal operation and configuration of their networks. Our framework will also provide other services focused on detecting latent, systemic network problems. By providing a better understanding of network configuration and operation, our research enables a more secure and dependable network and helps prevent the theft of information by malicious actors.
Mobile devices are part of our lives, and we store a lot of private information on them as well as use services that handle sensitive information (e.g., mobile health apps). Whenever users install an application on their smartphones, they have to decide whether to trust the application and share private and sensitive data with at least the developer-owned services. But almost all modern apps not only transmit data to developer-owned servers, they also send information to advertising, analytics, and tracking partners. This paper presents an approach for a "privacy proxy" that filters unwanted data traffic to third-party services without installing additional applications on the smartphone. It is based on a firewall using a blacklist of tracking and analytics networks that is automatically updated on a daily basis. The proof of concept has been implemented with open-source components on a Raspberry Pi.
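The following sketch shows the core filtering decision such a proxy could make, matching request hostnames against a periodically refreshed blacklist of tracker domains; the list contents, domains, and suffix-matching rule are assumptions for the example, not the actual Raspberry Pi implementation.

```python
# Blacklist-based filtering decision for outgoing requests.
def load_blacklist(text: str) -> set:
    """Parse one tracker domain per line, ignoring blanks and comments."""
    return {line.strip().lower() for line in text.splitlines()
            if line.strip() and not line.startswith("#")}

def is_blocked(hostname: str, blacklist: set) -> bool:
    """Block a listed tracker domain and any of its subdomains."""
    parts = hostname.lower().split(".")
    return any(".".join(parts[i:]) in blacklist for i in range(len(parts)))

# In the real proxy the list would be fetched from a feed and refreshed daily.
blacklist = load_blacklist("tracker.example\nanalytics.example\n")
print(is_blocked("cdn.tracker.example", blacklist))    # True  -> drop the request
print(is_blocked("api.healthapp.example", blacklist))  # False -> forward normally
```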
Cross-Site Scripting (XSS) is a common attack technique that lets attackers insert code into the output of a web application, which is then delivered to the visitor's web browser, where the inserted code executes automatically and steals sensitive information. In order to protect users from XSS attacks, many client-side solutions have been implemented; most of those in use are filters that sanitize malicious input. However, many of these filters do not protect against newly designed, sophisticated attacks such as multiple injection points, injection into scripts, etc. This paper proposes and implements an approach, based on encoding unfiltered reflections, for detecting vulnerable web applications that can be exploited with the aforementioned sophisticated attacks. Results show that the proposed approach achieves an accurate, higher detection rate of exploits. In addition, an implementation that blocks the execution of malicious scripts has been contributed to XSS-Me, an open-source Mozilla Firefox security extension that detects reflected XSS vulnerabilities, which can be considered an effective solution if it is integrated into the browser rather than enforced as an extension.
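The sketch below shows only the simplest building block of reflected-XSS probing: submit a marker value and flag the parameter if the response echoes it back without HTML encoding. The URL, parameter name, and probe string are hypothetical, and the paper's approach additionally handles injection into scripts and multiple injection points.

```python
# Probe a parameter and check whether it is reflected unencoded.
import urllib.parse
import urllib.request

PROBE = '"<xssprobe>"'   # characters that a safe application would HTML-encode

def reflected_unencoded(base_url: str, param: str) -> bool:
    """Return True if the probe comes back in the response body unencoded."""
    url = f"{base_url}?{urllib.parse.urlencode({param: PROBE})}"
    body = urllib.request.urlopen(url).read().decode(errors="replace")
    return PROBE in body          # raw reflection -> potential reflected XSS

# Example (requires a reachable test application):
# print(reflected_unencoded("http://testapp.local/search", "q"))
```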
Currently, dependence on web applications is increasing rapidly for social communication, health services, financial transactions, and many other purposes. Unfortunately, the presence of cross-site scripting vulnerabilities in these applications allows malicious users to steal sensitive information, install malware, and perform various malicious operations. Researchers have proposed various approaches and developed tools to detect XSS vulnerabilities in the source code of web applications. However, existing approaches and tools are not free from false positive and false negative results. In this paper, we propose an HTML context-sensitive approach, based on taint analysis and defensive programming, for precise detection of XSS vulnerabilities in the source code of PHP web applications. It also provides automatic suggestions to improve the vulnerable source code. Preliminary experiments and results on test subjects show that the proposed approach is more efficient than existing ones.
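As a much-simplified flavour of such analysis, the sketch below flags PHP lines where request data reaches an output statement without passing through an encoder such as htmlspecialchars(); a real context-sensitive taint analysis parses the code and tracks data flow across statements, so this regex pass is only an illustration.

```python
# Flag PHP lines where tainted input reaches an output sink unsanitized.
import re

TAINTED_SOURCE = re.compile(r"\$_(GET|POST|REQUEST|COOKIE)\b")
OUTPUT_SINK = re.compile(r"\b(echo|print)\b")
SANITIZER = re.compile(r"\b(htmlspecialchars|htmlentities)\s*\(")

def suspicious_lines(php_source: str):
    """Yield (line number, line) pairs that look like unsanitized reflections."""
    for lineno, line in enumerate(php_source.splitlines(), start=1):
        if OUTPUT_SINK.search(line) and TAINTED_SOURCE.search(line) and not SANITIZER.search(line):
            yield lineno, line.strip()

code = '<?php echo "Hello " . $_GET["name"]; ?>'
print(list(suspicious_lines(code)))   # flagged: the value should be wrapped in htmlspecialchars()
```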
The growing popularity and development of data mining technologies bring serious threats to the security of individuals' sensitive information. An emerging research topic in data mining, known as privacy-preserving data mining (PPDM), has been extensively studied in recent years. The basic idea of PPDM is to modify the data in such a way that data mining algorithms can be performed effectively without compromising the security of the sensitive information contained in the data. Current studies of PPDM mainly focus on how to reduce the privacy risk brought by data mining operations, while in fact unwanted disclosure of sensitive information may also happen in the processes of data collection, data publishing, and information (i.e., data mining result) delivery. In this paper, we view the privacy issues related to data mining from a wider perspective and investigate various approaches that can help to protect sensitive information. In particular, we identify four different types of users involved in data mining applications, namely, data provider, data collector, data miner, and decision maker. For each type of user, we discuss the privacy concerns and the methods that can be adopted to protect sensitive information. We briefly introduce the basics of related research topics, review state-of-the-art approaches, and present some preliminary thoughts on future research directions. Besides exploring the privacy-preserving approaches for each type of user, we also review game theoretical approaches, which are proposed for analyzing the interactions among different users in a data mining scenario, each of whom has his own valuation of the sensitive information. By differentiating the responsibilities of different users with respect to the security of sensitive information, we would like to provide some useful insights into the study of PPDM.
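To make the "modify the data but keep mining useful" idea tangible, here is a tiny randomized-response sketch, one classical perturbation technique among the many the survey covers; the truth probability and population sizes are invented for the example.

```python
# Randomized response: each record is flipped with known probability,
# and the analyst corrects the aggregate estimate afterwards.
import random

P_TRUTH = 0.75   # probability a respondent reports truthfully

def perturb(bit: int) -> int:
    return bit if random.random() < P_TRUTH else 1 - bit

true_bits = [1] * 300 + [0] * 700            # 30% of the population has the sensitive attribute
reported = [perturb(b) for b in true_bits]   # only perturbed values are collected

observed = sum(reported) / len(reported)
estimate = (observed - (1 - P_TRUTH)) / (2 * P_TRUTH - 1)   # unbias the aggregate
print(f"observed {observed:.2f}, estimated true rate {estimate:.2f}")
```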
Mobile platform security solutions have become especially important for mobile computing paradigms, because increasing amounts of private and sensitive information are stored in smartphones' on-device memory or on MicroSD/SD cards. This paper presents a comparative study of the security aspects of current smartphone systems, including iOS, Android, BlackBerry (QNX), and Windows Phone.