Bibliography
Botnets have been a serious threat to Internet security. As their sophistication and resilience constantly increase, a new trend has emerged, shifting botnets from the traditional desktop to the mobile environment. As in the desktop domain, detecting mobile botnets is essential to minimize the threat they pose. Among the diverse set of strategies applied to detect these botnets, the ones that show the best and most generalized results involve discovering patterns in their anomalous behavior. In the mobile botnet field, one way to detect these patterns is by analyzing the operational parameters of this kind of application. In this paper, we present an anomaly-based and host-based approach to detect mobile botnets. The proposed approach uses machine learning algorithms to identify anomalous behavior in statistical features extracted from system calls. Using a self-generated dataset containing 13 families of mobile botnets and legitimate applications, we were able to test the performance of our approach in a close-to-reality scenario. The proposed approach achieved strong results, including low false positive rates and high true detection rates.
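A minimal sketch of the kind of pipeline the abstract describes, not the authors' implementation: statistical features derived from system-call traces are fed to a supervised classifier. The feature semantics, the random-forest choice, and all data below are illustrative assumptions.

```python
# Toy sketch (assumed, not the paper's code): botnet vs. legitimate app
# classification from statistical system-call features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per app, columns are statistics
# over a syscall trace (e.g., mean/std of calls per second, counts of
# network-related syscalls such as sendto/recvfrom).
X = rng.random((200, 6))
y = rng.integers(0, 2, 200)  # 1 = botnet, 0 = legitimate (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```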
In a world where the number of highly skilled actors involved in cyber-attacks is constantly increasing and where the associated underground market continues to expand, organizations should adapt their defence strategy and consequently improve their security incident management. In this paper, we give an overview of the Advanced Persistent Threat (APT) attack life cycle as defined by security experts. We introduce our own compiled life cycle model, guided by attackers' objectives instead of their actions. Challenges and opportunities related to the specific camouflage actions performed at the end of each APT phase of the model are highlighted. We also give an overview of new APT protection technologies and discuss their effectiveness at each phase of the life cycle.
Controllers for software defined networks (SDNs) are quickly maturing to offer network operators more intuitive programming frameworks and greater abstractions for network application development. Likewise, many security solutions now exist within SDN environments for detecting and blocking clients who violate network policies. However, many of these solutions stop at triggering the security measure and give little thought to amending it. As a consequence, once the violation is addressed, no clear path exists for reinstating the flagged client beyond having the network operator reset the controller or manually implement a state change via an external command. This presents a burden for the network and its clients and administrators. Hence, we present a security policy transition framework for revoking security measures in an SDN environment once said measures are activated.
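A hedged sketch of the core idea of a policy *transition*, not the paper's actual framework: a flagged client moves through explicit states back to reinstatement instead of requiring a controller reset or a manual external command. The state names and transition rules are illustrative assumptions.

```python
# Assumed illustration of a security-policy transition state machine.
from enum import Enum, auto

class ClientState(Enum):
    ACTIVE = auto()
    QUARANTINED = auto()   # security measure triggered
    REMEDIATED = auto()    # violation addressed, pending reinstatement
    REINSTATED = auto()

ALLOWED = {
    ClientState.ACTIVE: {ClientState.QUARANTINED},
    ClientState.QUARANTINED: {ClientState.REMEDIATED},
    ClientState.REMEDIATED: {ClientState.REINSTATED},
    ClientState.REINSTATED: {ClientState.QUARANTINED},
}

def transition(current: ClientState, target: ClientState) -> ClientState:
    """Move a client between policy states, rejecting illegal jumps."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

# Example: quarantine a client, remediate it, then reinstate it.
state = transition(ClientState.ACTIVE, ClientState.QUARANTINED)
state = transition(state, ClientState.REMEDIATED)
state = transition(state, ClientState.REINSTATED)
```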
Ransomware is a growing threat that encrypts a user's files and holds the decryption key until a ransom is paid by the victim. This type of malware is responsible for tens of millions of dollars in extortion annually. Worse still, developing new variants is trivial, facilitating the evasion of many antivirus and intrusion detection systems. In this work, we present CryptoDrop, an early-warning detection system that alerts a user during suspicious file activity. Using a set of behavior indicators, CryptoDrop can halt a process that appears to be tampering with a large amount of the user's data. Furthermore, by combining a set of indicators common to ransomware, the system can be parameterized for rapid detection with low false positives. In our experimental analysis, CryptoDrop stops ransomware from executing with a median loss of only 10 files (out of nearly 5,100 available files). Our results show that careful analysis of ransomware behavior can produce an effective detection system that significantly mitigates the amount of victim data loss.
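An illustrative sketch in the spirit of behavior-indicator scoring, not CryptoDrop's actual implementation: each indicator a process trips raises a suspicion score, and the process is halted past a threshold. The specific indicators and threshold below are assumptions.

```python
# Assumed sketch of combining ransomware behavior indicators.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted output is typically close to 8."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def suspicion_score(recent_writes: list, files_touched: int) -> int:
    score = 0
    if recent_writes:
        avg = sum(map(shannon_entropy, recent_writes)) / len(recent_writes)
        if avg > 7.5:
            score += 1              # indicator: high-entropy (encrypted?) writes
    if files_touched > 20:
        score += 1                  # indicator: bulk file modification
    return score

HALT_THRESHOLD = 2  # assumed parameter trading detection speed vs. false positives
```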
Ransomware attacks are increasing, and this form of malware bypasses many technical solutions by leveraging social engineering methods. This means established methods of perimeter defence need to be supplemented with additional systems. Honeypots are bogus computer resources deployed by network administrators to act as decoy computers and detect any illicit access. This study investigated whether a honeypot folder could be created and monitored for changes, and determined a suitable method to detect changes to this area. The research investigated methods to implement a honeypot to detect ransomware activity, and selected two options: the File Screening service of the Microsoft File Server Resource Manager feature, and EventSentry to manipulate the Windows Security logs. The research developed a staged response to attacks on the system, along with the thresholds at which responses were triggered. The research ascertained that witness tripwire files offer limited value, as there is no way to influence the malware to access the area containing the monitored files.
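A minimal stdlib sketch of the honeypot-folder idea, not the FSRM or EventSentry configurations the study actually used: snapshot a decoy folder's file hashes and alert when anything changes. The folder path and polling interval are assumptions.

```python
# Assumed sketch: poll a decoy (honeypot) folder for modifications.
import hashlib
import os
import time

DECOY = "/srv/decoy"  # hypothetical bait folder path

def snapshot(folder: str) -> dict:
    """Map each file under `folder` to the SHA-256 of its contents."""
    state = {}
    for root, _dirs, files in os.walk(folder):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

baseline = snapshot(DECOY)
while True:
    time.sleep(5)
    if snapshot(DECOY) != baseline:
        print("ALERT: decoy folder modified -- possible ransomware activity")
        break
```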
We investigate the problem of constructing exponentially converging estimates of the state of a continuous-time system from state measurements transmitted via a limited-data-rate communication channel, so that only quantized and sampled measurements of continuous signals are available to the estimator. Following prior work on topological entropy of dynamical systems, we introduce a notion of estimation entropy which captures this data rate in terms of the number of system trajectories that approximate all other trajectories with desired accuracy. We also propose a novel alternative definition of estimation entropy which uses approximating functions that are not necessarily trajectories of the system. We show that the two entropy notions are actually equivalent. We establish an upper bound for the estimation entropy in terms of the sum of the system's Lipschitz constant and the desired convergence rate, multiplied by the system dimension. We propose an iterative procedure that uses quantized and sampled state measurements to generate state estimates that converge to the true state at the desired exponential rate. The average bit rate utilized by this procedure matches the derived upper bound on the estimation entropy. We also show that no other estimator (based on iterative quantized measurements) can perform the same estimation task with bit rates lower than the estimation entropy. Finally, we develop an application of the estimation procedure in determining, from the quantized state measurements, which of two competing models of a dynamical system is the true model. We show that under a mild assumption of exponential separation of the candidate models, detection is always possible in finite time. Our numerical experiments with randomly generated affine dynamical systems suggest that in practice the algorithm always works.
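A plausible rendering of the stated upper bound (notation assumed from the abstract's description, not copied from the paper): for an n-dimensional system with Lipschitz constant L and desired exponential convergence rate alpha, the sum of the Lipschitz constant and the rate, multiplied by the dimension, bounds the estimation entropy.

```latex
% Assumed notation: h_est = estimation entropy (bits per unit time),
% n = system dimension, L = Lipschitz constant, \alpha = convergence rate.
\[
  h_{\mathrm{est}}(\alpha) \;\le\; n\,\frac{L + \alpha}{\ln 2}
\]
```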
Intrusion detection using multiple security devices has received much attention recently. The large volume of information generated by these tools, however, increases the burden on both computing resources and security administrators. Moreover, attack detection does not improve as expected if these tools work without any coordination. In this work, we propose a simple method to join information generated by security monitors with diverse data formats. We present a novel intrusion detection technique that uses unsupervised clustering algorithms to identify malicious behavior within large volumes of diverse security monitor data. First, we extract a set of features from network-level and host-level security logs that aid in detecting malicious host behavior and flooding-based network attacks in an enterprise network system. We then apply clustering algorithms to the separate and joined logs and use statistical tools to identify anomalous usage behaviors captured by the logs. We evaluate our approach on an enterprise network data set, which contains network and host activity logs. Our approach correctly identifies and prioritizes anomalous behaviors in the logs by their likelihood of maliciousness. By combining network and host logs, we are able to detect malicious behavior that cannot be detected by either log alone.
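A hedged sketch of the clustering step, not the paper's exact pipeline: per-host features joined from network and host logs are clustered, and small clusters of unusual usage are prioritized as more likely malicious. The algorithm choice and feature names are assumptions.

```python
# Assumed sketch: cluster joined security-monitor features, rank outliers.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Hypothetical joined features per host: [flows/min, distinct ports,
# failed logins, new processes spawned].
X = StandardScaler().fit_transform(rng.random((500, 4)))

labels = KMeans(n_clusters=5, n_init=10, random_state=1).fit_predict(X)
sizes = np.bincount(labels)
# Rare usage patterns (small clusters) get the highest priority.
priority = np.argsort(sizes)
print("clusters by suspicion (smallest first):", priority.tolist())
```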
This study was conducted to determine whether monitoring moderated the impact of trust on the project performance of 57 virtual teams. Two sources of monitoring were examined: internal monitoring done by team members and external monitoring done by someone outside of the team. Two types of trust were also examined: affective trust, or trust based on emotion, and cognitive trust, or trust based on competency. Results indicate that when internal monitoring was high, affective trust was associated with increases in performance. However, affective trust was associated with decreases in performance when external monitoring was high. Both types of monitoring reduced the strong positive relationship between cognitive trust and the performance of virtual teams. Results of this study provide new insights about monitoring and trust in virtual teams and inform both theory and design.
Control theory and SDN (Software Defined Networking) are key components for NFV (Network Function Virtualization) deployment. However, little has been done to use a control-theoretic approach for SDN and NFV management. In this demo, we describe a use case for NFV management using control theory and SDN. We use the management architecture of RINA (a clean-slate Recursive InterNetwork Architecture) to manage Virtual Network Function (VNF) instances over the GENI testbed. We deploy Snort, an Intrusion Detection System (IDS), as the VNF. Our network topology has source and destination hosts, multiple IDSes, an Open vSwitch (OVS), and an OpenFlow controller. A distributed management application running on RINA measures the state of the VNF instances and communicates this information to a Proportional Integral (PI) controller, which then provides load balancing information to the OpenFlow controller. The latter controller in turn updates traffic flow forwarding rules on the OVS switch, thus balancing load across the VNF instances. This demo shows the benefits of using such a control-theoretic load balancing approach and the RINA management architecture in virtualized environments for NFV management. It also illustrates that the GENI testbed can easily support a wide range of SDN and NFV related experiments.
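An illustrative PI controller for the load-balancing loop described above; this is a sketch of the control-theoretic idea, and the gains, setpoint, and balancing rule are assumptions rather than the demo's tuned values.

```python
# Assumed sketch of the PI control law driving VNF load balancing.
class PIController:
    def __init__(self, kp: float, ki: float, setpoint: float):
        self.kp, self.ki, self.setpoint = kp, ki, setpoint
        self.integral = 0.0

    def update(self, measured_load: float, dt: float) -> float:
        """Return a correction to one IDS instance's traffic share."""
        error = self.setpoint - measured_load
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

# Each period, the correction would be translated by the OpenFlow
# controller into updated flow-forwarding weights on the OVS switch.
pi = PIController(kp=0.5, ki=0.1, setpoint=0.5)
print(pi.update(measured_load=0.8, dt=1.0))  # negative: shed load
```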
Real-world applications of Wireless Sensor Networks (WSNs), such as border control, healthcare monitoring, and target tracking, require secure communications. Thus, during WSN setup, one of the first requirements is to distribute to the sensor nodes the keys that can later be used for securing the messages exchanged between sensors. Key management schemes in WSNs secure the communication between a pair or a group of nodes. However, the storage capacity of the sensor nodes is limited, which makes the storage requirement an important parameter in the evaluation of key management schemes. This paper classifies the existing key management schemes proposed for WSNs into three categories: storage inefficient, storage efficient, and highly storage efficient key management schemes.
The majority of applications prompt for a username and password. Passwords are recommended to be unique, long, complex, alphanumeric, and non-repetitive. The very properties that make passwords secure, however, may prove to be a point of weakness: the complexity of a password challenges the user, who may choose to record it, compromising the password's security and taking away its advantage. An alternative method of security is keystroke biometrics, an approach that uses the natural typing pattern of a user for authentication. This paper proposes a new method for reducing error rates and creating a robust technique. The new method makes use of multiple sensors to obtain information about a user. An artificial neural network is used to model a user's behavior as well as to retrain the system. An alternate user verification mechanism is used in case a user is unable to match their typing pattern.
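A minimal sketch of keystroke-dynamics verification with a neural network; the paper's multi-sensor inputs and retraining logic are not reproduced, and the dwell/flight-time features and synthetic samples below are assumptions.

```python
# Assumed sketch: verify a user from typing-rhythm features with an MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
# Hypothetical samples: each row holds dwell times (key held down) and
# flight times (between keys) for a fixed passphrase, in seconds.
genuine = rng.normal(0.12, 0.02, (50, 10))
impostor = rng.normal(0.18, 0.05, (50, 10))
X = np.vstack([genuine, impostor])
y = np.array([1] * 50 + [0] * 50)  # 1 = genuine user

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                      random_state=2).fit(X, y)
probe = rng.normal(0.12, 0.02, (1, 10))
print("accepted" if model.predict(probe)[0] == 1 else "rejected")
```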
The personal identification number (PIN) and password are the most widely used authentication methods in devices such as ATMs, mobile phones, and electronic door locks. Unfortunately, the traditional PIN-entry method is vulnerable to shoulder-surfing attacks. However, the security analyses used to support proposed alternatives are often based not on quantitative investigation but on the results of experiments and testing performed on the proposed system. We propose a new theoretical and experimental technique for the quantitative security analysis of PIN-entry methods. In this paper, we first introduce a new security concept, the Grid Based Authentication System, together with rules for a secure PIN-entry method derived by examining existing approaches within the new framework. Guided by these rules, we develop a new PIN-entry method that resists human shoulder-surfing attacks by significantly increasing the computational complexity required for an attacker to penetrate the system.
Detecting early trends indicating cognitive decline can allow older adults to better manage their health, but current assessments present barriers precluding the use of such continuous monitoring by consumers. To explore the effects of cognitive status on computer interaction patterns, the authors collected typed text samples from older adults with and without pre-mild cognitive impairment (PreMCI) and constructed statistical models from keystroke and linguistic features for differentiating between the two groups. Using both feature sets, they obtained a 77.1 percent correct classification rate with 70.6 percent sensitivity, 83.3 percent specificity, and a 0.808 area under the curve (AUC). These results are in line with current assessments for MCI, a more advanced stage of impairment, but use an unobtrusive method. This research contributes a combination of features for text and keystroke analysis and enhances understanding of how clinicians or older adults themselves might monitor for PreMCI through patterns in typed text. It has implications for embedded systems that can enable healthcare providers and consumers to proactively and continuously monitor changes in cognitive function.
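A sketch of how the reported metrics relate to a classifier's scores, using toy data rather than the study's models or features.

```python
# Assumed illustration: sensitivity, specificity, and AUC from scores.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])          # 1 = PreMCI
y_score = np.array([.9, .7, .4, .2, .3, .1, .8, .6, .65, .25])
y_pred = (y_score >= 0.5).astype(int)                       # toy threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", tp / (tp + fn))   # true positive rate
print("specificity:", tn / (tn + fp))   # true negative rate
print("AUC:", roc_auc_score(y_true, y_score))
```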
Honeypots and honeynets are popular tools in the area of network security and network forensics. The deployment and usage of these tools are influenced by a number of technical and legal issues, which need to be carefully considered together. In this paper, we outline the privacy issues of honeypots and honeynets with respect to their technical aspects. The paper discusses the legal framework of privacy, the legal grounds for data processing, and data collection. The analysis of legal issues is based on EU law and is supported by discussions of privacy and related issues. This paper is one of the first to discuss in detail the privacy issues of honeypots and honeynets in accordance with EU law.
The United States has US CYBERCOM to protect the US military infrastructure and DHS to protect the nation's critical cyber infrastructure. These organizations deal with wide-ranging issues at a national level, leaving local and state governments to largely fend for themselves in the cyber frontier. This paper focuses on how to determine the threat to a community and what indications and warnings can lead us to suspect an attack is underway. To help answer these questions, we utilized the concepts of honeypots and honeynets and extended them to a multi-organization concept within a geographic boundary to form a Honey Community. The initial phase of the research done in support of this paper was to create a fictitious community with various components to entice would-be attackers and determine whether the use of multiple sectors in a community would aid in the detection of an attack.
Hardware Trojans (HTs) are an emerging threat that intrudes into the design and manufacturing cycle of chips, and they have gained much attention lately due to the severity of the problems they pose to the chip supply chain. Typically, hardware Trojans are not detected during the usual manufacturing testing because they are activated as an effect of a rare event. A class of published HTs are based on the geometrical characteristics of the circuit and claim to be undetectable, in the sense that their activation cannot be detected. In this work, we study the effect of continuously monitoring the inputs of the module under test with respect to the detection of HTs possibly inserted in the module, either at the design or the manufacturing stage.
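A hedged sketch of input monitoring for rare-event triggers; the paper's actual monitoring mechanism is not reproduced here. The idea illustrated, under assumed simplifications, is that input vectors observed very rarely at the module boundary are candidate HT trigger conditions worth flagging.

```python
# Assumed sketch: flag rarely observed input vectors at a module boundary.
from collections import Counter

def flag_rare_inputs(trace, threshold: int = 2) -> set:
    """Return input vectors seen fewer than `threshold` times."""
    counts = Counter(trace)
    return {vec for vec, n in counts.items() if n < threshold}

# Toy 8-bit input trace; 0xA5 appears once and would be flagged.
trace = [0x00, 0xFF, 0x00, 0xFF, 0xA5, 0x00, 0xFF]
print({hex(v) for v in flag_rare_inputs(trace)})
```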
Security in cloud environments is always considered an issue, due to the lack of control over leased resources. In this paper, we present a solution that offers security-as-a-service by relying on Security Service Level Agreements (Security SLAs) as a means to represent the security features to be granted. In particular, we focus on a security mechanism that is automatically configured and activated in an as-a-service fashion in order to protect cloud resources against DoS attacks. The activities reported in this paper are part of a wider work carried out in the FP7-ICT programme project SPECS, which aims at building a framework offering Security-as-a-Service using an SLA-based approach. The proposed approach is founded on the adoption of SPECS Services to negotiate, enforce, and monitor suitable security metrics, chosen by cloud customers, negotiated with the provider, and included in a signed Security SLA.
The perceived lack of control over resources deployed in the cloud may represent one of the critical factors in an organization's decision whether or not to cloudify its own services. Furthermore, in spite of the idea of offering security-as-a-service, the development of secure cloud applications requires security skills that can slow down the adoption of the cloud by non-expert users. In recent years, the concept of Security Service Level Agreements (Security SLAs) has assumed a key role in the provisioning of cloud resources. This paper presents the SPECS framework, which enables the development of secure cloud applications covered by a Security SLA. The SPECS framework offers APIs to manage the whole Security SLA life cycle and provides all the functionalities needed to automate the enforcement of proper security mechanisms and to monitor user-defined security features. The development process of SPECS applications offering security-enhanced services is illustrated, presenting as a real-world case study the provisioning of a secure web server.
This paper briefly presents a position that hardware-based roots of trust, integrated in silicon with System-on-Chip (SoC) solutions, represent the most current stage in a progression of technologies aimed at realizing the most foundational computer security concepts. A brief look at this historical progression from a personal perspective is followed by an overview of more recent developments, with particular focus on a root of trust for cryptographic key provisioning and SoC feature management that is aimed at achieving supply chain assurances and serves as a basis for trust linked to properties enforced in hardware. The author assumes no prior knowledge of these concepts and developments by the reader.
In the manufacturing of systems that use screens, for example TVs, computer monitors, or notebooks, image inspection is one of the most important quality tests. Due to the increasing complexity of these systems, manual inspection has become difficult and slow; thus, automatic inspection is an attractive alternative. In this paper, we present an automatic image inspection system using edge and line detection algorithms, rectangle recognition, and image comparison metrics. The experiments, performed on 504 images (of TVs, computer monitors, and notebooks), demonstrate that the system performs well.
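A sketch of the pipeline's building blocks using OpenCV; the file names, thresholds, and comparison metric are illustrative assumptions, not the paper's exact algorithms or parameters.

```python
# Assumed sketch: edge detection, line detection, and image comparison.
import cv2
import numpy as np

# Hypothetical input files; both assumed grayscale and the same size.
img = cv2.imread("screen_under_test.png", cv2.IMREAD_GRAYSCALE)
ref = cv2.imread("reference_screen.png", cv2.IMREAD_GRAYSCALE)

edges = cv2.Canny(img, 100, 200)                     # edge detection
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,   # line detection
                        minLineLength=50, maxLineGap=5)

# Simple comparison metric: mean squared error against the reference.
mse = float(np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2))
print("lines found:", 0 if lines is None else len(lines), "MSE:", mse)
```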
Nowadays, with the rapid development of the Internet, the use of the Web is increasing, and Web applications have become a substantial part of people's daily life (e.g. E-Government, E-Health, and E-Learning), as they permit seamless access to and management of information. The main security concern for e-business is Web application security. Web applications have many vulnerabilities, such as Injection, Broken Authentication and Session Management, and Cross-site Scripting (XSS). Consequently, Web applications have become targets of hackers, and many cyber attacks have emerged that aim to block the services of these Web applications (Denial of Service attacks). Developers are often unaware of these vulnerabilities and do not have enough time to secure their applications. Therefore, there is a significant need to study and improve attack detection for Web applications by determining the most significant factors for detection. To the best of our knowledge, no existing research summarizes the influential factors in detecting Web attacks. In this paper, the author studies state-of-the-art techniques and research related to Web attack detection: the author analyses and compares different methods of Web attack detection and summarizes the most important factors for Web attack detection, independent of the type of vulnerability. At the end, the author gives recommendations for building a framework for Web application protection.
In recent years, cyber security threats have become increasingly dangerous. Hackers fabricate fake emails to spoof specific users into clicking on malicious attachments or URL links in them. This kind of threat is called a spear-phishing attack. Because spear-phishing attacks use unknown exploits to trigger malicious activities, it is difficult to defend against them effectively. Thus, this study focuses on the challenges faced, and we develop a Cloud-threat Inspection Appliance (CIA) system to defend against spear-phishing threats. Taking advantage of hardware-assisted virtualization technology, we use the CIA to develop a transparent hypervisor monitor that conceals the presence of the detection engine in the hypervisor kernel. In addition, the CIA includes a document pre-filtering algorithm to enhance system performance. By inspecting PDF format structures, the proposed CIA was able to filter out 77% of PDF attachments, preventing them from being sent to the hypervisor monitor for deeper analysis. Finally, we tested the CIA in real-world scenarios. The hypervisor monitor was shown to be a better anti-evasion sandbox than commercial ones. During 2014, the CIA inspected 780,000 mails in a company with 200 user accounts and found 65 unknown samples that were not detected by commercial anti-virus software.
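An illustrative pre-filter in the spirit of the PDF screening step, not the CIA's actual algorithm: scan a PDF's raw bytes for structures commonly abused by exploits and forward only suspicious files to the heavyweight hypervisor monitor. The key list is an assumption.

```python
# Assumed sketch: structural pre-filter for PDF attachments.
SUSPICIOUS_KEYS = (b"/JavaScript", b"/JS", b"/OpenAction",
                   b"/Launch", b"/EmbeddedFile")

def needs_deep_analysis(pdf_path: str) -> bool:
    """True if the PDF contains structures worth sandboxing."""
    with open(pdf_path, "rb") as f:
        data = f.read()
    return any(key in data for key in SUSPICIOUS_KEYS)

# Benign-looking PDFs are filtered out here; the rest proceed to the
# transparent hypervisor monitor for deeper analysis.
```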
A cyber security operations centre (CSOC) is an essential business control aimed at protecting ICT systems and supporting an organisation's Cyber Defense Strategy. Its overarching purpose is to ensure that incidents are identified and managed to resolution swiftly, and to maintain safe and secure business operations and services for the organisation. A CSOC framework is proposed comprising Log Collection, Analysis, Incident Response, Reporting, Personnel, and Continuous Monitoring. Further, a Cyber Defense Strategy, supported by the CSOC framework, is discussed. Overlaid atop the strategy are the well-known Her Majesty's Government (HMG) Protective Monitoring Controls (PMCs). Finally, the difficulty and benefits of operating a CSOC are explained.
The internet has had a major impact on how information is shared within supply chains and in commerce in general. This has resulted in the establishment of information systems such as e-supply chains (eSCs), among others, which integrate the internet and other information and communications technology (ICT) with traditional business processes for the swift transmission of information between trading partners. Many organisations have reaped the benefits of adopting the eSC model but have also faced the challenges with which it comes. One such major challenge is information security. Digital forensic readiness (DFR) is a relatively new and exciting field which, if implemented strategically, can prepare an organisation for incidents and help prevent them from occurring within an eSC environment. Given the current state of cybercrime, tool developers are challenged with the task of developing cutting-edge digital forensic readiness tools that can keep up with current technological advancements, such as eSCs, in the business world. The problem addressed in this paper is that there are no DFR tools designed to support eSCs specifically. There are some general-purpose monitoring tools with forensic readiness functionality, but currently there are no tools specifically designed to serve the eSC environment. Therefore, this paper discusses the limitations of current digital forensic readiness tools for the eSC environment, and an architectural design for next-generation eSC DFR systems is proposed, along with the system requirements that such systems must satisfy. It is the view of the authors that the conclusions drawn from this paper can spearhead the development of cutting-edge next-generation digital forensic readiness tools and bring attention to some of the shortcomings of current tools.