Biblio
The Internet of Things (IoT) has become a popular technology, and various middleware platforms have been proposed and developed for IoT systems. However, there have been few studies on the data management of IoT systems. In this paper, we consider graph database models for the data management of IoT systems, because these models can specify relationships among entities such as devices, users, and information in a straightforward manner. However, applying a graph database to the data management of IoT systems raises issues regarding distribution and security. For the former, we propose graph database operations integrated with REST APIs. For the latter, we extend graph edge properties with access protocol permissions and check these permissions through the APIs with authentication. We present the requirements for a use case scenario, as well as the features of a distributed graph database for IoT data management that addresses the aforementioned issues, and we implement a prototype of the graph database.
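As a rough illustration of the edge-permission idea, the sketch below attaches permitted access protocols to a graph edge and checks them before serving a request; all names (Edge, is_allowed, the protocol set) are our assumptions, not the paper's API.

```python
# Minimal sketch, assuming edges carry the set of REST methods a user
# may invoke on a device; not the paper's actual implementation.
from dataclasses import dataclass, field

@dataclass
class Edge:
    src: str                                      # e.g. a user node
    dst: str                                      # e.g. a device node
    label: str                                    # relationship type, e.g. "owns"
    protocols: set = field(default_factory=set)   # permitted methods, e.g. {"GET"}

def is_allowed(edge: Edge, user: str, device: str, method: str) -> bool:
    """Permit the (authenticated) REST call only if the edge grants the protocol."""
    return edge.src == user and edge.dst == device and method in edge.protocols

edge = Edge("alice", "sensor-42", "owns", {"GET"})
print(is_allowed(edge, "alice", "sensor-42", "GET"))   # True
print(is_allowed(edge, "alice", "sensor-42", "POST"))  # False
```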
This paper proposes a highly scalable framework for detecting network anomalies at the per-flow level by constructing a meta-model for a family of machine learning algorithms or statistical data models. The approach is scalable and attainable because the raw data needs to be accessed only once; it is processed, computed, and transformed into a meta-model matrix of much smaller size that can reside in system RAM. The meta-model matrix is computed through disposable per-row update operations: once a per-flow record has been processed, it is no longer needed for the computation. While the proposed framework covers both Gaussian and non-Gaussian data, the focus of this work is on linear regression models. Specifically, a new concept called meta-model sufficient statistics is proposed to analyze a group of models, for which exact, not approximate, results are derived. In addition, the proposed framework can quickly discover an optimal statistical or computational model from a family of candidate models without rescanning the raw dataset. This suggests that an extremely efficient and effective theory and method are possible for big data security analysis.
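For linear regression, the one-pass sufficient-statistics idea can be sketched as follows: accumulate X^T X and X^T y row by row, then fit any candidate sub-model exactly from the accumulated matrices without rescanning the flows. This is a minimal sketch of the general technique; the feature count and names are illustrative, not the paper's.

```python
# One-pass sufficient statistics for streaming linear regression.
import numpy as np

d = 4                                  # number of candidate flow features (assumed)
XtX = np.zeros((d, d))                 # "meta-model matrix", O(d^2) memory
Xty = np.zeros(d)

def update(row: np.ndarray, y: float) -> None:
    """Disposable per-row update: the raw row is not needed afterwards."""
    global XtX, Xty
    XtX += np.outer(row, row)
    Xty += y * row

rng = np.random.default_rng(0)
true_beta = np.array([1.0, -2.0, 0.0, 0.5])
for _ in range(10_000):                # stream of per-flow records
    x = rng.normal(size=d)
    update(x, x @ true_beta + rng.normal())

# Exact least-squares fit for any feature subset S, with no raw-data rescan:
S = [0, 1, 3]
beta_S = np.linalg.solve(XtX[np.ix_(S, S)], Xty[S])
print(beta_S)
```

Because the normal equations for any subset of features use only the corresponding sub-blocks of X^T X and X^T y, model selection over candidate subsets needs no second pass over the data, which matches the exactness claim above.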
Distributed Denial of Service (DDoS) attacks against high-profile targets have become more frequent in recent years. In response to such massive attacks, several architectures have adopted proxies to introduce layers of indirection between end users and target services, reducing the impact of a DDoS attack by migrating users to new proxies and shuffling clients across proxies so as to isolate malicious clients. However, the reactive nature of these solutions presents weaknesses that we leveraged to develop a new attack, the proxy harvesting attack, which enables malicious clients to collect information about a large number of proxies before launching a DDoS attack. We show that current solutions are vulnerable to this attack, and we propose a moving target defense technique that consists of periodically and proactively replacing one or more proxies and remapping clients to proxies. Our primary goal is to disrupt the attacker's reconnaissance effort. Additionally, to mitigate ongoing attacks, we propose a new client-to-proxy assignment strategy that isolates compromised clients, thereby reducing the impact of attacks. We validate our approach both theoretically and through simulation, and show that the proposed solution can effectively limit the number of proxies an attacker can discover and isolate malicious clients.
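The proactive replacement step might look like the following sketch: periodically retire one proxy, bring up a fresh one at an unannounced address, and remap only the affected clients, bounding how many proxies any single client ever observes. This is our illustration of the idea, not the paper's assignment strategy.

```python
# Minimal sketch of proactive proxy replacement (moving target defense).
import random

proxies = {f"proxy-{i}" for i in range(5)}
assignment = {f"client-{i}": random.choice(sorted(proxies)) for i in range(20)}

def shuffle_round() -> None:
    """Retire one random proxy and remap its clients to a fresh replacement."""
    old = random.choice(sorted(proxies))
    new = f"proxy-{random.getrandbits(32):08x}"   # fresh, unannounced address
    proxies.remove(old)
    proxies.add(new)
    for client, proxy in assignment.items():
        if proxy == old:
            assignment[client] = new              # only affected clients move

for _ in range(3):                                # run periodically in practice
    shuffle_round()
print(sorted(proxies))
```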
In the last twenty years, with the growing use of Internet applications, web hacking activities have increased rapidly. Organizations face significant challenges in securing their web applications against rising cyber threats, as compromising on security is not a reasonable option. Vulnerability Assessment and Penetration Testing (VAPT) techniques help them discover security loopholes, which may otherwise be exploited by attackers to launch attacks on technical assets. It is therefore necessary to identify these vulnerabilities and install security patches. VAPT helps organizations determine whether their security arrangements are working properly. This paper gives an overview of the various techniques used in vulnerability assessment and penetration testing. It also focuses on raising cyber security awareness, and its importance at various levels of an organization, so that the required up-to-date security measures are adopted and the organization stays protected from cyber-attacks.
Online Social Networks (OSNs) continuously suffer from the negative impact of Cross-Site Scripting (XSS) vulnerabilities. This paper describes a novel framework for mitigating XSS attacks on OSN-based platforms, based entirely on a request authentication and view isolation approach. It detects XSS attacks by validating string values extracted from vulnerable checkpoints in the web page, using a string examination algorithm backed by a repository of XSS attack vectors. Any similarity (i.e., a string that fails validation) indicates the presence of malicious code injected by the attacker, and the framework then removes the script code to mitigate the attack. To assess the defensive capability of the designed model, we tested it on an OSN-based web application, HumHub. The experimental results reveal that the model discovers XSS attack vectors with low false negative and false positive rates and tolerable performance overhead.
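The string-examination step can be pictured with the toy sketch below: an extracted value is compared against a small attack-vector repository and injected script content is stripped on a match. The patterns and sanitizer are deliberately simplistic assumptions, far weaker than the paper's repository.

```python
# Toy sketch of checkpoint-string examination against an XSS vector repository.
import re

ATTACK_VECTORS = [                      # tiny illustrative repository
    r"<script\b", r"onerror\s*=", r"javascript:",
]

def examine(value: str) -> bool:
    """True if the extracted string resembles a known attack vector."""
    return any(re.search(p, value, re.IGNORECASE) for p in ATTACK_VECTORS)

def sanitize(value: str) -> str:
    """Remove injected script code when the string fails validation."""
    if examine(value):
        value = re.sub(r"<script\b.*?</script>", "", value,
                       flags=re.IGNORECASE | re.DOTALL)
    return value

print(sanitize("hi <script>alert(1)</script>"))   # -> 'hi '
```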
The Internet routing ecosystem is facing substantial scalability challenges on the data plane. Various “clean slate” architectures for representing forwarding tables (FIBs), such as IPv6, introduce additional constraints on efficient implementations from both lookup time and memory footprint perspectives due to significant classification width. In this work, we propose an abstraction layer able to represent IPv6 FIBs on existing IP and even MPLS infrastructure. Feasibility of the proposed representations is confirmed by an extensive simulation study on real IPv6 forwarding tables, including low-level experimental performance evaluation.
Botnets are emerging as the most serious cyber threat among the different forms of malware. Today, botnets facilitate many cybercriminal activities such as DDoS, click fraud, and phishing attacks. The main purpose of a botnet is to mount massive financial threats: many large organizations, banks, and social networks have become targets of botmasters, and botnets can also be leased out for cybercriminal activities. Recently, considerable research effort has been directed at detecting bots, C&C channels, and botmasters; in response, botmasters strengthen their activities with increasingly sophisticated techniques. Many botnet detection techniques are based on payload analysis, and most of them are ineffective against encrypted C&C channels. In this paper, we explore different categories of botnets and propose a detection methodology that separates bot hosts from normal hosts by analyzing traffic flow characteristics over time intervals instead of inspecting payloads. As a result, botnet activity can be detected even when encrypted C&C channels are used.
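A payload-free, interval-based analysis of the kind described above might start like this sketch: group flow records into fixed time windows per host and derive simple per-window statistics for a classifier. The window length, features, and record format are our assumptions for illustration.

```python
# Minimal sketch of interval-based flow features (no payload inspection).
from collections import defaultdict
from statistics import mean, pstdev

INTERVAL = 300          # seconds per window (assumed)

# (timestamp, src_host, dst_host, bytes) records, e.g. from NetFlow export
flows = [(12, "h1", "c2", 90), (15, "h1", "c2", 91), (400, "h1", "c2", 90)]

windows = defaultdict(list)
for ts, src, dst, size in flows:
    windows[(src, int(ts // INTERVAL))].append(size)

for (host, w), sizes in windows.items():
    # Bots often show many small, highly uniform flows per interval.
    print(host, w, len(sizes), mean(sizes), pstdev(sizes))
```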
Memory forensics has become increasingly important in cyber forensics investigations because malware authors and attackers now favor RAM, rather than the hard disk, for storing critical information. Volatile physical memory contains forensically relevant artifacts such as user credentials, chats, messages, and running processes along with their details, including loaded DLLs, open files, commands, and network connections. Memory forensics involves acquiring a memory dump from the suspect's machine and analyzing it to find crucial evidence with the help of Windows' predefined kernel data structures. Among the artifacts retrievable from these data structures, recovering network connections from a Windows 7 memory dump is particularly challenging, because the data structures that store network connections in earlier versions of Windows are not present in Windows 7. In this paper, a methodology is described for efficiently retrieving the details of network-related activities from a Windows 7 x64 memory dump, including remote and local IP addresses and the associated port information for each running process. This can provide crucial information in cybercrime investigations.
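As context for this task, one common starting point (our illustration, not necessarily the paper's method) is to scan the raw dump for the pool tags that tcpip.sys uses for network objects on Windows 7; tools such as Volatility's netscan plugin take a similar approach before parsing the surrounding allocations into per-process IP and port records.

```python
# Sketch: locate candidate network-object allocations in a raw Win7 x64 dump
# by pool tag. Tag meanings assumed: TcpE = TCP endpoints, TcpL = TCP
# listeners, UdpA = UDP endpoints.
TAGS = [b"TcpE", b"TcpL", b"UdpA"]

def scan_pool_tags(path: str):
    hits = []
    with open(path, "rb") as f:
        data = f.read()                 # fine for a sketch; mmap for real dumps
    for tag in TAGS:
        off = data.find(tag)
        while off != -1:
            hits.append((tag.decode(), off))
            off = data.find(tag, off + 1)
    return hits

# Each hit marks a candidate allocation whose fields (local/remote address,
# port, owning process) would be parsed in a subsequent step.
# print(scan_pool_tags("win7x64.dmp"))   # hypothetical dump file
```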
In the Internet of Things (IoT), users might share part of their data with different IoT prosumers, which offer applications or services. Within this open environment, the presence of an adversary introduces security risks. These can relate, for instance, to the theft of user data, and they vary depending on the security controls each IoT prosumer has put in place. To minimize such risks, users might seek an "optimal" set of prosumers. However, if the adversary has the same information as the users about the existing security measures, he can infer which prosumers will be preferable (e.g., those with the highest security levels) and attack them more intensively. This paper proposes a decision-support approach that minimizes security risks in this scenario. We propose a non-cooperative, two-player game entitled the Prosumers Selection Game (PSG). The Nash Equilibria of PSG determine subsets of prosumers that optimize users' payoffs. We refer to any game solution as the Nash Prosumers Selection (NPS), a vector of probabilities over subsets of prosumers. We show that when using NPS, a user faces the least expected damage. Additionally, we show that under NPS every prosumer, even the least secure one, is selected with some non-zero probability. We have also performed simulations comparing NPS against two heuristic selection algorithms, showing NPS to be approximately 38% more effective in terms of security-risk mitigation.
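To make the equilibrium computation concrete, the sketch below solves a zero-sum abstraction of such a game by linear programming: the user mixes over prosumer subsets (rows), the adversary attacks one prosumer (columns). The 2x2 payoff matrix is invented for illustration and is not the paper's PSG; note that the resulting mixed strategy puts non-zero weight on every row, echoing the observation above.

```python
# Mixed-strategy equilibrium of a zero-sum game via linear programming.
import numpy as np
from scipy.optimize import linprog

A = np.array([[ 2.0, -1.0],      # made-up user payoffs per (subset, target)
              [-1.0,  1.0]])

m, n = A.shape
# Variables: x_1..x_m (user's mixed strategy) and v (game value); maximize v.
c = np.zeros(m + 1)
c[-1] = -1.0                                      # linprog minimizes -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])         # v <= (A^T x)_j for all j
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])                            # probabilities sum to 1
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
x, v = res.x[:m], res.x[-1]
print("mixed strategy:", x, "value:", v)          # here x = [0.4, 0.6], v = 0.2
```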
Video streaming between a sender and a receiver involves multiple unsecured hops where the video data can be illegally copied if the nodes run malicious forwarding logic. This paper introduces a novel method to stream video data through dual channels using dual data paths. The frames' pixels are also scrambled, and the video frames are divided into two frame streams. At the receiver side, the video is reconstructed and played for a limited time period: as soon as a small chunk of the merged video has been played, it is deleted from the video buffer. We have attempted to formalize the approach, and an initial simulation has been carried out in MATLAB. Preliminary results are optimistic, and a refined approach may lead to the formal design of a network-layer routing protocol with corresponding changes at the transport layer.
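The split-and-scramble idea can be sketched as follows (the paper's simulation is in MATLAB; this is only our Python illustration): even frames travel over one path, odd frames over the other, each scrambled by a keyed pixel permutation that the receiver inverts before interleaving and playback. The key, frame sizes, and even/odd split rule are assumptions.

```python
# Sketch: dual-path frame split with keyed pixel scrambling.
import numpy as np

KEY = 1234                                   # shared scrambling key (assumed)

def scramble(frame: np.ndarray) -> np.ndarray:
    perm = np.random.default_rng(KEY).permutation(frame.size)
    return frame.ravel()[perm].reshape(frame.shape)

def unscramble(frame: np.ndarray) -> np.ndarray:
    perm = np.random.default_rng(KEY).permutation(frame.size)
    out = np.empty(frame.size, dtype=frame.dtype)
    out[perm] = frame.ravel()                # invert the permutation
    return out.reshape(frame.shape)

frames = [np.arange(16, dtype=np.uint8).reshape(4, 4) + i for i in range(6)]
path_a = [scramble(f) for f in frames[0::2]]     # even frames over channel A
path_b = [scramble(f) for f in frames[1::2]]     # odd frames over channel B

# Receiver: unscramble both streams, interleave, play, then drop each chunk
# from the buffer immediately after playback.
merged = [None] * len(frames)
merged[0::2] = [unscramble(f) for f in path_a]
merged[1::2] = [unscramble(f) for f in path_b]
assert all((m == f).all() for m, f in zip(merged, frames))
```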
Security in mobile handsets of telecommunication standards such as GSM, Project 25, and TETRA is very important, especially when governments and military forces use the handsets and telecommunication devices. Although telecommunication can be made quite secure through encryption, coding, tunneling, and exclusive channels, attackers keep creating new ways to bypass these protections without the knowledge of the legitimate user. In this paper we introduce a new, simple, and economical circuit that warns the user when a message is not encrypted, whether because of manipulation by attackers or accidental damage. The circuit not only consumes very little power but is also designed to keep telecommunication devices secure and user-friendly. Warning the user enables proper use of telecommunication devices without wasting time and energy on fault detection.
The significant dependence on cyberspace has indeed brought new risks that often compromise, exploit, and damage invaluable data and systems. Thus, the capability to proactively infer malicious activities is of paramount importance, and inferring probing events, which are commonly the first stage of any cyber attack, is a promising tactic for achieving it. For the past three years, we have been receiving 12 GB of real malicious darknet data daily (i.e., Internet traffic destined to half a million routable yet unallocated IP addresses) from more than 12 countries. This paper exploits such data to propose a novel approach that aims at capturing the behavior of probing sources in order to infer their orchestration (i.e., coordination) pattern. The latter defines a recently discovered characteristic of a new phenomenon of probing events that could be ominously leveraged, as a precursor of various cyber attacks, to cause drastic Internet-wide and enterprise impacts. To accomplish its goals, the proposed approach leverages various signal and statistical techniques, information-theoretic metrics, fuzzy approaches with real malware traffic, and data mining methods. The approach is validated through a use case which argues that a previously analyzed orchestrated probing event from last year is indeed still active, yet operating in a stealthy, very low-rate mode. We envision that the proposed approach, which is tailored towards darknet data that are frequently, abundantly, and effectively used to generate cyber threat intelligence, could be used by network security analysts, emergency response teams, and observers of cyber events to infer large-scale orchestrated probing events for early cyber attack warning and notification.
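One information-theoretic coordination signal of the kind hinted at above could be sketched as follows (our illustration only; the paper's machinery is far richer): probing sources whose distributions of targeted ports are unusually similar may belong to the same orchestrated event.

```python
# Sketch: flag potentially coordinated probing sources by comparing their
# targeted-port distributions with the Jensen-Shannon distance.
from collections import Counter
from scipy.spatial.distance import jensenshannon

def port_dist(ports, support):
    c = Counter(ports)
    return [c.get(p, 0) / len(ports) for p in support]

obs_a = [22, 23, 80, 443, 23, 23]       # made-up observations per source
obs_b = [22, 23, 80, 443, 23, 80]
obs_c = [5060, 5061, 1900, 1900]
support = sorted(set(obs_a) | set(obs_b) | set(obs_c))

pa, pb, pc = (port_dist(o, support) for o in (obs_a, obs_b, obs_c))
print(jensenshannon(pa, pb))            # small distance -> likely coordinated
print(jensenshannon(pa, pc))            # large distance -> likely unrelated
```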
With the spectacular increase in online activities such as e-transactions, security and privacy issues have never been more significant. Large numbers of database security breaches occur at a very high rate on a daily basis. There is therefore a crucial need in the field of database forensics to make several redundant copies of sensitive data found in database server artifacts, audit logs, caches, table storage, etc., for analysis purposes. A large volume of metadata is available in the database infrastructure for investigation, but most of the effort lies in retrieving and analyzing that information from computing systems. In this paper we therefore focus on the significance of metadata in database forensics. We propose a system that performs forensic analysis of a database by generating its metadata file independently of the DBMS in use. We also aim to generate digital evidence against criminals, suitable for presentation in a court of law, answering who, when, why, what, how, and where a fraudulent transaction occurred. We thus present a system that detects major database attacks as well as anti-forensic attacks, implemented as an open-source database forensics tool. Finally, we point out the challenges in the field of forensics and how these challenges can serve as opportunities to stimulate the area of database forensics.
A One-Time Password (OTP) is a fixed-length string used only once to perform authentication in electronic media. In this paper, methods for producing OTPs based on hash functions are investigated. The Keccak digest algorithm is used to produce the OTPs; it was selected as the latest hash algorithm standard (SHA-3) by the National Institute of Standards and Technology in October 2012, and it is preferred here because it is faster and safer than the alternatives. Hash-based OTP production follows the Hash-Based Message Authentication Code (HMAC) construction, in which a key is combined with the hash function to generate the HMAC value, and the OTP is derived from this value. In this application, the OTP length is set to eight characters to be practical in use.
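A minimal sketch of this HMAC-based OTP construction follows, using Python's standardized SHA3-256 in place of the paper's exact Keccak parameters; the key, the counter-as-message format, and the hex truncation rule are our assumptions.

```python
# Sketch: 8-character OTP derived from an HMAC over a counter.
import hmac
import hashlib

def otp(key: bytes, counter: int, length: int = 8) -> str:
    msg = counter.to_bytes(8, "big")                    # moving factor
    mac = hmac.new(key, msg, hashlib.sha3_256).hexdigest()
    return mac[:length]                                 # truncate to 8 chars

print(otp(b"shared-secret", 1))                         # 8 hex characters
print(otp(b"shared-secret", 2))                         # differs every use
```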
Hashing algorithms are used extensively in information security and digital forensics applications. This paper presents an efficient parallel algorithm for hash computation. It is a modification of the SHA-1 algorithm designed for faster parallel implementation in applications such as digital signatures and data preservation in digital forensics. The algorithm implements a recursive hash to break the chaining dependencies of the standard hash function. We discuss the theoretical foundation for the work, including the collision probability and the performance implications. The algorithm is implemented using the OpenMP API, and experiments were performed on machines with multicore processors. The results show a performance gain of more than a factor of 3 when running on an 8-core configuration.
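The paper parallelizes a modified SHA-1 in C with OpenMP; the sketch below conveys only the recursive-hash idea in Python: hash fixed-size blocks independently (removing the chaining dependency), then hash the concatenated block digests into one final digest. The block size is an assumption, and the resulting digest intentionally differs from standard SHA-1.

```python
# Sketch: recursive (tree-style) hashing to break SHA-1's chain dependency.
import hashlib
from concurrent.futures import ThreadPoolExecutor

BLOCK = 1 << 20                               # 1 MiB blocks (assumed size)

def recursive_sha1(data: bytes, workers: int = 8) -> str:
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    # hashlib releases the GIL on large buffers, so threads hash in parallel.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        digests = list(pool.map(lambda b: hashlib.sha1(b).digest(), blocks))
    return hashlib.sha1(b"".join(digests)).hexdigest()   # hash of hashes

print(recursive_sha1(b"\x00" * (8 * BLOCK)))
```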
The migration from a vertical to horizontal business model has made it easier to introduce hardware Trojans and counterfeit electronic parts into the electronic component supply chain. Hardware Trojans are malicious modifications made to original IC designs that reduce system integrity (change functionality, leak private data, etc.). Counterfeit parts are often below specification and/or of substandard quality. The existence of Trojans and counterfeit parts creates risks for the life-critical systems and infrastructures that incorporate them including automotive, aerospace, military, and medical systems. In this tutorial, we will cover: (i) Background and motivation for hardware Trojan and counterfeit prevention/detection; (ii) Taxonomies related to both topics; (iii) Existing solutions; (iv) Open challenges; (v) New and unified solutions to address these challenges.
Modular exponentiation is an important operation for cryptographic transformations in public key cryptosystems such as the Rivest-Shamir-Adleman, Diffie-Hellman, and ElGamal schemes. Computing a^x mod n and a^x b^y mod n for very large x, y, and n is fundamental to the efficiency of almost all public key cryptosystems and digital signature schemes. To achieve a high level of security, the word length in these modular exponentiations should be significantly large. The performance of public key cryptography is primarily determined by the implementation efficiency of modular multiplication and exponentiation; as the operands are usually large, minimizing the number of modular multiplications is essential to optimizing the time these operations take. In this paper we present efficient algorithms for computing a^x mod n and a^x b^y mod n. We propose four algorithms in total: Bit Forwarding (BFW) algorithms to compute a^x mod n, and two algorithms, Substitute and Reward (SRW) and Store and Forward (SFW), to compute a^x b^y mod n. All the proposed algorithms are efficient in terms of time while demanding only minimal additional space to store precomputed values, making them suitable for devices with low computational power and limited storage.
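The paper's BFW, SRW, and SFW algorithms are not reproduced here; as a baseline for comparison, this sketch shows the standard left-to-right square-and-multiply method for a^x mod n and Shamir's simultaneous-exponentiation trick for a^x b^y mod n, which processes both exponents in one pass using a single precomputed product.

```python
# Baseline modular exponentiation: square-and-multiply, and Shamir's trick
# for the double exponentiation a^x * b^y mod n.

def power_mod(a: int, x: int, n: int) -> int:
    r = 1
    for bit in bin(x)[2:]:                 # scan exponent bits left to right
        r = (r * r) % n                    # square every step
        if bit == "1":
            r = (r * a) % n                # multiply when the bit is set
    return r

def double_power_mod(a: int, x: int, b: int, y: int, n: int) -> int:
    ab = (a * b) % n                       # single precomputed value
    r, bits = 1, max(x.bit_length(), y.bit_length())
    for i in range(bits - 1, -1, -1):      # one shared squaring chain
        r = (r * r) % n
        xi, yi = (x >> i) & 1, (y >> i) & 1
        if xi and yi:
            r = (r * ab) % n
        elif xi:
            r = (r * a) % n
        elif yi:
            r = (r * b) % n
    return r

assert power_mod(7, 123, 1009) == pow(7, 123, 1009)
assert double_power_mod(7, 123, 11, 456, 1009) == \
       (pow(7, 123, 1009) * pow(11, 456, 1009)) % 1009
```

Shamir's trick needs roughly max(|x|, |y|) squarings instead of |x| + |y|, at the cost of one stored precomputed value, which is the same time/space trade-off the proposed algorithms aim to improve upon.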
Routers in the Content-Centric Networking (CCN) architecture maintain state for all pending content requests, so as to be able to later return the corresponding content. By employing stateful forwarding, CCN supports native multicast, enhances security and enables adaptive forwarding, at the cost of excessive forwarding state that raises scalability concerns. We propose a semi-stateless forwarding scheme in which, instead of tracking each request at every on-path router, requests are tracked at every d hops. At intermediate hops, requests gather reverse path information, which is later used to deliver responses between routers using Bloom filter-based stateless forwarding. Our approach effectively reduces forwarding state, while preserving the advantages of CCN forwarding. Evaluation results over realistic ISP topologies show that our approach reduces forwarding state by 54%-70% in unicast delivery, without any bandwidth penalties, while in multicast delivery it reduces forwarding state by 34%-55% at the expense of 6%-13% in bandwidth overhead.
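The Bloom-filter leg of the scheme can be pictured with the sketch below (illustrative only; the filter size, hash count, and link-ID encoding are our assumptions): routers on the request path insert their link IDs into a fixed-size filter carried by the request, and responses are then forwarded statelessly over any link whose ID tests positive.

```python
# Sketch: Bloom filter over reverse-path link IDs for stateless forwarding.
import hashlib

M, K = 256, 3                                # filter bits, hash count (assumed)

def _positions(link_id: str):
    for k in range(K):
        h = hashlib.sha256(f"{k}:{link_id}".encode()).digest()
        yield int.from_bytes(h[:4], "big") % M

def insert(filt: int, link_id: str) -> int:
    for p in _positions(link_id):
        filt |= 1 << p
    return filt

def member(filt: int, link_id: str) -> bool:
    return all(filt >> p & 1 for p in _positions(link_id))

filt = 0
for hop in ["r1->r2", "r2->r3"]:             # gathered on the request path
    filt = insert(filt, hop)
print(member(filt, "r2->r3"), member(filt, "r9->r4"))  # True, (likely) False
```

Bloom filters admit false positives but never false negatives, which is why they can replace per-request state for delivery while only occasionally forwarding a response over an extra link; this is consistent with the bandwidth-versus-state trade-off reported above.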
Rapid advances in wireless ad hoc networks have led to an increase in their real-life applications. Since wireless ad hoc networks have no centralized infrastructure or management, they are vulnerable to several security threats. Malicious packet dropping is a serious attack against these networks, in which an adversary node drops all or part of the received packets instead of forwarding them to the next hop along the path. A dangerous variant of this attack is the black hole attack: the malicious node first attracts network traffic and then drops every packet it receives, forming a denial-of-service (DoS) attack. In this paper, a dynamic trust model to defend the network against this attack is proposed. In this approach, a node initially trusts all of its immediate neighbors and then updates the corresponding trust values based on feedback about the neighbors' behavior. Simulation results in NS-2 show that the attack is detected successfully with a low false positive probability.
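A minimal sketch of such a dynamic trust update follows: every neighbor starts fully trusted, trust decays on observed drops and recovers on observed forwards, and any node falling below a threshold is flagged as a suspected black hole. The rates and threshold are assumed values, not the paper's parameters.

```python
# Sketch: feedback-driven trust updates for black hole detection.
ALPHA, BETA, THRESHOLD = 0.05, 0.20, 0.3   # recovery rate, decay rate, cutoff

trust = {"n1": 1.0, "n2": 1.0}             # initial full trust in neighbors

def observe(neighbor: str, forwarded: bool) -> None:
    """Raise trust on a successful forward, lower it on a drop."""
    t = trust[neighbor]
    trust[neighbor] = min(1.0, t + ALPHA) if forwarded else max(0.0, t - BETA)

for _ in range(5):
    observe("n2", forwarded=False)          # n2 keeps dropping packets

suspects = [n for n, t in trust.items() if t < THRESHOLD]
print(trust, suspects)                      # n2 flagged as a black hole suspect
```

Making the penalty BETA larger than the reward ALPHA means a node must forward many packets to recover from a single observed drop, a common design choice against on/off misbehavior.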
Mobile ad hoc networks are dynamic, self-organizing wireless networks in which many mobile nodes are loosely connected to one another. Compared with traditional networks, they suffer failures that prevent the system from working properly. In addition, many security issues must be addressed, such as unauthorized access attempts, security threats, and reliability. Using mobile agents in ad hoc networks with low-level fault tolerance provides fault masking that users never notice. Mobile agent migration among nodes, autonomous selection of alternative paths, and high-level fault tolerance make networks with low bandwidth and high failure ratios more reliable. In this paper, we describe the fault tolerance properties of mobile agents and existing mobile-agent-based fault tolerance methods. For ad hoc networks that require security precautions beyond fault tolerance, we present a new model: the Secure Mobile Agent Based Fault Tolerance Model.
Cloud computing is emerging as a new paradigm that aims at delivering computing as a utility. For the cloud computing paradigm to be fully adopted and effectively used, it is critical that its security mechanisms be robust and resilient to faults and attacks. Securing cloud systems is extremely complex due to the many interdependent tasks involved, such as application-layer firewalls, alert monitoring and analysis, source code analysis, and user identity management. It is widely believed that we cannot build cloud services that are immune to attacks, so resiliency to attacks is becoming an important approach for addressing cyber-attacks and mitigating their impact; for mission-critical systems, the resiliency demands are even higher. In this paper, we present a methodology for developing Autonomic Resilient Cloud Management (ARCM) based on moving target defense, cloud service Behavior Obfuscation (BO), and autonomic computing. By continuously and randomly changing the cloud execution environments and platform types, it becomes difficult, especially for insider attackers, to determine the current execution environment and its existing vulnerabilities, thus allowing the system to evade attacks. We show how to apply ARCM to one class of applications, Map/Reduce, and evaluate its performance and overhead.
We are currently living in the age of Big Data, which comes with the challenge of grasping the golden opportunities at hand. This mixed blessing also dominates the relation between Big Data and trust. On one side, large amounts of trust-related data can be utilized to establish innovative data-driven approaches for reputation-based trust management. On the other side, this is intrinsically tied to the trust we can put in the origins and quality of the underlying data. In this paper, we address both sides of trust and Big Data by structuring the problem domain and presenting current research directions and interdependencies. Based on this, we define focal issues that serve as future research directions on the track toward our vision of Next Generation Online Trust within the FORSEC project.