Biblio
When implemented on real systems, cryptographic algorithms are vulnerable to attacks that observe their execution behavior, such as cache-timing attacks. Protected implementations must be designed with appropriate knowledge and validation tools as early as possible in the development cycle. In this article we propose a methodology to assess the robustness of the candidates in the NIST post-quantum standardization project against cache-timing attacks. To this end we have developed a dedicated vulnerability research tool. It performs a static analysis with taint propagation of sensitive variables across the source code and detects leakage patterns. We use it to assess the security of the NIST post-quantum cryptography project submissions. Our results show that more than 80% of the analyzed implementations have at least one potential flaw, and three submissions total more than 1000 reported flaws each. Finally, this comprehensive study of the candidates' security allows us to identify the most frequent weaknesses among candidates and how they might be fixed.
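To make the detection idea concrete, the toy sketch below (Python, over a hypothetical three-address IR rather than the C sources such a tool actually analyzes) shows how taint propagates from a secret key and how the two classic cache-timing leakage patterns, secret-dependent branches and secret-indexed table lookups, get flagged; this is a minimal illustration, not the authors' tool.

```python
SECRETS = {"key"}

# Each statement: (kind, target, sources). "branch" and "index" model the
# two classic cache-timing leakage patterns: secret-dependent control flow
# and secret-dependent memory accesses (e.g. S-box lookups).
program = [
    ("assign", "t0", ["key"]),   # t0 = key & 0xff
    ("assign", "t1", ["msg"]),   # t1 = msg[0]
    ("index",  "t2", ["t0"]),    # t2 = sbox[t0]  <- secret table lookup
    ("branch", None, ["t1"]),    # if (t1) ...    <- public, fine
    ("branch", None, ["t0"]),    # if (t0) ...    <- secret branch
]

tainted = set(SECRETS)
for kind, target, sources in program:
    if kind == "assign":
        if any(s in tainted for s in sources):
            tainted.add(target)          # taint flows through assignment
    elif kind in ("index", "branch"):
        if any(s in tainted for s in sources):
            print(f"potential leak: secret-dependent {kind} on {sources}")
        if kind == "index" and target:
            tainted.add(target)          # a secret-indexed load is secret too
```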
With the rise in worth and popularity of cryptocurrencies, a new opportunity for criminal gain is being exploited, and with little currently offered in the way of defence. The cost of mining (i.e., earning cryptocurrency through CPU-intensive calculations that underpin the blockchain technology) can be prohibitively expensive, with hardware costs and electrical overheads often exceeding the cryptocurrency gained. Off-loading these costs onto a distributed network of machines via malware offers an instantly profitable scenario, though standard anti-virus (AV) products offer some defences against file-based threats. However, newer fileless malicious attacks, occurring through the browser on seemingly legitimate websites, can easily evade detection and surreptitiously engage the victim machine in computationally expensive cryptomining (cryptojacking). With no current academic literature on the dynamic opcode analysis of cryptomining, to the best of our knowledge, we present the first such experimental study. Indeed, this is the first such work presenting opcode analysis on non-executable files. Our results show that browser-based cryptomining within our dataset can be detected by dynamic opcode analysis, with accuracies of up to 100%. Further to this, our model can distinguish between cryptomining sites, weaponized benign sites, de-weaponized cryptomining sites and real-world benign sites. As it is process-based, our technique offers an opportunity to rapidly detect, prevent and mitigate such attacks, a novel contribution which should encourage further future work.
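As a rough illustration of what dynamic opcode analysis consumes, the sketch below (hypothetical traces and feature choice, not the paper's exact pipeline) extracts opcode n-gram counts from a runtime trace; such count vectors can then be fed to any standard classifier.

```python
from collections import Counter

def opcode_ngrams(trace, n=2):
    """Count opcode n-grams over a dynamically captured trace."""
    return Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))

# Hypothetical traces recorded while a page executes in the browser:
# tight hash loops dominate mining; call/return patterns dominate benign code.
mining_trace = ["mov", "xor", "rol", "xor", "rol", "xor", "add"]
benign_trace = ["mov", "push", "call", "pop", "mov", "ret"]

print(opcode_ngrams(mining_trace).most_common(3))
print(opcode_ngrams(benign_trace).most_common(3))
```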
Nowadays, cloud computing has brought remarkable change to companies, organizations, firms, institutions and so on. The IT industry benefits from the growth of cloud computing through low investment in infrastructure and maintenance. Virtualization is regarded as the key enabling technique of cloud computing. Although cloud computing has many benefits, the main drawback of the cloud computing environment is ensuring security. Security here means that the cloud service provider must ensure integrity, availability, privacy, confidentiality, authentication and authorization in data storage, virtual machine security, and so on. In this paper, we present a Local Outlier Factor (LOF) mechanism that can help detect Distributed Denial of Service (DDoS) attacks in a cloud computing environment. Since a DDoS attack grows stronger with the passage of time, its impact can be reduced if it is detected early, so we focus on detecting DDoS attacks to secure the cloud environment. In addition, our scheme is able to identify their possible sources, giving important clues for cloud computing administrators to spot the outliers. Using WEKA (Waikato Environment for Knowledge Analysis), we have compared our scheme with other clustering algorithms in terms of detection rate and false alarm rate. DR-LOF would serve as a better DDoS detection tool, helping to improve the security framework in cloud computing.
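A minimal sketch of the detection step, using scikit-learn's LocalOutlierFactor; the per-source traffic features and parameters here are illustrative assumptions, not the paper's.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Hypothetical per-source traffic features: (packets/sec, mean packet size).
rng = np.random.default_rng(0)
normal = rng.normal(loc=[100, 500], scale=[10, 50], size=(200, 2))
attack = rng.normal(loc=[5000, 60], scale=[300, 5], size=(5, 2))
X = np.vstack([normal, attack])

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)                  # -1 marks outliers
print("flagged sources:", np.where(labels == -1)[0])
```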
Microsoft's PowerShell is a command-line shell and scripting language that is installed by default on Windows machines. Based on Microsoft's .NET framework, it includes an interface that allows programmers to access operating system services. While PowerShell can be configured by administrators for restricting access and reducing vulnerabilities, these restrictions can be bypassed. Moreover, PowerShell commands can be easily generated dynamically, executed from memory, encoded and obfuscated, thus making the logging and forensic analysis of code executed by PowerShell challenging. For all these reasons, PowerShell is increasingly used by cybercriminals as part of their attacks' tool chain, mainly for downloading malicious contents and for lateral movement. Indeed, a recent comprehensive technical report by Symantec dedicated to PowerShell's abuse by cybercriminals [52] reported on a sharp increase in the number of malicious PowerShell samples they received and in the number of penetration tools and frameworks that use PowerShell. This highlights the urgent need to develop effective methods for detecting malicious PowerShell commands. In this work, we address this challenge by implementing several novel detectors of malicious PowerShell commands and evaluating their performance. We implemented both "traditional" natural language processing (NLP) based detectors and detectors based on character-level convolutional neural networks (CNNs). Detectors' performance was evaluated using a large real-world dataset. Our evaluation results show that, although our detectors (and especially the traditional NLP-based ones) individually yield high performance, an ensemble detector that combines an NLP-based classifier with a CNN-based classifier provides the best performance, since the latter classifier is able to detect malicious commands that succeed in evading the former. Our analysis of these evasive commands reveals that some obfuscation patterns automatically detected by the CNN classifier are intrinsically difficult to detect using the NLP techniques we applied. Our detectors provide high recall values while maintaining a very low false positive rate, making us cautiously optimistic that they can be of practical value.
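For illustration, a minimal character-level CNN detector might look like the following Keras sketch; the layer sizes, vocabulary, and encoding are assumptions, not the architecture evaluated in the paper.

```python
import tensorflow as tf

VOCAB, MAXLEN = 128, 1024        # ASCII code points; pad/truncate commands

# Character-level CNN: embed raw characters, convolve, pool, classify.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAXLEN,)),
    tf.keras.layers.Embedding(VOCAB, 32),
    tf.keras.layers.Conv1D(128, kernel_size=7, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(malicious)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

def encode(cmd):
    """Map a PowerShell command to fixed-length character IDs."""
    ids = [min(ord(c), VOCAB - 1) for c in cmd[:MAXLEN]]
    return ids + [0] * (MAXLEN - len(ids))
```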
Mobile Ad Hoc Networks are dynamic in nature and, by their very definition, have no rigid or reliable network infrastructure. They are expected to be self-governed and have dynamic wireless links which are not entirely reliable in terms of connectivity and security. Several factors can cause their degradation, such as attacks by malicious and selfish nodes, which result in data-carrying packets being dropped and, in turn, can cause breaks in communication between nodes in the network. This paper aims to address the issue of remedying and mitigating the damage caused by packet drops. We propose an improvement on the EAACK protocol that reduces network overhead and improves packet delivery ratio by using a hybrid of two cryptographic techniques: DES, for its efficiency in block encryption, and RSA, for its key management. Compared to existing approaches, our simulated results show that hybrid cryptography techniques provide higher malicious-behavior detection rates and improve performance. This research can also lead to future efforts in using hybrid-encryption-based authentication techniques for attack detection and prevention in MANETs.
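The hybrid idea, bulk encryption under a fast symmetric cipher with the symmetric key wrapped under RSA, can be sketched with PyCryptodome as follows; this is illustrative only (the paper's protocol details differ, and DES appears solely because it is the cipher named above; it is not secure by modern standards).

```python
# pip install pycryptodome
from Crypto.Cipher import DES, PKCS1_OAEP
from Crypto.PublicKey import RSA
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

recipient = RSA.generate(2048)

# Sender: bulk-encrypt the payload with DES, wrap the DES key with RSA.
des_key = get_random_bytes(8)
cipher = DES.new(des_key, DES.MODE_CBC)
ct = cipher.iv + cipher.encrypt(pad(b"ACK packet payload", DES.block_size))
wrapped_key = PKCS1_OAEP.new(recipient.publickey()).encrypt(des_key)

# Receiver: unwrap the key with the RSA private key, then decrypt.
key = PKCS1_OAEP.new(recipient).decrypt(wrapped_key)
pt = unpad(DES.new(key, DES.MODE_CBC, iv=ct[:8]).decrypt(ct[8:]),
           DES.block_size)
print(pt)
```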
This research proposes a system for detecting known and unknown Distributed Denial of Service (DDoS) attacks. The proposed system applies two different intrusion detection approaches: an anomaly-based approach using distributed artificial neural networks (ANNs) and a signature-based approach. The Amazon public cloud was used to run Spark as the fast cluster engine with varying numbers of machine cores. The experimental results show that the combined system achieves higher detection accuracy and detection rate than either the signature-based or the neural-network-based approach alone.
The report presents the results of investigations into the effects of hybrid information threats, delivered through cyberspace, on social, technical and socio-technical systems. The composition of a system for the early and efficient detection of such hybrid threats is suggested. The results of the structural and parametric synthesis of the system are described. Recommendations related to the system's implementation are given.
The number of Internet users is increasing day by day, and demand for web services and for mobile and desktop web applications is increasing with it. The chances of a system being hacked are also increasing. All web applications maintain data in a backend database from which results are retrieved. Because web applications can be accessed from anywhere in the world, they must be available to all their users at all times. The SQL injection attack is nowadays one of the topmost threats to the security of web applications; using SQL injection, attackers can steal confidential information. In this paper, a SQL injection attack detection method based on removing the parameter values of the SQL query is discussed and results are presented.
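A minimal sketch of the value-removal idea: strip the literal values from an incoming query and compare the remaining structure against the structure of the query the application is expected to issue. The regexes below are simplified assumptions, not the paper's full normalization.

```python
import re

def strip_values(query):
    """Remove parameter values so only the query's structure remains."""
    q = re.sub(r"'(?:[^']|'')*'", "?", query)   # quoted string literals
    q = re.sub(r"\b\d+\b", "?", q)              # numeric literals
    return re.sub(r"\s+", " ", q).strip().lower()

expected = strip_values("SELECT * FROM users WHERE name='x' AND pw='y'")
incoming = "SELECT * FROM users WHERE name='x' AND pw='' OR '1'='1'"

# The tautology survives value removal as extra structure: "or ?=?".
if strip_values(incoming) != expected:
    print("possible SQL injection:", strip_values(incoming))
```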
The evolution of the microelectronics manufacturing industry is characterized by increased complexity, analysis, integration, distribution, data sharing and collaboration, all of which are enabled by the big data explosion. This evolution affords a number of opportunities for improved productivity, improved quality and reduced cost; however, it also brings with it a number of risks associated with maintaining the security of data systems. The International Roadmap for Devices and Systems Factory Integration International Focus Team (IRDS FI IFT) determined that a security technology roadmap for the industry is needed to better understand the needs, challenges and potential solutions for security in the microelectronics industry and its supply chain. As a first step in providing this roadmap, the IFT conducted a security survey, soliciting input from users, suppliers and OEMs. Preliminary results indicate that data partitioning with IP protection is the number one topic of concern, with the need for industry-wide standards as the second most important topic. Further, the "fear" of a security breach is considered to be a significant hindrance to Advanced Process Control efforts as well as to the use of cloud-based solutions. The IRDS FI IFT will endeavor to provide components of a security roadmap for the industry in the 2018 FI chapter, leveraging the output of the survey effort combined with follow-up discussions with users and consultations with experts.
With the transition from IPv4 to IPv6 to improve network communications, there are concerns about the security of devices and applications that must be dealt with at the beginning of implementation or during its lifecycle. Automating the vulnerability assessment process reduces management overhead, enabling better management of risks and control of vulnerabilities. Consequently, it reduces the effort needed for each test and allows tests to be applied more frequently, improving time management for all the other complicated tasks necessary to support a secure network. Several researchers are involved in testing vulnerabilities in IPv6 networks, exploiting addressing mechanisms, extension headers, fragmentation, tunnelling or dual-stack networks (using both IPv4 and IPv6 at the same time). Most existing tools use the programming languages C, Java, and Python instead of a language designed specifically to create suites of tests, which reduces the maintainability and extensibility of the tests. This paper presents a solution for IPv6 vulnerability scan tests, based on attack simulations, combining passive analysis (observing the manifestation of behaviours of the system under test) with active analysis (stimulating the system to become symptomatic). It also describes a prototype that simulates and detects denial-of-service attacks on the ICMPv6 protocol of IPv6. In addition, a detailed report is created with the identified vulnerabilities and the possible existing solutions to mitigate each gap, thus assisting the process of vulnerability management.
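As a rough sketch of the detection side, the following Scapy snippet (hypothetical threshold and window; requires root privileges to sniff) flags a source sending an unusually high rate of ICMPv6 Echo Requests, the kind of flood the prototype simulates and detects.

```python
# pip install scapy
from collections import defaultdict
from scapy.all import sniff, IPv6, ICMPv6EchoRequest

THRESHOLD = 100                    # echo requests per source per window
counts = defaultdict(int)

def inspect(pkt):
    """Count ICMPv6 Echo Requests per source and flag heavy senders."""
    if pkt.haslayer(ICMPv6EchoRequest):
        src = pkt[IPv6].src
        counts[src] += 1
        if counts[src] == THRESHOLD:
            print(f"possible ICMPv6 flood from {src}")

sniff(filter="icmp6", prn=inspect, timeout=10)
```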
DNA synthesis has become increasingly common, and many synthetic DNA molecules are licensed intellectual property (IP). DNA samples are shared between academic labs, ordered from DNA synthesis companies and manipulated for a variety of different purposes, mostly to study their properties and improve upon them. However, it is not uncommon for a sample to change hands many times with very little accompanying information and no proof of origin. This poses significant challenges to the original inventor of a DNA molecule, trying to protect her IP rights. More importantly, following the anthrax attacks of 2001, there is an increased urgency to employ microbial forensic technologies to trace and track agent inventories. However, attribution of physical samples is next to impossible with existing technologies. In this paper, we describe our efforts to solve this problem by embedding digital signatures in DNA molecules synthesized in the laboratory. We encounter several challenges that we do not face in the digital world. These challenges arise primarily from the fact that changes to a physical DNA molecule can affect its properties, random mutations can accumulate in DNA samples over time, DNA sequencers can sequence (read) DNA erroneously and DNA sequencing is still relatively expensive (which means that laboratories would prefer not to read and re-read their DNA samples to get error-free sequences). We address these challenges and present a digital signature technology that can be applied to synthetic DNA molecules in living cells.
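A heavily simplified sketch of the signing step, ignoring the mutation and sequencing-error challenges the paper actually addresses: sign the functional sequence and encode the signature bytes as additional bases to be synthesized into a non-coding region. Ed25519 and the 2-bits-per-base encoding are assumptions for illustration.

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

BASES = "ACGT"

def bytes_to_dna(data):
    """Encode bytes as DNA bases, 2 bits per base (4 bases per byte)."""
    return "".join(BASES[(b >> s) & 3] for b in data for s in (6, 4, 2, 0))

sk = Ed25519PrivateKey.generate()
sequence = "ATGGCGATTACCGGT"              # functional part of the molecule
sig = sk.sign(sequence.encode())          # 64-byte Ed25519 signature
tagged = sequence + bytes_to_dna(sig)     # signature adds 256 bases
print(tagged)
```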
Threshold cryptography provides a mechanism for protecting secret keys by sharing them among multiple parties, who then jointly perform cryptographic operations. An attacker who corrupts up to a threshold number of parties cannot recover the secrets or violate security. Prior works in this space have mostly focused on definitions and constructions for public-key cryptography and digital signatures, and thus do not capture the security concerns and efficiency challenges of symmetric-key based applications, which commonly use long-term (unprotected) master keys to protect data at rest, authenticate clients on enterprise networks, and secure data and payments on IoT devices. We put forth the first formal treatment for distributed symmetric-key encryption, proposing new notions of correctness, privacy and authenticity in the presence of malicious attackers. We provide strong and intuitive game-based definitions that are easy to understand and yield efficient constructions. We propose a generic construction of threshold authenticated encryption based on any distributed pseudorandom function (DPRF). When instantiated with the two different DPRF constructions proposed by Naor, Pinkas and Reingold (Eurocrypt 1999) and our enhanced versions, we obtain several efficient constructions meeting different security definitions. We implement these variants and provide extensive performance comparisons. Our most efficient instantiation uses only symmetric-key primitives and achieves a throughput of up to 1 million encryptions/decryptions per second, or alternatively a sub-millisecond latency with up to 18 participating parties.
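To illustrate the DPRF idea in its simplest (n-out-of-n, non-threshold) form: each party evaluates a PRF under its own key share, and the combined output, which no strict subset of parties can compute alone, serves as an encryption key. This is a sketch only; the paper builds t-out-of-n constructions and full authenticated encryption on top.

```python
import hmac, hashlib, secrets

SHARES = [secrets.token_bytes(32) for _ in range(5)]   # one key share per party

def prf_share(share, x):
    """A single party's PRF evaluation; no party holds a master key."""
    return hmac.new(share, x, hashlib.sha256).digest()

def dprf(x):
    """XOR of all parties' evaluations = the distributed PRF output."""
    out = bytes(32)
    for share in SHARES:
        out = bytes(a ^ b for a, b in zip(out, prf_share(share, x)))
    return out

# The DPRF output for a record identifier serves as that record's key.
key = dprf(b"invoice-42")
print(key.hex())
```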
In recent years, cyber attacks have caused substantial financial losses and been able to stop fundamental public services. Among the serious attacks, the Advanced Persistent Threat (APT) has emerged as a major challenge to cyber security, hitting selected companies and organisations. The main objectives of an APT are data exfiltration and intelligence appropriation. As part of the APT life cycle, an attacker creates a Point of Entry (PoE) to the target network. This is usually achieved by installing malware on the targeted machine to leave a back door open for future access. A common technique employed to breach into the network, which involves the use of social engineering, is the spear-phishing email. These phishing emails may contain disguised executable files. This paper presents the disguised executable file detection (DeFD) module, which aims at detecting disguised executable files transferred over network connections. The detection is based on a comparison between the MIME type of the transferred file and the file-name extension. The module was experimentally evaluated and the results show successful detection of disguised executable files.
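A minimal sketch of the comparison logic: check a file's leading magic bytes against its claimed extension. The magic table and extension list are illustrative assumptions; DeFD performs this kind of check on files reassembled from network traffic.

```python
import os

EXECUTABLE_MAGIC = {b"MZ": "Windows PE executable",
                    b"\x7fELF": "ELF executable"}
HARMLESS_EXTS = {".pdf", ".docx", ".jpg", ".png", ".txt"}

def is_disguised_executable(filename, head):
    """Flag files whose content is executable but whose name claims otherwise."""
    ext = os.path.splitext(filename)[1].lower()
    for magic, kind in EXECUTABLE_MAGIC.items():
        if head.startswith(magic) and ext in HARMLESS_EXTS:
            return f"{filename}: {kind} disguised as {ext}"
    return None

print(is_disguised_executable("report.pdf", b"MZ\x90\x00\x03"))
```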
A distributed denial of service (DDoS) attack is a serious cyberattack that exhausts a target machine's processing capacity by sending a huge number of packets from hijacked machines. To minimize resource consumption caused by DDoS attacks, filtering attack packets at source machines is the best approach. Although many studies have explored the detection of DDoS attacks, few studies have proposed DDoS attack prevention schemes that work at source machines. We propose a reliable, lightweight, transparent, and flexible DDoS attack prevention scheme that works at source machines. In this scheme, we employ a hypervisor with a packet filtering mechanism on each managed machine to allow the administrator to easily and reliably suppress packet transmissions. To make the proposed scheme lightweight and transparent, we exploit a thin hypervisor that allows pass-through access to hardware (except for network devices) from the operating system, thereby reducing virtualization overhead and avoiding compromising user experience. To make the proposed scheme flexible, we exploit a configurable packet filtering mechanism with a guaranteed safe code execution mechanism that allows the administrator to provide a filtering policy as executable code. In this study, we implemented the proposed scheme using BitVisor and the Berkeley Packet Filter. Experimental results show that the proposed scheme can suppress arbitrary packet transmissions with negligible latency and throughput overhead compared to a bare metal system without filtering mechanisms.
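A rough Python analogue of such a filtering policy is shown below; in the actual system the policy is BPF code executed safely inside the BitVisor hypervisor, so the blocked port and the parsing here are purely illustrative.

```python
import struct

BLOCKED_DST_PORT = 9999    # hypothetical policy: port used by a DDoS bot

def filter_policy(frame: bytes) -> bool:
    """Per-packet policy: True = allow transmission, False = suppress."""
    if len(frame) < 34 or frame[12:14] != b"\x08\x00":   # non-IPv4: allow
        return True
    ihl = (frame[14] & 0x0F) * 4                         # IP header length
    if frame[23] != 6 or len(frame) < 14 + ihl + 4:      # not TCP: allow
        return True
    (dst_port,) = struct.unpack_from(">H", frame, 14 + ihl + 2)
    return dst_port != BLOCKED_DST_PORT    # drop traffic to the bot port
```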
Considering their independent and environmentally varied operation, one of the most important factors in WSN applications is fault tolerance. Because absent sensor nodes, damaged communication links and missing data are unavoidable in wireless sensor networks, fault tolerance becomes a key issue. Among the causes of these constant failures are environmental factors, battery exhaustion, damaged communication links, data collisions, wear-out of memory and storage units, and overloaded sensors. WSNs are used for a variety of purposes, and their fault-tolerance requirements depend mostly on the application type. Scientific research, for example, tends to rely on massive amounts of accurate and precise sensed data, demanding WSNs that support a high degree of data sampling. The data storage capacity on the sensors is crucial because while some applications require instantaneous transmission to another node or directly to the base station, others demand periodic or intermittent transmissions. Thus, if the amount of data is large, as a derivative of the data precision needed by the application, WSN nodes are required to store that data rapidly and effectively until the transmission stage. However, since those requirements depend mostly on the hardware and the wireless settings, WSNs frequently suffer considerable data loss, causing data integrity issues. Sensor nodes are inherently cheap pieces of hardware, owing to the common need to deploy many of them over a large area, sometimes in a non-retrievable environment; this restriction rules out pricey tampering- or overflow-resistant hardware (which may not be unfailing anyway), and a damaged or overflowed sensor can harm data integrity or even completely reject incoming messages. The problem gets even worse when high-rate sampling is needed or when data should be received from many nodes, since missing data becomes a more common phenomenon as deployed WSNs grow in scale. Therefore, high-rate-sampling WSN applications require fault-tolerant data storage, even though this requirement is hard to meet in practice. In cases of overflow, our Distributed Adaptive Clustering algorithm (D-ACR) [1] reconfigures the network by adaptively and hierarchically re-clustering parts of it, based on the rate of incoming data packages, in order to minimize energy consumption and prevent the premature death of nodes. However, re-clustering cannot prevent data loss caused by the nature of the sensors. We propose to address this problem with an efficient distributed backup-placement algorithm named DBP-ACR, performed on the D-ACR-refined clusters. The DBP-ACR algorithm redirects packages from overloaded sensors to more efficient placements outside of the overloaded areas in the WSN cluster, thus increasing the fault tolerance of the network and reducing data loss.
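The backup-placement intuition can be sketched as follows; this is a toy rule with assumed names and thresholds, not the DBP-ACR algorithm itself: an overloaded sensor redirects new packets to the least-loaded reachable node outside the overloaded area.

```python
# Toy cluster topology and buffer occupancy (all values are assumptions).
neighbors = {"s1": ["s2", "s3"], "s2": ["s1", "s3"], "s3": ["s1", "s2"]}
load = {"s1": 0.95, "s2": 0.40, "s3": 0.70}
OVERLOAD = 0.9

def backup_target(node):
    """Pick the least-loaded neighbor that is itself not overloaded."""
    candidates = [n for n in neighbors[node] if load[n] < OVERLOAD]
    return min(candidates, key=load.get, default=None)

for node in load:
    if load[node] >= OVERLOAD:
        print(f"{node} overloaded -> redirect packets to {backup_target(node)}")
```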
We consider the distributed statistical learning problem over decentralized systems that are prone to adversarial attacks. This setup arises in many practical applications, including Google's Federated Learning. Formally, we focus on a decentralized system that consists of a parameter server and m working machines; each working machine keeps N/m data samples, where N is the total number of samples. In each iteration, up to q of the m working machines suffer Byzantine faults: a faulty machine in the given iteration behaves arbitrarily badly against the system and has complete knowledge of the system. Additionally, the sets of faulty machines may be different across iterations. Our goal is to design robust algorithms such that the system can learn the underlying true parameter, which is of dimension d, despite the interruption of the Byzantine attacks. In this paper, based on the geometric median of means of the gradients, we propose a simple variant of the classical gradient descent method. We show that our method can tolerate q Byzantine failures up to 2(1+ε)q ≤ m for an arbitrarily small but fixed constant ε > 0. The parameter estimate converges in O(log N) rounds with an estimation error on the order of max{√(dq/N), √(d/N)}, which is larger than the minimax-optimal error rate √(d/N) in the centralized and failure-free setting by at most a factor of √q. The total computational complexity of our algorithm is O((Nd/m) log N) at each working machine and O(md + kd log³ N) at the central server, and the total communication cost is O(md log N). We further provide an application of our general results to the linear regression problem. A key challenge in the above problem is that Byzantine failures create arbitrary and unspecified dependency among the iterations and the aggregated gradients. To handle this issue in the analysis, we prove that the aggregated gradient, as a function of the model parameter, converges uniformly to the true gradient function.
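A sketch of the aggregation rule only (NumPy; the batch count and data are illustrative): partition the m worker gradients into k batches, average each batch, and take the geometric median of the batch means via Weiszfeld iterations.

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-8):
    """Weiszfeld iteration for the geometric median of row vectors."""
    y = points.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(points - y, axis=1)
        w = 1.0 / np.maximum(d, eps)          # inverse-distance weights
        y = (w[:, None] * points).sum(axis=0) / w.sum()
    return y

def robust_aggregate(grads, k):
    """Geometric median of k batch means of the m worker gradients."""
    batches = np.array_split(grads, k)
    means = np.array([b.mean(axis=0) for b in batches])
    return geometric_median(means)

# m=12 workers, d=3; three workers send arbitrary (Byzantine) gradients.
rng = np.random.default_rng(1)
g = rng.normal(0.5, 0.1, size=(12, 3))
g[:3] = 100.0                        # Byzantine gradients
print(robust_aggregate(g, k=4))      # stays near the honest mean ~0.5
```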
Mobile ad hoc networks (MANETs) are self-configuring, dynamic networks in which nodes are free to move. These nodes are susceptible to various malicious attacks. In this paper, we propose a distributed trust-based security scheme to prevent multiple attacks, such as Probe, Denial-of-Service (DoS), Vampire and User-to-Root (U2R) attacks, occurring simultaneously. We report above 95% accuracy in data transmission and reception by applying the proposed scheme. The simulation has been carried out using the network simulator ns-2 in an AODV routing protocol environment. To the best of the authors' knowledge, this is the first work reporting a distributed trust-based prevention scheme for preventing multiple attacks. We also check the scalability of the technique using variable node densities in the network.
Inference-based techniques are one of the major approaches to analyze DNS data and detect malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct/indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable for an association scheme to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and to have good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new approach to identify domains controlled by the same entity. Our key idea is an in-depth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme avoids the pitfall of naive approaches that rely on the weak "co-IP" relationship of domains (i.e., two domains are resolved to the same IP), which results in low detection accuracy, and, meanwhile, identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed approach not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. Existing path-based inference algorithms are specifically designed for DNS data analysis. They are effective but computationally expensive. To further demonstrate the strength of our domain association scheme as well as improve the inference efficiency, we construct a new domain-IP graph that can work well with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvement with only a minor impact on detection accuracy, which suggests that such a combination could offer a good tradeoff for malicious domain detection in practice.
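A minimal sketch of the association-building step, assuming the public/dedicated IP separation has already been computed (in the paper this separation is itself derived from active DNS analysis): domains sharing a dedicated IP are linked, while weak co-IP edges through public infrastructure are discarded.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical active-DNS resolutions: domain -> set of resolved IPs.
resolutions = {
    "evil-a.com": {"203.0.113.7"},
    "evil-b.com": {"203.0.113.7"},
    "benign.com": {"198.51.100.1"},
    "parked.net": {"198.51.100.1"},
}
# IPs classified as shared infrastructure (CDNs, web hosts, parking).
public_ips = {"198.51.100.1"}

ip_to_domains = defaultdict(set)
for dom, ips in resolutions.items():
    for ip in ips - public_ips:          # keep only dedicated IPs
        ip_to_domains[ip].add(dom)

edges = {pair for doms in ip_to_domains.values()
         for pair in combinations(sorted(doms), 2)}
print(edges)    # {('evil-a.com', 'evil-b.com')}; no weak co-IP edge
```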
The rise of social networks during the last 10 years has created a situation in which up to 100 million new images and photographs are uploaded and shared by users every day. This environment poses an ideal background for those who wish to communicate covertly by the use of steganography. It also creates a new set of challenges for steganalysts, who have to shift their field of work away from a purely scientific laboratory environment and into a diverse real-world scenario, while at the same time having to deal with entirely new problems, such as the detection of steganographic channels or the impact that even a low false positive rate has when investigating the millions of images which are shared every day on social networks. We evaluate how to address these challenges with traditional steganographic and statistical methods, rather than using high-performance computing and machine learning. Using the double embedding attack on the well-known F5 steganographic algorithm, we achieve a false positive rate well below that of known attacks.
As drones attract much interest, the drone industry has opened its market to ordinary people, and drones are now used in daily life. However, as drones have become easier for more people to use, safety and security issues have arisen, since accidents are much more likely to happen: colliding into people after a loss of control, or invading secured properties. For safety purposes, it is essential for both observers and drones to be aware of an approaching drone. In this paper, we introduce a comprehensive drone detection system based on machine learning. The system is designed to be operable on drones equipped with a camera. Based on the camera images, the system infers a drone's location in the image and its vendor model using machine classification. The system is built with the OpenCV library. We collected drone imagery and information for the learning process. The system's output shows about 89 percent accuracy.
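A minimal OpenCV sketch of the detection step; drone_cascade.xml and the image filenames are hypothetical (OpenCV ships no drone model, so a classifier would have to be trained on the collected drone imagery first).

```python
import cv2

# Hypothetical cascade classifier trained on drone imagery.
cascade = cv2.CascadeClassifier("drone_cascade.xml")

frame = cv2.imread("sky_frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in detections:      # box each detected drone
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)
```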
Crowd surveillance will play a fundamental role in the coming generation of video surveillance systems, in particular for improving public safety and security. However, traditional camera networks are mostly not able to closely survey the entire monitoring area due to limitations in coverage, resolution and analytics performance. A smart camera network, on the other hand, offers the ability to reconfigure the sensing infrastructure by incorporating active devices such as pan-tilt-zoom (PTZ) cameras and UAV-based cameras, which enable the adaptation of coverage and target resolution over time. This paper proposes a novel decentralized approach for dynamic network reconfiguration, where cameras locally control their PTZ parameters and position to optimally cover the entire scene. For crowded scenes, cameras must deal with a trade-off between global coverage and target resolution to effectively perform crowd analysis. We evaluate our approach in a simulated environment surveyed with fixed, PTZ, and UAV-based cameras.
We present a dynamic DNA key-based cryptography scheme that encrypts and decrypts plain-text characters, text files, image files and audio files using DNA sequences. Cryptography has always been regarded as the secure way to transfer confidential information over networks such as LANs and the Internet. Over time, however, traditional cryptographic approaches have been replaced with more effective cryptographic systems such as quantum cryptography, biometric cryptography, geographical cryptography and DNA cryptography. Our approach accepts DNA sequences as input to generate a key that provides two stages of data security.
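Since the key-generation procedure is only outlined above, the following sketch merely illustrates the general idea of deriving key material from a DNA sequence; the 2-bits-per-base mapping and the SHA-256 step are assumptions, not the scheme's actual construction.

```python
import hashlib

BASE_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def dna_to_key(sequence, nbytes=32):
    """Derive symmetric-key material from a DNA sequence (illustrative)."""
    bits = "".join(BASE_BITS[b] for b in sequence.upper())
    raw = int(bits, 2).to_bytes((len(bits) + 7) // 8, "big")
    return hashlib.sha256(raw).digest()[:nbytes]

key = dna_to_key("ATGCGTACCTGAATTC")
print(key.hex())
```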
Malicious software, or malware, is one of the most significant dangers facing the Internet today. In the fight against malware, users depend on anti-malware and anti-virus products to proactively detect threats before damage is done. Those products rely on static signatures obtained through malware analysis. Unfortunately, malware authors are always one step ahead in avoiding detection. This research deals with dynamic malware analysis, which emphasizes how the malware behaves after execution and what changes to the operating system, registry and network communication take place. Dynamic analysis opens the door to the automatic generation of anomaly and active signatures based on a new malware's behavior. The research includes the design of a honeypot to capture new malware and a complete dynamic analysis laboratory setting. We propose a standard analysis methodology: preparing the analysis tools, then running the malicious samples in a controlled environment to investigate their behavior. We analyzed 173 recent phishing emails and 45 SPIM messages in search of potentially new malware, and we present two malware samples and their comprehensive dynamic analysis.
In order to meet consumers' demand for electrical energy, utilities have to maintain the security of the system. This paper presents a design for a Microgrid Central Energy Management System (MCEMS), which plans the operation of the system one day in advance. The MCEMS adjusts itself during operation if a fault occurs anywhere in the generation system. The proposed approach uses a Dynamic Programming (DP) algorithm to solve the Unit Commitment (UC) problem while at the same time enhancing the security of the power system. A case study is performed with ten subsystems. The DP is used to manage the operation of the subsystems and determines the unit commitment as the situation demands. Faults are applied to the system and the DP corrects the unit commitment with appropriate power sources to maintain a reliable supply. MATLAB has been used to simulate the operation of the system.
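A toy DP over unit on/off states conveys the idea; the unit data, start-up cost, and the simplification that committed units run at full capacity are all assumptions (the paper's MCEMS handles ten subsystems, day-ahead planning, and fault correction).

```python
from itertools import product

# Hypothetical 3-unit system: (capacity in MW, running cost per MW).
units = [(50, 2.0), (80, 2.5), (120, 4.0)]
demand = [100, 180, 90]                     # MW required per hour
START_COST = 40.0                           # cost to switch a unit on

states = list(product([0, 1], repeat=len(units)))   # on/off combinations
best = {s: (0.0 if not any(s) else float("inf")) for s in states}

for d in demand:                            # DP over hours
    nxt = {t: float("inf") for t in states}
    for s, cost in best.items():
        for t in states:
            cap = sum(u[0] for u, on in zip(units, t) if on)
            if cap < d:                     # commitment must cover demand
                continue
            run = sum(u[1] * u[0] for u, on in zip(units, t) if on)
            start = START_COST * sum(b and not a for a, b in zip(s, t))
            nxt[t] = min(nxt[t], cost + run + start)
    best = nxt

print("minimum schedule cost:", min(best.values()))
```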
As traffic congestion increases on the transport network, leading to slower speeds, longer delays and, as a consequence, bigger vehicular queues, it is necessary to introduce smart ways to reduce traffic. We are already edging closer to "smart city, smart travel": today, a large number of smartphone applications and connected sat-navs help get you to your destination in the quickest and easiest manner possible, thanks to real-time data and communication from a host of sources. At present, fixed-time traffic lights are used at each phase. An alternative is to use electronic sensors and magnetic coils that detect congestion frequency and monitor traffic, but these are found to be more expensive. Hence we propose a traffic control system using image-processing techniques such as edge detection, in which vehicles are detected using images instead of sensors. Cameras are installed alongside the road and capture an image sequence every 40 seconds. Digital image-processing techniques are applied to analyse and process the images, and the traffic signal lights are controlled accordingly.
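A minimal OpenCV sketch of the image-processing step: compare the edge density of a captured frame against an empty-road reference and scale the green phase accordingly. The filenames, Canny thresholds, and timing formula are illustrative assumptions.

```python
import cv2

# Estimate congestion by comparing edge density against an empty road.
empty = cv2.Canny(cv2.imread("empty_road.jpg", cv2.IMREAD_GRAYSCALE), 100, 200)
frame = cv2.Canny(cv2.imread("traffic_frame.jpg", cv2.IMREAD_GRAYSCALE), 100, 200)

extra = max(0, cv2.countNonZero(frame) - cv2.countNonZero(empty))
density = extra / frame.size                    # crude vehicle-occupancy proxy

green_time = 20 + min(40, int(density * 200))   # scale the green phase
print(f"edge density {density:.2%} -> green for {green_time}s")
```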