Bibliography
Integrated circuits (ICs) are now designed and fabricated in a globalized, multi-vendor environment, making them vulnerable to malicious design changes, the insertion of hardware Trojans/malware, and intellectual property (IP) theft. Algorithmic reverse engineering of digital circuits can mitigate these concerns by enabling analysts to detect malicious hardware, verify the integrity of ICs, and detect IP violations. In this paper, we present a set of algorithms for the reverse engineering of digital circuits starting from an unstructured netlist and resulting in a high-level netlist with components such as register files, counters, adders, and subtractors. Our techniques require no manual intervention, and experiments show that they determine the functionality of more than 45% (and up to 93%) of the gates in each of the test circuits that we examine. We also demonstrate that our algorithms are scalable to real designs by experimenting with a very large, highly optimized system-on-chip (SOC) design with over 375,000 combinational elements; our inference algorithms cover 68% of the gates in this SOC. Finally, we demonstrate that our algorithms are effective in aiding a human analyst to detect hardware Trojans in an unstructured netlist.
Detecting hardware Trojans is a difficult task in general. The context here is that of a fabless design house that sells IP blocks as GDSII hard macros and wants to check that final products have not been infected by Trojans during the foundry stage. In this paper we analyze hardware Trojan insertion and detection in the Scalable Encryption Algorithm (SEA) crypto core. We inserted Trojans at different levels of the ASIC design flow of the SEA core, focusing in particular on gate-level and layout-level insertions. We chose path delays to detect Trojans at both levels in the design phase, because path-delay comparison is a cost-effective and efficient detection method: it makes even small Trojan circuits significant from a delay point of view. We used typical, fast, and slow 90nm libraries to estimate the efficiency of the path-delay technique under different operating conditions. The experimental results show that the detection rate on payload Trojans is 100%.
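For illustration, the comparison step of such a path-delay check can be as simple as flagging paths whose measured delay exceeds the golden (Trojan-free) delay by more than a process-variation margin. A minimal sketch; the path names, delays, and margin below are hypothetical, and the paper derives its reference values from the typical/fast/slow 90nm library corners:

```python
def trojan_suspects(golden_delays, measured_delays, margin=0.05):
    """Flag paths whose measured delay exceeds the golden (Trojan-free)
    delay by more than a relative process-variation margin."""
    return [path for path, delay in measured_delays.items()
            if delay > golden_delays[path] * (1.0 + margin)]

golden = {"p1": 1.20, "p2": 0.95}          # ns; hypothetical golden-model paths
measured = {"p1": 1.21, "p2": 1.10}        # p2 is loaded by an extra Trojan gate
print(trojan_suspects(golden, measured))   # -> ['p2']
```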
Modular exponentiation is an important operation for cryptographic transformations in public-key cryptosystems such as the Rivest-Shamir-Adleman (RSA), Diffie-Hellman, and ElGamal schemes. Computing a^x mod n and a^x·b^y mod n for very large x, y, and n is fundamental to the efficiency of almost all public-key cryptosystems and digital signature schemes. To achieve a high level of security, the word length in these modular exponentiations must be significantly large. The performance of public-key cryptography is primarily determined by the implementation efficiency of modular multiplication and exponentiation; since the operands are usually large, minimizing the number of modular multiplications is essential to optimizing the time taken by these operations. In this paper we propose four algorithms for evaluating modular exponentiation: Bit Forwarding (BFW) algorithms to compute a^x mod n, and two algorithms, Substitute and Reward (SRW) and Store and Forward (SFW), to compute a^x·b^y mod n. All the proposed algorithms are efficient in terms of time while demanding only minimal additional space to store precomputed values, making them suitable for devices with low computational power and limited storage.
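For context, the baseline that such algorithms improve on is binary (square-and-multiply) exponentiation: one modular squaring per exponent bit plus one modular multiplication per set bit. The sketch below shows this baseline, not the paper's BFW/SRW/SFW methods themselves:

```python
def modexp(a, x, n):
    """Left-to-right binary (square-and-multiply) modular exponentiation."""
    result = 1
    for bit in bin(x)[2:]:                 # most-significant bit first
        result = (result * result) % n     # square once per bit
        if bit == "1":
            result = (result * a) % n      # multiply only on set bits
    return result

def dual_modexp(a, x, b, y, n):
    """a^x * b^y mod n via two exponentiations plus one multiplication;
    the paper's SRW/SFW algorithms target exactly this combined form."""
    return (modexp(a, x, n) * modexp(b, y, n)) % n

assert modexp(7, 560, 561) == pow(7, 560, 561)
assert dual_modexp(3, 41, 5, 27, 1009) == (pow(3, 41, 1009) * pow(5, 27, 1009)) % 1009
```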
Hashing algorithms are used extensively in information security and digital forensics applications. This paper presents an efficient parallel algorithm for hash computation. It is a modification of the SHA-1 algorithm for faster parallel implementation in applications such as digital signatures and data preservation in digital forensics. The algorithm implements a recursive hash to break the chain dependencies of the standard hash function. We discuss the theoretical foundation for the work, including the collision probability and the performance implications. The algorithm is implemented using the OpenMP API, and experiments were performed on machines with multicore processors. The results show a performance gain of more than a factor of 3 when running on the 8-core configuration of the machine.
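One common way to break the chain dependency, in the spirit of the paper's recursive hash, is a two-level construction: hash independent blocks in parallel, then hash the concatenation of their digests. The sketch below uses Python's hashlib and a process pool; the block count and the two-level recursion are illustrative assumptions, not the paper's exact scheme:

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor

def _leaf_hash(block: bytes) -> bytes:
    return hashlib.sha1(block).digest()

def parallel_hash(data: bytes, num_blocks: int = 8) -> str:
    """Split the input into independent blocks, hash them in parallel,
    then hash the concatenated digests. Removing SHA-1's sequential
    chaining is what enables the parallel speed-up, at the cost of a
    small, analyzable change in collision behaviour."""
    size = max(1, -(-len(data) // num_blocks))          # ceiling division
    blocks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor() as pool:
        digests = pool.map(_leaf_hash, blocks)
    return hashlib.sha1(b"".join(digests)).hexdigest()

if __name__ == "__main__":
    print(parallel_hash(b"forensic disk image contents" * 100000))
```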
The gradient-descent total least-squares (GD-TLS) algorithm is a stochastic-gradient adaptive filtering algorithm that compensates for error in both input and output data. We study the local convergence of the GD-TLS algorithm and find bounds on its step-size that ensure its stability. We also analyze the steady-state performance of the GD-TLS algorithm and calculate its steady-state mean-square deviation. Our steady-state analysis is inspired by the energy-conservation-based approach to the performance analysis of adaptive filters. The results predicted by the analysis show good agreement with the simulation experiments.
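For reference, the total-least-squares cost that the gradient descent operates on is commonly written as a normalized (Rayleigh-quotient-like) squared error; the following is a sketch of that standard formulation, with $d_k$ the desired output, $\mathbf{x}_k$ the input regressor, and $\mu$ the step-size (the paper's notation may differ):

$$
J(\mathbf{w}) = \frac{\mathbb{E}\!\left[(d_k - \mathbf{x}_k^{\mathsf T}\mathbf{w})^2\right]}{1 + \|\mathbf{w}\|^2},
\qquad
\mathbf{w}_{k+1} = \mathbf{w}_k - \mu\,\widehat{\nabla} J(\mathbf{w}_k),
$$

where $\widehat{\nabla} J$ is the stochastic gradient obtained by dropping the expectation; the step-size bounds referred to above are what keep this recursion stable.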
A new class of affine-projection-like (APL) adaptive-filtering algorithms is proposed. The new algorithms are obtained by eliminating the constraint of forcing the a posteriori error vector to zero in the affine-projection algorithm proposed by Ozeki and Umeda. In this way, direct or indirect inversion of the input signal matrix is not required and, consequently, the amount of computation required per iteration can be reduced. In addition, as demonstrated by extensive simulation results, the proposed algorithms offer reduced steady-state misalignment in system-identification, channel-equalization, and acoustic-echo-cancellation applications. A mean-square-error analysis of the proposed APL algorithms is also carried out, and its accuracy is verified by simulation results in a system-identification application.
This paper proposes a modified empirical-mode decomposition (EMD) filtering-based adaptive dynamic phasor estimation algorithm for the removal of exponentially decaying dc offset. The discrete Fourier transform cannot attain an accurate phasor of the fundamental-frequency component in digital protective relays under dynamic system fault conditions, because the characteristic of the exponentially decaying dc offset is not consistent. EMD is a fully data-driven (not model-based) adaptive filtering procedure for extracting signal components, but the original EMD technique has high computational complexity and requires a long data series. In this paper, a short-data-series-based EMD filtering procedure is proposed, and an optimum Hermite polynomial fitting (OHPF) method is used in this modified procedure. The proposed filtering technique has high accuracy and fast convergence, and is well suited to relay applications. This paper illustrates the characteristics of the proposed technique and evaluates its performance on computer-simulated signals, PSCAD/EMTDC-generated signals, and real power-system fault signals.
Recent advances in adaptive filter theory and in hardware for signal acquisition have led to the realization that purely linear algorithms are often not adequate in these domains. Nonlinearities in the input space have become apparent in today's real-world problems, and algorithms that process the data must keep pace with the advances in signal acquisition. Recently, kernel adaptive (online) filtering algorithms have been proposed that make no assumptions regarding the linearity of the input space. Additionally, advances in wavelet data compression/dimension reduction have led to new algorithms that are appropriate for producing a hybrid nonlinear filtering framework. In this paper we utilize a combination of wavelet dimension reduction and kernel adaptive filtering: we derive algorithms in which the dimension of the data is reduced by a wavelet transform, and then kernel adaptive filtering algorithms are applied to the reduced-domain data to find the appropriate model parameters, demonstrating improved minimization of the mean-squared error (MSE). Another important feature of our methods is that the wavelet filter is also chosen based on the data, on the fly. In particular, it is shown that by using a few optimal wavelet coefficients from the constructed wavelet filter as the input to the kernel adaptive filter, for both training and testing data sets, convergence to a near-optimal learning curve (MSE) results. We demonstrate these algorithms on simulated data and a real data set from food processing.
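A minimal sketch of such a hybrid pipeline: reduce each input window with a single-level Haar transform and feed the low-pass coefficients to a kernel LMS filter with a growing dictionary. The data-driven, on-the-fly wavelet selection described above is not reproduced here; the fixed Haar filter and all parameters are illustrative assumptions:

```python
import numpy as np

def haar_reduce(window):
    """Single-level Haar transform, keeping only the approximation
    (low-pass) coefficients as the reduced-dimension input."""
    w = np.asarray(window, dtype=float)
    return (w[0::2] + w[1::2]) / np.sqrt(2.0)

def gaussian_kernel(x, y, sigma):
    return float(np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2)))

class KLMS:
    """Kernel least-mean-squares with a growing dictionary: each new input
    becomes a centre whose coefficient is the step-size times the error."""
    def __init__(self, step=0.5, sigma=0.8):
        self.step, self.sigma = step, sigma
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * gaussian_kernel(c, x, self.sigma)
                   for c, a in zip(self.centers, self.alphas))

    def update(self, x, d):
        e = d - self.predict(x)
        self.centers.append(x)
        self.alphas.append(self.step * e)
        return e

# usage: identify a nonlinearity from 8-sample windows reduced to 4 dims
rng = np.random.default_rng(0)
f = KLMS()
for _ in range(500):
    win = rng.uniform(-1.0, 1.0, 8)
    f.update(haar_reduce(win), np.sin(win.mean()))
```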
To improve the overall performance of denoising range images, an impulsive noise (IN) denoising method with variable windows is proposed in this paper. Based on several discriminant criteria, the principles of dropout IN detection and outlier IN detection are provided. Subsequently, a nearest non-IN neighbor searching process and an Index Distance Weighted Mean filter are combined for IN denoising. As key factors in the adaptability of the proposed method, the sizes of the two windows for outlier IN detection and IN denoising are investigated. Starting from a theoretical model of invader occlusion, a variable window is presented for adapting the window size to the dynamic environment of each point, together with practical criteria for determining the adaptive variable window size. Experiments on real range images of multi-line surfaces are carried out, with evaluations in terms of computational complexity and quality assessment, including comparative analysis against several other popular methods. The results indicate that the proposed method can detect impulsive noise with high accuracy and denoise it with strong adaptability with the help of the variable window.
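As an illustration of the denoising step, an Index Distance Weighted Mean over the nearest non-IN neighbours might look like the sketch below; the inverse-distance weighting exponent and the neighbour lists are assumptions, and the adaptive variable-window sizing described above is not shown:

```python
import numpy as np

def idw_mean(values, indices, center, p=1.0):
    """Weighted mean of the nearest non-IN neighbours, with weights given
    by the inverse of each neighbour's index distance to the centre point."""
    d = np.abs(np.asarray(indices, dtype=float) - center) ** p
    weights = 1.0 / (d + 1e-9)
    return float(np.sum(weights * np.asarray(values, dtype=float)) / np.sum(weights))

# usage: non-IN neighbours at indices 2, 3, 5, 6 around a noisy point at index 4
print(idw_mean([1.0, 1.1, 0.9, 1.0], [2, 3, 5, 6], center=4))
```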
Contingency analysis is a critical activity in the context of the power infrastructure because it provides a guide for resiliency and enables the grid to continue operating even in the case of failure. In this paper, we augment this concept by introducing SOCCA, a cyber-physical security evaluation technique to plan not only for accidental contingencies but also for malicious compromises. SOCCA presents a new unified formalism to model the cyber-physical system including interconnections among cyber and physical components. The cyber-physical contingency ranking technique employed by SOCCA assesses the potential impacts of events. Contingencies are ranked according to their impact as well as attack complexity. The results are valuable in both cyber and physical domains. From a physical perspective, SOCCA scores power system contingencies based on cyber network configuration, whereas from a cyber perspective, control network vulnerabilities are ranked according to the underlying power system topology.
With the rise of the underground Internet economy, automated malicious programs, popularly known as malware, have become a major threat to computers and information systems connected to the Internet. Properties such as self-healing, self-hiding, and the ability to deceive security devices make this software hard to detect and mitigate. Therefore, the detection and mitigation of such malicious software is a major challenge for researchers and security personnel. Conventional systems for detecting and mitigating such threats are mostly signature-based; a major drawback of such systems is their inability to detect malware samples for which no signature is available in their signature database (so-called zero-day malware). Moreover, more and more malware writers use obfuscation techniques such as polymorphism, metamorphism, packing, and encryption to avoid detection by antivirus software. The traditional signature-based detection system is therefore neither effective nor efficient for detecting zero-day malware. Hence, to improve the effectiveness and efficiency of malware detection, we use a classification method based on structural information and behavioral specifications. In this paper we use both static and dynamic analysis approaches: in static analysis we extract the features of an executable file and then classify it, and in dynamic analysis we capture the traces of executable files using NtTrace within a controlled environment. Experimental results indicate that our proposed algorithm is effective in extracting the malicious behavior of executables, and it can also be used to detect malware variants.
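As a sketch of the static-analysis side of such a pipeline, one common structural representation is the set of most frequent byte n-grams of an executable, which can then be fed to any off-the-shelf classifier. The n-gram length and feature count here are arbitrary illustrative choices, not the paper's:

```python
from collections import Counter

def byte_ngrams(exe_bytes: bytes, n: int = 4, top: int = 100):
    """Return the `top` most frequent byte n-grams of an executable,
    a simple structural feature representation for malware classification."""
    grams = Counter(exe_bytes[i:i + n] for i in range(len(exe_bytes) - n + 1))
    return [gram for gram, _ in grams.most_common(top)]

# usage: features = byte_ngrams(open("sample.exe", "rb").read())
```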
Educational software systems have an increasingly significant presence in engineering sciences. They aim to improve students' attitudes and knowledge acquisition, typically through visual representation and simulation of complex algorithms and of mechanisms or hardware systems that are often not available to educational institutions. This paper presents a novel software system for CryptOgraphic ALgorithm visuAl representation (COALA), which was developed to support a Data Security course at the School of Electrical Engineering, University of Belgrade. The system allows users to follow the execution of several complex algorithms (DES, AES, RSA, and Diffie-Hellman) on real-world examples in a detailed step-by-step view, with the possibility of forward and backward navigation. The benefits of the COALA system for students are observed through an increase in the percentage of students who passed the exam and in the average grade on the exams during one school year.
Encrypting and decrypting data efficiently is one of the challenging aspects of modern computer science. This paper introduces a new cryptographic algorithm designed to achieve a higher level of security, in which it becomes possible to hide the meaning of a message in unprintable characters. The main goal is to make the encrypted message unmistakably unprintable using several rounds of ASCII conversion and a cyclic mathematical function. The original message is divided into packets, and binary matrices are formed for each packet to produce the unprintable encrypted message by bringing the ASCII value of each character below 32. Similarly, several ASCII conversions and the inverse cyclic mathematical function are used to decrypt the unprintable encrypted message. The final encrypted message, obtained after three rounds of encryption, becomes an unprintable text, through which the algorithm attains a higher level of security without increasing the size of the data or losing any data.
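The toy sketch below shows one way ciphertext can be forced below ASCII 32 with a cyclic shift. Unlike the scheme described above, it stores an extra quotient byte and therefore doubles the message size, so it illustrates only the unprintability idea, not the paper's packet-wise binary-matrix construction:

```python
def to_unprintable(message: str, key: int = 7) -> bytes:
    """Cyclically shift each character, then emit remainder and quotient
    bytes that are both below ASCII 32, i.e. unprintable control codes."""
    out = bytearray()
    for i, ch in enumerate(message):
        c = (ord(ch) + key + i) % 128   # position-dependent cyclic shift
        out.append(c % 32)              # remainder: always below 32
        out.append(c // 32)             # quotient: 0..3, also below 32
    return bytes(out)

def from_unprintable(blob: bytes, key: int = 7) -> str:
    """Invert the shift from each (remainder, quotient) byte pair."""
    chars = []
    for i in range(0, len(blob), 2):
        c = blob[i] + 32 * blob[i + 1]
        chars.append(chr((c - key - i // 2) % 128))
    return "".join(chars)

assert from_unprintable(to_unprintable("attack at dawn")) == "attack at dawn"
```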
Since the massive deployment of Cyber-Physical Systems (CPSs) calls for long-range and reliable communication services with manageable cost, it is believed to be an inevitable trend to relay a significant portion of CPS traffic through existing networking infrastructures such as the Internet. Adversaries who have access to networking infrastructures can therefore eavesdrop on network traffic and perform traffic analysis attacks in order to identify CPS sessions and subsequently launch various attacks. As we can hardly prevent all adversaries from accessing network infrastructures, thwarting traffic analysis attacks becomes indispensable, and traffic morphing serves as an effective means towards this goal. In this paper, a novel traffic morphing algorithm, CPSMorph, is proposed to protect CPS sessions. CPSMorph maintains a number of network sessions whose distributions of inter-packet delays are statistically indistinguishable from those of typical network sessions. A CPS message is sent through one of these sessions with assured satisfaction of its time constraint. CPSMorph strives to minimize overhead by dynamically adjusting the morphing process, and is characterized by low complexity as well as high adaptivity to the changing dynamics of CPS sessions. Experimental results show that CPSMorph can effectively perform traffic morphing for real-time CPS messages with moderate overhead.
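The core morphing decision can be sketched as rejection sampling: draw an inter-packet delay from the cover distribution and accept it only if the message's deadline is still met. The function below is a hypothetical simplification; CPSMorph additionally maintains multiple live sessions and adapts the morphing process dynamically:

```python
import random

def morph_delay(deadline_ms: float, cover_sampler, max_tries: int = 50) -> float:
    """Sample a delay from the cover-traffic distribution, resampling until
    the real-time constraint is satisfiable; fall back to the deadline."""
    for _ in range(max_tries):
        delay = cover_sampler()
        if delay <= deadline_ms:
            return delay
    return deadline_ms   # send at the latest admissible instant

# usage: mimic an exponential inter-packet-delay profile with mean 20 ms
print(morph_delay(50.0, lambda: random.expovariate(1.0 / 20.0)))
```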
Software-Defined Networking (SDN) allows network capabilities and services to be managed through a central control point. Moving Target Defense (MTD), on the other hand, introduces a constantly adapting environment in order to delay or prevent attacks on a system. MTD is a use case where SDN can be leveraged to provide attack-surface obfuscation. In this paper, we investigate how SDN can be used in some network-based MTD techniques. We first describe the advantages and disadvantages of these techniques, the potential countermeasures attackers could take to circumvent them, and the overhead of implementing MTD using SDN. Subsequently, we study the performance of the SDN-based MTD methods using Cisco's One Platform Kit and show that they significantly increase the attacker's overhead.
Web applications need to validate and sanitize user inputs in order to avoid attacks such as Cross Site Scripting (XSS) and SQL Injection. Writing string manipulation code for input validation and sanitization is an error-prone process, leading to many vulnerabilities in real-world web applications. Automata-based static string analysis techniques can be used to automatically compute vulnerability signatures (represented as automata) that characterize all the inputs that can exploit a vulnerability. However, several factors limit the applicability of static string analysis techniques in general: 1) the undecidability of static string analysis requires the use of approximations, leading to false positives; 2) static string analysis tools do not handle all string operations; and 3) the dynamic nature of scripting languages makes static analysis difficult. In this paper, we show that vulnerability signatures computed for deliberately insecure web applications (developed for demonstrating different types of vulnerabilities) can be used to generate test cases for other applications. Given a vulnerability signature represented as an automaton, we present algorithms for test case generation based on state, transition, and path coverage. These automatically generated test cases can be used to test applications that are not statically analyzable, and to discover attack strings that demonstrate how the vulnerabilities can be exploited.
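As an illustration of transition coverage, given a vulnerability-signature automaton encoded as a state-to-transitions map (a hypothetical representation; the actual tool operates on automata produced by string analysis), one test case per transition can be generated from shortest prefixes reaching each state:

```python
from collections import deque

def transition_cover(automaton, start):
    """One test string per transition: a shortest string reaching the
    transition's source state (found by BFS) plus the transition's symbol."""
    prefix = {start: ""}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for symbol, target in automaton.get(s, {}).items():
            if target not in prefix:
                prefix[target] = prefix[s] + symbol
                queue.append(target)
    return [prefix[s] + symbol
            for s, edges in automaton.items() if s in prefix
            for symbol in edges]

# toy signature: any string containing "<s" (start of a script tag)
auto = {0: {"<": 1, "a": 0}, 1: {"s": 2, "a": 0}, 2: {}}
print(transition_cover(auto, 0))   # -> ['<', 'a', '<s', '<a']
```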
The growing popularity and development of data mining technologies bring serious threats to the security of individuals' sensitive information. An emerging research topic in data mining, known as privacy-preserving data mining (PPDM), has been extensively studied in recent years. The basic idea of PPDM is to modify the data in such a way that data mining algorithms can be performed effectively without compromising the security of the sensitive information contained in the data. Current studies of PPDM mainly focus on how to reduce the privacy risk brought by data mining operations, while in fact unwanted disclosure of sensitive information may also happen in the process of data collection, data publishing, and information (i.e., data mining result) delivery. In this paper, we view the privacy issues related to data mining from a wider perspective and investigate various approaches that can help protect sensitive information. In particular, we identify four different types of users involved in data mining applications, namely the data provider, data collector, data miner, and decision maker. For each type of user, we discuss their privacy concerns and the methods that can be adopted to protect sensitive information. We briefly introduce the basics of related research topics, review state-of-the-art approaches, and present some preliminary thoughts on future research directions. Besides exploring privacy-preserving approaches for each type of user, we also review game-theoretical approaches proposed for analyzing the interactions among the different users in a data mining scenario, each of whom has his own valuation of the sensitive information. By differentiating the responsibilities of the different users with respect to the security of sensitive information, we hope to provide some useful insights into the study of PPDM.
With the arrival of the big data era, information privacy and security issues become even more crucial. The Mining Associations with Secrecy Konstraints (MASK) algorithm and its improved versions were proposed as data mining approaches for privacy-preserving association rules. The MASK algorithm adopts only a data perturbation strategy, which leads to a low privacy-preserving degree; moreover, it is difficult to apply the MASK algorithm in practice because of its long execution time. This paper proposes a new algorithm based on data perturbation and query restriction (DPQR) that improves the privacy-preserving degree through multi-parameter perturbation. To improve time efficiency, the calculation of the inverse matrix is simplified by dividing the matrix into blocks; meanwhile, a further optimization is provided to reduce the number of database scans using set theory. Both theoretical analyses and experimental results show that the proposed DPQR algorithm has better performance.
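The blockwise inversion behind this speed-up can be sketched with the Schur complement: invert one diagonal block, form the complement, and assemble the full inverse from block products. This is the generic identity, not necessarily the paper's exact partitioning:

```python
import numpy as np

def block_inverse(M, k):
    """Invert M by partitioning it as [[A, B], [C, D]] with A of size k x k,
    using the Schur complement S = D - C A^{-1} B."""
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    A_inv = np.linalg.inv(A)
    S_inv = np.linalg.inv(D - C @ A_inv @ B)
    return np.block([
        [A_inv + A_inv @ B @ S_inv @ C @ A_inv, -A_inv @ B @ S_inv],
        [-S_inv @ C @ A_inv,                     S_inv],
    ])

M = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0, 2.0]])
assert np.allclose(block_inverse(M, 2), np.linalg.inv(M))
```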
Security companies have recently realised that mining massive amounts of security data can help generate actionable intelligence and improve their understanding of Internet attacks. In particular, attack attribution and situational understanding are considered critical aspects of effectively dealing with emerging, increasingly sophisticated Internet attacks. This requires highly scalable analysis tools to help analysts classify, correlate, and prioritise security events, depending on their likely impact and threat level. However, this security data mining process typically involves a considerable number of features interacting in non-obvious ways, which makes it inherently complex. To deal with this challenge, we introduce MR-TRIAGE, a set of distributed algorithms built on MapReduce that can perform scalable multi-criteria data clustering on large security data sets and identify complex relationships hidden in massive datasets. The MR-TRIAGE workflow consists of a scalable data summarisation step followed by scalable graph clustering algorithms into which we integrate multi-criteria evaluation techniques. The theoretical computational complexity of the proposed parallel algorithms is discussed and analysed. The experimental results demonstrate that the algorithms scale well and efficiently process large security datasets on commodity hardware. Our approach can effectively cluster any type of security events (e.g., spam emails, spear-phishing attacks) that share at least some commonalities among a number of predefined features.
The existence of mixed pixels is a major problem in remote-sensing image classification. Although soft classification and spectral unmixing techniques can obtain the abundances of different classes in a pixel to address the mixed-pixel problem, the subpixel spatial attribution within the pixel remains unknown. The subpixel mapping technique can effectively solve this problem by providing a fine-resolution map of class labels from coarser, spectrally unmixed fraction images. However, most traditional subpixel mapping algorithms treat all mixed pixels as an identical type, either boundary-mixed pixels or linear subpixels, leading to incomplete and inaccurate results. To improve subpixel mapping accuracy, this paper proposes an adaptive subpixel mapping framework based on a multiagent system for remote-sensing imagery. In the proposed multiagent subpixel mapping framework, three kinds of agents, namely feature detection agents, subpixel mapping agents, and decision agents, are designed to solve the subpixel mapping problem. Experiments with artificial images and synthetic remote-sensing images were performed to evaluate the performance of the proposed subpixel mapping algorithm in comparison with the hard classification method and two other subpixel mapping algorithms: subpixel mapping based on a back-propagation neural network, and the spatial attraction model. The experimental results indicate that the proposed algorithm outperforms the other two subpixel mapping algorithms in reconstructing the different structures in mixed pixels.
In today's globalized world, mobile communication is one of the fastest-growing media through which a sender can reach a receiver in a short time. During the transmission of data from sender to receiver, the size of the data matters, since more data takes more time. One limitation of sending data through mobile devices, however, is the restricted bandwidth and number of packets transmitted, and the security of the data is also important. Hence, various protocols have been implemented that not only provide security to the data but also utilize bandwidth efficiently. Here we propose an efficient technique for sending SMS text using a combination of compression and encryption. The data to be sent is first encrypted using an elliptic-curve cryptographic technique; since encryption increases the size of the text data, compression is then applied to this encrypted data so that it can be sent in a short time. The compression technique implemented here is efficient, as it includes an algorithm that compresses the text by 99.9%, so a great amount of bandwidth is saved. The hybrid compression-encryption technique for SMS text messages is implemented for the Android operating system.
A Reduction of Quality (RoQ) attack is a stealthy denial-of-service attack that can degrade or inhibit normal TCP flows in a network. It is hard for victims to perceive, since during the attack the overall network throughput decreases rather than increases; the attack is therefore strongly hidden and difficult for existing detection systems to detect. Based on the principles of time-frequency analysis, we propose a two-stage detection algorithm that combines anomaly detection with misuse detection. In the first stage, we detect potential anomalies by analyzing network traffic through wavelet multiresolution analysis, locating abrupt change points according to their time-domain characteristics. In the second stage, we further analyze the local traffic around each abrupt change point, extracting potential attack characteristics by autocorrelation analysis. Through this two-stage detection, we can ultimately confirm whether the network is affected by the attack. Results of simulations and real network experiments demonstrate that our algorithm can detect RoQ attacks with high accuracy and high efficiency.
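A minimal sketch of the two stages, using single-level Haar detail coefficients as a stand-in for the full wavelet multiresolution analysis and a normalized autocorrelation peak as the periodicity test; the thresholds are illustrative assumptions:

```python
import numpy as np

def change_points(traffic, threshold=3.0):
    """Stage 1: large Haar detail coefficients mark abrupt changes."""
    x = np.asarray(traffic, dtype=float)
    x = x[: len(x) // 2 * 2]                       # even length for pairing
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    score = np.abs(detail - detail.mean()) / (detail.std() + 1e-9)
    return np.flatnonzero(score > threshold) * 2   # back to sample indices

def looks_periodic(segment, min_corr=0.6):
    """Stage 2: a strong non-trivial autocorrelation peak in the local
    traffic suggests the periodic pulses characteristic of a RoQ attack."""
    x = np.asarray(segment, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / (ac[0] + 1e-9)                       # normalize by zero lag
    return bool(ac[1:].max() > min_corr)
```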
Demand response management (DRM) is one of the main features of the smart grid and is realized via communication between power providers and consumers. Due to the vulnerabilities of communication channels, communication is imperfect in practice and is threatened by jamming attacks. In this paper, we consider jamming attacks on the wireless communication of the smart grid. First, the DRM performance degradation introduced by unreliable communication is studied in full. Second, a regret-matching-based anti-jamming algorithm is proposed to enhance the performance of communication and DRM. Finally, numerical results are presented to illustrate the impact of unreliable communication on DRM and the performance of the proposed anti-jamming algorithm.
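For reference, the generic regret-matching loop plays each action with probability proportional to its positive cumulative regret. The sketch below applies it to anti-jamming channel selection with a made-up payoff table; it illustrates the general procedure, not the paper's exact DRM-aware variant:

```python
import numpy as np

def regret_matching(payoffs, n_rounds=1000, seed=0):
    """Each round: pick a channel with probability proportional to positive
    regret, observe the per-channel payoffs, and accumulate the regret of
    not having played each alternative."""
    rng = np.random.default_rng(seed)
    n = payoffs.shape[1]
    regret = np.zeros(n)
    for t in range(n_rounds):
        pos = np.maximum(regret, 0.0)
        probs = pos / pos.sum() if pos.sum() > 0 else np.full(n, 1.0 / n)
        a = rng.choice(n, p=probs)
        u = payoffs[t % payoffs.shape[0]]   # payoff vector this round
        regret += u - u[a]
    pos = np.maximum(regret, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.full(n, 1.0 / n)

# usage: a jammer alternating between two channels
print(regret_matching(np.array([[1.0, 0.2], [0.2, 1.0]])))
```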