Bibliography
Hardware/software co-design on hybrid architectures such as the Xilinx Zynq SoC, which combines an ARM CPU with FPGA fabric, offers a high-performance, low-power platform for accelerating the RSA algorithm. This paper adopts the non-subtraction Montgomery algorithm and the Chinese Remainder Theorem (CRT) to implement high-speed RSA processors, and deploys a 48-node cluster based on the Zynq SoC to achieve highly scalable, high-throughput RSA computation. In this design, the ARM cores handle node-to-node communication via the Message Passing Interface (MPI), while the FPGA fabric handles the complex computation. Experimental results show that overall performance scales linearly with the number of nodes: the cluster achieves a 6x-9x speedup over a multi-core desktop (Intel i7-3770) and performance comparable to a 288-core many-core server, while delivering up to 2.5x better energy efficiency than both traditional platforms.
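The speedup hinges in part on the standard CRT decomposition of RSA, which replaces one full-size modular exponentiation with two half-size ones. A minimal Python sketch of that arithmetic, with toy textbook parameters and Garner's recombination (the paper's Montgomery hardware and MPI layer are not reproduced here):

```python
# RSA-CRT decryption sketch: two half-size exponentiations instead of one
# full-size pow(c, d, n). Toy parameters, illustration only.

def rsa_crt_decrypt(c: int, p: int, q: int, d: int) -> int:
    dp, dq = d % (p - 1), d % (q - 1)   # reduced private exponents
    q_inv = pow(q, -1, p)               # q^{-1} mod p (Python 3.8+)
    m_p = pow(c, dp, p)                 # message mod p
    m_q = pow(c, dq, q)                 # message mod q
    h = (q_inv * (m_p - m_q)) % p       # Garner's recombination step
    return m_q + h * q

p, q, e, d = 61, 53, 17, 413            # n = 3233, the classic toy example
c = pow(65, e, p * q)                   # encrypt m = 65
assert rsa_crt_decrypt(c, p, q, d) == 65
```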
In the original grey correlation analysis algorithm for edge detection, the detected edges are comparatively rough and the thresholds must be determined in advance. An adaptive edge detection method based on grey correlation analysis is therefore proposed, in which the basic principle of the original algorithm is used to set the threshold automatically and adaptively, according to the mean value of the 3x3 pixel neighbourhood around the pixel under examination and the properties of human vision. Because the initial version of the algorithm detects a relatively large number of false edges, it is further enhanced by processing the eight neighbouring pixels around each edge pixel and merging the results to obtain the final edge map. Experimental results show that, compared with traditional edge detection algorithms, the proposed algorithm produces a more complete edge map with better continuity.
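As a rough illustration of the adaptive-threshold idea (not the paper's grey-correlation formulation), the sketch below derives each pixel's threshold from the mean of its 3x3 neighbourhood, so the edge decision adapts to local brightness; the constant k is invented for illustration:

```python
import numpy as np

def adaptive_edge_map(img: np.ndarray, k: float = 0.5) -> np.ndarray:
    """Flag pixels that deviate strongly from their local 3x3 mean."""
    img = img.astype(float)
    h, w = img.shape
    edges = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            local_mean = img[y - 1:y + 2, x - 1:x + 2].mean()
            # threshold scales with local brightness, a crude stand-in
            # for the human-vision weighting described in the abstract
            if abs(img[y, x] - local_mean) > k * max(local_mean, 1.0):
                edges[y, x] = True
    return edges
```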
In assessing privacy on online social networks, it is important to investigate their vulnerability to reconnaissance strategies, in which attackers lure targets into becoming their friends by exploiting the social graph in order to extract victims' sensitive information. As the network topology is only partially revealed after each successful friend request, attackers need to employ an adaptive strategy. Existing work only considered a simple strategy in which attackers sequentially acquire one friend at a time, which incurs tremendous delay waiting for responses before sending the next request and lacks the ability to retry failed requests after the network has changed. In contrast, we investigate an adaptive and parallel strategy in which attackers can simultaneously send multiple friend requests in a batch and recover from failed requests by retrying after topology changes, thereby significantly reducing the time to reach the targets and greatly improving robustness. We cast this approach as an optimization problem, Max-Crawling, and show that it is inapproximable within a ratio of (1 - 1/e + ε). We first design our core algorithm PM-AReST, which has an approximation ratio of (1 - e^{-(1-1/e)}), using adaptive monotone submodular properties. We then tighten our algorithm to provide a near-optimal solution, i.e. with a ratio of (1 - 1/e), via a two-stage stochastic programming approach. We further establish a gap bound of (1 - e^{-(1-1/e)^2}) between batch strategies and the optimal sequential one. We experimentally validate our theoretical results, finding that our algorithm performs near-optimally in practice and that this holds under a variety of problem settings.
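A minimal sketch of the greedy batch-selection step that such submodular arguments rest on: from the currently visible candidates, pick up to b friend requests that maximise marginal coverage of unseen nodes. The data layout and objective are illustrative, not PM-AReST itself:

```python
def greedy_batch(candidates: dict, b: int) -> list:
    """candidates: node -> set of new nodes exposed if that node accepts."""
    pool, chosen, covered = dict(candidates), [], set()
    for _ in range(min(b, len(pool))):
        best = max(pool, key=lambda v: len(pool[v] - covered))
        if not pool[best] - covered:    # no marginal gain left
            break
        covered |= pool.pop(best)
        chosen.append(best)
    return chosen

cands = {"u1": {"t1", "t2"}, "u2": {"t2"}, "u3": {"t3"}}
print(greedy_batch(cands, 2))  # ['u1', 'u3']
```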
The existence of mixed pixels is a major problem in remote-sensing image classification. Although soft classification and spectral unmixing techniques can obtain the abundance of different classes in a pixel to address the mixed pixel problem, the subpixel spatial attribution within the pixel remains unknown. The subpixel mapping technique can effectively solve this problem by providing a fine-resolution map of class labels from coarser spectrally unmixed fraction images. However, most traditional subpixel mapping algorithms treat all mixed pixels as a single identical type, either as boundary mixed pixels or as linear subpixels, leading to incomplete and inaccurate results. To improve subpixel mapping accuracy, this paper proposes an adaptive subpixel mapping framework based on a multiagent system for remote-sensing imagery. In the proposed multiagent subpixel mapping framework, three kinds of agents, namely, feature detection agents, subpixel mapping agents, and decision agents, are designed to solve the subpixel mapping problem. Experiments with artificial images and synthetic remote-sensing images were performed to evaluate the performance of the proposed subpixel mapping algorithm in comparison with the hard classification method and other subpixel mapping algorithms: subpixel mapping based on a back-propagation neural network and the spatial attraction model. The experimental results indicate that the proposed algorithm outperforms the other two subpixel mapping algorithms in reconstructing the different structures in mixed pixels.
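For context, a compact sketch of the spatial attraction model named above as a baseline: each subpixel is pulled toward the class whose fractions in the surrounding coarse pixels, weighted by inverse distance, are largest. This simplified version does not enforce the per-pixel class budget that full subpixel mapping requires:

```python
import numpy as np

def spatial_attraction(fractions: np.ndarray, S: int) -> np.ndarray:
    """fractions: (H, W, C) class-fraction images; S: zoom factor.
    Returns an (H*S, W*S) hard class map (simplified illustration)."""
    H, W, C = fractions.shape
    out = np.zeros((H * S, W * S), dtype=int)
    for i in range(H * S):
        for j in range(W * S):
            ci, cj = i // S, j // S
            attraction = np.zeros(C)
            for ni in range(max(0, ci - 1), min(H, ci + 2)):
                for nj in range(max(0, cj - 1), min(W, cj + 2)):
                    # distance from subpixel centre to coarse-pixel centre
                    dy = (ni + 0.5) * S - (i + 0.5)
                    dx = (nj + 0.5) * S - (j + 0.5)
                    d = np.hypot(dx, dy) or 1.0
                    attraction += fractions[ni, nj] / d
            out[i, j] = int(attraction.argmax())
    return out
```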
A new class of affine-projection-like (APL) adaptive-filtering algorithms is proposed. The new algorithms are obtained by eliminating the constraint of forcing the a posteriori error vector to zero in the affine-projection algorithm proposed by Ozeki and Umeda. In this way, direct or indirect inversion of the input signal matrix is not required and, consequently, the amount of computation required per iteration can be reduced. In addition, as demonstrated by extensive simulation results, the proposed algorithms offer reduced steady-state misalignment in system-identification, channel-equalization, and acoustic-echo-cancellation applications. A mean-square-error analysis of the proposed APL algorithms is also carried out and its accuracy is verified by using simulation results in a system-identification application.
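A one-line way to see the difference: the classic affine-projection update multiplies the error vector by (X^T X)^{-1}, while an APL-style update drops that inverse. A hedged sketch of one iteration in generic form, not necessarily the authors' exact recursion:

```python
import numpy as np

def apl_iteration(w: np.ndarray, X: np.ndarray, d: np.ndarray, mu: float = 0.01):
    """X: M x P matrix of the P most recent input vectors (columns);
    d: the P matching desired samples; w: current M-tap filter."""
    e = d - X.T @ w        # a priori error vector, NOT forced to zero
    return w + mu * X @ e  # no (X^T X)^{-1} term, hence the reduced cost
```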
The analysis of multiple Android malware families indicates that malware instances within a common family consistently have similar call graph structures. Based on the isomorphism of sensitive API call graphs, we propose a method that constructs malware family features by combining static analysis with a graph similarity metric. The experiment is performed on a dataset of 1326 malware samples from 16 different families. The results show that the method can differentiate the features of distinct malware families and assign suspect samples to their corresponding families with a high overall accuracy of 96.77%, while withstanding a certain degree of obfuscation.
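A toy illustration of scoring call-graph similarity (the paper's metric is isomorphism-based; the edge-set Jaccard overlap below is a crude stand-in, and the API names are invented):

```python
import networkx as nx

def call_graph_similarity(g1: nx.DiGraph, g2: nx.DiGraph) -> float:
    """Jaccard overlap of sensitive-API call edges."""
    e1, e2 = set(g1.edges()), set(g2.edges())
    return len(e1 & e2) / len(e1 | e2) if e1 | e2 else 1.0

a = nx.DiGraph([("onReceive", "getDeviceId"), ("getDeviceId", "sendTextMessage")])
b = nx.DiGraph([("onReceive", "getDeviceId"), ("getDeviceId", "openConnection")])
print(call_graph_similarity(a, b))  # 1 shared edge of 3 total -> ~0.33
```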
This work proposes and justifies an algorithm for processing computer security incidents based on the author's signatures of cyberattacks. Attention is also paid to a design pattern for SOPKA systems based on the Russian ViPNet technology. Recommendations are made regarding the establishment of a corporate SOPKA segment that meets the requirements of the Presidential Decree of January 15, 2013, No. 31c, “On the establishment of the state system of detection, prevention and elimination of the consequences of cyber-attacks on information resources of the Russian Federation,” and the “Concept of the state system of detection, prevention and elimination of the consequences of cyber-attacks on information resources of the Russian Federation,” approved by the President of the Russian Federation on December 12, 2014, No. K 1274.
The gradient-descent total least-squares (GD-TLS) algorithm is a stochastic-gradient adaptive filtering algorithm that compensates for error in both input and output data. We study the local convergence of the GD-TLS algorithm and find bounds on its step size that ensure its stability. We also analyze the steady-state performance of the GD-TLS algorithm and calculate its steady-state mean-square deviation. Our steady-state analysis is inspired by the energy-conservation-based approach to the performance analysis of adaptive filters. The results predicted by the analysis show good agreement with the simulation experiments.
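For readers unfamiliar with the TLS cost, here is a sketch of one stochastic-gradient step on the Rayleigh-quotient objective J(w) = (d - x^T w)^2 / (1 + ||w||^2), which is the form GD-TLS descends; constants are folded into the step size, and this is an illustration rather than the paper's exact recursion:

```python
import numpy as np

def gd_tls_step(w: np.ndarray, x: np.ndarray, d: float, mu: float = 0.005):
    e = d - x @ w                # a priori error
    denom = 1.0 + w @ w          # TLS normalisation term
    # instantaneous gradient of e^2 / denom (factor of 2 absorbed into mu)
    grad = -(e * x + (e ** 2) * w / denom) / denom
    return w - mu * grad
```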
Byzantine fault tolerance has been intensively studied over the past decade as a way to enhance the intrusion resilience of computer systems. However, state-machine-based Byzantine fault tolerance algorithms require deterministic application processing and sequential execution of totally ordered requests. One way of increasing the practicality of Byzantine fault tolerance is to exploit the application semantics, which we refer to as application-aware Byzantine fault tolerance. Application-aware Byzantine fault tolerance makes it possible to facilitate concurrent processing of requests, to minimize the use of Byzantine agreement, and to identify and control replica nondeterminism. In this paper, we provide an overview of recent work on application-aware Byzantine fault tolerance techniques. We elaborate on the need for exploiting application semantics for Byzantine fault tolerance and the benefits of doing so, provide a classification of various approaches, and outline the mechanisms used to achieve application-aware Byzantine fault tolerance according to our classification.
In the manufacture of systems that use screens, for example TVs, computer monitors, or notebooks, image inspection is one of the most important quality tests. Due to the increasing complexity of these systems, manual inspection has become complex and slow, making automatic inspection an attractive alternative. In this paper, we present an automatic image inspection system that uses edge and line detection algorithms, rectangle recognition, and image comparison metrics. Experiments performed on 504 images (TVs, computer monitors, and notebooks) demonstrate that the system performs well.
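One representative stage of such a pipeline, sketched with OpenCV: locate the screen as the largest four-corner contour after edge detection. The Canny thresholds and approximation tolerance are illustrative, not the paper's settings:

```python
import cv2
import numpy as np

def find_screen_rectangle(gray: np.ndarray):
    """Return the 4 corner points of the largest rectangular contour, or None."""
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:     # four corners => candidate screen
            return approx.reshape(4, 2)
    return None
```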
Web applications need to validate and sanitize user inputs in order to avoid attacks such as Cross Site Scripting (XSS) and SQL Injection. Writing string manipulation code for input validation and sanitization is an error-prone process, leading to many vulnerabilities in real-world web applications. Automata-based static string analysis techniques can be used to automatically compute vulnerability signatures (represented as automata) that characterize all the inputs that can exploit a vulnerability. However, several factors limit the applicability of static string analysis techniques in general: 1) the undecidability of static string analysis requires the use of approximations, leading to false positives; 2) static string analysis tools do not handle all string operations; and 3) the dynamic nature of scripting languages makes static analysis difficult. In this paper, we show that vulnerability signatures computed for deliberately insecure web applications (developed for demonstrating different types of vulnerabilities) can be used to generate test cases for other applications. Given a vulnerability signature represented as an automaton, we present algorithms for test case generation based on state, transition, and path coverage. These automatically generated test cases can be used to test applications that are not analyzable statically, and to discover attack strings that demonstrate how the vulnerabilities can be exploited.
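A minimal sketch of state-coverage generation: breadth-first search over a signature automaton yields one shortest attack string per reachable accepting state. The DFA encoding and the toy signature are illustrative:

```python
from collections import deque

def state_coverage_strings(delta: dict, start, accepting: set) -> list:
    """delta maps (state, char) -> state; returns one string per accepting state."""
    visited = {start: ""}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for (src, ch), dst in delta.items():
            if src == s and dst not in visited:
                visited[dst] = visited[s] + ch
                queue.append(dst)
    return [w for s, w in visited.items() if s in accepting]

# toy signature reaching accepting state 2 via "<s"
delta = {(0, "<"): 1, (1, "s"): 2, (0, "a"): 0}
print(state_coverage_strings(delta, 0, {2}))  # ['<s']
```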
Content Security Policy (CSP) is a powerful client-side security layer that helps mitigate and detect a wide range of Web attacks, including cross-site scripting (XSS). However, deploying CSP is a fallible process for site administrators and may require significant changes to web application code. In this paper, we propose an approach that helps site administrators overcome these limitations and utilize the full benefits of the CSP mechanism, leading to sites that are more immune to XSS. The algorithm is implemented as a plugin that does not interfere with the web application's original code and can be installed on any other web application with minimal effort. The algorithm can be implemented as part of the Web server layer, not the business logic layer, and can be extended to support generating CSP for content that is modified by JavaScript after loading. The current approach inspects the static contents of URLs.
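A deliberately naive sketch of the idea at the Web-server layer: scan static HTML for external script origins and emit a matching CSP header, leaving application code untouched. The parsing and policy below are illustrative, not the plugin's actual logic:

```python
import re

def build_csp(html: str) -> str:
    """Collect external script origins and emit a whitelist-style CSP header."""
    origins = set(re.findall(r'<script[^>]+src=["\'](https?://[^/"\']+)', html))
    extra = (" " + " ".join(sorted(origins))) if origins else ""
    return f"Content-Security-Policy: default-src 'self'; script-src 'self'{extra}"

page = '<script src="https://cdn.example.com/app.js"></script>'
print(build_csp(page))
# Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com
```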
Certain crimes are difficult for individuals to commit alone; instead, they are carefully organised by groups of associates and affiliates loosely connected to each other, with a single individual or a small group coordinating the overall actions. A common starting point in understanding the structural organisation of criminal groups is to identify the criminals and their associates, yet in many criminal datasets there is no direct connection among the criminals. In this paper, we investigate ties and community structure in crime data in order to understand the operations of both traditional and cyber criminals, and to predict the existence of organised criminal networks. Our contributions are twofold: we propose a bipartite network model for inferring hidden ties between actors who initiated an illegal interaction and the objects affected by the interaction, and we validate the method in two case studies, on pharmaceutical crime and on underground forum data, using standard network algorithms for structural and community analysis. The vertex-level metrics and community analysis results indicate the significance of our approach in revealing the operations and structure of organised criminal networks that were not immediately obvious in the data. Identifying these groups and mapping their relationships to one another is essential for devising more effective disruption strategies in the future.
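A small sketch of the bipartite inference step using networkx: actors on one side, affected objects on the other; projecting onto the actor side surfaces hidden ties between actors who touched the same object. All node names are invented:

```python
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
actors = ["actor1", "actor2", "actor3"]
objects = ["forum_thread_7", "pharma_listing_42"]
B.add_nodes_from(actors, bipartite=0)
B.add_nodes_from(objects, bipartite=1)
B.add_edges_from([("actor1", "forum_thread_7"), ("actor2", "forum_thread_7"),
                  ("actor2", "pharma_listing_42"), ("actor3", "pharma_listing_42")])

# edge weight = number of shared objects, i.e. strength of the inferred tie
hidden_ties = bipartite.weighted_projected_graph(B, actors)
print(list(hidden_ties.edges(data=True)))  # actor1-actor2 and actor2-actor3
```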
This paper proposes a novel scheme for RFID anti-counterfeiting that applies a bisectional multivariate quadratic equation (BMQE) system to RF tag data encryption. In the key generation process, two matrix sets (denoted A and B) and a base R_AB are chosen arbitrarily such that AB = λR_AB^T, and 2n BMQ polynomials (denoted p) are generated over the finite field F_q. (F_q, p) is then taken as the public key and (A, B, λ) as the private key. In the encryption process, the EPC code is hashed into a message digest d_m, which is padded to d'_m, a non-zero 2n×2n matrix over F_q. From (A, B, λ) and d'_m, the ciphertext S_m is formed as an n-vector over F_2. Unlike existing anti-counterfeiting schemes, the proposed one rests on the hardness of multivariate quadratic systems, a post-quantum cryptographic problem, so it is robust enough to resist existing attacks and offers high security.
Detection and prevention of data breaches in corporate networks is one of the most important security problems today. Existing techniques and applications are unsuccessful when attackers steal data using steganography: the art of hiding data in a file, called a cover, such as a picture, sound, or video file, so that the concealed data cannot be directly recognized in the cover. Steganalysis is the process of revealing the presence of embedded messages in such files, and there are many statistical and signature-based steganalysis algorithms. In this work, the detection of steganographic images with steganalysis techniques is reviewed, and a system is developed that automatically detects steganographic images in network traffic using open source tools.
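As one concrete example of the statistical family mentioned above, the classic chi-square attack on LSB embedding tests whether the counts of each pixel-value pair (2k, 2k+1) have been suspiciously equalised. A hedged sketch for 8-bit grayscale images, not the system's actual detector:

```python
import numpy as np
from scipy.stats import chi2

def lsb_chi_square_pvalue(img: np.ndarray) -> float:
    """img: 8-bit grayscale array. p near 1 => counts look equalised,
    which is what sequential LSB embedding tends to produce."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    pairs = hist.reshape(128, 2)            # value pairs (2k, 2k+1)
    expected = pairs.mean(axis=1)           # equal split under full embedding
    mask = expected > 0
    stat = (((pairs[:, 0] - expected) ** 2) / expected)[mask].sum()
    return chi2.sf(stat, df=int(mask.sum()) - 1)
```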
In this paper, we propose a new color image encryption and compression algorithm based on the DNA complementary rule and the Chinese remainder theorem, combining the DNA complementary rule with a quantum chaotic map. We use the quantum chaotic map and the DNA complementary rule to shuffle the color image; the Chinese remainder theorem from number theory is then utilized to diffuse and compress the shuffled image simultaneously. The security analysis and experimental results show that the proposed encryption algorithm has a large key space and good encryption results, and can resist common attacks.
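The compression comes from the CRT step: two residues can be packed into a single integer and uniquely recovered, provided the moduli are coprime and larger than the pixel range. A minimal sketch of just that packing, with moduli chosen for illustration:

```python
def crt_pack(r1: int, r2: int, m1: int = 257, m2: int = 263) -> int:
    """Unique x mod m1*m2 with x % m1 == r1 and x % m2 == r2 (m1, m2 coprime)."""
    inv = pow(m1, -1, m2)                       # m1^{-1} mod m2 (Python 3.8+)
    return (r1 + m1 * ((r2 - r1) * inv % m2)) % (m1 * m2)

x = crt_pack(200, 145)                          # pack two 8-bit pixel values
assert x % 257 == 200 and x % 263 == 145        # both remain recoverable
```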
Educational software systems have an increasingly significant presence in the engineering sciences. They aim to improve students' attitudes and knowledge acquisition, typically through visual representation and simulation of complex algorithms and of mechanisms or hardware systems that are often not available to educational institutions. This paper presents COALA (CryptOgraphic ALgorithm visuAl representation), a novel software system developed to support a Data Security course at the School of Electrical Engineering, University of Belgrade. The system allows users to follow the execution of several complex algorithms (DES, AES, RSA, and Diffie-Hellman) on real-world examples in a detailed step-by-step view, with the possibility of forward and backward navigation. The benefits of the COALA system for students are observed through an increase in the percentage of students who passed the exam and in the average exam grade during one school year.
Botnets have long been used for malicious purposes, at huge economic cost to society. With the proliferation of cheap but non-secure Internet-of-Things (IoT) devices generating large amounts of data, the potential for damage from botnets has increased manifold. There are several approaches to detecting bots or botnets, though many traditional techniques are becoming less effective as botnets with a centralized command & control structure are replaced by peer-to-peer (P2P) botnets, which are harder to detect. Several algorithms have been proposed in the literature that use graph analysis or machine learning techniques to detect the overlay structure of P2P networks in communication graphs. Many of these algorithms, however, depend on the availability of a universal communication graph, or one aggregated from several ISPs, which is not likely to be available in reality. In real-world deployments, significant gaps in communication graphs are expected, and any proposed solution should be able to work with partial information. In this paper, we analyze the effectiveness of several community detection algorithms in detecting P2P botnets, especially with partial information. We show that the approach can work with only about half of the nodes reporting their communication graphs, with only a small increase in detection errors.
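A hedged sketch of the detection idea using an off-the-shelf algorithm: run modularity-based community detection over the (possibly partial) communication graph and flag unusually dense communities as P2P-overlay candidates. The density threshold and minimum size are invented for illustration:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def p2p_candidates(G: nx.Graph, min_size: int = 3, min_density: float = 0.3):
    """Return communities dense enough to resemble a structured P2P overlay."""
    flagged = []
    for com in greedy_modularity_communities(G):
        sub = G.subgraph(com)
        if len(com) >= min_size and nx.density(sub) >= min_density:
            flagged.append(set(com))
    return flagged
```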
In recent years a wide range of wearable IoT healthcare applications have been developed and deployed. The rapid growth of wearable devices allows patient personal information to be transferred between devices; at the same time, patients' personal health and wellness information can be tracked and attacked. Many techniques are used for protecting patient information in medical and wearable devices. In this research, a comparative study of the complexity of cyber security architectures and their application in the IoT healthcare industry has been carried out. The objective of the study is to protect the healthcare industry from cyber attacks, focusing on IoT-based healthcare devices. The design has been implemented on the Xilinx Zynq-7000, targeting the XC7Z030-3fbg676 FPGA device.
Differential privacy is a rigorous privacy standard that has been applied to a range of data analysis tasks. To broaden the application scenarios of differential privacy when data records have dependencies, the notion of Bayesian differential privacy has recently been proposed. However, it has been unknown whether Bayesian differential privacy preserves three nice properties of differential privacy: sequential composability, parallel composability, and post-processing. In this paper, we provide an affirmative answer to this question; i.e., Bayesian differential privacy still has these properties. The idea behind sequential composability is that if we have m algorithms Y_1, Y_2, ..., Y_m, where Y_ℓ is independently ε_ℓ-Bayesian differentially private for ℓ = 1, 2, ..., m, then by feeding the result of Y_1 into Y_2, the result of Y_2 into Y_3, and so on, we finally obtain a (Σ_{ℓ=1}^{m} ε_ℓ)-Bayesian differentially private algorithm. For parallel composability, we consider the situation where a database is partitioned into m disjoint subsets, and the ℓ-th subset is input to a Bayesian differentially private algorithm Y_ℓ, for ℓ = 1, 2, ..., m. Then the parallel composition of Y_1, Y_2, ..., Y_m is (max_{ℓ=1,...,m} ε_ℓ)-Bayesian differentially private. The post-processing property means that a data analyst, without additional knowledge about the private database, cannot compute a function of the output of a Bayesian differentially private algorithm and reduce its privacy guarantee.
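The two composition rules reduce to simple budget arithmetic, illustrated below; the privacy mechanisms themselves are elided:

```python
def sequential_budget(epsilons):
    """Chaining Y1 -> Y2 -> ... -> Ym on the same database: budgets add."""
    return sum(epsilons)

def parallel_budget(epsilons):
    """Y1, ..., Ym on disjoint partitions: the worst budget dominates."""
    return max(epsilons)

eps = [0.5, 0.25, 0.25]
assert sequential_budget(eps) == 1.0   # sum of the per-algorithm epsilons
assert parallel_budget(eps) == 0.5     # max of the per-algorithm epsilons
```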
Future wars will be cyber wars, and the attacks will be a potent amalgamation of cryptography and malware used to distort information systems and their security. The explosive growth of the Internet facilitates cyber-attacks, and Web threats bring risks including the loss of confidential data and the erosion of consumer confidence in e-commerce. The new form of hijacking threat emerging in cyberspace is known as ransomware, or crypto virus. The locker bot waits for a specific triggering event to become active; it blocks the task manager, command prompt, and other cardinal executable files, with a thread checking for their existence every few milliseconds and killing them if present. Posing a serious threat to the digital generation, ransomware preys on Internet users by hijacking their systems, encrypting system utility files and folders, and then demanding a ransom in exchange for the decryption key needed to restore the encrypted resources to their original form. In this research we present an anatomical study of a ransomware family that has recently picked up considerable momentum, CTB Locker, examining the money it extorts per user and its command-and-control (C&C) server, which resides in the Internet's greatest incognito mode: the Dark Net. The CTB Locker ransomware creates a Bitcoin wallet per victim, with payment made in digital bitcoins over the Tor anonymity network. CTB Locker is among the deadliest malware the world has encountered.
With the arrival of the big data era, information privacy and security issues become even more crucial. The Mining Associations with Secrecy Konstraints (MASK) algorithm and its improved versions were proposed as data mining approaches for privacy-preserving association rules. The MASK algorithm only adopts a data perturbation strategy, which leads to a low degree of privacy preservation, and its long execution time makes it difficult to apply in practice. This paper proposes a new algorithm based on data perturbation and query restriction (DPQR) that improves the degree of privacy preservation through multi-parameter perturbation. To improve time efficiency, the calculation of the inverse matrix is simplified by dividing the matrix into blocks; meanwhile, a further optimization reduces the number of database scans using set theory. Both theoretical analyses and experimental results prove that the proposed DPQR algorithm has better performance.
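The block-inversion trick referred to above is standard Schur-complement algebra; a small NumPy sketch with an illustrative partition size, not the paper's implementation:

```python
import numpy as np

def block_inverse(M: np.ndarray, k: int) -> np.ndarray:
    """Invert M via its k x k leading block A and the Schur complement of A."""
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    A_inv = np.linalg.inv(A)
    S_inv = np.linalg.inv(D - C @ A_inv @ B)    # Schur complement inverse
    return np.block([
        [A_inv + A_inv @ B @ S_inv @ C @ A_inv, -A_inv @ B @ S_inv],
        [-S_inv @ C @ A_inv,                     S_inv],
    ])

M = np.array([[4., 1., 0., 0.], [1., 3., 0., 1.],
              [0., 0., 2., 0.], [0., 1., 0., 5.]])
assert np.allclose(block_inverse(M, 2), np.linalg.inv(M))
```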
Cloud computing is a revolution in IT that provides scalable, virtualized, on-demand resources to end users with greater flexibility, less maintenance, and reduced infrastructure cost. These resources are supervised by different management organizations and provided over the Internet using known networking protocols, standards, and formats. The underlying technologies and legacy protocols contain bugs and vulnerabilities that can open doors to intrusion. Attacks such as DDoS (Distributed Denial of Service) are among the most frequent, inflicting serious damage and affecting cloud performance. In a DDoS attack, the attacker usually uses innocent compromised computers (called zombies), taking advantage of known or unknown bugs and vulnerabilities, to send a large number of packets from these already-captured zombies to a server. This may occupy a major portion of the network bandwidth of the victim cloud infrastructure or consume much of the server's time. In this work, we designed a DDoS detection system based on the C4.5 algorithm to mitigate the DDoS threat. This algorithm, coupled with signature detection techniques, generates a decision tree to perform automatic, effective detection of DDoS flooding attacks. To validate our system, we selected other machine learning techniques and compared the obtained results.
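scikit-learn ships no C4.5 implementation, so the sketch below stands in with an entropy-criterion CART tree over invented flow features; it illustrates only the decision-tree detection step, not the paper's system:

```python
from sklearn.tree import DecisionTreeClassifier

# invented features per flow: [packets/s, source-IP entropy, SYN ratio]
X = [[120, 0.20, 0.20], [90000, 0.90, 0.95],
     [300, 0.30, 0.30], [120000, 0.95, 0.97]]
y = [0, 1, 0, 1]                             # 0 = normal, 1 = DDoS flood

clf = DecisionTreeClassifier(criterion="entropy").fit(X, y)
print(clf.predict([[80000, 0.85, 0.90]]))    # -> [1], flagged as DDoS
```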