Biblio
Hardware implementations of cryptographic algorithms may leak information through numerous side channels, which can be used to reveal the secret cryptographic keys and therefore compromise the security of the algorithm. Power Analysis Attacks (PAAs) [1] exploit the information leakage from the device's power consumption (typically measured on the supply and/or ground pins). Digital circuits consume dynamic switching energy when data propagate through the logic in each new calculation (e.g., a new clock cycle). The average power dissipation of a design can be expressed by: P_tot(t) = α · (P_d(t) + P_pvt(t)) (1), where α is the activity factor (the probability that the gate will switch) and depends on the probability distribution of the inputs to the combinatorial logic. This induces a linear relationship between the power and the processed data [2]. P_d is the deterministic power dissipated by the switching of the gate, including any parasitic and intrinsic capacitances, and hence can be evaluated prior to manufacturing. P_pvt is the change in expected power consumption due to nondeterministic parameters such as process variations, mismatch, temperature, etc. In this manuscript, we describe the design of logic gates that induce a data-independent (constant) α and P_d.
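As a rough, hedged illustration of the data dependence described above (a toy Hamming-distance leakage model, not the manuscript's gate design): dynamic power tracks the number of bits toggling between consecutive states, which is exactly the linear data dependence that constant-α logic is meant to remove.

```python
# Toy Hamming-distance leakage model (illustration only, not the
# manuscript's gate design): per-sample power is the deterministic
# toggle power P_d scaled by the number of switching bits, plus a
# random term standing in for the P_pvt variation.
import random

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def trace_sample(prev: int, curr: int, p_per_toggle: float = 1.0,
                 sigma_pvt: float = 0.1) -> float:
    return p_per_toggle * hamming_distance(prev, curr) + random.gauss(0.0, sigma_pvt)

# Data-dependent leakage: different operands give visibly different
# samples, whereas constant-alpha logic would make them indistinguishable.
for operand in (0x00, 0x0F, 0xFF):
    print(f"0x00 -> {operand:#04x}: {trace_sample(0x00, operand):.2f}")
```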
Detecting software security vulnerabilities and distinguishing vulnerable from non-vulnerable code is anything but simple. Most of the time, vulnerabilities remain undisclosed until they are exposed, for instance, by an attack during the software operational phase. Software metrics are widely used indicators of software quality, but the question is whether they can be used to distinguish vulnerable software units from non-vulnerable ones during development. In this paper, we perform an exploratory study on software metrics, their interdependency, and their relation with security vulnerabilities. We aim to understand: i) the correlation between software architectural characteristics, represented in the form of software metrics, and the number of vulnerabilities; and ii) which metrics are the most informative and discriminative for identifying vulnerable units of code. To achieve these goals, we use, respectively, correlation coefficients and heuristic search techniques. Our analysis is carried out on a dataset that includes software metrics and reported security vulnerabilities, exposed by security attacks, for all functions, classes, and files of five widely used projects. Results show: i) a strong correlation between several project-level metrics and the number of vulnerabilities, and ii) the possibility of using a group of metrics, at both file and function levels, to distinguish vulnerable and non-vulnerable code with a high level of accuracy.
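A minimal sketch of step i) of such a study, assuming synthetic metric values (a real run would load the extracted metrics and reported-vulnerability counts per function/class/file):

```python
# Sketch of correlating per-file software metrics with vulnerability
# counts; the metric values below are synthetic placeholders.
from scipy.stats import pearsonr, spearmanr

loc        = [120, 450, 80, 900, 300, 650, 50, 720]   # lines of code
complexity = [8, 25, 4, 60, 15, 40, 3, 48]            # cyclomatic complexity
vulns      = [0, 2, 0, 7, 1, 3, 0, 5]                 # reported vulnerabilities

for name, metric in (("LOC", loc), ("Complexity", complexity)):
    r, _ = pearsonr(metric, vulns)
    rho, _ = spearmanr(metric, vulns)
    print(f"{name:10s} Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
```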
This paper presents a true random number generator that exploits the subthreshold properties of the jitter of events propagating in a self-timed ring and in an inverter-based ring oscillator. The design was implemented in a 180 nm CMOS flash process. The devices provide high-quality random bit sequences that pass the FIPS 140-2 and NIST SP 800-22 statistical tests, which guarantees uniform distribution and unpredictability thanks to the physics-based entropy source.
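A toy, dimension-free model of this kind of entropy source (hypothetical parameters, not the implemented circuit): a ring oscillator accumulates phase with Gaussian period jitter, and sampling its output level at a fixed, unrelated rate converts that jitter into random bits.

```python
# Toy jitter-based TRNG model (not the paper's circuit): phase advances
# by a nominal number of RO cycles per sample interval plus Gaussian
# jitter; the parity of the integer phase is the sampled output level.
import random

def ro_trng_bits(n: int, cycles_per_sample: float = 1000.3,
                 jitter_sigma: float = 0.5, seed: int = 3) -> list[int]:
    """jitter_sigma is the accumulated phase noise (in RO cycles) per
    sample interval; hypothetical values chosen for illustration."""
    rng = random.Random(seed)
    phase, bits = 0.0, []
    for _ in range(n):
        phase += cycles_per_sample + rng.gauss(0.0, jitter_sigma)
        bits.append(int(phase) & 1)
    return bits

bits = ro_trng_bits(10_000)
print(sum(bits) / len(bits))   # empirical bias, should be near 0.5
```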
The use of digital tokens - which certify the bearer's rights to some kind of product or service - is quite common nowadays because of their convenience, ease of use, and cost-effectiveness. Many such digital tokens, however, are produced with software alone, making them vulnerable to forgery, including alteration and duplication. To better safeguard both the token owner's rights and the service provider's accountability, digital tokens should be as tamper-resistant as possible so that they can withstand physical attacks as well. In this paper, we present a rights management system that leverages tamper-resistant digital tokens created by hardware-software collaboration in our eTRON architecture. The system covers the complete life cycle of a digital token, from generation to storage and redemption. Additionally, it provides a secure mechanism for transferring rights in a peer-to-peer manner over the Internet. The proposed system specifies protocols for permissible manipulations of digital tokens, and subsequently provides a set of APIs for seamless application development. Access privileges to the tokens are strictly defined, and state-of-the-art asymmetric cryptography is used to ensure their confidentiality. Apart from the digital tokens being physically tamper-resistant, the protocols involved in the system are proven to be secure against attacks. Furthermore, an authentication mechanism is implemented that invariably precedes any operation involving the digital token in question. The proposed system presents clear security gains compared to existing systems that do not take tamper-resistance into account and to schemes that use symmetric-key cryptography.
We propose a method of designing embedded clock-cycle-sensitive Hardware Trojans (HTs) to manipulate a finite state machine (FSM). By using the pipeline to choose and customize the critical path, the Trojans can facilitate a series of attacks and need no redundant circuits. One cannot detect any malicious architecture through logic analysis because the proposed circuitry is part of the FSM. Furthermore, this kind of HT alerts designers of trusted systems to the importance of the clock tree structure: attackers may use a modified clock to bypass certain security models or change the circuit's behavior.
Blockchain is an integrated technology for keeping records and processing transactions in a decentralized manner. It is regarded as the foundation of a future decentralized ecosystem and attracts much attention. However, the maturity of this technology, including the security of the fundamental protocol and its applications, is not sufficient, so we need more research on the security evaluation and verification of Blockchain technology. This tutorial explains the current status of the security of this technology, its security layers, and the possibility of applying formal analysis and verification.
Chaotic systems and cryptography share some common features, and due to this close relationship researchers have tried to combine chaotic systems with cryptosystems. In this study, a security analysis is performed on an encryption algorithm that aims to encrypt data with ECG signals and chaotic functions, using the Logistic map for text encryption and the Henon map for image encryption. In the proposed algorithm, text and image data can be encrypted at the same time. In addition, ECG signals are used to determine the initial conditions and control parameters of the chaotic functions in order to personalize the encryption algorithm. In this cryptanalysis study, the inadequacy of the scheme and the weaknesses of the proposed method are determined: the encryption algorithm does not have sufficient capacity to provide the necessary security level of the key space, and the secret key can be obtained with only one plaintext/ciphertext pair in a chosen-plaintext attack.
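As a hedged illustration of the reported weakness (a generic chaotic stream cipher, not the analyzed algorithm itself): if the Logistic map merely drives an XOR keystream, a single known plaintext/ciphertext pair reveals the keystream and, with it, any other message under the same key. The parameters x0 and r below are hypothetical stand-ins for the ECG-derived secrets.

```python
# Toy logistic-map keystream cipher and the one-pair chosen-plaintext
# attack the abstract describes (illustration, not the analyzed scheme).

def logistic_keystream(x0: float, r: float, n: int) -> bytes:
    """Iterate x_{k+1} = r * x_k * (1 - x_k) and quantize to bytes."""
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def encrypt(plaintext: bytes, x0: float, r: float) -> bytes:
    ks = logistic_keystream(x0, r, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

# One known pair reveals the keystream, which decrypts any other
# message encrypted under the same key.
secret_x0, secret_r = 0.4217, 3.99          # hypothetical ECG-derived key
known_pt = b"chosen plaintext"
known_ct = encrypt(known_pt, secret_x0, secret_r)
keystream = bytes(p ^ c for p, c in zip(known_pt, known_ct))

victim_ct = encrypt(b"attack at dawn!!", secret_x0, secret_r)
recovered = bytes(c ^ k for c, k in zip(victim_ct, keystream))
print(recovered)  # b'attack at dawn!!'
```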
The symmetric block ciphers, which represent a core element for building cryptographic communication systems and protocols, are used to provide message confidentiality, authentication, and integrity. Various limitations in hardware and software resources, especially in terminal devices used in mobile communications, affect the selection of an appropriate cryptosystem and its parameters. In this paper, an implementation of three symmetric ciphers (DES, 3DES, AES) used in different operating modes is analyzed on the Android platform. The cryptosystems' performance is analyzed in different scenarios using several variable parameters: cipher, key size, plaintext size, and number of threads. The influence of the parallelization supported by multi-core CPUs on cryptosystem performance is also analyzed. Finally, some conclusions about parameter selection for optimal efficiency are given.
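A minimal sketch of such a cipher/key-size/plaintext-size sweep (the paper measures Java implementations on Android; this Python version, using the PyCryptodome package, only illustrates the measurement loop):

```python
# Benchmark sketch: time CBC-mode encryption for DES, 3DES, and AES
# over two plaintext sizes, creating a fresh cipher object per run.
import os, time
from Crypto.Cipher import AES, DES, DES3

def bench(make_cipher, block: int, size: int, runs: int = 20) -> float:
    data = os.urandom(size - size % block)     # CBC needs whole blocks
    t0 = time.perf_counter()
    for _ in range(runs):
        make_cipher().encrypt(data)
    return (time.perf_counter() - t0) / runs

aes_key, des_key, des3_key = os.urandom(16), os.urandom(8), os.urandom(24)
iv8, iv16 = os.urandom(8), os.urandom(16)

for name, factory, block in [
    ("DES",  lambda: DES.new(des_key, DES.MODE_CBC, iv8),    8),
    ("3DES", lambda: DES3.new(des3_key, DES3.MODE_CBC, iv8), 8),
    ("AES",  lambda: AES.new(aes_key, AES.MODE_CBC, iv16),  16),
]:
    for size in (1 << 10, 1 << 20):            # 1 KiB and 1 MiB plaintexts
        print(f"{name:4s} {size:>8d} B: {bench(factory, block, size):.6f} s")
```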
This paper investigates the suitability of employing various measurable features derived from wearable devices (the Apple Watch) for the generation of unique, user-related authentication and encryption keys. This technique is termed ICMetrics. The ICMetrics technology requires identifying the features of an environment that are suitable for key generation and most useful for online services. This paper presents an evaluation of the feasibility of identifying a unique user based on a desirable feature set and activity data collected over the short and long term, and explores how the number of samples factored into the ICMetrics system affects the uniqueness of the key.
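A toy sketch of the underlying idea, with hypothetical feature names and thresholds (not the paper's algorithm): quantize each measured feature so that normal measurement noise lands in the same bucket, then hash the buckets into a key; averaging more samples stabilizes the buckets and hence the key.

```python
# ICMetrics-style key derivation sketch (hypothetical features, not the
# paper's method): per-feature means are quantized and hashed to a key.
import hashlib
import statistics

samples = {                       # repeated wearable measurements per feature
    "resting_heart_rate": [61.8, 62.3, 61.9, 62.1],
    "step_cadence":       [103.2, 104.0, 103.5, 103.8],
    "wrist_temperature":  [33.4, 33.6, 33.5, 33.5],
}

def derive_key(samples: dict[str, list[float]], step: float = 1.0) -> bytes:
    """Quantize each feature's mean so small day-to-day noise maps to
    the same bucket, then hash the buckets into a 256-bit key."""
    buckets = [
        f"{name}:{round(statistics.mean(vals) / step)}"
        for name, vals in sorted(samples.items())
    ]
    return hashlib.sha256("|".join(buckets).encode()).digest()

print(derive_key(samples).hex())
```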
Traffic classification, i.e., associating network traffic with the application that generated it, is an important tool for several tasks spanning different fields (security, management, traffic engineering, R&D). This process is challenged by applications that preserve Internet users' privacy by encrypting the communication content, and even more by anonymity tools, which additionally hide the source, the destination, and the nature of the communication. In this paper, leveraging a public dataset released in 2017, we provide (repeatable) classification results with the aim of investigating to what degree the specific anonymity tool (and the traffic it hides) can be identified, when compared to the traffic of the other considered anonymity tools, using machine learning approaches based on statistical features alone. To this end, four classifiers are trained and tested on the dataset: (i) Naïve Bayes, (ii) Bayesian Network, (iii) C4.5, and (iv) Random Forest. Results show that the three considered anonymity networks (Tor, I2P, JonDonym) can be easily distinguished (with an accuracy of 99.99%), and even the specific application generating the traffic can be identified (with an accuracy of 98.00%).
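A compact sketch of this classification setup with scikit-learn, on synthetic features and labels (GaussianNB stands in for Naïve Bayes, a decision tree with the entropy criterion approximates C4.5, and a Bayesian Network learner is not part of scikit-learn):

```python
# Train/test three stand-in classifiers on synthetic per-flow features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))          # per-flow statistical features
y = rng.integers(0, 3, size=3000)        # 0=Tor, 1=I2P, 2=JonDonym

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for clf in (GaussianNB(),
            DecisionTreeClassifier(criterion="entropy", random_state=0),
            RandomForestClassifier(n_estimators=100, random_state=0)):
    score = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{type(clf).__name__}: accuracy {score:.3f}")
```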
In this paper we investigate whether and how hardware-based roots of trust, namely Trusted Platform Modules (TPMs), can improve the security of the communication protocol OPC UA (Open Platform Communications Unified Architecture) under reasonable assumptions, i.e., the Dolev-Yao attacker model. Our analysis shows that TPMs may serve for generating (via their RNG) and securely storing cryptographic keys, as crypto-coprocessors for weak systems, and for remote attestation. We propose to include these TPM functions in OPC UA via so-called ConformanceUnits, which can serve as building blocks of the profiles used by clients and servers to negotiate the parameters of a session. Finally, we present first results regarding the performance of a client-server communication that includes an additional OPC UA server providing remote attestation of other OPC UA servers.
The scale of counterfeiting activity is increasing day by day, and such activity is encountered especially in the electronics market. In this paper, a countermeasure against the counterfeiting of intellectual property (IP) on Field-Programmable Gate Arrays (FPGAs) is proposed. FPGA vendors provide bitstream encryption as an IP security solution, for example in battery-backed or non-volatile FPGAs. However, these solutions are secure only as long as they can keep the decryption key away from third parties; key storage and key transfer over insecure channels pose risks for them. In this work, physical unclonable functions (PUFs) are used for key generation. Generating the key from a circuit inside the device solves the key-transfer problem. The proposed system goes through different phases as it operates; therefore, the partial reconfiguration feature of FPGAs is essential for its feasibility.
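A toy model of the key-generation idea (hypothetical, not the proposed design): each PUF cell has a fixed manufacturing bias plus readout noise, and majority voting over repeated readouts yields a stable, device-unique key that never has to leave the chip.

```python
# Toy PUF sketch: cell power-up values are a fixed random bias plus
# noise; majority voting stabilizes them into a device-unique key.
import hashlib
import random

random.seed(7)
N_CELLS, N_READS = 256, 15
bias = [random.random() for _ in range(N_CELLS)]   # manufacturing variation

def read_puf(noise: float = 0.05) -> list[int]:
    """One noisy power-up readout of all cells."""
    return [int(b + random.uniform(-noise, noise) > 0.5) for b in bias]

def majority_key() -> bytes:
    reads = [read_puf() for _ in range(N_READS)]
    bits = "".join(
        "1" if sum(r[i] for r in reads) > N_READS // 2 else "0"
        for i in range(N_CELLS)
    )
    return hashlib.sha256(bits.encode()).digest()  # condition to 256-bit key

print(majority_key().hex())
```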
One important goal of black-box complexity theory is the development of complexity models that allow deriving meaningful lower bounds for whole classes of randomized search heuristics. Complementing classical runtime analysis, black-box models help us understand how algorithmic choices such as the population size, the variation operators, or the selection rules influence the optimization time. One example of such a result is the Ω(n log n) lower bound for unary unbiased algorithms on functions with a unique global optimum [Lehre/Witt, GECCO 2010], which tells us that higher-arity operators or biased sampling strategies are needed to beat this bound. For lack of analysis techniques, almost no non-trivial bounds are known for other restricted models. Proving such bounds therefore remains one of the main challenges in black-box complexity theory. With this paper we contribute to the technical toolbox for lower bound computations by proposing a new type of information-theoretic argument. We regard the permutation- and bit-invariant version of LeadingOnes and prove that its (1+1) elitist black-box complexity is Ω(n²), a bound that is matched by (1+1)-type evolutionary algorithms. The (1+1) elitist complexity of LeadingOnes is thus considerably larger than its unrestricted one, which is known to be of order n log log n [Afshani et al., 2013].
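For concreteness, a minimal (1+1)-type elitist evolutionary algorithm on LeadingOnes, the kind of algorithm that matches the Ω(n²) bound (a generic textbook EA, not a contribution of the paper):

```python
# (1+1) elitist EA on LeadingOnes: standard bit mutation with rate 1/n,
# keeping the offspring only if it is at least as good as the parent.
import random

def leading_ones(x: list[int]) -> int:
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def one_plus_one_ea(n: int, seed: int = 0) -> int:
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    evals = 0
    while leading_ones(x) < n:
        y = [b ^ (rng.random() < 1 / n) for b in x]  # flip each bit w.p. 1/n
        evals += 1
        if leading_ones(y) >= leading_ones(x):       # elitist selection
            x = y
    return evals

print(one_plus_one_ea(64))   # typically on the order of n² evaluations
```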
Ransomware has become a growing threat since 2012, and the situation continues to worsen. The lack of security mechanisms and of security awareness is pushing systems into the mire of ransomware attacks. In this paper, a new framework called 2entFOX is proposed in order to detect highly survivable ransomware (HSR). To our knowledge this framework can be considered one of the first frameworks for ransomware detection, given the little publicly available research in this field. We analyzed the behaviour of Windows ransomware and tried to find appropriate features that are particularly useful in detecting this type of malware with high detection accuracy and a low false positive rate. After extensive experimental analysis we extracted 20 effective features which, thanks to two highly efficient ones, form an appropriate set for HSR detection. After proposing an architecture based on a Bayesian belief network, a final evaluation is performed on known and unknown ransomware samples under six different scenarios. The results of these evaluations show the high accuracy of 2entFOX in detecting HSRs.
Security in Mobile Ad Hoc Networks is still an ongoing research topic in the scientific community, and it is difficult to provide an overall security solution. In this paper we assess the feasibility of distributed firewall solutions in Mobile Ad Hoc Networks. Attention is also given to other security solutions for ad hoc networks. We propose a security architecture that secures the network on several layers and is the most secure solution among the analyzed approaches. For this purpose we use a distributed public key infrastructure, a distributed firewall, and an intrusion detection system. Our architecture uses both symmetric and asymmetric cryptography, and in this paper we present performance measurements and a security analysis of our solution.
Suppose that you have n truly random bits x = (x_1, …, x_n) and you wish to use them to generate m ≫ n pseudorandom bits y = (y_1, …, y_m) using a local mapping, i.e., each y_i should depend on at most d = O(1) bits of x. In the polynomial regime of m = n^s, s > 1, the only known solution, originating from (Goldreich, ECCC 2000), is based on Random Local Functions: compute y_i by applying some fixed (public) d-ary predicate P to a random (public) tuple of distinct inputs (x_{i_1}, …, x_{i_d}). Our goal in this paper is to understand, for any value of s, how the pseudorandomness of the resulting sequence depends on the choice of the underlying predicate. We derive the following results: (1) We show that pseudorandomness against F_2-linear adversaries (i.e., the distribution y has low bias) is achieved if the predicate is (a) k = Ω(s)-resilient, i.e., uncorrelated with any k-subset of its inputs, and (b) has algebraic degree Ω(s) even after fixing Ω(s) of its inputs. We also show that these requirements are necessary, and so they form a tight characterization (up to constants) of security against linear attacks. Our positive result shows that a d-local low-bias generator can have output length n^{Ω(d)}, answering an open question of Mossel, Shpilka and Trevisan (FOCS, 2003). Our negative result shows that a candidate pseudorandom generator proposed by the first author (computational complexity, 2015) and by O'Donnell and Witmer (CCC 2014) is insecure. We use similar techniques to refute a conjecture of Feldman, Perkins and Vempala (STOC 2015) regarding the hardness of planted constraint satisfaction problems. (2) Motivated by the cryptanalysis literature, we consider security against algebraic attacks. We provide the first theoretical treatment of such attacks by formalizing a general notion of algebraic inversion and distinguishing attacks based on the Polynomial Calculus proof system. We show that algebraic attacks succeed if and only if there exists a non-zero polynomial Q of degree e = O(s) whose roots cover the roots of P or cover the roots of P's complement. As a corollary, we obtain the first example of a predicate P for which the generated sequence y passes all linear tests but fails to pass some polynomial-time computable test, answering an open question posed by the first author (Question 4.9, computational complexity 2015).
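A minimal sketch of the Random Local Functions construction (the d-ary predicate below, an XOR of two bits with a majority of the rest, is only an illustration; the paper's results characterize which predicates P make the output pseudorandom):

```python
# Goldreich-style local generator: stretch n truly random bits to
# m = n**s outputs by applying a fixed public d-ary predicate P to
# random public d-tuples of distinct input bits.
import random

def goldreich_prg(x: list[int], m: int, d: int = 5, seed: int = 1) -> list[int]:
    rng = random.Random(seed)            # public randomness: the tuples
    n = len(x)

    def P(bits: list[int]) -> int:       # fixed public d-ary predicate
        xor_part = bits[0] ^ bits[1]
        maj_part = int(sum(bits[2:]) * 2 > len(bits[2:]))
        return xor_part ^ maj_part

    y = []
    for _ in range(m):
        idx = rng.sample(range(n), d)    # random tuple of distinct inputs
        y.append(P([x[i] for i in idx]))
    return y

n, s = 64, 2
x = [random.getrandbits(1) for _ in range(n)]   # n truly random bits
y = goldreich_prg(x, m=n ** s)                  # m = n^s output bits
print(len(y), sum(y) / len(y))                  # length and empirical bias
```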
Finding and proving lower bounds on black-box complexities is one of the hardest problems in the theory of randomized search heuristics. Until recently, there were no general ways of doing this, except for information-theoretic arguments similar to the one of Droste, Jansen and Wegener. In a recent paper by Buzdalov, Kever and Doerr, a theorem is proven which may yield tighter bounds on unrestricted black-box complexity using certain problem-specific information. To use this theorem, one should split the search process into a finite number of states, describe the transitions between states, and for each state specify (and prove) the maximum number of different answers to any query. We augment these state constraints by one more kind of constraint on states, namely, the maximum number of different currently possible optima. An algorithm is presented for computing lower bounds based on these constraints. We also empirically show improved lower bounds on the black-box complexity of OneMax and Mastermind.
Today, cloud vendors host third-party black-box services whose developers usually provide only textual descriptions or purely syntactical interface specifications. Cloud vendors that give substantial support to other third-party developers for integrating hosted services into new software solutions would have a unique selling feature over their competitors. However, to reliably determine whether a service is reusable, comprehensive service specifications are needed. Characteristic for comprehensive, in contrast to syntactical, specifications are the formalization of ontological and behavioral semantics, homogeneity according to a global ontology, and a service grounding that links the abstract service description to its technical realization. Homogeneous, semantic specifications make it possible to reliably identify reusable services, whereas the service grounding is needed for the technical service integration. In general, comprehensive specifications are not available and have to be derived. Existing automated approaches are restricted to certain characteristics of comprehensiveness. In my PhD, I consider an automated approach to derive fully fledged comprehensive specifications for black-box services. Ontological semantics are derived from syntactical interface specifications. Behavioral semantics are mined from the call logs that cloud vendors create to monitor the hosted services. The specifications are harmonized over a global ontology. The service grounding is established using traceability information. The approach enables third-party developers to compose services into complex systems and creates new sales channels for cloud and service providers.
Synchronous replication is critical for today's enterprise IT organization. It is mandated by regulation in several countries for some types of organizations, including banks and insurance companies. The technology has been available for a long time, but due to the speed of light and maximum latency limitations, it is usually limited to a distance of 50-100 miles. Flight data recorders, also known as black boxes, have long been used to record the last actions that happened in an airplane at the time of a disaster. We present an integration between an Enterprise Data Recorder and an asynchronous replication mechanism, which makes it possible to break the functional limits that the speed of light imposes on synchronous replication.