Bibliography
Hackers create different types of malware, such as Trojans, to steal confidential user information (e.g., credit card details) with a few simple commands. Recent malware, however, is built intelligently and proliferates at an uncontrolled scale, which makes malware analysis one of the most important subjects in information security. This paper proposes an efficient dynamic malware-detection method based on API similarity. The proposed method outperforms the traditional signature-based detection method. In an experiment on 197 malware samples, the proposed method showed promising results, correctly identifying the malware.
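As a hedged illustration of the idea (not the paper's actual algorithm), the sketch below scores a sample's dynamically traced API-call set against known malware profiles using Jaccard similarity; the API names, profiles, and threshold are all invented.

```python
# Hypothetical sketch of API-similarity scoring; the paper's exact
# similarity measure and feature set are assumptions here.
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two API-call sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# API calls traced dynamically from a suspicious sample (illustrative).
sample_apis = {"CreateRemoteThread", "WriteProcessMemory", "RegSetValueEx"}

# Reference API-call profiles of known malware families (illustrative).
known_profiles = {
    "trojan_a": {"CreateRemoteThread", "WriteProcessMemory", "InternetOpen"},
    "worm_b": {"RegSetValueEx", "CopyFile", "CreateService"},
}

THRESHOLD = 0.5  # assumed decision threshold
scores = {name: jaccard(sample_apis, apis) for name, apis in known_profiles.items()}
best = max(scores, key=scores.get)
print(best, scores[best], scores[best] >= THRESHOLD)
```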
Cryptographic APIs are often vulnerable to attacks that compromise sensitive cryptographic keys. The literature contains many proposals for preventing or mitigating such attacks, but they typically require modifying the API or configuring it in a way that might break existing applications. This makes such proposals hard to adopt, especially because security APIs are often used in highly sensitive settings, such as financial and critical infrastructures, where systems are rarely modified and legacy applications are very common. In this paper we take a different approach. We propose an effective method to monitor existing cryptographic systems in order to detect, and possibly prevent, the leakage of sensitive cryptographic keys. The method collects logs from various devices and cryptographic services and is able to detect, offline, any leakage of sensitive keys, under the assumption that a key fingerprint is provided for each sensitive key. We define key security formally and we prove that the method is sound, complete and efficient. We also show that without key fingerprinting completeness is lost, i.e., some attacks cannot be detected. We discuss possible practical implementations and we develop a proof-of-concept log analysis tool for PKCS#11 that is able to detect, on a significant fragment of the API, all key-management attacks from the literature.
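A minimal sketch of the fingerprint idea, assuming keys are fingerprinted by hashing their raw bytes and that the log records data returned to callers; the paper's actual PKCS#11 tool and log format are not reproduced here.

```python
import hashlib

# Assumed fingerprinting scheme: SHA-256 over the raw key bytes.
sensitive_fingerprints = {
    hashlib.sha256(b"\x01" * 16).hexdigest(): "master_key",
}

# Each log entry: (PKCS#11 operation, hex-encoded data returned to the
# caller). Entirely illustrative.
log = [
    ("C_GenerateKey", ""),
    ("C_WrapKey", (b"\x01" * 16).hex()),  # key material leaves the token
]

for op, data in log:
    fp = hashlib.sha256(bytes.fromhex(data)).hexdigest() if data else None
    if fp in sensitive_fingerprints:
        print(f"ALERT: {sensitive_fingerprints[fp]} leaked via {op}")
```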
Biometric authentication has become extremely popular in large-scale industries, and the face biometric is widely used across applications. Handling large numbers of face images is a challenging task in biometric system authentication: it requires a large amount of secure storage in which registered user information can be kept. Maintaining centralized data centers to store this information requires high investment and maintenance costs, so there is a need to deploy cloud services. However, as there is no guarantee of security in the cloud, one needs to implement an additional layer of security before storing the facial data of all registered users. In this work a unique cloud-based biometric authentication system is developed using the Microsoft cognitive face API. Because most cloud-based biometric techniques are scalable, it is paramount to implement a security technique that can handle that scalability. The system can serve a single enterprise application or be used across an entire suite of enterprise applications. In this work the identification number, i.e., the text information associated with each biometric image, is protected with the AES algorithm. The proposed technique also works in a distributed system in order to provide wider accessibility. The system is further extended to validate a registered user against an image of their Aadhaar card. An accuracy of 96% is achieved with 100 registered users' face images and Aadhaar card images. Earlier research on biometric systems either lacked a distributed design or lacked the security measures needed to handle multiple pieces of biometric information, such as a facial image together with an Aadhaar card image.
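The abstract does not specify the AES mode or key handling; the sketch below illustrates the ID-protection step using AES-GCM from the pyca/cryptography package, with all names and values assumed.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative sketch of protecting the identification number with AES;
# AES-GCM and the key source are assumptions, not the paper's choices.
key = AESGCM.generate_key(bit_length=256)
aes = AESGCM(key)

user_id = b"ENROLL-0042"   # text ID linked to a face image (hypothetical)
nonce = os.urandom(12)     # must be unique per encryption
ciphertext = aes.encrypt(nonce, user_id, None)

# Store (nonce, ciphertext) alongside the face template in the cloud;
# decrypt on authentication.
assert aes.decrypt(nonce, ciphertext, None) == user_id
```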
Currently, mobile botnet attacks have shifted from computers to smartphones due to smartphones' functionality, ease of exploitation, and the financial incentives involved. These attacks mostly target Android because of its popularity and high usage among end users. Every day, more malicious mobile applications (apps) with botnet capability are developed to exploit end users' smartphones. Therefore, this paper presents a new mobile botnet classification based on permissions and Application Programming Interface (API) calls in the smartphone. The classification is developed using static analysis in a controlled lab environment, with the Drebin dataset as the training set. 800 apps from the Google Play Store were chosen randomly to test the proposed classification. As a result, the 16 permissions and 31 API calls most related to mobile botnets were extracted using feature selection and then classified and tested using machine learning algorithms. The experimental results show that the Random Forest algorithm achieved the highest detection accuracy, 99.4%, with the lowest false positive rate, 16.1%, compared to the other machine learning algorithms. This new classification can be used as input to mobile botnet detection in future work, especially for financially motivated attacks.
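To make the classification step concrete, here is a hedged sketch using scikit-learn's RandomForestClassifier on binary vectors over the 47 selected features (16 permissions + 31 API calls); the data is a random stand-in, not the Drebin dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Random placeholder data: 800 apps x 47 binary features
# (1 = permission/API call present in the app).
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(800, 47))
y = rng.integers(0, 2, size=800)  # 1 = botnet app (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```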
Honeypots are servers or systems built to mimic critical parts of a network, distracting attackers while logging their information to develop attack profiles. This paper discusses the design and implementation of a honeypot disguised as a REpresentational State Transfer (REST) Application Programming Interface (API). We discuss the motivation for this work, design features of the honeypot, and experimental performance results under various traffic conditions. We also present analyses of both a distributed denial of service (DDoS) attack and a cross-site scripting (XSS) malware insertion attempt against this honeypot.
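A minimal sketch of what such a decoy REST endpoint could look like, using Flask; the real honeypot's routes, responses, and logging backend are not described in the abstract, so everything here is illustrative.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Catch-all decoy route: accept common methods on any API path.
@app.route("/api/v1/<path:resource>", methods=["GET", "POST", "PUT", "DELETE"])
def decoy(resource):
    # Log attacker fingerprinting data for later profiling.
    app.logger.warning(
        "honeypot hit: %s %s from %s ua=%s",
        request.method, resource, request.remote_addr,
        request.headers.get("User-Agent", "-"),
    )
    # Return a plausible-looking response to keep the attacker engaged.
    return jsonify({"status": "ok", "resource": resource}), 200

if __name__ == "__main__":
    app.run(port=8080)
```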
Internet of Things devices (IoT-D) have limited resource capacity, but they can share resources. Hence, they are used in a variety of applications across fields including smart cities, smart energy, and healthcare. The traditional practice of medicine and healthcare is mostly heuristic-driven, and there are big gaps in our understanding of the human body, disease, and health. The coming digital revolution can turn healthcare upside down with data-driven medical science. Various healthcare companies now provide remote healthcare services, and healthcare professionals are adopting remote monitoring practices to follow patients who are either hospitalized or going about their normal lifestyle activities at remote locations. Wearable devices on the market measure different health parameters, and their corresponding applications pass the information to a server through proprietary platforms. However, these devices and applications cannot directly communicate with or share data with one another, so an API is needed to access health and wellness data from different wearable medical devices and applications. This paper proposes and demonstrates an API to connect different wearable healthcare devices and transfer a patient's personal information securely to the doctor or health provider.
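As an illustration only, here is a client-side sketch of pushing one reading through such an API over TLS; the endpoint URL, payload schema, and bearer-token authentication are assumptions, not the proposed API.

```python
import requests

# Hypothetical reading from a wearable device.
reading = {
    "patient_id": "P-1001",
    "device": "wrist-band-x",
    "heart_rate_bpm": 72,
    "spo2_pct": 98,
}

resp = requests.post(
    "https://health-gateway.example.com/v1/readings",  # hypothetical endpoint
    json=reading,
    headers={"Authorization": "Bearer <patient-consented-token>"},
    timeout=5,  # TLS protects the data in transit
)
resp.raise_for_status()
```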
Software metrics are widely used to measure the quality of software and to give an early indication of the efficiency of the development process in industry. There are many well-established frameworks for measuring the quality of source code through metrics, but limited attention has been paid to the quality of software models. In this article, we evaluate the quality of state machine models specified using the Analytical Software Design (ASD) tooling. We discuss how we applied a number of metrics to ASD models in an industrial setting and report on the results and lessons learned while collecting these metrics. Furthermore, we recommend quality limits for each metric and validate them on models developed in a number of industrial projects.
Ensuring software security is essential for developing reliable software. Software can suffer from security problems due to weaknesses in code constructs introduced during development. Our goal is to relate software security to different code constructs so that developers can become aware, very early, of coding weaknesses that might be related to a vulnerability. In this study, we chose Java nano-patterns as the code constructs: method-level patterns defined on the attributes of Java methods. The study aims to find the correlation between software vulnerability and these method-level structural code constructs. For our first case study, we identified the vulnerable methods in 39 versions of three major releases of Apache Tomcat and extracted nano-patterns from the affected methods of these releases. We also extracted nano-patterns from the non-vulnerable methods of Apache Tomcat, selecting the last version of each of the three major releases (6.0.45 for release 6, 7.0.69 for release 7, and 8.0.33 for release 8) as the non-vulnerable versions. Then, we compared the nano-pattern distributions in vulnerable versus non-vulnerable methods. In our second case study, we extracted nano-patterns from the affected methods of three vulnerable J2EE web applications: Blueblog 1.0, Personalblog 1.2.6 and Roller 0.9.9, all of which were deliberately made vulnerable for testing purposes. We found that some nano-patterns such as objCreator, staticFieldReader, typeManipulator, looper, exceptions, localWriter, and arrReader are more prevalent in affected methods, whereas others such as straightLine are more prevalent in non-affected methods. We conclude that nano-patterns can be used as an indicator of the vulnerability-proneness of code.
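A toy sketch of the distribution comparison, with invented counts rather than the Tomcat measurements.

```python
from collections import Counter

# Placeholder pattern counts per group (not the paper's data).
vulnerable = Counter({"objCreator": 40, "looper": 35, "straightLine": 5})
non_vulnerable = Counter({"objCreator": 12, "looper": 10, "straightLine": 60})

n_vuln, n_safe = 50, 100  # number of methods in each group (assumed)
for pattern in vulnerable | non_vulnerable:
    rate_v = vulnerable[pattern] / n_vuln
    rate_s = non_vulnerable[pattern] / n_safe
    # Compare prevalence across groups to flag indicative patterns.
    print(f"{pattern}: vulnerable {rate_v:.2f} vs non-vulnerable {rate_s:.2f}")
```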
Industrial Control Systems (ICS) are found in critical infrastructure such as power generation and water treatment. When security requirements are incorporated into an ICS, one needs to test the additional code and devices added to improve the prevention and detection of cyber attacks. Conducting such tests in legacy systems is a challenge due to the high availability requirement. An approach using Timed Automata (TA) is proposed to overcome this challenge. It enables assessment of the effectiveness of an attack detection method based on process invariants. The approach has been demonstrated in a case study on one stage of a 6-stage operational water treatment plant. The model constructed captures the interactions among components in the selected stage. In addition, a set of attacks, attack detection mechanisms, and security specifications were modeled using TA. These TA models were conjoined into a network and implemented in UPPAAL, and the implemented models were found effective in detecting the attacks considered. The study suggests TA as an effective tool to model an ICS and study its attack detection mechanisms, complementing doing so in a real plant, whether operational or under design.
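For intuition, here is a toy process-invariant check of the kind such detection mechanisms encode; the variable names and bounds are invented and do not come from the studied plant or its TA models.

```python
# Toy invariant for a tank/pump stage (all numbers hypothetical).
def invariant_violated(level_mm: float, pump_on: bool) -> bool:
    # Physics-derived expectation: the level must stay in range,
    # and the pump must not run while the tank is near empty.
    if not (250.0 <= level_mm <= 1100.0):
        return True
    if pump_on and level_mm < 300.0:
        return True
    return False

# A spoofed sensor reading injected by an attacker.
print(invariant_violated(level_mm=120.0, pump_on=True))  # True -> alarm
```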
The principal mission of Multi-Source Multicast (MSM) is to disseminate all messages from all sources in a network to all destinations. MSM is utilized in numerous applications, and in many of them securing the disseminated messages is critical. A common security model considers a network with an eavesdropper able to observe a subset of the network links, and seeks a code which keeps the eavesdropper ignorant of all the messages. While this is solved when all messages are located at a single source, Secure MSM (SMSM) is an open problem, and the required rates are hard to characterize in general. In this paper, we consider individual security, which promises that the eavesdropper has zero mutual information with each message individually. We completely characterize the rate region for SMSM under individual security, and show that this security level is achievable at the full capacity of the network; that is, the cut-set bound is the matching converse, as in non-secure MSM. Moreover, we show that the required field size is similar to non-secure MSM and does not have to grow due to the security constraint.
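In standard information-theoretic notation, with Z the eavesdropper's observation and M_1, ..., M_k the source messages, the two security levels compare as follows (generic definitions, not the paper's specific coding scheme):

```latex
\begin{align*}
  \text{full security:}       \quad & I(Z;\, M_1, M_2, \dots, M_k) = 0,\\
  \text{individual security:} \quad & I(Z;\, M_i) = 0 \quad \text{for each } i \in \{1, \dots, k\}.
\end{align*}
```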
We consider the problem of designing repair-efficient distributed storage systems that are information-theoretically secure against a passive eavesdropper with access to a limited number of storage nodes. We present a framework that enables the design of a broad range of secure storage codes through a joint construction of inner and outer codes. As case studies, we focus on two specific families of storage codes: (i) minimum storage regenerating (MSR) codes, and (ii) maximally recoverable (MR) codes, a class of locally repairable codes (LRCs). The main idea of the framework is to leverage existing constructions of storage codes to jointly design an outer coset code and an inner storage code. Finally, we present a construction of an outer coset code over a small field size that secures the locally repairable codes of Tamo and Barg, for the special case of an eavesdropper that can observe any subset of nodes of the maximum possible size.
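For reference, the classical coset-coding idea that underlies such outer codes, in generic notation (the paper's joint inner/outer design adds structure beyond this):

```latex
% Fix a linear code $\mathcal{C}$ with a subcode $\mathcal{C}' \subset \mathcal{C}$,
% where $G'$ generates $\mathcal{C}'$ and the rows of $G_m$ complete it to a
% generator of $\mathcal{C}$. The message $m$ selects a coset of $\mathcal{C}'$
% in $\mathcal{C}$; the stored word is a uniformly random element of that coset:
\[
  x \;=\; m\,G_{m} \;+\; r\,G', \qquad r \ \text{uniformly random},
\]
% so an eavesdropper observing sufficiently few symbols gains no
% information about $m$.
```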
Distributed storage systems and caching systems are becoming widespread, which motivates increasing interest in assessing their achievable performance in terms of reliability for legitimate users and security against malicious users. While the assessment of reliability benefits from well-established metrics and tools, assessing security is more challenging. The classical cryptographic approach aims at estimating the computational effort for an attacker to break the system and ensuring that it is far above any feasible amount; this has the limitation of depending on attack algorithms and advances in computing power. The information-theoretic approach instead exploits capacity measures to achieve unconditional security against attackers, but often does not provide practical recipes to reach such a condition. We propose a mixed cryptographic/information-theoretic approach with a twofold goal: estimating the levels of information-theoretic security and defining a practical scheme able to achieve them. In order to find optimal choices of the parameters of the proposed scheme, we exploit an effective probabilistic model checker, which allows us to overcome several limitations of more conventional methods.
Several security requirements identification methods have been proposed by researchers in up-front requirements engineering (RE). However, in open source software (OSS) projects, developers use lightweight representations and refine requirements frequently by writing comments. They also tend to discuss security aspects in comments by providing code snippets, attachments, and links to external resources. Since most security requirements identification methods in up-front RE are based on textual information retrieval techniques, they are not suitable for OSS projects or just-in-time RE. In our study, we propose a new model based on logistic regression to identify security requirements in OSS projects. We used five metrics to build security requirements identification models and tested the performance of these metrics by applying the models to three OSS projects. Our results show that four out of the five metrics achieved high performance in intra-project testing.
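A hedged sketch of the model-building step with scikit-learn; the five feature values per comment are placeholders (e.g., counts of security keywords, links, or snippets), since the paper's metric definitions are not reproduced here.

```python
from sklearn.linear_model import LogisticRegression

# Each row: five placeholder metric values for one developer comment.
X = [
    [3, 1, 0, 2, 1],   # comment with several security indicators
    [0, 0, 0, 0, 0],   # plain feature-request comment
    [2, 0, 1, 1, 0],
    [0, 1, 0, 0, 0],
]
y = [1, 0, 1, 0]       # 1 = security requirement

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[1, 1, 0, 1, 0]])[0, 1])  # P(security requirement)
```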
Detecting software security vulnerabilities and distinguishing vulnerable from non-vulnerable code is anything but simple. Most of the time, vulnerabilities remain undisclosed until they are exposed, for instance, by an attack during the software operational phase. Software metrics are widely used indicators of software quality, but the question is whether they can be used to distinguish vulnerable software units from non-vulnerable ones during development. In this paper, we perform an exploratory study on software metrics, their interdependency, and their relation to security vulnerabilities. We aim at understanding: i) the correlation between software architectural characteristics, represented in the form of software metrics, and the number of vulnerabilities; and ii) which metrics are the most informative and discriminative for identifying vulnerable units of code. To achieve these goals, we use, respectively, correlation coefficients and heuristic search techniques. Our analysis is carried out on a dataset that includes software metrics and reported security vulnerabilities, exposed by security attacks, for all functions, classes, and files of five widely used projects. Results show: i) a strong correlation between several project-level metrics and the number of vulnerabilities, and ii) the possibility of using a group of metrics, at both file and function levels, to distinguish vulnerable and non-vulnerable code with a high level of accuracy.
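For the correlation part of the analysis, a minimal sketch using Spearman's rank correlation; the abstract does not state which coefficient the authors used, and the data below is invented, not the five studied projects.

```python
from scipy.stats import spearmanr

# Placeholder values: one metric and the vulnerability count per file.
loc_per_file = [120, 450, 90, 800, 300, 1500]   # e.g., lines of code
vulns_per_file = [0, 2, 0, 4, 1, 7]             # reported vulnerabilities

rho, p = spearmanr(loc_per_file, vulns_per_file)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```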
In this paper, we present a security and privacy enhancement (SPE) framework for unmodified mobile operating systems. SPE introduces a new layer between the application and the operating system and does not require a device to be jailbroken or to run a custom operating system. We utilize an existing ontology designed for enforcing security and privacy policies on mobile devices to build a customizable policy. Based on this policy, SPE enhances the native controls that currently exist on the platform for privacy- and security-sensitive components, and it mediates access to these components so that the framework can ensure the application is truthful in its declared intent and that the user's policy is enforced. In our evaluation we verify the correctness of the framework and its computational impact on the device. Additionally, we discovered security and privacy issues in several open source applications by utilizing the SPE framework. Based on our findings, if SPE were adopted by mobile operating system producers, it would provide consumers and businesses the additional privacy and security controls they demand and allow users to be more aware of security and privacy issues with the applications on their devices.
iOS devices have steadily gained popularity among users in recent years because of their unique advantages, and they can do many things that were previously done on a desktop or laptop computer. With the increasing use of mobile devices by individuals, organizations, and government, there are many information security problems, especially around sensitive user data. As is well known, encryption algorithms play a significant role in data security. To prevent data from being intercepted and leaked during communication, in this paper we adopt the DES encryption algorithm, which is fast, simple, and suitable for encrypting large amounts of data, to encrypt the data of the iOS client, and we adopt the ECC algorithm to overcome the difficulty of exchanging keys securely before communication. In addition, we also consider the application isolation and security mechanisms of iOS, which protect the data to some extent. In short, we propose an encryption scheme that combines the strengths of DES and ECC and makes full use of the advantages of a hybrid algorithm. We then tested and evaluated the performance of the suggested cryptographic mechanism on the iOS mobile platform. The results show that the algorithm is fairly efficient in practical applications, has strong anti-attack ability, and improves the security and efficiency of data transmission.
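A hedged sketch of the hybrid pattern: an ECC key agreement establishes the symmetric key, which then encrypts the bulk data. Modern libraries no longer expose single DES, so TripleDES stands in below, and all names and parameters are illustrative, not the paper's implementation.

```python
import os
from cryptography.hazmat.primitives import hashes, padding
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# ECDH key agreement between the two parties (stand-in for the ECC
# key exchange; both key pairs generated locally for the demo).
client = ec.generate_private_key(ec.SECP256R1())
server = ec.generate_private_key(ec.SECP256R1())
shared = client.exchange(ec.ECDH(), server.public_key())

# Derive a 192-bit symmetric key from the shared secret.
key = HKDF(algorithm=hashes.SHA256(), length=24, salt=None,
           info=b"ios-hybrid-demo").derive(shared)

# Bulk encryption with TripleDES-CBC (TripleDES is deprecated and, in
# recent library versions, lives under hazmat.decrepit instead).
iv = os.urandom(8)  # 64-bit block size
padder = padding.PKCS7(64).padder()
data = padder.update(b"sensitive iOS client data") + padder.finalize()
enc = Cipher(algorithms.TripleDES(key), modes.CBC(iv)).encryptor()
ciphertext = enc.update(data) + enc.finalize()
```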
We use symbolic formal models to study the composition of public key-based protocols with public key infrastructures (PKIs). We put forth a minimal set of requirements which a PKI should satisfy and then identify several reasons why composition may fail. Our main results are positive and offer various trade-offs which align the guarantees provided by the PKI with those required by the analysis of the protocols with which it is composed. We consider not only the case of ideally distributed keys but also the case of more realistic PKIs. Our theorems are broadly applicable. Protocols are not limited to specific primitives, and compositionality asks only for minimal requirements on shared ones. Secure composition holds with respect to arbitrary trace properties that can be specified within a reasonably powerful logic; for instance, secrecy and various forms of authentication can be expressed in this logic. Finally, our results alleviate the common yet demanding assumption that protocols are fully tagged.
We address security and trust in the context of a commercial IP camera. We take a hands-on approach, as we not only define abstract vulnerabilities, but we actually implement the attacks on a real camera. We then discuss the nature of the attacks and the root cause; we propose a formal model of trust that can be used to address the vulnerabilities by explicitly constraining compositionality for trust relationships.
We present a framework for learning to describe fine-grained visual differences between instances using attribute phrases. Attribute phrases capture distinguishing aspects of an object (e.g., “propeller on the nose” or “door near the wing” for airplanes) in a compositional manner. Instances within a category can be described by a set of these phrases, and collectively the phrases span the space of semantic attributes for the category. We collect a large dataset of such phrases by asking annotators to describe several visual differences between pairs of instances within a category. We then learn to describe and ground these phrases to images in the context of a reference game between a speaker and a listener: the goal of the speaker is to describe attributes of an image that allow the listener to correctly identify it within a pair. Data collected in this pairwise manner improves the ability of the speaker to generate, and of the listener to interpret, visual descriptions. Moreover, due to the compositionality of attribute phrases, trained listeners can interpret descriptions not seen during training for image retrieval, and speakers can generate attribute-based explanations of differences between previously unseen categories. We also show that embedding an image into the semantic space of attribute phrases derived from listeners offers a 20% improvement in accuracy over existing attribute-based representations on the FGVC-aircraft dataset.
Hierarchical approaches to representation learning can encode relevant features at multiple scales or levels of abstraction. However, most hierarchical approaches exploit only the last level in the hierarchy, or provide a multiscale representation that holds a significant amount of redundancy. We argue that removing redundancy across the multiple levels of abstraction is important for an efficient representation of compositionality in object-based representations. Viewing feature learning as a data compression operation, we propose a new greedy inference algorithm for hierarchical sparse coding: convolutional matching pursuit with an L0-norm constraint is used to encode the input signal into compact and non-redundant codes distributed across the levels of the hierarchy. Simple and complex synthetic datasets of temporal signals were created to evaluate the encoding efficiency and to compare with the theoretical lower bounds on the information rate for those signals. Empirical evidence shows that the algorithm is able to infer near-optimal codes for simple signals, but it fails for complex signals with strong overlap between objects. We explain the inefficiency of convolutional matching pursuit in such cases. This brings new insights into the NP-hard optimization problem of using an L0-norm constraint to infer optimally compact and distributed object-based representations.
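A minimal, non-convolutional matching pursuit with an L0 budget conveys the greedy inference idea; the dictionary and signal below are random stand-ins, and the paper's convolutional variant adds shift-invariant atoms on top of this.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary atoms
x = D[:, [3, 57]] @ np.array([1.5, -0.8])   # signal built from 2 atoms

residual, code = x.copy(), np.zeros(256)
for _ in range(2):                          # L0 constraint: at most 2 atoms
    corr = D.T @ residual                   # correlation with every atom
    k = np.argmax(np.abs(corr))             # greedily pick the best match
    code[k] += corr[k]
    residual -= corr[k] * D[:, k]           # remove its contribution

print(np.nonzero(code)[0], np.linalg.norm(residual))
```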