Bibliography
Source camera attribution of digital images has been a hot research topic in the digital forensics literature. However, thermal cameras and the radiometric data they generate have remained a nascent topic, as such devices are expensive and tailored to specific use cases, not adopted by the masses. This has changed dramatically with the low-cost, pluggable thermal-camera add-ons for smartphones and similar low-cost pocket-size thermal cameras recently introduced to consumers, which brought thermal imaging to the masses. In this paper, we investigate the use of an established source device attribution method on radiometric data produced with a consumer-level, low-cost handheld thermal camera. The results we present are promising and show that it is quite possible to attribute thermal images to their source camera.
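The abstract does not name the attribution method; the established choice in this literature is PRNU (photo-response non-uniformity) fingerprinting, so the following minimal sketch assumes a PRNU-style pipeline: estimate per-image sensor noise residuals, average them into a camera fingerprint, and attribute a query image by normalized correlation. All names are illustrative, and the Gaussian filter stands in for the wavelet denoiser used in practice.

    # Minimal PRNU-style source attribution sketch (hypothetical pipeline;
    # the abstract's "established method" is assumed to be PRNU here).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(img):
        """Residual = image minus a denoised version of itself; the Gaussian
        filter stands in for the wavelet denoiser used in practice."""
        img = np.asarray(img, dtype=float)
        return img - gaussian_filter(img, sigma=1.0)

    def camera_fingerprint(images):
        """Average the residuals of many images taken with one camera."""
        return np.mean([noise_residual(i) for i in images], axis=0)

    def ncc(a, b):
        """Normalized cross-correlation between a residual and a fingerprint."""
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def attribute(query_img, fingerprints):
        """Attribute the query to the camera whose fingerprint correlates best."""
        r = noise_residual(query_img)
        return max(fingerprints, key=lambda cam: ncc(r, fingerprints[cam]))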
Today, there are several applications that allow us to share images over the internet. All these images must be stored in a secure manner and should be accessible only to the intended recipients. Hence it is of utmost importance to develop efficient and fast algorithms for the encryption of images. This paper uses chaotic generators to produce random sequences that can serve as keys for image encryption. These sequences appear random and exhibit good statistical properties, which makes them resistant to analysis and correlation attacks. However, they have fixed cycle lengths, which restricts the number of sequences that can be used as keys. This paper therefore utilises a neural network as a source of perturbation in a chaotic generator and uses its output to encrypt an image. The robustness of the encryption algorithm is verified using NPCR, UACI, correlation coefficient analysis and information entropy analysis.
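As a hedged illustration of this class of schemes (the paper's neural-network perturbation is not reproduced), the sketch below derives a keystream from a logistic map, encrypts with an XOR-plus-chaining pass, and computes the NPCR and UACI metrics named above: NPCR is the percentage of pixels that differ between two ciphertexts whose plaintexts differ in one pixel, and UACI is their mean absolute intensity difference.

    # Chaotic-keystream image encryption sketch with NPCR/UACI checks
    # (illustrative only; the paper's neural-network perturbation is omitted).
    import numpy as np

    def logistic_keystream(n, x0=0.61, r=3.99):
        """n key bytes from the logistic map x <- r*x*(1-x)."""
        x, out = x0, np.empty(n, dtype=np.uint8)
        for i in range(n):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) % 256
        return out

    def encrypt(img, key):
        """Keystream XOR combined with a chaining pass, so that a one-pixel
        plaintext change diffuses through the whole ciphertext."""
        p = img.ravel().astype(int)
        c, prev = np.empty(p.size, dtype=np.uint8), 0
        for i in range(p.size):
            prev = key[i] ^ ((p[i] + prev) % 256)
            c[i] = prev
        return c.reshape(img.shape)

    def npcr_uaci(c1, c2):
        """NPCR: percent of differing pixels; UACI: mean absolute intensity
        change (percent of full scale) between the two ciphertexts."""
        npcr = 100.0 * (c1 != c2).mean()
        uaci = 100.0 * (np.abs(c1.astype(int) - c2.astype(int)) / 255.0).mean()
        return npcr, uaci

    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    img2 = img.copy(); img2[0, 0] ^= 1        # flip one plaintext pixel
    key = logistic_keystream(img.size)
    # High NPCR and UACI values indicate good diffusion.
    print(npcr_uaci(encrypt(img, key), encrypt(img2, key)))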
Cloud services exhibit self-organizing, on-demand computing characteristics and are prone to failures or loss of accountability in wide-scale deployment. For predicting such failures or assigning accountability, modeling the cloud service structure becomes an unavoidable priority. This paper reviews the modeling of cloud service network architecture. Firstly, the research status of cloud service structure modeling is analyzed and reviewed. Secondly, the classification of time-varying structures of cloud services and of their modeling methods is summarized as a whole. Thirdly, the existing problems are pointed out. Finally, a research approach for time-varying structure modeling aimed at cloud service accountability is proposed.
Signature-based Intrusion Detection Systems (IDS) are a key component of the cybersecurity defense strategy for any monitored network. To improve the efficiency of the intrusion detection system and the corresponding mitigation actions, it is important to address the problem of false alarms. In this paper, we present a comparative analysis of two approaches that combine false alarm minimization and alarm correlation techniques. The output of this analysis provides the elements for a parallelizable strategy designed to achieve better results in terms of precision, recall and alarm load reduction in the prioritization of alarms. We use Prelude SIEM as the event normalizer to process and correlate security events from heterogeneous sensors. The alarms are verified using dynamic network context information collected from vulnerability analysis, and they are prioritized using the HP ArcSight priority formula. The results show an important reduction in the volume of alerts, together with a high precision in the identification of false alarms.
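The details of the HP ArcSight priority formula are not given in the abstract; as an illustrative stand-in, a prioritization score of this kind can be sketched as a weighted combination of normalized factors such as event severity, contextual relevance (e.g., whether the targeted host is actually vulnerable) and asset criticality.

    # Illustrative alarm prioritization score (a stand-in, not the exact
    # HP ArcSight formula): weighted sum of normalized context factors.
    def priority(severity, relevance, asset_criticality, w=(0.4, 0.3, 0.3)):
        """All factors in [0, 1]; returns a priority on a 0-10 scale.
        relevance would come from the vulnerability-analysis context,
        e.g. 1.0 when the target is confirmed vulnerable, else lower."""
        assert abs(sum(w) - 1.0) < 1e-9
        return 10.0 * (w[0] * severity + w[1] * relevance
                       + w[2] * asset_criticality)

    # An alert against a confirmed-vulnerable, critical asset outranks the
    # same signature fired against a patched host:
    print(priority(0.8, 1.0, 0.9))   # high priority
    print(priority(0.8, 0.1, 0.9))   # demoted by context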
An analysis of applied tasks and methods of entropy signal processing is carried out in this article. Theoretical comments are given on specific schemes of special processors for determining probabilistic and correlation activity. The prospects of applying C. Shannon's probabilistic entropy in cipher signal receivers are reviewed. Examples of entropy-manipulated signals and the system characteristics of the proposed special processors are given.
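For reference, the Shannon entropy underlying such processing is H(X) = -sum_i p_i log2 p_i. The short sketch below estimates it from a sampled signal's histogram; it illustrates the measure itself, not the article's special processors.

    # Shannon entropy of a quantized signal, estimated from its histogram
    # (an illustration of the underlying measure, not the article's design).
    import numpy as np

    def shannon_entropy(samples, bins=256):
        """H(X) = -sum(p_i * log2(p_i)) over the empirical distribution."""
        counts, _ = np.histogram(samples, bins=bins)
        p = counts[counts > 0] / counts.sum()
        return float(-(p * np.log2(p)).sum())

    noise = np.random.randint(0, 256, 10_000)   # near-uniform: close to 8 bits
    tone = (127 * np.sin(np.linspace(0, 20, 10_000)) + 128).astype(int)
    print(shannon_entropy(noise), shannon_entropy(tone))  # tone is far lower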
Caching methods have been developed over the past 50 years for paging in CPU and database systems, and over the past 25 years for web caching as main application areas, among others. Pages of uniform size are usual in CPU caches, whereas web caches store data chunks of widely varying size. We study the impact of different object sizes on the performance and the overhead of web caching. This entails different caching goals, from the byte and object hit ratio to a generalized value hit ratio for optimized costs and benefits of caching with regard to traffic engineering (TE), reduced delays and other QoS measures. The selection of the cache contents turns out to be crucial for web cache efficiency, with awareness of the size and other properties in a score for each object. We introduce a new class of rank exchange caching methods and show how their performance compares to other strategies, with extensions needed to include sizes and scores for QoS and TE caching goals. Finally, we derive bounds on the object, byte and value hit ratio for the independent request model (IRM), based on optimum knapsack solutions for the cache content.
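A minimal sketch of the knapsack-bound idea: under the IRM, a static optimum cache content maximizes the summed value of the cached objects subject to the capacity, where the value of an object is its request probability for the object hit ratio and probability times size for the byte hit ratio. A greedy fill by value density solves the fractional knapsack and therefore upper-bounds the whole-object optimum. Names below are illustrative.

    # Fractional-knapsack upper bound on the IRM hit ratio (illustrative).
    # Each object has a request probability p and a size s; the "value" of
    # caching it is p for the object hit ratio and p*s for the byte hit ratio.
    def irm_hit_ratio_bound(objects, capacity, byte_ratio=False):
        """objects: list of (p, s) pairs; returns an upper bound on the ratio."""
        def value(p, s):
            return p * s if byte_ratio else p
        total = sum(value(p, s) for p, s in objects)
        # Greedy by value per unit of cache space solves the fractional
        # knapsack optimally, hence bounds the 0/1 (whole-object) optimum.
        ranked = sorted(objects, key=lambda o: value(*o) / o[1], reverse=True)
        gained, free = 0.0, capacity
        for p, s in ranked:
            take = min(1.0, free / s)          # fraction of the object cached
            gained += take * value(p, s)
            free -= take * s
            if free <= 0:
                break
        return gained / total

    objs = [(0.4, 10), (0.3, 2), (0.2, 5), (0.1, 1)]      # (probability, size)
    print(irm_hit_ratio_bound(objs, 6))                   # object hit ratio bound
    print(irm_hit_ratio_bound(objs, 6, byte_ratio=True))  # byte hit ratio bound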
Security vulnerabilities and software defects are prevalent in software systems, threatening every aspect of cyberspace. The complexity of modern software makes it hard to secure systems, and security vulnerabilities and software defects have become a major target of cyberattacks that can lead to significant consequences. Manual identification of vulnerabilities and defects in software systems is very time-consuming and tedious. Many tools have been designed to help analyze software systems and to discover vulnerabilities and defects, but these tools tend to miss various types of bugs. The bugs they fail to catch usually include vulnerabilities and defects that are too complicated to find or that do not fall inside an existing rule set for identification. It was hypothesized that these undiscovered vulnerabilities and defects do not occur randomly; rather, they share certain common characteristics. A methodology was proposed to estimate the probability of a bug existing in a code structure, and a comprehensive experimental evaluation was used to assess the methodology and report our findings.
With the unprecedented prevalence of mobile network applications, cryptographic protocols such as Secure Sockets Layer/Transport Layer Security (SSL/TLS) are widely used in them for communication security. Proven methods for encrypted video stream classification or encrypted protocol detection are unsuitable for SSL/TLS traffic. Consequently, networking and security services based on application-level traffic classification face severe challenges to their effectiveness. Existing encrypted traffic classification methods exhibit unsatisfactory accuracy for applications with similar state characteristics. In this paper, we propose a multiple-attribute-based encrypted traffic classification system named Multi-Attribute Associated Fingerprints (MAAF). We develop MAAF based on two key insights: the DNS traces generated during application runtime contain classification guidance information, and the handshake certificates in the encrypted flows can provide classification clues. Beyond these insights, MAAF employs the context of the encrypted traffic to overcome the attribute-lacking problem during classification. Our experimental results demonstrate that MAAF achieves 98.69% accuracy on a real-world traceset consisting of 16 applications, supports early prediction, and is robust to the scale of the training traceset. Moreover, MAAF is superior to the state-of-the-art methods in terms of both accuracy and robustness.
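The multi-attribute idea can be sketched as attribute sets voting for candidate applications; the structure below is hypothetical (not MAAF's actual data or matching rule) and only illustrates how DNS names and certificate subjects jointly identify a flow, with attribute-lacking flows left unresolved.

    # Multi-attribute fingerprint matching sketch (hypothetical structure,
    # inspired by the MAAF idea: DNS names and handshake-certificate
    # subjects both vote for candidate applications).
    from collections import Counter

    FINGERPRINTS = {                 # illustrative entries, not MAAF's data
        "app_video": {"dns": {"cdn.video.example"}, "cert": {"*.video.example"}},
        "app_chat":  {"dns": {"api.chat.example"},  "cert": {"chat.example"}},
    }

    def classify_flow(dns_names, cert_subjects):
        votes = Counter()
        for app, fp in FINGERPRINTS.items():
            votes[app] += len(fp["dns"] & set(dns_names))
            votes[app] += len(fp["cert"] & set(cert_subjects))
        app, score = votes.most_common(1)[0]
        return app if score > 0 else None   # attribute-lacking: stay unknown

    print(classify_flow(["cdn.video.example"], ["*.video.example"]))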
Energy efficiency and security are critical requirements for computing at edge nodes. Unrolled architectures for lightweight cryptographic algorithms have been shown to be energy-efficient, providing higher performance while meeting resource constraints. Hardware implementations of unrolled datapaths have also been shown to be resistant to side-channel analysis (SCA) attacks due to a reduction in the signal-to-noise ratio (SNR) and an increased complexity of the leakage model. This paper demonstrates optimal leakage models and an improved correlation frequency analysis (CFA) attack that makes it feasible to extract first-order side-channel leakages from combinational logic in the initial rounds of unrolled datapaths. Several leakage models targeting initial rounds are explored, and a 1-bit Hamming weight (HW) based leakage model is shown to be an optimal choice. Additionally, multi-band narrow bandpass filtering in conjunction with CFA is demonstrated to improve the SNR by up to 4×, attributed to the removal of misalignment effects in combinational logic and to signal isolation. The improved CFA attack is performed on side-channel signatures acquired from 7-round unrolled SIMON datapaths implemented on a Sakura-G (Xilinx Spartan-6, 45 nm) FPGA platform, and a 24× reduction in the minimum traces to disclosure (MTD) for revealing 80% of the key bits is demonstrated with respect to conventional time-domain correlation power analysis (CPA). Finally, the proposed method is successfully applied to a fully unrolled datapath for PRINCE and a parallel round-based datapath for the Advanced Encryption Standard (AES) algorithm to demonstrate its general applicability.
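As a hedged sketch of the core correlation step shared by CPA and its frequency-domain CFA variant (the paper's SIMON leakage model and multi-band filtering are not reproduced), the code below correlates a 1-bit HW hypothesis against traces, either in the time domain or against FFT magnitude spectra; the toy leakage target is illustrative only.

    # Correlation analysis sketch: Pearson correlation between a 1-bit
    # Hamming-weight hypothesis and traces, in time (CPA) or frequency
    # (CFA). Illustrative only; the paper's SIMON leakage model and
    # multi-band narrow bandpass filtering are not reproduced here.
    import numpy as np

    def pearson_cols(h, traces):
        """Correlate hypothesis h (n,) with each column of traces (n, m)."""
        h = h - h.mean()
        t = traces - traces.mean(axis=0)
        denom = np.linalg.norm(h) * np.linalg.norm(t, axis=0) + 1e-12
        return (h @ t) / denom

    def rank_key_guesses(plaintexts, traces, target_bit=0, freq_domain=True):
        """Score each 8-bit key guess by its peak (absolute) correlation."""
        x = np.abs(np.fft.rfft(traces, axis=1)) if freq_domain else traces
        scores = []
        for k in range(256):
            # hypothetical 1-bit leakage of a toy first-round target
            h = ((plaintexts ^ k) >> target_bit) & 1
            scores.append(np.max(np.abs(pearson_cols(h.astype(float), x))))
        return np.argsort(scores)[::-1]      # best-supported guess first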
The computer network is used by billions of people worldwide for a variety of purposes, which has made network security increasingly important. It is essential to use Intrusion Detection Systems (IDS), devices whose main function is to detect anomalies in networks. Most intrusion detection approaches struggle with inaccurate results and a lengthy detection process, which motivates the use of boosting techniques. A major pitfall in network-based intrusion detection is the sheer volume of data gathered from the network. In this paper, we put forward a hybrid anomaly-based intrusion detection system that uses classification together with boosting, and we compare the performance of three different classifiers combined with boosting, a process that maximizes classification accuracy. The results of the proposed scheme are analyzed over different datasets, namely the Intrusion Detection Kaggle dataset and NSL-KDD. From this extensive analysis, Random Tree provides the best average accuracy of around 99.98%, a detection rate of 98.79% and a minimum false alarm rate.
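A minimal sketch of such a classification-plus-boosting pipeline, using scikit-learn's AdaBoost over a decision tree as a stand-in for the paper's classifiers; X and y are assumed to be a preprocessed feature matrix and binary labels (1 = attack) from NSL-KDD or the Kaggle dataset.

    # Boosted decision-tree IDS classifier sketch (scikit-learn stand-in);
    # X, y come from NSL-KDD or the Kaggle set after feature preprocessing.
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, recall_score

    def train_ids(X, y):
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
        clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                                 n_estimators=100)
        clf.fit(Xtr, ytr)
        pred = clf.predict(Xte)
        # detection rate = recall on the attack class (label 1, by assumption)
        return accuracy_score(yte, pred), recall_score(yte, pred, pos_label=1)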
The natural redundancy in video data, due to the spatio-temporal correlation of neighbouring pixels, requires a highly complex encryption process to cipher the data successfully. Conventional encryption methods are based on lengthy keys and a high number of rounds, which are inefficient for low-powered, small battery-operated devices. Motivated by the success of lightweight encryption methods specially designed for the IoT environment, an efficient method for video encryption is proposed herein. The proposed technique is based on a recently proposed encryption algorithm named Secure IoT (SIT), which utilizes the P and Q functions of the KHAZAD cipher to achieve strong encryption at low computational cost. Extensive simulations are performed to evaluate the efficacy of the proposed method, and the results are compared with the Secure Force (SF-64) cipher. Under all conditions the proposed method achieved significantly improved results.