Biblio

Found 1589 results

Filters: Keyword is cryptography
2017-02-14
J. Vukalović, D. Delija.  2015.  "Advanced Persistent Threats - detection and defense". 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). :1324-1330.

The term “Advanced Persistent Threat” refers to a well-organized, malicious group of people who launch stealthy attacks against the computer systems of specific targets, such as governments, companies or the military. The attacks themselves are long-lasting, difficult to expose and often use very advanced hacking techniques. Since they are advanced in nature, prolonged and persistent, the organizations behind them have to possess a high level of knowledge, advanced tools and competent personnel to execute them. The attacks are usually performed in several phases - reconnaissance, preparation, execution, gaining access, information gathering and connection maintenance. In each of the phases attacks can be detected with different probabilities. There are several ways to increase the level of security of an organization in order to counter these incidents. First and foremost, it is necessary to educate users and system administrators on different attack vectors and provide them with the knowledge and protection needed so that the attacks are unsuccessful. Second, strict security policies should be implemented, including access control and restrictions (to information or the network), protecting information by encrypting it, and installing the latest security upgrades. Finally, it is possible to use software IDS tools to detect such anomalies (e.g. Snort, OSSEC, Sguil).

2017-02-23
H. Ulusoy, M. Kantarcioglu, B. Thuraisingham, L. Khan.  2015.  "Honeypot based unauthorized data access detection in MapReduce systems". 2015 IEEE International Conference on Intelligence and Security Informatics (ISI). :126-131.

The data processing capabilities of MapReduce systems, pioneered with the on-demand scalability of cloud computing, have enabled the Big Data revolution. However, data controllers/owners are worried about the privacy and accountability impact of storing their data in cloud infrastructures, as existing cloud computing solutions provide very limited control over the underlying systems. The intuitive approach - encrypting data before uploading to the cloud - is not applicable to MapReduce computation, as the data analytics tasks are defined ad hoc in the MapReduce environment using general programming languages (e.g., Java), and homomorphic encryption methods that can scale to big data do not exist. In this paper, we address the challenges of determining and detecting unauthorized access to data stored in MapReduce-based cloud environments. To this end, we introduce alarm-raising honeypots distributed over the data that are not accessed by the authorized MapReduce jobs, but only by attackers and/or unauthorized users. Our analysis shows that unauthorized data accesses can be detected with reasonable performance in MapReduce-based cloud environments.
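
The honeytoken idea at the core of this approach can be sketched in a few lines. The following Python fragment is a minimal illustration, not the authors' system; all names (SECRET, make_honeytoken, audit) are hypothetical, and a real deployment would plant the traps inside the MapReduce input splits rather than in memory.

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)          # known only to the data owner

def make_honeytoken(record_id: str) -> str:
    # A trap record whose payload is an unforgeable MAC of its id.
    tag = hmac.new(SECRET, record_id.encode(), hashlib.sha256).hexdigest()
    return f"{record_id},{tag}"

# Scatter trap records among the real data before uploading to the cloud.
HONEYTOKENS = {make_honeytoken(f"trap-{i}") for i in range(100)}

def audit(accessed_records) -> None:
    """Alarm if a job touched records no authorized job should ever read."""
    touched = HONEYTOKENS.intersection(accessed_records)
    if touched:
        print(f"ALARM: {len(touched)} honeytoken(s) accessed - possible intrusion")
```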

2017-02-13
M. M. Olama, M. M. Matalgah, M. Bobrek.  2015.  "An integrated signaling-encryption mechanism to reduce error propagation in wireless communications: performance analyses". 2015 IEEE International Workshop Technical Committee on Communications Quality and Reliability (CQR). :1-6.

Traditional encryption techniques require packet overhead, introduce processing delay, and suffer from severe quality-of-service deterioration due to fades and interference in wireless channels. These issues considerably reduce the effective transmission data rate (throughput) in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and bit-error probability in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (a designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average bit-error probability and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading channels and compared with those of the conventional Advanced Encryption Standard (AES).
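
As an illustration of the error correction applied to the encrypted portion, the following Python sketch implements the classic Hamming(7,4) code, which corrects any single flipped bit per 7-bit codeword; the paper's convolutional coding and the signaling transformation itself are not shown.

```python
def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4      # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4      # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4      # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3     # 1-indexed error position, 0 = clean
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

cw = hamming74_encode([1, 0, 1, 1])
cw[5] ^= 1                              # inject a single-bit channel error
assert hamming74_decode(cw) == [1, 0, 1, 1]
```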

2017-02-14
C. O'Flynn, Z. David Chen.  2015.  "Side channel power analysis of an AES-256 bootloader". 2015 IEEE 28th Canadian Conference on Electrical and Computer Engineering (CCECE). :750-755.

Side Channel Attacks (SCA) using power measurements are a known method of breaking cryptographic algorithms such as AES. Published research into attacks on AES frequently targets only AES-128, and often targets only the core Electronic Code-Book (ECB) algorithm, without discussing surrounding issues such as triggering or breaking the initialization vector. This paper demonstrates a complete attack on a secure bootloader, where the firmware files have been encrypted with AES-256-CBC. A classic Correlation Power Analysis (CPA) attack is performed on AES-256 to recover the complete 32-byte key, and a CPA attack is also used to attempt recovery of the initialization vector (IV).
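
The correlation step of a CPA attack can be sketched compactly. The following Python fragment (using numpy) recovers a single key byte from hypothetical power traces under a Hamming-weight leakage model; for brevity it correlates against HW(p XOR k) directly rather than the AES S-box output a real attack would model, and it omits the paper's triggering and IV-recovery aspects.

```python
import numpy as np

HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming-weight table

def cpa_recover_byte(plaintexts, traces):
    """plaintexts: (N,) uint8 array; traces: (N, samples) power measurements.
    Returns the key-byte guess whose predicted leakage correlates best."""
    best_guess, best_peak = 0, 0.0
    for guess in range(256):
        model = HW[plaintexts ^ guess].astype(float)   # hypothetical leakage
        m = model - model.mean()
        t = traces - traces.mean(axis=0)
        # Pearson correlation of the model against every trace sample
        corr = (m @ t) / (np.linalg.norm(m) * np.linalg.norm(t, axis=0))
        peak = np.max(np.abs(corr))
        if peak > best_peak:
            best_guess, best_peak = guess, peak
    return best_guess
```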

2017-02-23
M. Vahidalizadehdizaj, L. Tao.  2015.  "A new mobile payment protocol (GMPCP) by using a new key agreement protocol (GC)". 2015 IEEE International Conference on Intelligence and Security Informatics (ISI). :169-172.

With the advancement of mobile devices and wireless network technology, these portable devices have become viable platforms for different types of payments. Recently, most people would rather carry out their activities on their cellphones. On the other hand, some issues hamper the widespread acceptance of mobile payment. Traditional mobile payment methods are not secure enough, since they follow the traditional flow of data. This paper suggests a new protocol, named Golden Mobile Pay Center Protocol (GMPCP), that is based on a client-centric model. The suggested protocol reduces the computational operations and communications required between the engaging parties and achieves complete privacy protection for them. It avoids transaction repudiation among the engaging parties and decreases the risk of replay attacks. The goal of the protocol is to help n users make payments to each other. Besides, it utilizes a new key agreement protocol, named Golden Circle (GC), that works by employing symmetric key operations. GMPCP uses GC for generating a shared session key between n users.

2017-02-13
K. R. Kashwan, K. A. Dattathreya.  2015.  "Improved serial 2D-DWT processor for advanced encryption standard". 2015 IEEE International Conference on Computer Graphics, Vision and Information Security (CGVIS). :209-213.

This paper reports research on how to transmit secured image data using the Discrete Wavelet Transform (DWT) in combination with the Advanced Encryption Standard (AES) at low power and high speed, yielding a better-quality secured image with reduced latency and improved throughput. A combined model of the DWT and AES techniques helps achieve a higher compression ratio while simultaneously providing high security when transmitting an image over the channel. The lifting-scheme algorithm is realized using a single, serialized DWT processor that computes up to 3 levels of decomposition for improved speed and security. An ASIC circuit is designed using an RTL-to-GDSII flow to simulate the proposed technique in 65 nm CMOS technology. The ASIC circuit occupies an average area of about 0.76 mm² and its power consumption is estimated in the range of 10.7-19.7 mW at a frequency of 333 MHz, which is faster than comparable designs reported in similar research.
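
The lifting scheme mentioned here factors the wavelet transform into predict and update steps. A minimal Python sketch of one Haar-wavelet lifting level follows; it illustrates the algorithmic idea only, not the serialized 3-level hardware processor of the paper.

```python
def haar_lifting_forward(x):
    """One level of the Haar wavelet via the lifting scheme.
    x: even-length sequence of samples -> (approximation, detail)."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Exactly undo the predict/update steps and re-interleave."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    return [v for pair in zip(even, odd) for v in pair]

a, d = haar_lifting_forward([9, 7, 3, 5])
assert haar_lifting_inverse(a, d) == [9, 7, 3, 5]   # perfect reconstruction
```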

2017-02-23
I. Mukherjee, R. Ganguly.  2015.  "Privacy preserving of two sixteen-segmented image using visual cryptography". 2015 IEEE International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN). :417-422.

With the advancement of technology, the world has not only become a better place to live in but has also lost the privacy and security of shared data. Information in any form is never safe from unauthorized access. In this paper we propose an approach to preserving data using visual cryptography. Text displayed on two sixteen-segment displays is broken into two shares that do not reveal any information about the original images. By this process we have obtained satisfactory results in statistical and structural tests.
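
For readers unfamiliar with visual cryptography, the standard 2-out-of-2 construction can be sketched as follows (a generic illustration, not the paper's sixteen-segment pipeline): each secret pixel is expanded into two subpixels per share, chosen so that each share individually looks random but overlaying the shares reconstructs the image.

```python
import random

def make_shares(bitmap):
    """bitmap: 2-D list of 0/1 pixels (1 = black). Returns two shares with
    1x2 pixel expansion; overlaying (OR-ing) them reveals the image."""
    patterns = [(0, 1), (1, 0)]
    share1, share2 = [], []
    for row in bitmap:
        r1, r2 = [], []
        for px in row:
            p = random.choice(patterns)
            r1.extend(p)
            # white pixel: identical subpixels (overlay stays half-black)
            # black pixel: complementary subpixels (overlay fully black)
            r2.extend(p if px == 0 else (1 - p[0], 1 - p[1]))
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def overlay(s1, s2):
    """Simulate stacking the printed transparencies."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]
```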

2017-03-08
Chen, S., Wang, T., Ai, J..  2015.  A fair exchange and track system for RFID-tagged logistic chains. 2015 8th International Conference on Biomedical Engineering and Informatics (BMEI). :661–666.

RFID (Radio-Frequency IDentification) is attractive for the strong visibility it provides into logistics operations. In this paper, we explore fair-exchange techniques to encourage honest reporting of item receipt in RFID-tagged supply chains and present a fair ownership transfer system for RFID-tagged supply chains. In our system, a receiver can only access the data and/or functions of the RFID tag by providing the sender with a cryptographic attestation of successful receipt; cheating results in a defunct tag. Conversely, the sender can only obtain the receiver's attestation by providing the secret keys required to access the tag.
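
The attestation-for-key exchange can be caricatured in a few lines of Python. This is a hypothetical simplification, not the paper's protocol: it assumes a MAC key available for verification, whereas a real fair-exchange design would use signatures and, typically, an optimistic trusted third party.

```python
import hashlib
import hmac
import secrets

TAG_ACCESS_KEY = secrets.token_bytes(16)   # unlocks the RFID tag's functions
ATTEST_KEY = secrets.token_bytes(32)       # key under which receipts are MACed

def receiver_attest(tag_id: bytes) -> bytes:
    """Receiver's unforgeable proof that the tagged item arrived."""
    return hmac.new(ATTEST_KEY, b"received:" + tag_id, hashlib.sha256).digest()

def sender_release(tag_id: bytes, attestation: bytes) -> bytes | None:
    """Sender hands over the tag access key only against a valid receipt."""
    expected = hmac.new(ATTEST_KEY, b"received:" + tag_id, hashlib.sha256).digest()
    if hmac.compare_digest(expected, attestation):
        return TAG_ACCESS_KEY    # honest receipt: tag becomes usable
    return None                  # cheating: the tag stays defunct
```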

2017-03-07
Allawi, M. A. A., Hadi, A., Awajan, A..  2015.  MLDED: Multi-layer Data Exfiltration Detection System. 2015 Fourth International Conference on Cyber Security, Cyber Warfare, and Digital Forensic (CyberSec). :107–112.

Due to the growing advancement of crimeware services, computer and network security has become a crucial issue. Detecting sensitive data exfiltration is a principal component of every information protection strategy. In this research, a Multi-Level Data Exfiltration Detection (MLDED) system that can handle different types of insider data leakage threats, with staircase difficulty levels and their implications for the organizational environment, has been proposed, implemented and tested. The proposed system detects exfiltration of data outside an organization's information system, where the main goal is to use the detection results of the MLDED system for digital forensic purposes. The MLDED system consists of three major levels: hashing, keyword extraction and labeling. However, it considers only certain types of documents, such as plain ASCII text and PDF files. In response to the challenging issue of identifying insider threats, a forensic-readiness data exfiltration system is designed that is capable of detecting and identifying sensitive information leaks. The results show that the proposed system has an overall detection accuracy of 98.93%.
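
A toy sketch of the three levels might look as follows in Python; the hash set, keyword list and labels are invented placeholders, and the real system's staircase difficulty levels and PDF handling are not modeled.

```python
import hashlib
import re

# Hypothetical placeholders for the three detection levels.
SENSITIVE_HASHES = {hashlib.sha256(b"example secret report").hexdigest()}
SENSITIVE_KEYWORDS = {"confidential", "proprietary", "secret"}
CLASSIFICATION_LABELS = {"internal", "restricted"}

def inspect_outbound(doc: bytes) -> str:
    # Level 1: hashing - exact fingerprints of known sensitive files.
    if hashlib.sha256(doc).hexdigest() in SENSITIVE_HASHES:
        return "BLOCK: known sensitive document"
    words = set(re.findall(r"[a-z]+", doc.decode(errors="ignore").lower()))
    # Level 2: keyword extraction over the document text.
    if words & SENSITIVE_KEYWORDS:
        return "ALERT: sensitive keywords present"
    # Level 3: labeling - organizational classification markings.
    if words & CLASSIFICATION_LABELS:
        return "REVIEW: classification label found"
    return "ALLOW"
```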

2017-02-14
F. Hassan, J. L. Magalini, V. de Campos Pentea, R. A. Santos.  2015.  "A project-based multi-disciplinary elective on digital data processing techniques". 2015 IEEE Frontiers in Education Conference (FIE). :1-7.

Today's era of the internet-of-things, cloud computing and big data centers calls for more fresh graduates with expertise in digital data processing techniques such as compression, encryption and error correcting codes. This paper describes a project-based elective that covers these three main digital data processing techniques and can be offered to three different undergraduate majors: electrical engineering, computer engineering and computer science. The course has been offered successfully for three years. Registration statistics show equal interest from the three majors. Assessment data show that students have successfully completed the different course outcomes. Students' feedback shows that they appreciate the knowledge they attain from this elective and indicates that the workload for this course, in relation to other courses of equal credit, is as expected.

A. Chouhan, S. Singh.  2015.  "Real time secure end to end communication over GSM network". 2015 International Conference on Energy Systems and Applications. :663-668.

The GSM network is the most widely used mobile communication network in the world. However, the security of voice communication is a main issue in the GSM network. This paper proposes a technique for secure end-to-end communication over the GSM network. The voice signal is encrypted in real time using digital techniques and transmitted over the GSM network. At the receiver end the corresponding decoding algorithm is used to extract the original speech signal. The speech trans-coding process of GSM severely distorts an encrypted signal that does not possess the characteristics of a speech signal, so standard modem techniques cannot be used over the GSM speech channel. The user may choose an appropriate algorithm and hardware platform as per requirements.

2021-04-08
Tyagi, H., Vardy, A..  2015.  Universal Hashing for Information-Theoretic Security. Proceedings of the IEEE. 103:1781–1795.

The information-theoretic approach to security entails harnessing the correlated randomness available in nature to establish security. It uses tools from information theory and coding and yields provable security, even against an adversary with unbounded computational power. However, the feasibility of this approach in practice depends on the development of efficiently implementable schemes. In this paper, we review a special class of practical schemes for information-theoretic security that are based on 2-universal hash families. Specific cases of secret key agreement and wiretap coding are considered, and general themes are identified. The scheme presented for wiretap coding is modular and can be implemented easily by including an extra preprocessing layer over the existing transmission codes.
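
A concrete 2-universal family is the classic Carter-Wegman construction; the sketch below (Python) samples a hash from it. In privacy amplification, the two parties publicly agree on such a randomly drawn h and apply it to their shared raw string, compressing out the adversary's partial knowledge.

```python
import secrets

P = (1 << 61) - 1    # a Mersenne prime; inputs must be smaller than P

def sample_universal_hash(m: int):
    """Draw h(x) = ((a*x + b) mod P) mod m uniformly from the
    Carter-Wegman 2-universal family: for any x != y, the collision
    probability over the choice of (a, b) is at most 1/m."""
    a = secrets.randbelow(P - 1) + 1     # a in [1, P-1]
    b = secrets.randbelow(P)             # b in [0, P-1]
    return lambda x: ((a * x + b) % P) % m

# Both parties apply the same publicly announced h to the shared raw
# secret; the output length m is set by the leftover hash lemma.
h = sample_universal_hash(2 ** 32)
key_material = h(123456789)
```
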
2017-02-14
A. A. Zewail, A. Yener.  2015.  "The two-hop interference untrusted-relay channel with confidential messages". 2015 IEEE Information Theory Workshop - Fall (ITW). :322-326.

This paper considers the two-user interference relay channel where each source wishes to communicate to its destination a message that is confidential from the other destination. Furthermore, the relay, that is the enabler of communication, due to the absence of direct links, is untrusted. Thus, the messages from both sources need to be kept secret from the relay as well. We provide an achievable secure rate region for this network. The achievability scheme utilizes structured codes for message transmission, cooperative jamming and scaled compute-and-forward. In particular, the sources use nested lattice codes and stochastic encoding, while the destinations jam using lattice points. The relay decodes two integer combinations of the received lattice points and forwards, using Gaussian codewords, to both destinations. The achievability technique provides the insight that we can utilize the untrusted relay node as an encryption block in a two-hop interference relay channel with confidential messages.

2017-03-08
Darabseh, A., Namin, A. Siami.  2015.  On Accuracy of Keystroke Authentications Based on Commonly Used English Words. 2015 International Conference of the Biometrics Special Interest Group (BIOSIG). :1–8.

The aim of this research is to advance user active authentication using keystroke dynamics. Through this research, we assess the performance and influence of various keystroke features on keystroke dynamics authentication systems. In particular, we investigate the performance of keystroke features on a subset of the most frequently used English words. Four features are analyzed: i) key duration, ii) flight time latency, iii) digraph time latency, and iv) word total time duration. Experiments are performed to measure the performance of each feature individually as well as of different subsets of these features. Four machine learning techniques are employed for assessing keystroke authentication. The selected classification methods are the two-class support vector machine (TC-SVM), one-class support vector machine (OC-SVM), k-nearest neighbor classifier (K-NN), and Naive Bayes classifier (NB). The logged experimental data are captured for 28 users. The experimental results show that key duration time offers the best performance among all four keystroke features, followed by word total time. Furthermore, our results show that TC-SVM and K-NN perform the best among the four classifiers.
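
The classifier comparison described here is straightforward to reproduce. A minimal sketch using scikit-learn follows; the feature matrix is random stand-in data, not the study's 28-user corpus.

```python
import numpy as np
from sklearn.svm import SVC, OneClassSVM
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# One row per typed word with the four features:
# [key duration, flight time, digraph latency, word total time]; y = user id.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))          # stand-in for logged keystroke data
y = rng.integers(0, 2, 200)

for name, clf in [("TC-SVM", SVC(kernel="rbf")),
                  ("K-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("NB", GaussianNB())]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())

# The one-class SVM models a single user's typing and flags outliers.
oc = OneClassSVM(nu=0.1).fit(X[y == 0])
print("OC-SVM outlier rate on other users:", (oc.predict(X[y == 1]) == -1).mean())
```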

2017-03-07
Guofu, M., Zixian, W., Yusi, C..  2015.  Recovery of Evidence and the Judicial Identification of Electronic Data Based on ExFAT. 2015 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery. :66–71.

The ExFAT file system is intended for large-capacity flash memory media. On the basis of an analysis of the characteristics of the ExFAT file system, this paper presents a model of electronic data recovery forensics and judicial identification based on ExFAT. The proposed model targets different damage scenarios of the recovered medium. It uses a file location algorithm, a file character code algorithm, and a document fragment reassembly algorithm for accurate, efficient recovery of electronic data for forensics and judicial identification. The model implements digital multi-signatures, process monitoring, media mirroring and hash authentication in the data recovery process to improve the acceptability, weight of evidence and legal effect of the electronic data in a lawsuit. The experimental results show that the model works efficiently while remaining accurate.
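
The file-location step can be illustrated with simple signature-based carving. The Python sketch below scans a raw disk image for PDF header/footer magic bytes; it is a generic illustration and does not implement the paper's character-code or fragment-reassembly algorithms, nor its ExFAT-specific metadata analysis.

```python
def carve_pdfs(image_path: str):
    """Yield (start, end) byte ranges of candidate PDF files found by
    scanning a raw image for the %PDF- header and %%EOF trailer."""
    with open(image_path, "rb") as f:
        data = f.read()
    start = 0
    while (head := data.find(b"%PDF-", start)) != -1:
        tail = data.find(b"%%EOF", head)
        if tail == -1:
            break                       # truncated file: no trailer found
        yield head, tail + len(b"%%EOF")
        start = tail + len(b"%%EOF")

# for lo, hi in carve_pdfs("disk.img"):
#     print(f"candidate PDF at bytes {lo}..{hi}")
```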

2017-02-13
B. Boyadjis, C. Bergeron, S. Lecomte.  2015.  "Auto-synchronized selective encryption of video contents for an improved transmission robustness over error-prone channels". 2015 IEEE International Conference on Image Processing (ICIP). :2969-2973.

Selective encryption designates a technique that aims at scrambling a message's content while preserving its syntax. Such an approach allows encryption to be transparent towards middle-boxes and/or end-user devices, and to fit easily within existing pipelines. In this paper, we propose to apply this property to a real-time diffusion scenario - or broadcast - over an RTP session. The main challenge of this problem is the preservation of synchronization between encryption and decryption. Our solution is based on the Advanced Encryption Standard in counter mode, which has been modified to fit our auto-synchronization requirement. Setting up the proposed synchronization scheme does not induce any latency and requires no additional bandwidth in the RTP session (no additional information is sent). Moreover, its parallel structure allows decryption to start on any given frame of the video while leaving a lot of room for further optimization.
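
One simple way to obtain auto-synchronization with AES-CTR is to derive the counter block from data the receiver already has, such as the frame number. The sketch below (Python with pycryptodome) shows that idea only; the paper's scheme additionally preserves codec syntax via selective encryption, which this fragment ignores.

```python
from Crypto.Cipher import AES   # pycryptodome

KEY = bytes(range(16))          # demo key only; use a real key in practice

def encrypt_frame(frame_number: int, payload: bytes) -> bytes:
    # Deriving the CTR nonce from the frame number lets the decryptor
    # resynchronize from any frame it receives - no extra signaling and
    # no added bandwidth, mirroring the auto-synchronization goal above.
    nonce = frame_number.to_bytes(8, "big")
    return AES.new(KEY, AES.MODE_CTR, nonce=nonce).encrypt(payload)

decrypt_frame = encrypt_frame   # CTR mode is its own inverse (XOR keystream)

ct = encrypt_frame(42, b"frame data")
assert decrypt_frame(42, ct) == b"frame data"
```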

R. Mishra, A. Mishra, P. Bhanodiya.  2015.  "An edge based image steganography with compression and encryption". 2015 International Conference on Computer, Communication and Control (IC4). :1-4.

Security of secret data has been a major concern since ancient times. Steganography and cryptography are two techniques used to reduce this security threat. Cryptography is the art of converting a secret message into a form other than human-readable; steganography is the art of hiding the very existence of a secret message. These techniques are required to protect against data theft over rapidly growing networks. Achieving this requires a system that is minimally susceptible to the human visual system. In this paper a new technique is introduced for data transmission over an insecure channel. The secret data is first compressed using the LZW algorithm, to reduce its size, before being embedded behind any cover media. After compression, the data is encrypted to increase security. Encryption is performed with the help of a key, which makes it difficult to recover the secret message even if its existence is revealed. The edges of the cover image are then detected using the Canny edge detector, and the secret data is embedded there with the help of a hash function. The proposed technique is implemented in MATLAB, and its key strengths are its large data-hiding capacity and the minimal distortion of the stego image. The technique has been applied to various images and the results show minimal distortion in the altered images.
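
The embedding pipeline (compress, encrypt, hide in edge pixels) can be sketched as follows. In this Python illustration zlib stands in for LZW and a SHA-256 keystream stands in for the paper's cipher; the edge mask is assumed to come from an edge detector such as Canny.

```python
import hashlib
import itertools
import zlib

def prepare_payload(secret: bytes, key: bytes) -> bytes:
    """Compress, then encrypt with a hash-derived keystream (stand-ins
    for the paper's LZW compression and key-based encryption)."""
    compressed = zlib.compress(secret)
    stream = itertools.chain.from_iterable(
        hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        for i in itertools.count())
    return bytes(c ^ s for c, s in zip(compressed, stream))

def embed_in_edges(pixels, edge_mask, payload: bytes):
    """Write payload bits into the LSBs of edge pixels only; payloads
    exceeding the edge capacity are silently truncated in this sketch."""
    bits = ((byte >> k) & 1 for byte in payload for k in range(8))
    out = list(pixels)
    for i, is_edge in enumerate(edge_mask):
        if is_edge:
            try:
                out[i] = (out[i] & ~1) | next(bits)
            except StopIteration:
                break
    return out
```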

M. Ayoob, W. Adi.  2015.  "Fault Detection and Correction in Processing AES Encryption Algorithm". 2015 Sixth International Conference on Emerging Security Technologies (EST). :7-12.

Robust and stringent fault detection and correction techniques for executing the Advanced Encryption Standard (AES) remain interesting issues for many critical applications. The purpose of fault detection and correction techniques is not only to ensure the reliability of a cryptosystem, but also to protect the system against side channel attacks. Such errors can result from a fault injection attack, production faults, noise or radiation effects in deep space. Devising a proper error control mechanism for the AES cipher during execution improves both system reliability and security. In this work a novel fault detection and correction algorithm is proposed. The proposed mechanism makes use of the linear mappings of the AES round structure to detect errors in the ShiftRows (SR) and MixColumns (MC) transformations. Error correction is achieved by creating temporary redundant check words through the combined SR and MC mapping, producing, in case of errors, an error syndrome that leads to correction with relatively minor additional complexity. The technique detects and corrects errors in the combined mapping of SR and MC rather than in each transformation separately, exploiting in particular the efficient ECC-like properties of the MC mapping, which simplify the design of a fault-tolerance technique. The performance of the proposed algorithm is evaluated by a simulated system model in FPGA technology. The simulation results demonstrate the ability to reach relatively high fault coverage, with error correction of up to four bytes of execution errors in the merged SR-MC transformation. The overall gate-complexity overhead of the resulting system is estimated for the proposed technique in FPGA technology.
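
One linearity property such schemes can exploit: the byte-wise XOR of a MixColumns input column equals that of its output column (the circulant coefficients 02, 03, 01, 01 XOR to 01). The Python sketch below verifies this invariant on the standard FIPS-197 test column; it illustrates the kind of check used, not the authors' combined SR-MC syndrome construction.

```python
def xtime(b):            # multiply by x (i.e., 0x02) in GF(2^8), AES polynomial
    b <<= 1
    return (b ^ 0x11B) if b & 0x100 else b

def gmul(a, b):          # GF(2^8) multiply, for the constants 1, 2, 3
    r = a if b & 1 else 0
    if b & 2:
        r ^= xtime(a)
    return r

def mix_column(col):
    a0, a1, a2, a3 = col
    return [gmul(a0, 2) ^ gmul(a1, 3) ^ a2 ^ a3,
            a0 ^ gmul(a1, 2) ^ gmul(a2, 3) ^ a3,
            a0 ^ a1 ^ gmul(a2, 2) ^ gmul(a3, 3),
            gmul(a0, 3) ^ a1 ^ a2 ^ gmul(a3, 2)]

col = [0xDB, 0x13, 0x53, 0x45]           # FIPS-197 MixColumns test vector
out = mix_column(col)                    # -> [0x8E, 0x4D, 0xA1, 0xBC]
# A one-byte XOR checksum taken before the transform therefore detects
# any single faulty output byte afterwards.
assert col[0] ^ col[1] ^ col[2] ^ col[3] == out[0] ^ out[1] ^ out[2] ^ out[3]
```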

2021-04-08
Liu, S., Hong, Y., Viterbo, E..  2014.  On measures of information theoretic security. 2014 IEEE Information Theory Workshop (ITW 2014). :309–310.

While information-theoretic security is stronger than computational security, it has long been considered impractical. In this work, we provide new insights into the design of practical information-theoretic cryptosystems. Firstly, from a theoretical point of view, we give a brief introduction to the existing information-theoretic security criteria, such as the notions of Shannon's perfect/ideal secrecy in cryptography and the concept of strong secrecy in coding theory. Secondly, from a practical point of view, we propose the concept of ideal secrecy outage and define an outage probability. Finally, we show how this probability can be made arbitrarily small in a practical cryptosystem.
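
The Shannon criteria referenced here can be stated as follows (a standard formulation, not the paper's outage definition):

```latex
% Shannon's perfect secrecy: the ciphertext leaks nothing about the message,
\[
  \Pr[M = m \mid C = c] \;=\; \Pr[M = m] \quad \text{for all } m, c,
  \qquad \text{equivalently} \qquad I(M; C) = 0 .
\]
% Strong secrecy in coding theory relaxes this to an asymptotic requirement,
\[
  \lim_{n \to \infty} I\!\left(M; Z^{n}\right) = 0 ,
\]
% where Z^n denotes the eavesdropper's n-symbol observation.
```
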
2015-04-30
Miller, Andrew, Hicks, Michael, Katz, Jonathan, Shi, Elaine.  2014.  Authenticated Data Structures, Generically. SIGPLAN Not.. 49:411–423.

An authenticated data structure (ADS) is a data structure whose operations can be carried out by an untrusted prover, the results of which a verifier can efficiently check as authentic. This is done by having the prover produce a compact proof that the verifier can check along with each operation's result. ADSs thus support outsourcing data maintenance and processing tasks to untrusted servers without loss of integrity. Past work on ADSs has focused on particular data structures (or limited classes of data structures), one at a time, often with support only for particular operations.

This paper presents a generic method, using a simple extension to an ML-like functional programming language we call λ• (lambda-auth), with which one can program authenticated operations over any data structure defined by standard type constructors, including recursive types, sums, and products. The programmer writes the data structure largely as usual and it is compiled to code to be run by the prover and verifier. Using a formalization of λ• we prove that all well-typed λ• programs result in code that is secure under the standard cryptographic assumption of collision-resistant hash functions. We have implemented λ• as an extension to the OCaml compiler, and have used it to produce authenticated versions of many interesting data structures including binary search trees, red-black+ trees, skip lists, and more. Performance experiments show that our approach is efficient, giving up little compared to the hand-optimized data structures developed previously.
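
The hash-based authentication that λ• compiles to generalizes the familiar Merkle tree. The Python sketch below shows the prover/verifier split for one fixed structure (a balanced binary tree over a power-of-two number of leaves); λ•'s contribution is deriving this automatically for arbitrary types.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def build(leaves):
    """Prover-side: hash every level bottom-up; tree[-1][0] is the small
    digest the verifier keeps. Assumes a power-of-two number of leaves."""
    level = [H(b"leaf", x) for x in leaves]
    tree = [level]
    while len(level) > 1:
        level = [H(b"node", level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def prove(tree, index):
    """Prover: the sibling hashes along the path from leaf to root."""
    proof = []
    for level in tree[:-1]:
        proof.append(level[index ^ 1])
        index //= 2
    return proof

def verify(digest, leaf, index, proof):
    """Verifier: replay the path using only the digest and the proof."""
    h = H(b"leaf", leaf)
    for sib in proof:
        h = H(b"node", h, sib) if index % 2 == 0 else H(b"node", sib, h)
        index //= 2
    return h == digest

tree = build([b"a", b"b", b"c", b"d"])
assert verify(tree[-1][0], b"c", 2, prove(tree, 2))
```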

2014-09-17
Mitra, Sayan.  2014.  Proving Abstractions of Dynamical Systems Through Numerical Simulations. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :12:1–12:9.

A key question that arises in the rigorous analysis of cyberphysical systems under attack is whether or not the attacked system deviates significantly from the ideal allowed behavior. This is the problem of deciding whether or not the ideal system is an abstraction of the attacked system. A quantitative variation of this question can capture how much the attacked system deviates from the ideal. Thus, algorithms for deciding abstraction relations can help measure the effect of attacks on cyberphysical systems and help develop attack detection strategies. In this paper, we present a decision procedure for proving that one nonlinear dynamical system is a quantitative abstraction of another. Directly computing the reach sets of these nonlinear systems is undecidable in general, and reach-set over-approximations do not give a direct way of proving abstraction. Our procedure uses (possibly inaccurate) numerical simulations and a model annotation to compute tight approximations of the observable behaviors of the system and then uses these approximations to decide on abstraction. We show that the procedure is sound and that it is guaranteed to terminate under reasonable robustness assumptions.
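
A simulation-driven over-approximation in the spirit of this procedure can be sketched as follows; the Lipschitz-based bloating is a crude stand-in for the paper's model annotations, and all names are hypothetical.

```python
import numpy as np

def reachtube(f, center, radius, lipschitz, dt=0.01, steps=100):
    """Euler-simulate from the center of an initial ball and bloat the
    trajectory by the ball radius grown with a Lipschitz bound; the
    resulting tube covers every trajectory starting in the ball."""
    x, r, tube = np.asarray(center, dtype=float), radius, []
    for _ in range(steps):
        tube.append((x.copy(), r))
        x = x + dt * f(x)                    # one numerical simulation step
        r *= float(np.exp(lipschitz * dt))   # discrepancy growth bound
    return tube

def is_abstraction(tube_concrete, tube_abstract):
    """Soundly conclude abstraction if every ball of the concrete tube
    lies inside the corresponding ball of the abstract tube."""
    return all(np.linalg.norm(xc - xa) + rc <= ra
               for (xc, rc), (xa, ra) in zip(tube_concrete, tube_abstract))

# Example: a stable linear system vs. a wider copy of itself.
tube1 = reachtube(lambda x: -x, [1.0, 0.0], 0.01, lipschitz=1.0)
tube2 = reachtube(lambda x: -x, [1.0, 0.0], 0.05, lipschitz=1.0)
print(is_abstraction(tube1, tube2))   # True: the wider tube contains it
```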

Escobar, Santiago, Meadows, Catherine, Meseguer, José, Santiago, Sonia.  2014.  A Rewriting-based Forwards Semantics for Maude-NPA. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :3:1–3:12.

The Maude-NRL Protocol Analyzer (Maude-NPA) is a tool for reasoning about the security of cryptographic protocols in which the cryptosystems satisfy different equational properties. It tries to find secrecy or authentication attacks by searching backwards from an insecure attack state pattern that may contain logical variables, in such a way that logical variables become properly instantiated in order to find an initial state. The execution mechanism for this logical reachability is narrowing modulo an equational theory. Although Maude-NPA also possesses a forwards semantics naturally derivable from the backwards semantics, it is not suitable for state space exploration or protocol simulation. In this paper we define an executable forwards semantics for Maude-NPA, instead of its usual backwards one, and restrict it to the case of concrete states, that is, to terms without logical variables. This case corresponds to standard rewriting modulo an equational theory. We prove soundness and completeness of the backwards narrowing-based semantics with respect to the rewriting-based forwards semantics. We show its effectiveness as an analysis method that complements the backwards analysis with new prototyping, simulation, and explicit-state model checking features by providing some experimental results.

Durbeck, Lisa J. K., Athanas, Peter M., Macias, Nicholas J..  2014.  Secure-by-construction Composable Componentry for Network Processing. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :27:1–27:2.

Techniques commonly used for analyzing streaming video, audio, SIGINT, and network transmissions at less-than-streaming rates, such as data decimation and ad-hoc sampling, can miss underlying structure, trends and specific events held in the data [3]. This work presents a secure-by-construction approach [7] for upper-end data streams with rates from 10 to 100 Gigabits per second. The secure-by-construction approach strives to produce system security through the composition of individually secure hardware and software components. The proposed network processor can be used not only at data centers but also within networks and onboard embedded systems at the network periphery for a wide range of tasks, including preprocessing and data cleansing, signal encoding and compression, complex event processing, flow analysis, and other tasks related to collecting and analyzing streaming data. Our design employs a four-layer scalable hardware/software stack that can lead to inherently secure, easily constructed, specialized high-speed stream processing.

This work addresses the following contemporary problems:
(1) There is a lack of hardware/software systems providing stream processing and data stream analysis at the target data rates; for high-rate streams the implementation options are limited: all-software solutions cannot attain the target rates [1]; GPUs and GPGPUs are also infeasible, since they were not designed for I/O at 10-100 Gbps and have asymmetric resources for input and output, so they cannot be pipelined [4, 2]; custom chip-based solutions are costly and inflexible to change; and FPGA-based solutions are historically hard to program [6].
(2) There is a distinct advantage to utilizing high-bandwidth or line-speed analytics to reduce time-to-discovery of information, particularly analytics that can be pipelined together to conduct a series of processing tasks or data tests without impeding data rates.
(3) There are potentially significant network infrastructure cost savings from compact and power-efficient analytic support deployed at the network periphery, on the data source or one hop away.
(4) There is a need for agile deployment in response to changing objectives.
(5) There is an opportunity to constrain designs to use only secure components to achieve their specific objectives.

We address these five problems in our stream processor design to provide secure, easily specified processing for low-latency, low-power, 10-100 Gbps in-line processing on top of a commodity high-end FPGA-based hardware-accelerated network processor. With a standard interface a user can snap together various filter blocks, like Legos™, to form a custom processing chain (see the sketch after this abstract). The overall design is a four-layer solution in which the structurally lowest layer provides the vast computational power to process line-speed streaming packets, and the uppermost layer provides the agility to easily shape the system to the properties of a given application. Current work has focused on the design of the two lowest layers, highlighted in the design detail in Figure 1. These two layers are the embeddable portion of the design: operating at up to 100 Gbps, they capture both the low- and high-frequency components of a signal or stream, analyze them directly, and pass the lower-frequency components and residues to the all-software upper layers, Layers 3 and 4; they also optionally supply the data-reduced output up to Layers 3 and 4 for additional processing.

Layer 1 is analogous to a systolic array of processors on which simple low-level functions or actions are chained in series [5]. Examples of tasks accomplished at the lowest layer are: (a) checking whether Field 3 of a packet is greater than 5, (b) counting the number of X.75 packets, or (c) selecting individual fields from data packets. Layer 1 provides the lowest-latency, highest-throughput processing, analysis and data reduction, formulating raw facts from the stream. Layer 2, also accelerated in hardware and running at full network line rate, combines selected facts from Layer 1, forming a first level of information kernels; it is comprised of a number of combiners intended to integrate facts extracted from Layer 1 for presentation to Layer 3. Still resident in FPGA hardware and hardware-accelerated, a Layer 2 combiner is comprised of state logic and soft-core microprocessors. Layer 3 runs in software on a host machine and is essentially the bridge to the embeddable hardware; it exposes an API for the consumption of information kernels to create events and manage state. The generated events and state are also made available to an additional software layer, Layer 4, which supplies an interface to traditional software-based systems. As shown in the design detail, network data transitions systolically through Layer 1, through a series of lightweight processing filters that extract and/or modify packet contents. All filters have a similar interface: streams enter from the left, exit to the right, and relevant facts are passed upward to Layer 2. The output at the end of the Layer 1 chain shown in Figure 1 can be (a) left unconnected (for purely monitoring activities), (b) redirected into the network (for bent-pipe operations), or (c) passed to another identical processor for extended processing of a given stream (scalability).
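
The "snap together filter blocks" interface can be mimicked in software. The Python sketch below chains generator-based filter stages that pass packets through while raising facts to the layer above; it is an illustrative analogue, not the FPGA implementation.

```python
def field_gt(field, threshold):
    """A Layer-1-style filter block: packets stream through unchanged,
    and matching ones raise a 'fact' to the layer above."""
    def stage(packets, facts):
        for pkt in packets:
            if pkt.get(field, 0) > threshold:
                facts.append((field, pkt[field]))
            yield pkt
    return stage

def count_proto(proto):
    """Another block: count packets of a given protocol."""
    def stage(packets, facts):
        n = 0
        for pkt in packets:
            n += (pkt.get("proto") == proto)
            yield pkt
        facts.append(("count", proto, n))
    return stage

def chain(packets, facts, *stages):
    """Snap blocks together: streams enter left, exit right."""
    for stage in stages:
        packets = stage(packets, facts)
    return packets

facts = []
stream = [{"field3": 7, "proto": "x75"}, {"field3": 2, "proto": "ip"}]
list(chain(iter(stream), facts, field_gt("field3", 5), count_proto("x75")))
print(facts)   # [('field3', 7), ('count', 'x75', 1)]
```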

Yu, Xianqing, Ning, Peng, Vouk, Mladen A..  2014.  Securing Hadoop in Cloud. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :26:1–26:2.

Hadoop is a map-reduce implementation that rapidly processes data in parallel. The cloud provides reliability, flexibility, scalability, elasticity and cost savings to customers. Moving Hadoop into the cloud can therefore be beneficial to Hadoop users. However, Hadoop has two vulnerabilities that can dramatically impact its security in a cloud: its overloaded authentication key, and the lack of fine-grained access control at the data access level. We propose and develop a security enhancement for cloud-based Hadoop.

2015-05-01
Y. Seifi, S. Suriadi, E. Foo, C. Boyd.  2014.  Security properties analysis in a TPM-based protocol. International Journal of Security and Networks. 9(2):85–103.

Security protocols are designed to provide security properties (goals). They achieve their goals using cryptographic primitives such as key agreement or hash functions. Security analysis tools are used to verify whether a security protocol achieves its goals or not. The properties analysed by special-purpose tools are predefined ones such as secrecy (confidentiality), authentication or non-repudiation. There are also security goals defined by the user in systems with security requirements; analysis of these properties is possible with general-purpose analysis tools such as coloured Petri nets (CPNs). This research analyses two security properties defined in a protocol based on the trusted platform module (TPM). The analysed protocol was proposed by Delaune to use TPM capabilities and secrets in order to open only one secret from two submitted secrets to a recipient.