Biblio

Found 951 results

Filters: First Letter Of Last Name is E
2018-03-05
Ehrlich, M., Wisniewski, L., Trsek, H., Mahrenholz, D., Jasperneite, J..  2017.  Automatic Mapping of Cyber Security Requirements to Support Network Slicing in Software-Defined Networks. 2017 22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). :1–4.
The process of digitalisation has an advanced impact on social lives, state affairs, and the industrial automation domain. Ubiquitous networks and the increased requirements in terms of Quality of Service (QoS) create the demand for future-proof network management. Therefore, new technological approaches, such as Software-Defined Networks (SDN) or the 5G Network Slicing concept, are considered. However, the important topic of cyber security has mainly been ignored in the past. Recently, this topic has gained a lot of attention due to frequently reported security related incidents, such as industrial espionage, or production system manipulations. Hence, this work proposes a concept for adding cyber security requirements to future network management paradigms. For this purpose, various security related standards and guidelines are available. However, these approaches are mainly static, require a high amount of manual efforts by experts, and need to be performed in a steady manner. Therefore, the proposed solution contains a dynamic, machine-readable, automatic, continuous, and future-proof approach to model and describe cyber security QoS requirements for the next generation network management.
2018-08-23
Edwards, Stephen A., Townsend, Richard, Kim, Martha A..  2017.  Compositional Dataflow Circuits. Proceedings of the 15th ACM-IEEE International Conference on Formal Methods and Models for System Design. :175–184.
We present a technique for implementing dataflow networks as compositional hardware circuits. We first define an abstract dataflow model with unbounded buffers that supports data-dependent blocks (mux, demux, and nondeterministic merge); we then show how to faithfully implement such networks with bounded buffers and handshaking. Handshaking admits compositionality: our circuits can be connected with or without buffers and still compute the same function without introducing spurious combinational cycles. As such, inserting or removing buffers affects the performance but not the functionality of our networks, which we demonstrate through experiments that show how design space can be explored.
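A minimal Python sketch of the compositional property described above: blocks fire only when their input FIFOs hold tokens, and inserting an extra buffer stage changes latency but not the computed stream. The tiny network below (a relay buffer feeding an adder) is a made-up software model, not the authors' hardware generator.

    # Toy software model of a dataflow network: blocks fire when their input
    # FIFOs have tokens; adding a buffer must not change the computed result.
    from collections import deque

    class Fifo:
        def __init__(self): self.q = deque()
        def put(self, v):   self.q.append(v)
        def get(self):      return self.q.popleft()
        def ready(self):    return len(self.q) > 0

    def run_network(xs, ys, with_extra_buffer):
        a, b, out = Fifo(), Fifo(), Fifo()
        mid = Fifo()                              # optional buffer stage on one input
        for v in xs: a.put(v)
        for v in ys: b.put(v)
        results = []
        while True:
            fired = False
            if with_extra_buffer and a.ready():   # buffer block: pure relay
                mid.put(a.get()); fired = True
            src = mid if with_extra_buffer else a
            if src.ready() and b.ready():         # adder block: needs both inputs
                out.put(src.get() + b.get()); fired = True
            if out.ready():
                results.append(out.get()); fired = True
            if not fired:
                return results

    xs, ys = [1, 2, 3], [10, 20, 30]
    assert run_network(xs, ys, False) == run_network(xs, ys, True) == [11, 22, 33]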
2018-05-09
Ur, Blase, Alfieri, Felicia, Aung, Maung, Bauer, Lujo, Christin, Nicolas, Colnago, Jessica, Cranor, Lorrie Faith, Dixon, Henry, Emami Naeini, Pardis, Habib, Hana et al..  2017.  Design and Evaluation of a Data-Driven Password Meter. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. :3775–3786.
Despite their ubiquity, many password meters provide inaccurate strength estimates. Furthermore, they do not explain to users what is wrong with their password or how to improve it. We describe the development and evaluation of a data-driven password meter that provides accurate strength measurement and actionable, detailed feedback to users. This meter combines neural networks and numerous carefully combined heuristics to score passwords and generate data-driven text feedback about the user's password. We describe the meter's iterative development and final design. We detail the security and usability impact of the meter's design dimensions, examined through a 4,509-participant online study. Under the more common password-composition policy we tested, we found that the data-driven meter with detailed feedback led users to create more secure, and no less memorable, passwords than a meter with only a bar as a strength indicator.
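As a rough illustration of how such a meter can combine a model-based strength estimate with heuristic, actionable feedback, here is a small Python sketch; the rules, weights, and the stubbed neural-network guess number are hypothetical, not those of the paper.

    # Illustrative combination of a (stubbed) neural-network guess estimate with
    # simple penalty heuristics that also produce text feedback. Rules and
    # weights are made up for illustration.
    import re

    COMMON = {"password", "123456", "qwerty", "letmein"}

    def heuristic_feedback(pw):
        issues = []
        if pw.lower() in COMMON:            issues.append("avoid common passwords")
        if len(pw) < 12:                    issues.append("use at least 12 characters")
        if re.fullmatch(r"[a-zA-Z]+", pw):  issues.append("add digits or symbols")
        if re.search(r"(.)\1\1", pw):       issues.append("avoid repeated characters")
        return issues

    def score(pw, nn_log10_guesses):
        # nn_log10_guesses would come from a neural-network guess estimator;
        # here it is simply passed in. Each triggered heuristic lowers the score.
        base = min(nn_log10_guesses / 14.0, 1.0)      # ~10^14 guesses -> full bar
        penalty = 0.15 * len(heuristic_feedback(pw))
        return max(base - penalty, 0.0)

    print(score("Tr0ub4dor&3xyzzy", nn_log10_guesses=12.5))
    print(heuristic_feedback("password1"))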
2018-06-11
Chole, Sharad, Fingerhut, Andy, Ma, Sha, Sivaraman, Anirudh, Vargaftik, Shay, Berger, Alon, Mendelson, Gal, Alizadeh, Mohammad, Chuang, Shang-Tse, Keslassy, Isaac et al..  2017.  dRMT: Disaggregated Programmable Switching. Proceedings of the Conference of the ACM Special Interest Group on Data Communication. :1–14.
We present dRMT (disaggregated Reconfigurable Match-Action Table), a new architecture for programmable switches. dRMT overcomes two important restrictions of RMT, the predominant pipeline-based architecture for programmable switches: (1) table memory is local to an RMT pipeline stage, implying that memory not used by one stage cannot be reclaimed by another, and (2) RMT is hardwired to always sequentially execute matches followed by actions as packets traverse pipeline stages. We show that these restrictions make it difficult to execute programs efficiently on RMT. dRMT resolves both issues by disaggregating the memory and compute resources of a programmable switch. Specifically, dRMT moves table memories out of pipeline stages and into a centralized pool that is accessible through a crossbar. In addition, dRMT replaces RMT's pipeline stages with a cluster of processors that can execute match and action operations in any order. We show how to schedule a P4 program on dRMT at compile time to guarantee deterministic throughput and latency. We also present a hardware design for dRMT and analyze its feasibility and chip area. Our results show that dRMT can run programs at line rate with fewer processors compared to RMT, and avoids performance cliffs when there are not enough processors to run a program at line rate. dRMT's hardware design incurs a modest increase in chip area relative to RMT, mainly due to the crossbar.
2018-05-02
Kirsch, Julian, Bierbaumer, Bruno, Kittel, Thomas, Eckert, Claudia.  2017.  Dynamic Loader Oriented Programming on Linux. Proceedings of the 1st Reversing and Offensive-oriented Trends Symposium. :5:1–5:13.
Memory corruptions are still the most prominent venue to attack otherwise secure programs. In order to make exploitation of software bugs more difficult, defenders introduced a vast number of post corruption security mitigations, such as w⊕x memory, Stack Canaries, and Address Space Layout Randomization (ASLR), to only name a few. In the following, we describe the Wiedergänger-Attack, a new attack vector that reliably allows to escalate unbounded array access vulnerabilities occurring in specifically allocated memory regions to full code execution on programs running on i386/x86_64 Linux. Wiedergänger-attacks abuse determinism in Linux ASLR implementation combined with the fact that (even with protection mechanisms such as relro and glibc's pointer mangling enabled) there exist easy-to-hijack, writable (function) pointers in application memory. To discover such pointers, we use taint analysis and backwards slicing at the binary level and calculate an over-approximation of vulnerable instruction sequences. To show the relevance of Wiedergänger, we exploit one of the discovered instruction sequences to perform an attack on Debian 10 (Buster) by overwriting structures used by the dynamic loader (dl) that are present in any application with glibc and the dynamic loader as dependency. In order to show generality, we solely focus on data structures dispatched at program shutdown, as this is a point that arguably all applications eventually have to reach. This results in a reliable compromise that effectively bypasses all protection mechanisms deployed on x86_64/i386 Linux to date. We believe Wiedergänger to be part of an under-researched type of control flow hijacking attacks targeting internal control structures of the dynamic loader for which we propose to use the terminology Loader Oriented Programming (LOP).
2018-05-01
Eberz, Simon, Rasmussen, Kasper B., Lenders, Vincent, Martinovic, Ivan.  2017.  Evaluating Behavioral Biometrics for Continuous Authentication: Challenges and Metrics. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. :386–399.
In recent years, behavioral biometrics have become a popular approach to support continuous authentication systems. Most generally, a continuous authentication system can make two types of errors: false rejects and false accepts. Based on this, the most commonly reported metrics to evaluate systems are the False Reject Rate (FRR) and False Accept Rate (FAR). However, most papers only report the mean of these measures with little attention paid to their distribution. This is problematic as systematic errors allow attackers to perpetually escape detection while random errors are less severe. Using 16 biometric datasets we show that these systematic errors are very common in the wild. We show that some biometrics (such as eye movements) are particularly prone to systematic errors, while others (such as touchscreen inputs) show more even error distributions. Our results also show that the inclusion of some distinctive features lowers average error rates but significantly increases the prevalence of systematic errors. As such, blind optimization of the mean EER (through feature engineering or selection) can sometimes lead to lower security. Following this result we propose the Gini Coefficient (GC) as an additional metric to accurately capture different error distributions. We demonstrate the usefulness of this measure both to compare different systems and to guide researchers during feature selection. In addition to the selection of features and classifiers, some non-functional machine learning methodologies also affect error rates. The most notable examples of this are the selection of training data and the attacker model used to develop the negative class. 13 out of the 25 papers we analyzed either include imposter data in the negative class or randomly sample training data from the entire dataset, with a further 6 not giving any information on the methodology used. Using real-world data we show that both of these decisions lead to significant underestimation of error rates by 63% and 81%, respectively. This is an alarming result, as it suggests that researchers are either unaware of the magnitude of these effects or might even be purposefully attempting to over-optimize their EER without actually improving the system.
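As an illustration of the proposed metric, here is a short Python sketch computing the standard Gini coefficient over per-user error rates (the data is made up): a value near 0 means errors are spread evenly across users, while a value near 1 means a few users absorb almost all errors, i.e. systematic errors.

    # Gini coefficient over per-user error rates; example data is hypothetical.
    def gini(rates):
        xs = sorted(rates)
        n, total = len(xs), sum(xs)
        if total == 0:
            return 0.0
        # Standard formula: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n, with i = 1..n
        weighted = sum(i * x for i, x in enumerate(xs, start=1))
        return 2.0 * weighted / (n * total) - (n + 1.0) / n

    even       = [0.05] * 10                # every user has the same FRR
    systematic = [0.0] * 9 + [0.5]          # one user absorbs all the errors
    print(gini(even), gini(systematic))     # ~0.0 vs ~0.9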
2018-09-28
Emura, Keita, Hayashi, Takuya, Ishida, Ai.  2017.  Group Signatures with Time-bound Keys Revisited: A New Model and an Efficient Construction. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. :777–788.
Chu et al. (ASIACCS 2012) proposed group signature with time-bound keys (GS-TBK) where each signing key is associated to an expiry time τ. In addition to prove the membership of the group, a signer needs to prove that the expiry time has not passed, i.e., t < τ, where t is the current time. A signer whose expiry time has passed is automatically revoked, and this revocation is called natural revocation. Simultaneously, signers can be revoked before their expiry times have passed due to the compromise of the credential. This revocation is called premature revocation. A nice property of the Chu et al. proposal is that the size of revocation lists can be reduced compared to those of Verifier-Local Revocation (VLR) group signature schemes, by assuming that natural revocation accounts for most of signer revocations in practice, and prematurely revoked signers are only a small fraction. In this paper, we point out that the definition of traceability of Chu et al. did not capture unforgeability of expiry time of signing keys which guarantees that no adversary who has a signing key associated to an expiry time τ can compute a valid signature after τ has passed. We introduce a security model that captures unforgeability, and propose a GS-TBK scheme secure in the new model. Our scheme also provides the constant signing costs whereas those of the previous schemes depend on the bit-length of the time representation. Finally, we give implementation results, and show that our scheme is feasible in practical settings.
2018-06-11
Anderson, Jeff, El-Ghazawi, Tarek.  2017.  Hardware Support for Secure Stream Processing in Cloud Environments. Proceedings of the Computing Frontiers Conference. :283–286.
Many-core microprocessor architectures are quickly becoming prevalent in data centers, due to their demonstrated processing power and network flexibility. However, this flexibility comes at a cost; co-mingled data from disparate users must be kept secure, which forces processor cycles to be wasted on cryptographic operations. This paper introduces a novel, secure, stream processing architecture which supports efficient homomorphic authentication of data and enforces secrecy of individuals' data. Additionally, this architecture is shown to secure time-series analysis of data from multiple users from both corruption and disclosure. Hardware synthesis shows that security-related circuitry incurs less than 10% overhead, and latency analysis shows an increase of 2 clocks per hop. However, despite the increase in latency, the proposed architecture shows an improvement over stream processing systems that use traditional security methods.
2018-11-19
Ekstrom, Joseph J., Lunt, Barry M., Parrish, Allen, Raj, Rajendra K., Sobiesk, Edward.  2017.  Information Technology As a Cyber Science. Proceedings of the 18th Annual Conference on Information Technology Education. :33–37.
Emerging technologies are proliferating and the computing profession continues to evolve to embrace the many opportunities and solve the many challenges this brings. Among the challenges is identifying and describing the competencies, responsibilities, and curriculum content needed for cybersecurity. As part of addressing these issues, there are efforts taking place that both improve integration of cybersecurity into the established computing disciplines while other efforts are developing and articulating cybersecurity as a new meta-discipline. The various individual computing disciplines, such as Computer Science, Information Technology, and Information Systems, have increased and improved the amount of cybersecurity in their model curricula. In parallel, organizations such as the Cyber Education Project, an ACM/IEEE Joint Task Force, and the accrediting body ABET are producing such artifacts as a multi-disciplinary Body of Knowledge and accreditation program criteria for cybersecurity writ large. This paper explores these various cybersecurity initiatives from the perspective of the Information Technology discipline, and it addresses the degree to which cybersecurity and Information Technology are both similar and different.
2018-06-20
Benjbara, Chaimae, Habbani, Ahmed, Mahdi, Fatna El, Essaid, Bilal.  2017.  Multi-path Routing Protocol in the Smart Digital Environment. Proceedings of the 2017 International Conference on Smart Digital Environment. :14–18.
During the last decade, the smart digital environment has become one of the main scientific challenges occupying scientists and researchers. This new environment basically consists of smart connected products including three main parts: the physical mechanical/electrical product, the smart part of the product made from embedded software and the human-machine interface, and finally the connectivity part including antennas and routing protocols ensuring the wired/wireless communication with other products. For our part, we are involved in the implementation of the latter by developing a routing protocol that will meet the increasingly demanding requirements of today's systems (security, bandwidth, network lifetime, ...). Based on the research carried out in other fields of application such as MANETs, multi-path routing fulfills our expectations. In this article, the MPOLSR protocol was chosen as an example, comparing its standard version and its improvements in order to choose the best solution that can be applied in the smart digital environment.
2018-02-15
van Do, Thanh, Engelstad, Paal, Feng, Boning, Do, Van Thuan.  2017.  A Near Real Time SMS Grey Traffic Detection. Proceedings of the 6th International Conference on Software and Computer Applications. :244–249.
Lately, mobile operators have experienced threats from SMS grey routes, which fraudsters use to evade SMS fees, denying operators millions in revenue. More serious still are the threats to users' security and privacy and, consequently, to the operator's reputation. It is therefore crucial for operators to have adequate solutions to protect both their network and their customers against this kind of fraud. Unfortunately, so far there is no sufficiently efficient countermeasure against grey routes. This paper proposes a near real time SMS grey traffic detection which makes use of Counting Bloom Filters combined with a blacklist and a whitelist to detect SMS grey traffic on the fly and block it. The proposed detection has been implemented and proved to be quite efficient. The paper also provides a comprehensive explanation of SMS grey routes and the challenges in their detection. The implementation and verification are also described thoroughly.
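A minimal Python sketch of the counting-Bloom-filter idea combined with white/black lists, as described above; the hash construction, windowing, and threshold are assumptions rather than the paper's parameters.

    # Minimal counting Bloom filter tracking how often a sender appears,
    # combined with white/black lists. Parameters and threshold are illustrative.
    import hashlib

    class CountingBloomFilter:
        def __init__(self, m=8192, k=4):
            self.m, self.k = m, k
            self.counts = [0] * m

        def _indexes(self, item):
            digest = hashlib.sha256(item.encode()).digest()
            return [int.from_bytes(digest[4*i:4*i+4], "big") % self.m
                    for i in range(self.k)]

        def add(self, item):
            for i in self._indexes(item):
                self.counts[i] += 1

        def estimate(self, item):
            # count estimate = minimum counter over the k positions
            return min(self.counts[i] for i in self._indexes(item))

    def classify_sender(sender, cbf, whitelist, blacklist, threshold=100):
        if sender in whitelist: return "allow"
        if sender in blacklist: return "block"
        cbf.add(sender)
        return "block" if cbf.estimate(sender) > threshold else "allow"

    cbf = CountingBloomFilter()
    for _ in range(150):                      # a bulk sender flooding messages
        decision = classify_sender("+4790000001", cbf, set(), set())
    print(decision)                           # "block" once the threshold is exceeded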
Vu, Xuan-Son, Jiang, Lili, Brändström, Anders, Elmroth, Erik.  2017.  Personality-based Knowledge Extraction for Privacy-preserving Data Analysis. Proceedings of the Knowledge Capture Conference. :44:1–44:4.
In this paper, we present a differential privacy preserving approach that extracts personality-based knowledge to support privacy-guaranteed analysis of sensitive personal data. Based on this approach, we further implement an end-to-end privacy guarantee system, KaPPA, to provide researchers with iterative data analysis on sensitive data. The key challenge for differential privacy is determining a reasonable privacy budget that balances privacy preservation and data utility. Most previous work applies a unified privacy budget to all individual data, which leads to insufficient privacy protection for some individuals while over-protecting others. In KaPPA, the proposed personality-based privacy preserving approach automatically calculates a privacy budget for each individual. Our experimental evaluations show an effective trade-off between sufficient privacy protection and data utility.
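The abstract does not give KaPPA's budget-calculation rule, so the Python sketch below only illustrates the underlying mechanism: applying the Laplace mechanism with a per-individual epsilon instead of a single global budget (the level-to-epsilon mapping is hypothetical).

    # Per-individual differential privacy budget: each record is perturbed with
    # Laplace noise scaled to its own epsilon rather than one global value.
    import numpy as np

    def budget_for(sensitivity_level):
        # Hypothetical rule: more sensitive individuals get a smaller epsilon
        # (stronger protection, hence more noise).
        return {"low": 1.0, "medium": 0.5, "high": 0.1}[sensitivity_level]

    def private_value(value, sensitivity_level, query_sensitivity=1.0):
        eps = budget_for(sensitivity_level)
        return value + np.random.laplace(loc=0.0, scale=query_sensitivity / eps)

    records = [(42, "low"), (37, "high"), (55, "medium")]
    print([round(private_value(v, lvl), 2) for v, lvl in records])
    # the "high" record receives the most noise on average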
2018-06-11
Shu, Rui, Gu, Xiaohui, Enck, William.  2017.  A Study of Security Vulnerabilities on Docker Hub. Proceedings of the Seventh ACM on Conference on Data and Application Security and Privacy. :269–280.
Docker containers have recently become a popular approach to provision multiple applications over shared physical hosts in a more lightweight fashion than traditional virtual machines. This popularity has led to the creation of the Docker Hub registry, which distributes a large number of official and community images. In this paper, we study the state of security vulnerabilities in Docker Hub images. We create a scalable Docker image vulnerability analysis (DIVA) framework that automatically discovers, downloads, and analyzes both official and community images on Docker Hub. Using our framework, we have studied 356,218 images and made the following findings: (1) both official and community images contain more than 180 vulnerabilities on average when considering all versions; (2) many images have not been updated for hundreds of days; and (3) vulnerabilities commonly propagate from parent images to child images. These findings demonstrate a strong need for more automated and systematic methods of applying security updates to Docker images, and our current Docker image analysis framework provides a good foundation for such automatic security updates.
2018-06-07
El Mir, Iman, Kim, Dong Seong, Haqiq, Abdelkrim.  2017.  Towards a Stochastic Model for Integrated Detection and Filtering of DoS Attacks in Cloud Environments. Proceedings of the 2Nd International Conference on Big Data, Cloud and Applications. :28:1–28:6.
Cloud Data Center (CDC) security remains a major challenge for business organizations and an important research concern. The attacker's purpose is to cause service unavailability and maximize financial losses. As a result, Distributed Denial of Service (DDoS) attacks have emerged as the most popular attack. The main aim of such attacks is to saturate and overload the system network through massive flooding of data packets toward a victim server and to block the service to users. This paper provides a defense system to mitigate Denial of Service (DoS) attacks in the CDC environment. It outlines the different DoS attack techniques and their countermeasures by combining filtering and detection mechanisms. We present an analytical model based on queueing theory to evaluate the impact of flooding attacks on the cloud environment with regard to service availability and QoS performance. We plot the response time, throughput, drop rate and computing resource utilization while varying the attack arrival rate, and use the JMT (Java Modeling Tool) simulator to validate the analytical model. Our approach proves effective for attack mitigation in the cloud environment.
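The exact queueing model is not given in the abstract; a plain M/M/1 approximation, sketched below in Python, already shows how response time and queue length degrade as the attack arrival rate approaches the service capacity.

    # M/M/1 approximation of a server under DoS load: legitimate plus attack
    # arrivals share one service rate. Generic illustration, not the paper's model.
    def mm1_metrics(lam_legit, lam_attack, mu):
        lam = lam_legit + lam_attack              # total arrival rate
        rho = lam / mu                            # server utilization
        if rho >= 1.0:                            # overload: queue grows without bound
            return {"utilization": rho, "response_time": float("inf"),
                    "queue_length": float("inf")}
        return {"utilization": rho,
                "response_time": 1.0 / (mu - lam),   # mean time in system E[T]
                "queue_length": rho / (1.0 - rho)}   # mean number in system E[N]

    mu = 100.0                                    # server handles 100 requests/s
    for attack_rate in (0.0, 30.0, 60.0, 90.0):
        print(attack_rate, mm1_metrics(lam_legit=8.0, lam_attack=attack_rate, mu=mu))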
2018-05-16
Hukerikar, Saurabh, Ashraf, Rizwan A., Engelmann, Christian.  2017.  Towards New Metrics for High-Performance Computing Resilience. Proceedings of the 2017 Workshop on Fault-Tolerance for HPC at Extreme Scale. :23–30.
Ensuring the reliability of applications is becoming an increasingly important challenge as high-performance computing (HPC) systems experience an ever-growing number of faults, errors and failures. While the HPC community has made substantial progress in developing various resilience solutions, it continues to rely on platform-based metrics to quantify application resiliency improvements. The resilience of an HPC application is concerned with the reliability of the application outcome as well as the fault handling efficiency. To understand the scope of impact, effective coverage and performance efficiency of existing and emerging resilience solutions, there is a need for new metrics. In this paper, we develop new ways to quantify resilience that consider both the reliability and the performance characteristics of the solutions from the perspective of HPC applications. As HPC systems continue to evolve in terms of scale and complexity, it is expected that applications will experience various types of faults, errors and failures, which will require applications to apply multiple resilience solutions across the system stack. The proposed metrics are intended to be useful for understanding the combined impact of these solutions on an application's ability to produce correct results and to evaluate their overall impact on an application's performance in the presence of various modes of faults.
2018-05-02
Allodi, Luca, Etalle, Sandro.  2017.  Towards Realistic Threat Modeling: Attack Commodification, Irrelevant Vulnerabilities, and Unrealistic Assumptions. Proceedings of the 2017 Workshop on Automated Decision Making for Active Cyber Defense. :23–26.
Current threat models typically consider all possible ways an attacker can penetrate a system and assign probabilities to each path according to some metric (e.g. time-to-compromise). In this paper we discuss how this view hinders the realism of both technical (e.g. attack graphs) and strategic (e.g. game theory) approaches to current threat modeling, and propose to steer away from it by looking more carefully at attack characteristics and the attacker environment. We use a toy threat model for ICS attacks to show how a realistic view of attack instances can emerge from a simple analysis of attack phases and attacker limitations.
2018-06-07
Li, Guanpeng, Hari, Siva Kumar Sastry, Sullivan, Michael, Tsai, Timothy, Pattabiraman, Karthik, Emer, Joel, Keckler, Stephen W..  2017.  Understanding Error Propagation in Deep Learning Neural Network (DNN) Accelerators and Applications. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. :8:1–8:12.
Deep learning neural networks (DNNs) have been successful in solving a wide range of machine learning problems. Specialized hardware accelerators have been proposed to accelerate the execution of DNN algorithms for high-performance and energy efficiency. Recently, they have been deployed in datacenters (potentially for business-critical or industrial applications) and safety-critical systems such as self-driving cars. Soft errors caused by high-energy particles have been increasing in hardware systems, and these can lead to catastrophic failures in DNN systems. Traditional methods for building resilient systems, e.g., Triple Modular Redundancy (TMR), are agnostic of the DNN algorithm and the DNN accelerator's architecture. Hence, these traditional resilience approaches incur high overheads, which makes them challenging to deploy. In this paper, we experimentally evaluate the resilience characteristics of DNN systems (i.e., DNN software running on specialized accelerators). We find that the error resilience of a DNN system depends on the data types, values, data reuses, and types of layers in the design. Based on our observations, we propose two efficient protection techniques for DNN systems.
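A tiny numpy sketch of the kind of fault-injection experiment described above: flip one bit in a float32 weight of a toy fully connected layer and measure how far the output moves. The layer, data, and bit position are synthetic, not the paper's accelerator setup.

    # Single-bit fault injection into a float32 weight of a toy layer.
    import numpy as np

    def flip_bit(value, bit):
        raw = np.array([value], dtype=np.float32).view(np.uint32)
        raw[0] ^= np.uint32(1 << bit)             # flip the requested bit
        return raw.view(np.float32)[0]

    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.1, size=(8, 4)).astype(np.float32)
    x = rng.normal(0, 1.0, size=4).astype(np.float32)

    clean = np.maximum(W @ x, 0)                  # ReLU(Wx)

    W_faulty = W.copy()
    W_faulty[3, 2] = flip_bit(W_faulty[3, 2], bit=30)   # high exponent bit
    faulty = np.maximum(W_faulty @ x, 0)

    print("max output deviation:", np.abs(faulty - clean).max())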
2017-12-20
Raiola, P., Erb, D., Reddy, S. M., Becker, B..  2017.  Accurate Diagnosis of Interconnect Open Defects Based on the Robust Enhanced Aggressor Victim Model. 2017 30th International Conference on VLSI Design and 2017 16th International Conference on Embedded Systems (VLSID). :135–140.

Interconnect opens are known to be one of the predominant defects in nanoscale technologies. Automatic test pattern generation for open faults is challenging, because of their rather unstable behavior and the numerous electrical parameters which need to be considered. Thus, most approaches try to avoid accurate modeling of all constraints like the influence of the aggressors on the open net and use simplified fault models in order to detect as many faults as possible or make assumptions which decrease both complexity and accuracy. Yet, this leads to the problem that not only generated tests may be invalidated but also the localization of a specific fault may fail - in case such a model is used as basis for diagnosis. Furthermore, most of the models do not consider the problem of oscillating behavior, caused by feedback introduced by coupling capacitances, which occurs in almost all designs. In [1], the Robust Enhanced Aggressor Victim Model (REAV) and in [2] an extension to address the problem of oscillating behavior were introduced. The resulting model does not only consider the influence of all aggressors accurately but also guarantees robustness against oscillating behavior as well as process variations affecting the thresholds of gates driven by an open interconnect. In this work we present the first diagnostic classification algorithm for this model. This algorithm considers all constraints enforced by the REAV model accurately - and hence handles unknown values as well as oscillating behavior. In addition, it allows to distinguish faults at the same interconnect and thus reducing the area that has to be considered for physical failure analysis. Experimental results show the high efficiency of the new method handling circuits with up to 500,000 non-equivalent faults and considerably increasing the diagnostic resolution.

2018-01-23
Malathi, V., Balamurugan, B., Eshwar, S..  2017.  Achieving Privacy and Security Using QR Code by Means of Encryption Technique in ATM. 2017 Second International Conference on Recent Trends and Challenges in Computational Models (ICRTCCM). :281–285.

Smart cards have complications with the validation and transmission process. Using a peeping attack (secretly filming a user entering the Personal Identification Number at the ATM), the secret code can be stolen. We intend to develop an authentication system for banks that protects users' assets. A user's data must be kept secure and isolated from data leakage and other attacks. Therefore, we propose a system where the ATM machine displays a QR code in which the information corresponding to that ATM machine is encrypted, and a mobile application on the customer's phone decrypts the encoded QR information and sends it to the server; the user's details are then displayed at the ATM machine and the transaction can be carried out. The user can thus securely enter information to transfer money at the Automated Teller Machine without the risk of a peeping attack, by simply scanning the QR code at the ATM with the mobile application. Both the encryption and decryption are carried out using the Triple DES (Data Encryption Standard) algorithm.
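Since the abstract names Triple DES for the QR payload, here is a minimal Python sketch using the pycryptodome library; the key handling, payload format, and transport to the server are assumptions for illustration, not the paper's implementation.

    # Triple DES (3DES) encryption/decryption of a QR-code payload (pycryptodome).
    from Crypto.Cipher import DES3
    from Crypto.Random import get_random_bytes
    from Crypto.Util.Padding import pad, unpad

    key = DES3.adjust_key_parity(get_random_bytes(24))   # 3-key Triple DES key

    def encrypt_qr_payload(plaintext: bytes, key: bytes) -> bytes:
        cipher = DES3.new(key, DES3.MODE_CBC)             # random IV generated
        return cipher.iv + cipher.encrypt(pad(plaintext, DES3.block_size))

    def decrypt_qr_payload(blob: bytes, key: bytes) -> bytes:
        iv, ciphertext = blob[:8], blob[8:]
        cipher = DES3.new(key, DES3.MODE_CBC, iv=iv)
        return unpad(cipher.decrypt(ciphertext), DES3.block_size)

    payload = b"atm_id=AX1234;branch=042;nonce=77f3"      # hypothetical QR content
    blob = encrypt_qr_payload(payload, key)
    assert decrypt_qr_payload(blob, key) == payload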

2018-10-26
Douzi, Samira, Amar, Meryem, El Ouahidi, Bouabid.  2017.  Advanced Phishing Filter Using Autoencoder and Denoising Autoencoder. Proceedings of the International Conference on Big Data and Internet of Thing. :125–129.

Phishing refers to an attempt to obtain sensitive information, such as usernames, passwords, and credit card details (and, indirectly, money), for malicious reasons, by disguising as a trustworthy entity in an electronic communication [1]. Hackers and malicious users often use emails as phishing tools to obtain the personal data of legitimate users, sending emails with authentic identities and legitimate content but also with malicious URLs that help them steal consumers' data. The high-dimensional data in the phishing context contains a large number of redundant features that significantly elevate the classification error. Additionally, the time required to perform classification increases with the number of features. Extracting complex features from phishing emails therefore requires us to determine which features are relevant and fundamental to phishing detection. The dominant approaches to phishing detection are based on machine learning techniques; these rely on manual feature engineering, which is time consuming. Deep learning, on the other hand, is a promising alternative to traditional methods: its main idea is to learn complex features from data with minimum external contribution [2]. In this paper, we propose a new phishing detection and prevention approach, based first on our previous spam filter [3] to classify the textual content of the email, and second on an Autoencoder and a Denoising Autoencoder (DAE) to extract a relevant and robust feature set from the URL (to which the website is actually directed); the feature space can therefore be reduced considerably, decreasing the phishing detection time.
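A compact PyTorch sketch of a denoising autoencoder over URL feature vectors, in the spirit of the approach described above; the input dimension, noise level, and layer sizes are illustrative assumptions rather than the paper's configuration.

    # Denoising autoencoder over URL feature vectors; the bottleneck output is
    # the reduced, robust feature set for a downstream phishing classifier.
    import torch
    import torch.nn as nn

    class DenoisingAutoencoder(nn.Module):
        def __init__(self, n_features=64, n_hidden=16):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
            self.decoder = nn.Sequential(nn.Linear(n_hidden, n_features), nn.Sigmoid())

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def train_dae(x, noise_std=0.1, epochs=50):
        model = DenoisingAutoencoder(n_features=x.shape[1])
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            noisy = x + noise_std * torch.randn_like(x)   # corrupt input, reconstruct clean
            opt.zero_grad()
            loss = loss_fn(model(noisy), x)
            loss.backward()
            opt.step()
        return model

    x = torch.rand(256, 64)                    # placeholder URL feature vectors
    dae = train_dae(x)
    robust_features = dae.encoder(x).detach()  # reduced representation for detection
    print(robust_features.shape)               # torch.Size([256, 16])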

2018-05-09
Korman, Matus, Välja, Margus, Björkman, Gunnar, Ekstedt, Mathias, Vernotte, Alexandre, Lagerström, Robert.  2017.  Analyzing the Effectiveness of Attack Countermeasures in a SCADA System. Proceedings of the 2Nd Workshop on Cyber-Physical Security and Resilience in Smart Grids. :73–78.

The SCADA infrastructure is a key component for power grid operations. Securing the SCADA infrastructure against cyber intrusions is thus vital for a well-functioning power grid. However, the task remains a particular challenge, not the least since not all available security mechanisms are easily deployable in these reliability-critical and complex, multi-vendor environments that host modern systems alongside legacy ones, to support a range of sensitive power grid operations. This paper examines how effective a few countermeasures are likely to be in SCADA environments, including those that are commonly considered out of bounds. The results show that granular network segmentation is a particularly effective countermeasure, followed by frequent patching of systems (which is unfortunately still difficult to date). The results also show that the enforcement of a password policy and restrictive network configuration including whitelisting of devices contributes to increased security, though best in combination with granular network segmentation.

2018-02-06
Eslami, M., Zheng, G., Eramian, H., Levchuk, G..  2017.  Anomaly Detection on Bipartite Graphs for Cyber Situational Awareness and Threat Detection. 2017 IEEE International Conference on Big Data (Big Data). :4741–4743.

Data from cyber logs can often be represented as a bipartite graph (e.g. internal IP-external IP, user-application, or client-server). State-of-the-art graph-based anomaly detection often generalizes across all types of graphs — namely bipartite and non-bipartite. This confounds the interpretation and use of specific graph features such as degree, PageRank, and eigencentrality that can provide a security analyst with rapid situational awareness of their network. Furthermore, graph algorithms applied to data collected from large, distributed enterprise-scale networks require accompanying methods that allow them to scale to the data collected. In this paper, we provide a novel, scalable, directional graph projection framework that operates on cyber logs that can be represented as bipartite graphs. This framework computes directional graph projections and identifies a set of interpretable graph features that describe anomalies within each partite set.
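A small pure-Python sketch of a directional projection of a bipartite internal-IP / external-IP edge list, together with a simple per-node feature (fan-out) an analyst could inspect; the data is made up, and the framework's actual feature set is not specified in the abstract.

    # Directional projection of a bipartite edge list (internal IP -> external IP):
    # two internal hosts become linked if they contact the same external host.
    from collections import defaultdict
    from itertools import combinations

    edges = [("10.0.0.5", "198.51.100.7"), ("10.0.0.5", "203.0.113.9"),
             ("10.0.0.8", "198.51.100.7"), ("10.0.0.9", "203.0.113.9"),
             ("10.0.0.5", "192.0.2.44")]

    by_external = defaultdict(set)          # external host -> internal hosts
    fan_out = defaultdict(set)              # internal host -> external hosts
    for internal, external in edges:
        by_external[external].add(internal)
        fan_out[internal].add(external)

    # Projection onto the internal partite set: weight = number of shared externals.
    projection = defaultdict(int)
    for internals in by_external.values():
        for a, b in combinations(sorted(internals), 2):
            projection[(a, b)] += 1

    print(dict(projection))                               # internal-internal links
    print({ip: len(ext) for ip, ext in fan_out.items()})  # fan-out per internal host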

2018-07-18
Kreimel, Philipp, Eigner, Oliver, Tavolato, Paul.  2017.  Anomaly-Based Detection and Classification of Attacks in Cyber-Physical Systems. Proceedings of the 12th International Conference on Availability, Reliability and Security. :40:1–40:6.

Cyber-physical systems are found in industrial and production systems, as well as critical infrastructures. Due to the increasing integration of IP-based technology and standard computing devices, the threat of cyber-attacks on cyber-physical systems has vastly increased. Furthermore, traditional intrusion defense strategies for IT systems are often not applicable in operational environments. In this paper we present an anomaly-based approach for detection and classification of attacks in cyber-physical systems. To test our approach, we set up a test environment with sensors, actuators and controllers widely used in industry, thus, providing system data as close as possible to reality. First, anomaly detection is used to define a model of normal system behavior by calculating outlier scores from normal system operations. This valid behavior model is then compared with new data in order to detect anomalies. Further, we trained an attack model, based on supervised attacks against the test setup, using the naive Bayes classifier. If an anomaly is detected, the classification process tries to classify the anomaly by applying the attack model and calculating prediction confidences for trained classes. To evaluate the statistical performance of our approach, we tested the model by applying an unlabeled dataset, which contains valid and anomalous data. The results show that this approach was able to detect and classify such attacks with satisfactory accuracy.
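A minimal scikit-learn sketch of the two-stage idea described above: flag sensor readings that deviate from a model of normal operation, then classify the flagged readings with a trained naive Bayes attack model. The sensor data, threshold, and attack classes are synthetic, not the authors' test setup.

    # Stage 1: z-score outlier check against normal operation.
    # Stage 2: naive Bayes classification of detected anomalies.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(1)
    normal = rng.normal(loc=[50.0, 1.2], scale=[2.0, 0.1], size=(500, 2))  # temp, flow

    mu, sigma = normal.mean(axis=0), normal.std(axis=0)
    def is_anomalous(x, threshold=4.0):
        return np.any(np.abs((x - mu) / sigma) > threshold)

    attacks_X = np.vstack([rng.normal([80, 1.2], [2, 0.1], (100, 2)),   # "overheat"
                           rng.normal([50, 0.2], [2, 0.1], (100, 2))])  # "flow cut"
    attacks_y = ["overheat"] * 100 + ["flow_cut"] * 100
    clf = GaussianNB().fit(attacks_X, attacks_y)

    sample = np.array([49.0, 0.15])                 # incoming reading
    if is_anomalous(sample):
        label = clf.predict([sample])[0]
        confidence = clf.predict_proba([sample]).max()
        print("anomaly classified as", label, "confidence", round(confidence, 3))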