Biblio

Found 5756 results

Filters: Keyword is Human Behavior
2017-08-02
Chabanne, Hervé, Keuffer, Julien, Lescuyer, Roch.  2016.  Study of a Verifiable Biometric Matching. Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security. :183–184.

In this paper, we apply verifiable computing techniques to biometric matching. The purpose of verifiable computation is to give the result of a computation along with a proof that the calculations were correctly performed. We adapt the sumcheck protocol and present a system that performs verifiable biometric matching in the case of fast border control. This is work in progress, and we focus on verifying an inner product. We then give some experimental results of its implementation. Verifiable computation here strengthens the authentication phase by bringing into the process a proof that the biometric verification was correctly performed.
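The inner-product verification the abstract focuses on can be illustrated with a toy, honest-prover sumcheck over a prime field. This is a sketch under simplifying assumptions (illustrative modulus, verifier-local randomness instead of a Fiat-Shamir transform, no biometric pipeline), not the authors' protocol:

```python
import random

P = 2**61 - 1              # toy prime field modulus (illustrative choice)
INV2 = pow(2, P - 2, P)    # inverse of 2 mod P, used for interpolation

def sumcheck_inner_product(a, b, rng=None):
    """Honest-prover sumcheck that the claimed sum equals <a, b> over GF(P).
    Vectors are viewed as multilinear extensions; in each round the prover
    sends a degree-2 polynomial as its values at t = 0, 1, 2.
    Returns (claimed_sum, accepted)."""
    rng = rng or random.Random(0)
    n = len(a)
    assert n == len(b) and n & (n - 1) == 0, "length must be a power of two"
    A = [x % P for x in a]
    B = [x % P for x in b]
    claim = total = sum(x * y for x, y in zip(A, B)) % P
    while len(A) > 1:
        h = len(A) // 2
        A0, A1, B0, B1 = A[:h], A[h:], B[:h], B[h:]
        # Prover: round polynomial g(t) = sum over the remaining variables
        g0 = sum(x * y for x, y in zip(A0, B0)) % P
        g1 = sum(x * y for x, y in zip(A1, B1)) % P
        g2 = sum((2 * x1 - x0) * (2 * y1 - y0)
                 for x0, x1, y0, y1 in zip(A0, A1, B0, B1)) % P
        # Verifier: consistency check g(0) + g(1) == claim, then challenge r
        if (g0 + g1) % P != claim:
            return total, False
        r = rng.randrange(P)
        claim = (g0 * ((r - 1) * (r - 2) % P) % P * INV2
                 - g1 * (r * (r - 2) % P)
                 + g2 * (r * (r - 1) % P) % P * INV2) % P  # g(r) via Lagrange
        # Both sides fold their tables at the challenge point
        A = [(x0 + r * (x1 - x0)) % P for x0, x1 in zip(A0, A1)]
        B = [(y0 + r * (y1 - y0)) % P for y0, y1 in zip(B0, B1)]
    return total, claim == A[0] * B[0] % P   # final spot check
```

An honest prover is always accepted; a prover lying about the sum is caught with high probability because a wrong degree-2 round polynomial agrees with the true one at few points of the field.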

2017-10-25
Nykänen, Riku, Kärkkäinen, Tommi.  2016.  Supporting Cyber Resilience with Semantic Wiki. Proceedings of the 12th International Symposium on Open Collaboration. :21:1–21:8.

Cyber resilient organizations, their functions, and their computing infrastructures should be tolerant of rapid and unexpected changes in the environment. Information security is an organization-wide common mission, whose success strongly depends on efficient knowledge sharing. For this purpose, semantic wikis have proved their strength as flexible collaboration and knowledge-sharing platforms. However, there has been no notable academic research on how semantic wikis could be used as an information security management platform in organizations for improved cyber resilience. In this paper, we propose using a semantic wiki as an agile information security management platform. More precisely, the wiki contents are based on the structured model of the NIST Special Publication 800-53 information security control catalogue, which is extended in this research with additional properties that support information security management and especially security control implementation. We present common use cases for managing information security in organizations and show how these use cases can be implemented on the semantic wiki platform. As organizations seek cyber resilience, where the focus is on the availability of cyber-related assets and services, we extend the control selection with an option to focus on availability. The results of the study show that a semantic-wiki-based information security management and collaboration platform can provide a cost-efficient solution for improved cyber resilience, especially for small and medium-sized organizations that struggle to develop information security with limited resources.

2017-04-24
Tall, Anne, Wang, Jun, Han, Dezhi.  2016.  Survey of Data Intensive Computing Technologies Application to Security Log Data Management. Proceedings of the 3rd IEEE/ACM International Conference on Big Data Computing, Applications and Technologies. :268–273.

Data intensive computing research and technology developments offer the potential to provide significant improvements for several security log management challenges. Approaches to address the complexity, timeliness, expense, diversity, and noise issues have been identified. These improvements are motivated by the increasingly important role of analytics. Machine learning and expert systems that incorporate attack patterns are providing greater detection insight. Finding actionable indicators requires the analysis to combine security event log data with other network data, such as access control lists, making the big-data problem even bigger. Automation of threat intelligence is recognized as incomplete, with limited adoption of standards. With limited progress in anomaly signature detection, movement towards expert systems has been identified as the path forward. Techniques focus on matching the behaviors of attackers to patterns of abnormal activity in the network. The need to stream, parse, and analyze large volumes of small, semi-structured data files can feasibly be addressed through a variety of techniques identified by researchers. This report highlights research in key areas, including protection of the data, performance of the systems, and network bandwidth utilization.

2017-11-20
Mallikarjunan, K. N., Muthupriya, K., Shalinie, S. M..  2016.  A survey of distributed denial of service attack. 2016 10th International Conference on Intelligent Systems and Control (ISCO). :1–6.

Information security deals with a large number of subjects, such as spoofed message detection, audio processing, video surveillance, and cyber-attack detection. However, the biggest threat to homeland security is cyber-attacks, and the Distributed Denial of Service attack is one of them. Interconnected systems such as database servers, web servers, and cloud computing servers are now under threat from network attackers. Denial of service is a common attack on the internet that causes problems for both users and service providers. In a Distributed Denial of Service attack, distributed attack sources can be used to enlarge the attack and amplify its effect. Distributed Denial of Service attacks aim at exhausting the communication and computational power of the network by flooding it with packets and generating malicious traffic. For a service to remain available, a DDoS attack must be detected and mitigated quickly, before it blocks legitimate users' access to the target. The group of systems used to perform the DoS attack is known as a botnet. This paper introduces an overview of the state of the art in DDoS attack detection strategies.

2017-05-19
Nagesh, K., Sumathy, R., Devakumar, P., Sathiyamurthy, K..  2016.  A Survey on Denial of Service Attacks and Preclusions. Proceedings of the International Conference on Informatics and Analytics. :118:1–118:10.

Security is concerned with protecting assets. The aspects of security can be applied to any situation: defense, detection, and deterrence. Network security plays the important role of protecting information, hardware, and software on a computer network. Denial of service (DoS) attacks have great impact on the internet. These attacks attempt to disrupt legitimate users' access to services. By exploiting computer vulnerabilities, attackers easily consume a victim's resources. Many special techniques have been developed to protect against DoS attacks, and some organizations deploy several defense mechanisms to tackle the security problems. This paper surveys various types of attacks and the solutions associated with each layer of the OSI model. These attacks and solutions have different impacts in different environments; thus the rapid growth of new technologies may bring still worse attacks in the future.

2017-06-05
Xu, Guangwu, Yan, Zheng.  2016.  A Survey on Trust Evaluation in Mobile Ad Hoc Networks. Proceedings of the 9th EAI International Conference on Mobile Multimedia Communications. :140–148.

A Mobile Ad Hoc Network (MANET) is a multi-hop, temporary, and autonomic network comprised of a set of mobile nodes. MANETs have the features of no central authority, dynamically changing topology, multi-hop routing, mobile nodes, limited resources, and so on, which expose them to more threats. Trust evaluation is used to support nodes in cooperating in a secure and trustworthy way by evaluating the trust of participating nodes in MANETs. However, many trust evaluation models proposed for MANETs still have problems and shortcomings. In this paper, we review the existing research, then analyze and compare the proposed trust evaluation models by presenting and applying uniform criteria, in order to point out a number of open issues and challenges and to suggest future research trends.

2017-11-27
Holm, H., Sommestad, T..  2016.  SVED: Scanning, Vulnerabilities, Exploits and Detection. MILCOM 2016 - 2016 IEEE Military Communications Conference. :976–981.

This paper presents the Scanning, Vulnerabilities, Exploits and Detection tool (SVED). SVED facilitates reliable and repeatable cyber security experiments by providing a means to design, execute and log malicious actions, such as software exploits, as well as the alerts provided by intrusion detection systems. Due to its distributed architecture, it is able to support large experiments with thousands of attackers, sensors and targets. SVED is automatically updated with threat intelligence information from various services.

2017-05-19
Xia, Lixue, Tang, Tianqi, Huangfu, Wenqin, Cheng, Ming, Yin, Xiling, Li, Boxun, Wang, Yu, Yang, Huazhong.  2016.  Switched by Input: Power Efficient Structure for RRAM-based Convolutional Neural Network. Proceedings of the 53rd Annual Design Automation Conference. :125:1–125:6.

The Convolutional Neural Network (CNN) is a powerful technique widely used in computer vision, but it demands far more computation and memory than traditional solutions. The emerging metal-oxide resistive random-access memory (RRAM) and RRAM crossbar have shown great potential for neuromorphic applications with high energy efficiency. However, the interfaces between analog RRAM crossbars and digital peripheral functions, namely Analog-to-Digital Converters (ADCs) and Digital-to-Analog Converters (DACs), consume most of the area and energy of an RRAM-based CNN design due to the large amount of intermediate data in a CNN. In this paper, we propose an energy efficient structure for RRAM-based CNNs. Based on an analysis of the data distribution, a quantization method is proposed that reduces the intermediate data to 1 bit and eliminates the DACs. An energy efficient structure that uses the input data as selection signals is proposed to reduce the ADC cost of merging results from multiple crossbars. The experimental results show that the proposed method and structure can save 80% area and more than 95% energy while maintaining the same or comparable classification accuracy of the CNN on MNIST.
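As a rough illustration of the DAC-elimination idea, intermediate activations can be reduced to a single bit by thresholding. The median-of-a-calibration-batch threshold below is an assumption for the sketch, not the paper's exact quantization rule derived from the data distribution:

```python
# Hedged sketch: threshold-based 1-bit quantization of intermediate CNN
# activations, illustrating how the data fed between crossbars could be
# reduced to one bit (so no DAC is needed on the receiving side).

def calibrate_threshold(samples):
    """Pick a threshold from a calibration batch; the median is an
    assumed, simple stand-in for the paper's data-distribution analysis."""
    s = sorted(samples)
    return s[len(s) // 2]

def quantize_1bit(activations, threshold):
    """Map each activation to a single bit relative to the threshold."""
    return [1 if a >= threshold else 0 for a in activations]
```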

2017-08-02
Emmi, Michael, Enea, Constantin.  2016.  Symbolic Abstract Data Type Inference. Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages. :513–525.

Formal specification is a vital ingredient to scalable verification of software systems. In the case of efficient implementations of concurrent objects like atomic registers, queues, and locks, symbolic formal representations of their abstract data types (ADTs) enable efficient modular reasoning, decoupling clients from implementations. Writing adequate formal specifications, however, is a complex task requiring rare expertise. In practice, programmers write reference implementations as informal specifications. In this work we demonstrate that effective symbolic ADT representations can be automatically generated from the executions of reference implementations. Our approach exploits two key features of naturally-occurring ADTs: violations can be decomposed into a small set of representative patterns, and these patterns manifest in executions with few operations. By identifying certain algebraic properties of naturally-occurring ADTs, and exhaustively sampling executions up to a small number of operations, we generate concise symbolic ADT representations which are complete in practice, enabling the application of efficient symbolic verification algorithms without the burden of manual specification. Furthermore, the concise ADT violation patterns we generate are human-readable, and can serve as useful, formal documentation.

2017-05-22
Pawar, Shwetambari, Jain, Nilakshi, Deshpande, Swati.  2016.  System Attribute Measures of Network Security Analyzer. Proceedings of the ACM Symposium on Women in Research 2016. :51–54.

In this paper, we describe a method to evaluate the performance of a project that detects various web attacks. The project is capable of identifying and preventing attacks such as SQL Injection, Cross-Site Scripting, URL rewriting, and Web server 400 error codes. The performance of the system is assessed using the system attributes mentioned in this paper, which are also used to determine the efficiency of the system.

2017-09-05
Luh, Robert, Schrittwieser, Sebastian, Marschalek, Stefan.  2016.  TAON: An Ontology-based Approach to Mitigating Targeted Attacks. Proceedings of the 18th International Conference on Information Integration and Web-based Applications and Services. :303–312.

Targeted attacks on IT systems are a rising threat against the confidentiality of sensitive data and the availability of systems and infrastructures. Planning for the eventuality of a data breach or sabotage attack has become an increasingly difficult task with the emergence of advanced persistent threats (APTs), a class of highly sophisticated cyber-attacks that are nigh impossible to detect using conventional signature-based systems. Understanding, interpreting, and correlating the particulars of such advanced targeted attacks is a major research challenge that needs to be tackled before behavior-based approaches can evolve from their current state to truly semantics-aware solutions. Ontologies offer a versatile foundation well suited for depicting the complex connections between such behavioral data and the diverse technical and organizational properties of an IT system. In order to facilitate the development of novel behavior-based detection systems, we present TAON, an OWL-based ontology offering a holistic view on actors, assets, and threat details, which are mapped to individual abstracted events and anomalies that can be detected by today's monitoring data providers. TAON offers a straightforward means to plan an organization's defense against APTs and helps to understand how, why, and by whom certain resources are targeted. Populated by concrete data, the proposed ontology becomes a smart correlation framework able to combine several data sources into a semantic assessment of any targeted attack.

2017-05-22
de Chérisey, Eloi, Guilley, Sylvain, Rioul, Olivier, Jayasinghe, Darshana.  2016.  Template Attacks with Partial Profiles and Dirichlet Priors: Application to Timing Attacks. Proceedings of the Hardware and Architectural Support for Security and Privacy 2016. :7:1–7:8.

In order to retrieve the secret key in a side-channel attack, the attacker computes distinguisher values using all the available data. A profiling stage is very useful for providing some a priori information about the leakage model. However, profiling is essentially empirical and may not be exhaustive; therefore, during the attack, the attacker may come upon previously unseen data, which can be troublesome. A lazy workaround is to ignore all such novel observations altogether. In this paper, we show that this is not optimal and can be avoided. Our proposed techniques eventually improve the performance of classical information-theoretic distinguishers in terms of success rate.
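The remedy for unseen profiling data can be illustrated with a symmetric Dirichlet prior, which for a categorical leakage model reduces to additive (Laplace-style) smoothing: novel observations receive a small nonzero probability instead of being discarded. The function below is a generic sketch of that idea, not the authors' distinguisher:

```python
import math
from collections import Counter

def smoothed_log_likelihood(profile_samples, observed, alphabet_size, alpha=1.0):
    """Log-likelihood of `observed` under a categorical model estimated from
    `profile_samples` with a symmetric Dirichlet(alpha) prior.
    A value never seen while profiling gets probability
    alpha / (N + alpha * K) rather than zero, so the attacker can still
    score traces containing previously unseen data."""
    counts = Counter(profile_samples)
    n = len(profile_samples)
    denom = n + alpha * alphabet_size
    return sum(math.log((counts[o] + alpha) / denom) for o in observed)
```

With a partial profile, the attacker would compute this score per key hypothesis and rank candidates, never hitting the minus-infinity scores that unsmoothed empirical estimates produce.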

2016-04-10
Zielinska, Olga A., Welk, Allaire K., Murphy-Hill, Emerson, Mayhorn, Christopher B..  2016.  A Temporal Analysis of Persuasion Principles in Phishing Emails. Human Factors and Ergonomics Society 60th Annual Meeting.

Eight hundred eighty-seven phishing emails from Arizona State University, Brown University, and Cornell University were assessed by two reviewers for Cialdini’s six principles of persuasion: authority, social proof, liking/similarity, commitment/consistency, scarcity, and reciprocation. A correlational analysis of email characteristics by year revealed that the persuasion principles of commitment/consistency and scarcity have increased over time, while the principles of reciprocation and social proof have decreased over time. Authority and liking/similarity revealed mixed results with certain characteristics increasing and others decreasing. Results from this study can inform user training of phishing emails and help cybersecurity software to become more effective. 

2017-04-24
Kuang, Liwei, Yang, Laurence T., Rho, Seungmin (Charlie), Yan, Zheng, Qiu, Kai.  2016.  A Tensor-Based Framework for Software-Defined Cloud Data Center. ACM Trans. Multimedia Comput. Commun. Appl. 12:74:1–74:23.

Multimedia, consisting of video clips, images, and audio files, has been increasing exponentially and is the biggest form of big data. Processing and analyzing it on a cloud data center has become a preferred solution that can utilize the large pool of cloud resources to address the problems caused by the tremendous amount of unstructured multimedia data. However, there exist many challenges in processing multimedia big data on a cloud data center, such as the multimedia data representation approach, an efficient networking model, and an estimation method for traffic patterns. The primary purpose of this article is to develop a novel tensor-based software-defined networking model on a cloud data center for multimedia big-data computation and communication. First, an overview of the proposed framework is provided, in which the functions of the representative modules are briefly illustrated. Then, three models—forwarding tensor, control tensor, and transition tensor—are proposed for the management of networking devices and the prediction of network traffic patterns. Finally, two algorithms for single-mode and multimode tensor eigendecomposition are developed, and an incremental method is employed to efficiently update the generated eigenvectors and eigentensors. Experimental results reveal that the proposed framework is feasible and efficient for handling multimedia big data on a cloud data center.

2017-09-26
Bertolino, Antonia, Daoudagh, Said, Lonetti, Francesca, Marchetti, Eda.  2016.  Testing Access Control Policies Against Intended Access Rights. Proceedings of the 31st Annual ACM Symposium on Applied Computing. :1641–1647.

Access Control Policies are used to specify who can access which resource under which conditions, and ensuring their correctness is vital to prevent security breaches. As access control policies can be complex and error-prone, we propose an original framework that supports the validation of the implemented policies (specified in the standard XACML notation) against the intended rights, which can be informally expressed, e.g. in tabular form. The framework relies on well-known software testing technology, such as mutation and combinatorial techniques. The paper presents the implemented environment and an application example.

2017-09-19
Xie, Tao, Enck, William.  2016.  Text Analytics for Security: Tutorial. Proceedings of the Symposium and Bootcamp on the Science of Security. :124–125.

Computing systems that make security decisions often fail to take into account human expectations. This failure occurs because human expectations are typically drawn from textual sources (e.g., mobile application descriptions and requirements documents) and are hard to extract and codify. Recently, researchers in security and software engineering have begun using text analytics to create initial models of human expectation. In this tutorial, we provide an introduction to popular techniques and tools of natural language processing (NLP) and text mining, and share our experiences in applying text analytics to security problems. We also highlight the current challenges of applying these techniques and tools to security problems. We conclude the tutorial with a discussion of future research directions.

2017-05-19
Carter, Lemuria, McBride, Maranda.  2016.  Texting While Driving Among Teens: Exploring User Perceptions to Identify Policy Recommendations. Proceedings of the 17th International Digital Government Research Conference on Digital Government Research. :375–378.

Texting while driving has emerged as a significant threat to citizen safety. In this study, we utilize general deterrence theory (GDT), protection motivation theory, and personality traits to evaluate texting while driving (TWD) compliance intentions among teenage drivers. This paper presents the results of our pilot study, in which we administered an online survey to 105 teenage and young adult drivers. The potential implications for research, practice, and policy are discussed.

2017-09-05
Huang, Haixing, Song, Jinghe, Lin, Xuelian, Ma, Shuai, Huai, Jinpeng.  2016.  TGraph: A Temporal Graph Data Management System. Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. :2469–2472.

Temporal graphs are a class of graphs whose nodes and edges, together with the associated properties, continuously change over time. Recently, systems have been developed to support snapshot queries over temporal graphs. However, these systems barely support aggregate time range queries. Moreover, these systems cannot guarantee ACID transactions, an important feature for data management systems whenever concurrent processing is involved. To solve these issues, we design and develop TGraph, a temporal graph data management system that guarantees ACID transactions and supports fast temporal graph queries.

2017-08-02
Chu, Pin-Yu, Tseng, Hsien-Lee.  2016.  A Theoretical Framework for Evaluating Government Open Data Platform. Proceedings of the International Conference on Electronic Governance and Open Society: Challenges in Eurasia. :135–142.

Regarding Information and Communication Technologies (ICTs) in the public sector, electronic governance was the first concept to emerge, and it has been recognized as an important issue in government's outreach to citizens since the early 1990s. The most important recent development in e-governance is Open Government Data, which provides citizens with the opportunity to freely access government data, build value-added applications, provide creative public services, and participate in different kinds of democratic processes. Open Government Data is expected to enhance the quality and efficiency of government services, strengthen democratic participation, and create benefits for the public and enterprises. The success of Open Government Data hinges on its accessibility, the quality of the data, security policy, and platform functions in general. This article presents a robust assessment framework that not only provides a valuable understanding of the development of Open Government Data but also provides an effective feedback mechanism for mid-course corrections. We further apply the framework to evaluate the Open Government Data platform of the central government, on which open data from nine major government agencies are analyzed. Our research results indicate that the Financial Supervisory Commission performs better than other agencies, especially in terms of accessibility: it mostly provides dataset formats of 3 stars or above, and the quality of its metadata is well established. However, most of the data released by government agencies are regulations, reports, operations, and other administrative data, which are not immediately applicable. Overall, government agencies should enhance the amount and quality of Open Government Data proactively and continuously, and also strengthen the discussion and linkage functions of the platforms and the quality of the datasets. Aside from consolidating collaborations and interactions with open data communities, government agencies should improve the awareness and ability of personnel to manage and apply open data. As the acceptance of open data among personnel improves, the quantity and quality of Open Government Data would improve as well.

2017-05-17
Shrivastava, Aviral, Derler, Patricia, Baboud, Ya-Shian Li, Stanton, Kevin, Khayatian, Mohammad, Andrade, Hugo A., Weiss, Marc, Eidson, John, Chandhoke, Sundeep.  2016.  Time in Cyber-physical Systems. Proceedings of the Eleventh IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis. :4:1–4:10.

Many modern cyber-physical systems (CPS), especially industrial automation systems, require the actions of multiple computational systems to be performed at much higher rates and more tightly synchronized than is possible with ad hoc designs. Time is the common entity that computing and physical systems in a CPS share, and correctly interfacing it is essential to the flawless functionality of a CPS. Fundamental research is needed on ways to synchronize the clocks of computing systems to a high degree, and on design methods that enable the building blocks of a CPS to perform actions at specified times. To realize the potential of CPS in the coming decades, suitable ways to specify distributed CPS applications are needed, including their timing requirements; ways to specify the timing of CPS components (e.g. sensors, actuators, computing platforms); timing analysis to determine whether the application design is possible using the components; confident top-down design methodologies that can ensure that the system meets its timing requirements; and ways and methodologies to test and verify that the system meets the timing requirements. Furthermore, strategies for securing timing need to be carefully considered at every CPS design stage and not simply added on. This paper exposes these challenges of CPS development, points out the limitations of previous approaches, and provides some research directions towards solving these challenges.

2017-08-22
Jarrah, Hazim, Chong, Peter, Sarkar, Nurul I., Gutierrez, Jairo.  2016.  A Time-Free Comparison-Based System-Level Fault Diagnostic Model for Highly Dynamic Networks. Proceedings of the 11th International Conference on Queueing Theory and Network Applications. :12:1–12:6.

This paper considers the problem of system-level fault diagnosis in highly dynamic networks. The existing fault diagnostic models deal mainly with static faults and have limited capabilities to handle dynamic networks. These fault diagnostic models are based on timers that work on a simple timeout mechanism to identify the node status, and often make simplistic assumptions for system implementations. To overcome the above problems, we propose a time-free comparison-based diagnostic model. Unlike the traditional models, the proposed model does not rely on timers and is more suitable for use in dynamic network environments. We also develop a novel comparison-based fault diagnosis protocol for identifying and diagnosing dynamic faults. The performance of the protocol has been analyzed and its correctness has been proved.

2017-09-26
Walfield, Neal H., Koch, Werner.  2016.  TOFU for OpenPGP. Proceedings of the 9th European Workshop on System Security. :2:1–2:6.

We present the design and implementation of a trust-on-first-use (TOFU) policy for OpenPGP. When an OpenPGP user verifies a signature, TOFU checks that the signer used the same key as in the past. If not, this is a strong indicator that a key is a forgery and either the message is also a forgery or an active man-in-the-middle attack (MitM) is or was underway. That is, TOFU can proactively detect new attacks if the user had previously verified a message from the signer. And, it can reactively detect an attack if the signer gets a message through. TOFU cannot, however, protect against sustained MitM attacks. Despite this weakness, TOFU's practical security is stronger than the Web of Trust (WoT), OpenPGP's current trust policy, for most users. The problem with the WoT is that it requires too much user support. TOFU is also better than the most popular alternative, an X.509-based PKI, which relies on central servers whose certification processes are often sloppy. In this paper, we outline how TOFU can be integrated into OpenPGP; we address a number of potential attacks against TOFU; and, we show how TOFU can work alongside the WoT. Our implementation demonstrates the practicality of the approach.
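The core TOFU check reduces to a persistent binding from signer identity to key fingerprint. A minimal sketch (the JSON file format and the return labels are assumptions for illustration, unrelated to GnuPG's actual TOFU database) might look like:

```python
import json
import os

class TofuStore:
    """Minimal trust-on-first-use key store.
    Maps a signer identity to the key fingerprint first seen for it;
    a later mismatch is the strong forgery/MitM indicator the paper describes."""

    def __init__(self, path="tofu.json"):
        self.path = path
        self.bindings = json.load(open(path)) if os.path.exists(path) else {}

    def check(self, signer, fingerprint):
        known = self.bindings.get(signer)
        if known is None:
            # First use: trust and record the binding
            self.bindings[signer] = fingerprint
            return "new-binding"
        # Same key as in the past is fine; a different key is an alarm
        return "match" if known == fingerprint else "CONFLICT"
```

Note the weakness the abstract also concedes: if the attacker is in the middle from the very first message, the forged key becomes the recorded binding, so TOFU cannot detect a sustained MitM.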

2017-09-05
Wang, Wei, Yang, Lin, Zhang, Qian.  2016.  Touch-and-guard: Secure Pairing Through Hand Resonance. Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing. :670–681.

Securely pairing wearables with another device is the key to many promising applications, such as mobile payment, sensitive data transfer and secure interactions with smart home devices. This paper presents Touch-And-Guard (TAG), a system that uses hand touch as an intuitive manner to establish a secure connection between a wristband wearable and the touched device. It generates secret bits from hand resonant properties, which are obtained using accelerometers and vibration motors. The extracted secret bits are used by both sides to authenticate each other and then communicate confidentially. The ubiquity of accelerometers and motors presents an immediate market for our system. We demonstrate the feasibility of our system using an experimental prototype and conduct experiments involving 12 participants with 1440 trials. The results indicate that we can generate secret bits at a rate of 7.84 bit/s, which is 58% faster than conventional text input PIN authentication. We also show that our system is resistant to acoustic eavesdroppers in proximity.
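A simplified view of deriving shared secret bits from a common vibration signal: threshold per-window energy at the median, so both sides, observing the same hand resonance, extract the same bits. The windowing and thresholding choices here are assumptions for illustration, not the paper's encoder:

```python
def extract_bits(signal, window):
    """Derive bits from a vibration signal by thresholding per-window
    energy at the median energy. Two devices sampling the same resonance
    (wristband and touched device) would run this independently and
    should arrive at matching bit strings."""
    energies = [sum(x * x for x in signal[i:i + window])
                for i in range(0, len(signal) - window + 1, window)]
    median = sorted(energies)[len(energies) // 2]
    return [1 if e > median else 0 for e in energies]
```

In practice the two signals differ by noise, so a real pairing scheme would add quantization guard bands and information reconciliation before using the bits as a key.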

2017-10-25
Slavin, Rocky, Wang, Xiaoyin, Hosseini, Mitra Bokaei, Hester, James, Krishnan, Ram, Bhatia, Jaspreet, Breaux, Travis D., Niu, Jianwei.  2016.  Toward a Framework for Detecting Privacy Policy Violations in Android Application Code. Proceedings of the 38th International Conference on Software Engineering. :25–36.

Mobile applications frequently access sensitive personal information to meet user or business requirements. Because such information is sensitive in general, regulators increasingly require mobile-app developers to publish privacy policies that describe what information is collected. Furthermore, regulators have fined companies when these policies are inconsistent with the actual data practices of mobile apps. To help mobile-app developers check their privacy policies against their apps' code for consistency, we propose a semi-automated framework that consists of a policy terminology-API method map that links policy phrases to API methods that produce sensitive information, and information flow analysis to detect misalignments. We present an implementation of our framework based on a privacy-policy-phrase ontology and a collection of mappings from API methods to policy phrases. Our empirical evaluation on 477 top Android apps discovered 341 potential privacy policy violations.
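The misalignment check at the heart of the framework can be sketched as a set difference between the API methods covered by the policy's phrases and the API methods the app actually calls; `phrase_to_apis` below is a hypothetical stand-in for the paper's policy-terminology/API-method map:

```python
def find_potential_violations(policy_phrases, phrase_to_apis, used_apis):
    """Hedged sketch of the consistency check: every sensitive API method
    the app calls should be covered by some phrase appearing in its
    privacy policy. Returns the methods that are called but never declared."""
    declared = set()
    for phrase in policy_phrases:
        declared |= set(phrase_to_apis.get(phrase, ()))
    return sorted(set(used_apis) - declared)
```

The real framework additionally uses information flow analysis, so that only API results actually leaving the device count as collection; this sketch flags any undeclared call.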

2017-09-05
Ghanim, Yasser.  2016.  Toward a Specialized Quality Management Maturity Assessment Model. Proceedings of the 2Nd Africa and Middle East Conference on Software Engineering. :1–8.

Software quality assessment models are either too broad, such as CMMI-DEV and SPICE, which cover the full software development life cycle (SDLC), or too narrow, such as TMMi and TPI, which focus on testing. Quality management, as a main concern within the software industry, is broader than the concept of testing. The V-Model takes a broader view with the concepts of verification and validation; quality assurance (QA) is another broader term that includes the quality of processes, and configuration audits add more scope. In parallel, there are some less visible dimensions of quality not often addressed in traditional models, such as the business alignment of QA efforts. This paper compares the commonly accepted models related to software quality management and proposes a model that fills an empty space in this area. The paper provides some analysis of the concepts of maturity and capability levels and proposes some adaptations for quality management assessment.