Biblio

Found 1261 results

Filters: First Letter Of Title is I
2018-05-17
Sam, Monica, Boddhu, Sanjay K., Duncan, Kayleigh, Botha, Hermanus V., Gallagher, John C..  2016.  Improving In-Flight Learning in a Flapping Wing Micro Air Vehicle. International Journal of Monitoring and Surveillance Technologies Research (IJMSTR). 4:14.
2017-05-30
Shelke, Priya M., Prasad, Rajesh S..  2016.  Improving JPEG Image Anti-forensics. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :75:1–75:5.

This paper proposes a forensic method for identifying whether an image was previously compressed by JPEG, and also proposes an improved anti-forensics method that enhances the quality of a noise-added image. Stamm and Liu's anti-forensics method disables the detection capabilities of various forensic methods proposed in the literature for identifying compressed images; however, it also degrades the quality of the image. First, we analyze the anti-forensics method and then use the decimal histogram of the coefficients to distinguish never-compressed images from previously compressed ones, even when the compressed image has been processed anti-forensically. After analyzing the noise distribution in the anti-forensically processed image, we propose a method to remove the Gaussian noise caused by image dithering, which in turn enhances the image quality. The paper is organized as follows: Section I is the introduction, covering previous literature; Section II summarizes the anti-forensic method proposed by Stamm et al.; Section III presents our forensic approach; Section IV presents the improved anti-forensic approach; and Section V covers the details of the experimentation, followed by the conclusion.
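
A minimal sketch of the decimal-histogram idea described above, assuming a grayscale image and 8x8 block DCTs; the paper's exact procedure and thresholds are not reproduced here. For a never-compressed image the fractional parts of the block-DCT coefficients are roughly uniform, while prior JPEG quantization makes them cluster near zero:

import numpy as np
from scipy.fftpack import dct

def decimal_histogram(img, bins=20):
    # Crop to whole 8x8 blocks and center pixel values, JPEG-style.
    h, w = (d - d % 8 for d in img.shape)
    img = img[:h, :w].astype(np.float64) - 128.0
    fractions = []
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            block = img[y:y + 8, x:x + 8]
            coeffs = dct(dct(block.T, norm='ortho').T, norm='ortho')
            ac = np.delete(coeffs.ravel(), 0)   # drop the DC coefficient
            fractions.append(np.abs(ac) % 1.0)  # fractional ("decimal") parts
    hist, _ = np.histogram(np.concatenate(fractions), bins=bins, range=(0, 1))
    return hist / hist.sum()

# A histogram strongly peaked near 0 suggests prior JPEG compression;
# a roughly flat one suggests the image was never compressed.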

2017-09-15
Shi, Tianlin, Agostinelli, Forest, Staib, Matthew, Wipf, David, Moscibroda, Thomas.  2016.  Improving Survey Aggregation with Sparsely Represented Signals. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :1845–1854.

In this paper, we develop a new aggregation technique to reduce the cost of surveying. Our method aims to jointly estimate a vector of target quantities, such as public opinion or voter intent, across time, and to maintain good estimates when using only a fraction of the data. Inspired by the James-Stein estimator, we resolve this challenge by shrinking the estimates toward a global mean which is assumed to have a sparse representation in some known basis. This assumption has led to two different methods for estimating the global mean: orthogonal matching pursuit and deep learning. Both significantly reduce the number of samples needed to achieve good estimates of the true means of the data and, in the case of presidential elections, can estimate the outcome of the 2012 United States elections while saving hundreds of thousands of samples and maintaining accuracy.
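
The shrinkage step can be sketched as follows, assuming a DCT dictionary as the known sparse basis (the basis, noise level, and sparsity level here are illustrative choices, not the authors'): the global mean is recovered with orthogonal matching pursuit, and the raw estimates are then shrunk toward it, James-Stein style:

import numpy as np
from scipy.fftpack import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

def shrink_toward_sparse_mean(y, sigma2, n_nonzero=5):
    """y: noisy per-quantity sample means; sigma2: their noise variance."""
    p = len(y)
    basis = idct(np.eye(p), axis=0, norm='ortho')   # assumed sparse basis
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(basis, y)
    mu = basis @ omp.coef_                          # sparse global mean
    resid = y - mu
    # Positive-part James-Stein shrinkage toward the recovered mean.
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / float(resid @ resid))
    return mu + shrink * resid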

2017-10-13
Hoole, Alexander M., Traore, Issa, Delaitre, Aurelien, de Oliveira, Charles.  2016.  Improving Vulnerability Detection Measurement: [Test Suites and Software Security Assurance]. Proceedings of the 20th International Conference on Evaluation and Assessment in Software Engineering. :27:1–27:10.

The Software Assurance Metrics and Tool Evaluation (SAMATE) project at the National Institute of Standards and Technology (NIST) has created the Software Assurance Reference Dataset (SARD) to provide researchers and software security assurance tool developers with a set of known security flaws. As part of an empirical evaluation of a runtime monitoring framework, two test suites were executed and monitored, revealing deficiencies which led to a collaboration with the NIST SAMATE team to provide replacements. Test Suites 45 and 46 are analyzed, discussed, and updated to improve accuracy, consistency, preciseness, and automation. Empirical results show metrics such as recall, precision, and F-Measure are all impacted by invalid base assumptions regarding the test suites.
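
For reference, the reported metrics follow from raw counts as below; a test case with an invalid ground-truth label shifts the counts and hence all three scores, which is how flawed base assumptions in a suite distort measurement:

def precision_recall_f1(tp, fp, fn):
    # tp: real flaws flagged; fp: false alarms; fn: real flaws missed.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: 40 flaws found, 10 false alarms, 10 flaws missed.
print(precision_recall_f1(40, 10, 10))  # (0.8, 0.8, 0.8)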

2016-10-07
Pearson, C. J., Welk, A. K., Mayhorn, C. B..  2016.  In Automation We Trust? Identifying Varying Levels of Trust in Human and Automated Information Sources. Human Factors and Ergonomics Society. :201–205.

Humans can easily find themselves in high-cost situations where they must choose between suggestions made by an automated decision aid and a conflicting human decision aid. Previous research indicates that trust is an antecedent to reliance and often influences how individuals prioritize and integrate information presented by a human and/or automated information source. Expanding on previous work conducted by Lyons and Stokes (2012), the current experiment measured how trust in automated or human decision aids differs along with perceived risk and workload. The simulated task required 126 participants to choose the safest route for a military convoy; they were presented with conflicting information regarding which route was safest from an automated tool and a human. Results demonstrated that as workload increased, trust in automation decreased. As perceived risk increased, trust in the human decision aid increased. Individual differences in dispositional trust correlated with increased trust in both decision aids. These findings can be used to inform training programs and systems for operators who may receive information from human and automated sources. Examples of this context include air traffic control, aviation, and signals intelligence.

2017-08-22
Olagunju, Amos O., Samu, Farouk.  2016.  In Search of Effective Honeypot and Honeynet Systems for Real-Time Intrusion Detection and Prevention. Proceedings of the 5th Annual Conference on Research in Information Technology. :41–46.

A honeypot is a deception tool for enticing attackers to make efforts to compromise the electronic information systems of an organization. A honeypot can serve as an advanced security surveillance tool for use in minimizing the risks of attacks on information technology systems and networks. Honeypots are useful for providing valuable insights into potential system security loopholes. The current research investigated the effectiveness of using the centralized system management technologies Puppet and Virtual Machines in the implementation of automated honeypots for intrusion detection, correction, and prevention. A centralized logging system was used to collect information on the source address, country, and timestamp of intrusions by attackers. The unique contributions of this research include: a demonstration of how open-source technologies can be used to dynamically add or modify hacking incidents in a high-interaction honeynet system; a presentation of strategies for making honeypots more attractive, so that hackers spend more time and provide more hacking evidence; and an exhibition of algorithms for system and network intrusion prevention.
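
A minimal sketch of the kind of centralized log aggregation described; the log format and field names here are assumptions, not the authors':

import re
from collections import Counter
from datetime import datetime

LINE = re.compile(r'(?P<ts>\S+ \S+) src=(?P<ip>\d{1,3}(?:\.\d{1,3}){3}) '
                  r'country=(?P<cc>\w{2})')

def summarize(log_lines):
    by_country, by_ip = Counter(), Counter()
    for line in log_lines:
        m = LINE.search(line)
        if not m:
            continue
        datetime.strptime(m['ts'], '%Y-%m-%d %H:%M:%S')  # validate timestamp
        by_country[m['cc']] += 1
        by_ip[m['ip']] += 1
    return by_country.most_common(5), by_ip.most_common(5)

sample = ['2016-03-01 14:02:11 src=203.0.113.7 country=CN ssh login attempt']
print(summarize(sample))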

2018-05-28
Luo, T., Das, S. K., Tan, H., Xia, L..  2016.  Incentive mechanism design for crowdsourcing: An all-pay auction approach. ACM Transactions on Intelligent Systems and Technology (TIST). 7:35.
Luo, T., Kanhere, S., Das, S. K., Tan, H..  2016.  Incentive mechanism design for heterogeneous crowdsourcing using all-pay contests. IEEE Transactions on Mobile Computing. 15:2234–2246.
Restuccia, F., Das, S. K., Payton, J..  2016.  Incentive mechanisms for participatory sensing: Survey and research challenges. ACM Transactions on Sensor Networks (TOSN). 12:13.
2017-05-19
Nahshon, Yoav, Peterfreund, Liat, Vansummeren, Stijn.  2016.  Incorporating Information Extraction in the Relational Database Model. Proceedings of the 19th International Workshop on Web and Databases. :6:1–6:7.

Modern information extraction pipelines are typically constructed by (1) loading textual data from a database into a special-purpose application, (2) applying a myriad of text-analytics functions to the text, which produce a structured relational table, and (3) storing this table in a database. Obviously, this approach can lead to laborious development processes, complex and tangled programs, and inefficient control flows. Towards solving these deficiencies, we embark on an effort to lay the foundations of a new generation of text-centric database management systems. Concretely, we extend the relational model by incorporating into it the theory of document spanners, which provides the means and methods for the model to engage in Information Extraction (IE) tasks. This extended model, called Spannerlog, provides a novel declarative method for defining and manipulating textual data, which makes it possible to automate the typical workflow described above. In addition to formally defining Spannerlog and illustrating its usefulness for IE tasks, we also report on initial results concerning its expressive power.
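
A toy illustration of the document-spanner view the paper builds on, using a regex with named capture groups as the extractor; Spannerlog itself is a Datalog-style language and is not modeled here. A spanner maps a document to a relation whose attributes are spans, i.e., intervals into the text:

import re

def regex_spanner(pattern, doc):
    """Return a relation: one tuple of {attribute: (start, end)} per match."""
    return [{name: m.span(name) for name in m.re.groupindex}
            for m in re.finditer(pattern, doc)]

doc = "Alice reviewed Bob. Carol reviewed Dave."
rel = regex_spanner(r'(?P<reviewer>\w+) reviewed (?P<reviewee>\w+)', doc)
print(rel)        # [{'reviewer': (0, 5), 'reviewee': (15, 18)}, ...]
print(doc[0:5])   # 'Alice' -- span tuples index back into the document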

2017-05-18
Saurez, Enrique, Hong, Kirak, Lillethun, Dave, Ramachandran, Umakishore, Ottenwälder, Beate.  2016.  Incremental Deployment and Migration of Geo-distributed Situation Awareness Applications in the Fog. Proceedings of the 10th ACM International Conference on Distributed and Event-based Systems. :258–269.

Geo-distributed Situation Awareness applications are large in scale and are characterized by 24/7 data generation from mobile and stationary sensors (such as cameras and GPS devices); latency-sensitivity for converting sensed data to actionable knowledge; and elastic and bursty needs for computational resources. Fog computing [7] envisions providing computational resources close to the edge of the network, consequently reducing the latency for the sense-process-actuate cycle that exists in these applications. We propose Foglets, a programming infrastructure for the geo-distributed computational continuum represented by fog nodes and the cloud. Foglets provides APIs for a spatio-temporal data abstraction for storing and retrieving application generated data on the local nodes, and primitives for communication among the resources in the computational continuum. Foglets manages the application components on the Fog nodes. Algorithms are presented for launching application components and handling the migration of these components between Fog nodes, based on the mobility pattern of the sensors and the dynamic computational needs of the application. Evaluation results are presented for a Fog network consisting of 16 nodes using a simulated vehicular network as the workload. We show that the discovery and deployment protocol can be executed in 0.93 secs, and joining an already deployed application can be as quick as 65 ms. Also, QoS-sensitive proactive migration can be accomplished in 6 ms.

2018-07-06
Lampesberger, H..  2016.  An Incremental Learner for Language-Based Anomaly Detection in XML. 2016 IEEE Security and Privacy Workshops (SPW). :156–170.

The Extensible Markup Language (XML) is a complex language, and consequently, XML-based protocols are susceptible to entire classes of implicit and explicit security problems. Message formats in XML-based protocols are usually specified in XML Schema, and as a first-line defense, schema validation should reject malformed input. However, extension points in most protocol specifications break validation. Extension points are wildcards and considered best practice for loose composition, but they also enable an attacker to add unchecked content in a document, e.g., for a signature wrapping attack. This paper introduces datatyped XML visibly pushdown automata (dXVPAs) as a language representation for mixed-content XML and presents an incremental learner that infers a dXVPA from example documents. The learner generalizes XML types and datatypes in terms of automaton states and transitions, and an inferred dXVPA converges to a good-enough approximation of the true language. The automaton is free from extension points and capable of stream validation, e.g., as an anomaly detector for XML-based protocols. For dealing with adversarial training data, two scenarios of poisoning are considered: a poisoning attack is either uncovered at a later time or remains hidden. Unlearning can therefore remove an identified poisoning attack from a dXVPA, and sanitization trims low-frequency states and transitions to get rid of hidden attacks. All algorithms have been evaluated in four scenarios, including a web service implemented in Apache Axis2 and Apache Rampart, where attacks have been simulated. In all scenarios, the learned automaton had zero false positives and outperformed traditional schema validation.
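
A much-simplified illustration of stream validation in the visibly pushdown style, where open tags push and close tags pop, and a set of allowed parent-child transitions stands in for the learned automaton; the dXVPA learner additionally infers datatypes and handles mixed content, none of which is modeled here:

import xml.sax

class VPAValidator(xml.sax.ContentHandler):
    def __init__(self, transitions):
        super().__init__()
        self.transitions = transitions   # allowed (parent, child) pairs
        self.stack = ['#doc']            # one push per open tag, pop per close

    def startElement(self, name, attrs):
        if (self.stack[-1], name) not in self.transitions:
            raise ValueError(f'anomaly: <{name}> under <{self.stack[-1]}>')
        self.stack.append(name)

    def endElement(self, name):
        self.stack.pop()

allowed = {('#doc', 'order'), ('order', 'item'), ('item', 'price')}
xml.sax.parseString(b'<order><item><price>3</price></item></order>',
                    VPAValidator(allowed))   # passes; unknown nesting raises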

2017-05-30
Amir-Mohammadian, Sepehr, Skalka, Christian.  2016.  In-Depth Enforcement of Dynamic Integrity Taint Analysis. Proceedings of the 2016 ACM Workshop on Programming Languages and Analysis for Security. :43–56.

Dynamic taint analysis can be used as a defense against low-integrity data in applications with untrusted user interfaces. An important example is defense against XSS and injection attacks in programs with web interfaces. Data sanitization is commonly used in this context, and can be treated as a precondition for endorsement in a dynamic integrity taint analysis. However, sanitization is often incomplete in practice. We develop a model of dynamic integrity taint analysis for Java that addresses imperfect sanitization with an in-depth approach. To avoid false positives, results of sanitization are endorsed for access control (aka prospective security), but are tracked and logged for auditing and accountability (aka retrospective security). We show how this heterogeneous prospective/retrospective mechanism can be specified as a uniform policy, separate from code. We then use this policy to establish correctness conditions for a program rewriting algorithm that instruments code for the analysis. The rewriting itself is a model of existing, efficient Java taint analysis tools.
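
A conceptual sketch of the heterogeneous prospective/retrospective policy (the paper itself targets Java and enforces the policy by program rewriting): a sanitized value is endorsed so execution can proceed, but the endorsement is also logged for later audit:

import html
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger('taint.audit')

class Tainted(str):
    """A string originating from an untrusted source."""

def endorse(value, sanitizer):
    clean = sanitizer(value)
    # Prospective security: the value is endorsed and usable downstream.
    # Retrospective security: the endorsement is recorded for accountability.
    audit.info('endorsed %r -> %r via %s', str(value), clean,
               sanitizer.__name__)
    return clean

user_input = Tainted('<script>alert(1)</script>')
safe = endorse(user_input, html.escape)   # escaped output, audited endorsement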

2017-03-07
Thibodeau, David, Cave, Andrew, Pientka, Brigitte.  2016.  Indexed Codata Types. Proceedings of the 21st ACM SIGPLAN International Conference on Functional Programming. :351–363.

Indexed data types allow us to specify and verify many interesting invariants about finite data in a general purpose programming language. In this paper we investigate the dual idea: indexed codata types, which allow us to describe data-dependencies about infinite data structures. Unlike finite data, which is defined by constructors, we define infinite data by observations. Dual to pattern matching on indexed data, which may refine the type indices, we define copattern matching on indexed codata, where type indices guard the observations we can make. Our key technical contributions are three-fold: first, we extend Levy's call-by-push-value language with support for indexed (co)data and deep (co)pattern matching; second, we provide a clean foundation for dependent (co)pattern matching using equality constraints; third, we describe a small-step semantics using a continuation-based abstract machine, define coverage for indexed (co)patterns, and prove type safety. This is an important step towards building a foundation where (co)data type definitions and dependent types can coexist.
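
The data/codata duality in miniature: where finite data is built from constructors and consumed by pattern matching, the stream below is defined only by the observations head and tail. Python has no copatterns or type indices, so this merely mirrors the shape of the idea:

class Stream:
    """An infinite stream defined by its observations, not by constructors."""
    def __init__(self, head_fn, tail_fn):
        self._head, self._tail = head_fn, tail_fn

    def head(self):   # observation 1: the current element
        return self._head()

    def tail(self):   # observation 2: the rest of the stream
        return self._tail()

def nats_from(n):
    # Each observation is answered lazily; the stream is never fully built.
    return Stream(lambda: n, lambda: nats_from(n + 1))

s = nats_from(0)
print(s.head(), s.tail().head(), s.tail().tail().head())  # 0 1 2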

2016-10-06
Xiong, Aiping, Yang, Weining, Li, Ninghui, Proctor, Robert.  2016.  Ineffectiveness of domain highlighting as a tool to help users identify phishing webpages. 60th Annual Meeting of Human Factors and Ergonomics Society.

Domain highlighting has been implemented by popular browsers with the aim of helping users identify which sites they are visiting. But its effectiveness in helping users identify fraudulent webpages has not been stringently tested. Thus, we conducted an online study to test the effectiveness of domain highlighting. 320 participants were recruited to evaluate the legitimacy of 6 webpages (half authentic and half fraudulent) in two study phases. In the first phase participants were instructed to determine the legitimacy based on any information on the webpage, whereas in the second phase they were instructed to focus specifically on the address bar. Webpages with domain highlighting were presented in the first block for half of the participants and in the second block for the remaining participants. Results showed that the participants could differentiate the legitimate and fraudulent webpages to a significant extent. When participants were directed to focus on the address bar, correct decisions increased for fraudulent (unsafe) webpages but did not change significantly for authentic (safe) webpages. The percentage of correct judgments for fraudulent webpages showed no significant difference between domain-highlighting and non-highlighting conditions, even when participants were directed to the address bar. Although the results showed some benefit for detecting fraudulent webpages from directing the user's attention to the address bar, the domain highlighting method itself did not provide effective protection against phishing attacks, suggesting that other measures need to be taken for successful detection of deception.

2017-05-17
Tymburibá, Mateus, Moreira, Rubens E. A., Quintão Pereira, Fernando Magno.  2016.  Inference of Peak Density of Indirect Branches to Detect ROP Attacks. Proceedings of the 2016 International Symposium on Code Generation and Optimization. :150–159.

A program subject to a Return-Oriented Programming (ROP) attack usually presents an execution trace with a high frequency of indirect branches. From this observation, several researchers have proposed to monitor the density of these instructions to detect ROP attacks. These techniques use universal thresholds: the density of indirect branches that characterizes an attack is the same for every application. This paper shows that universal thresholds are easy to circumvent. As an alternative, we introduce an inter-procedural semi-context-sensitive static code analysis that estimates the maximum density of indirect branches possible for a program. This analysis determines detection thresholds for each application; thus, making it more difficult for attackers to compromise programs via ROP. We have used an implementation of our technique in LLVM to find specific thresholds for the programs in SPEC CPU2006. By comparing these thresholds against actual execution traces of corresponding programs, we demonstrate the accuracy of our approach. Furthermore, our algorithm is practical: it finds an approximate solution to a theoretically undecidable problem, and handles programs with up to 700 thousand assembly instructions in 25 minutes.
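
A toy version of the monitoring side of such a defense: count indirect branches over a sliding window of executed instructions and flag the trace when their density exceeds a per-program threshold (the paper's static analysis derives that threshold; here it is simply a parameter):

from collections import deque

INDIRECT = {'jmp*', 'call*', 'ret'}   # simplified indirect-branch mnemonics

def rop_alarm(trace, threshold, window=32):
    recent = deque(maxlen=window)
    for mnemonic in trace:
        recent.append(mnemonic in INDIRECT)
        if len(recent) == window and sum(recent) / window > threshold:
            return True   # density spike: possible ROP gadget chain
    return False

gadget_chain = ['pop', 'ret', 'mov', 'ret', 'pop', 'ret'] * 8
print(rop_alarm(gadget_chain, threshold=0.25))  # True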

2017-10-18
Emmerich, Katharina, Masuch, Maic.  2016.  The Influence of Virtual Agents on Player Experience and Performance. Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play. :10–21.

This paper contributes a systematic research approach as well as findings of an empirical study conducted to investigate the effect of virtual agents on task performance and player experience in digital games. As virtual agents are supposed to evoke social effects similar to real humans under certain conditions, the basic social phenomenon of social facilitation is examined in a testbed game that was specifically developed to enable systematic variation of individual impact factors of social facilitation. Independent variables were the presence of a virtual agent (present vs. not present) and the output device (ordinary monitor vs. head-mounted display). Results indicate social inhibition effects, but only for players using a head-mounted display. Additional potential impact factors and future research directions are discussed.

2017-09-26
Konigsmark, S. T. Choden, Chen, Deming, Wong, Martin D. F..  2016.  Information Dispersion for Trojan Defense Through High-level Synthesis. Proceedings of the 53rd Annual Design Automation Conference. :87:1–87:6.

Emerging technologies such as the Internet of Things (IoT) heavily rely on hardware security for data and privacy protection. However, constantly increasing integration complexity requires automatic synthesis to maintain the pace of innovation. We introduce the first High-Level Synthesis (HLS) flow that produces a security enhanced hardware design to directly prevent Hardware Trojan Horse (HTH) injection by a malicious foundry. Through analysis of entropy loss and criticality decay, the presented algorithms implement highly efficient resource-targeted information dispersion to counter HTH insertion. The flow is evaluated on existing HLS benchmarks and a new IoT-specific benchmark and shows significant resource savings.