International Conferences: IEEE Security and Privacy Workshops, San Jose, California
SoS Newsletter - Advanced Book Block
The 2014 IEEE Security and Privacy Workshops were held 17-18 May 2014 in San Jose, California. Workshop subjects included insider threats, language-theoretic security, cyber crime, ethics, and data usage management.
Redfield, Catherine M.S.; Date, Hiroyuki, "Gringotts: Securing Data for Digital Evidence," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 10-17, 17-18 May 2014. doi: 10.1109/SPW.2014.11 As digital storage and cloud processing become more common in business infrastructure and security systems, maintaining the provable integrity of accumulated institutional data that may be required as legal evidence also increases in complexity. Since data owners may have an interest in a proposed lawsuit, it is essential that any digital evidence be guaranteed against both outside attacks and internal tampering. Since the timescale required for legal disputes is unrelated to computational and mathematical advances, evidential data integrity must be maintained even after the cryptography that originally protected it becomes obsolete. In this paper we propose Gringotts, a system where data is signed on the device that generates it, transmitted from multiple sources to a server using a novel signature scheme, and stored with its signature on a database running Evidence Record Syntax, a protocol for long-term archival systems that maintains the data integrity of the signature, even over the course of changing cryptographic practices. Our proof of concept for a small surveillance camera network had a processing (throughput) overhead of 7.5%, and a storage overhead of 6.2%.
Keywords: Cameras; Cryptography; Databases; Protocols; Receivers; Servers; Digital Evidence; Digital Signatures; Evidence Record Syntax; Long-Term Authenticity; Stream Data (ID#: 15-3445)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957278&isnumber=6957265
Iyilade, Johnson; Vassileva, Julita, "P2U: A Privacy Policy Specification Language for Secondary Data Sharing and Usage," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 18-22, 17-18 May 2014. doi: 10.1109/SPW.2014.12 Within the last decade, there have been growing economic and social incentives and opportunities for secondary use of data in many sectors, and strong market forces currently drive the active development of systems that aggregate user data gathered by many sources. This secondary use of data poses privacy threats due to unwanted use of data for the wrong purposes, such as discriminating against the user in employment, loans, and insurance. Traditional privacy policy languages such as the Platform for Privacy Preferences (P3P) are inadequate since they were designed long before many of these technologies were invented and focus mainly on enabling user awareness and control during primary data collection (e.g. by a website). However, with the advent of Web 2.0 and Social Networking Sites, the landscape of privacy is shifting from limiting collection of data by websites to ensuring ethical use of the data after initial collection. To meet the current challenges of privacy protection in secondary contexts, we propose a privacy policy language, Purpose-to-Use (P2U), aimed at enforcing privacy while enabling secondary user information sharing across applications, devices, and services on the Web.
Keywords: Context; Data privacy; Economics; Information management; Mobile communication; Organizations; Privacy; Policy Languages; Privacy; Secondary Use; Usage Control (ID#: 15-3446)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957279&isnumber=6957265
Lazouski, Aliaksandr; Mancini, Gaetano; Martinelli, Fabio; Mori, Paolo, "Architecture, Workflows, and Prototype for Stateful Data Usage Control in Cloud," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 23-30, 17-18 May 2014. doi: 10.1109/SPW.2014.13 This paper deals with the problem of continuous usage control of multiple copies of data objects in distributed systems. This work defines an architecture, a set of workflows, a set of policies and an implementation for the distributed enforcement. The policies, besides including access and usage rules, also specify the parties that will be involved in the decision process. Indeed, the enforcement requires collaboration of several entities because the access decision might be evaluated on one site, enforced on another, and the attributes needed for the policy evaluation might be stored in many distributed locations.
Keywords: Authorization; Concurrent computing; Data models; Distributed databases; Process control; Resource management; Attributes; Cloud System; Concurrency Control; UCON; Usage Control (ID#: 15-3447)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957280&isnumber=6957265
Wohlgemuth, Sven, "Resilience as a New Enforcement Model for IT Security Based on Usage Control," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 31-38, 17-18 May 2014. doi: 10.1109/SPW.2014.14 Security and privacy are not only general requirements of a society but also indispensable enablers for innovative IT infrastructure applications aiming at increased, sustainable welfare and safety of a society. A critical activity of these IT applications is spontaneous information exchange. This information exchange, however, creates inevitable, unknown dependencies between the participating IT systems, which, in turn, threaten security and privacy. With the current approach to IT security, security and privacy follow changes and incidents rather than anticipating them. By sticking to a given threat model, the current approach fails to consider vulnerabilities which arise during a spontaneous information exchange. With the goal of improving security and privacy, this work proposes adapting an IT security model and its enforcement to current and most probable incidents before they result in an unacceptable risk for the participating parties or failure of IT applications. Usage control is the suitable security policy model, since it allows changes during run-time without conceptually raising additional incidents.
Keywords: Adaptation models; Adaptive systems; Availability; Information exchange; Privacy; Resilience; Security; data provenance; identity management; resilience; security and privacy; usage control (ID#: 15-3448)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957281&isnumber=6957265
Lovat, Enrico; Kelbert, Florian, "Structure Matters - A New Approach for Data Flow Tracking," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 39-43, 17-18 May 2014. doi: 10.1109/SPW.2014.15 Usage control (UC) is concerned with how data may or may not be used after initial access has been granted. UC requirements are expressed in terms of data (e.g. a picture, a song) which exist within a system in the form of different technical representations (containers, e.g. files, memory locations, windows). A model combining UC enforcement with data flow tracking across containers has been proposed in the literature, but it exhibits a high false-positive rate. In this paper we propose a refined approach for data flow tracking that mitigates this over-approximation problem by leveraging information about the inherent structure of the data being tracked. We propose a formal model and show some exemplary instantiations.
Keywords: Containers; Data models; Discrete Fourier transforms; Operating systems; Postal services; Security; Semantics; data flow tracking; data structure; usage control (ID#: 15-3449)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957282&isnumber=6957265
Naveed, Muhammad, "Hurdles for Genomic Data Usage Management," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 44-48, 17-18 May 2014. doi: 10.1109/SPW.2014.44 Our genome determines our appearance, gender, diseases, reaction to drugs, and much more. It not only contains information about us but also about our relatives, past generations, and future generations. This creates many policy and technology challenges to protect privacy and manage usage of genomic data. In this paper, we identify various features of genomic data that make its usage management very challenging and different from other types of data. We also describe some ideas about potential solutions and propose some recommendations for the usage of genomic data.
Keywords: Bioinformatics; Cryptography; DNA; Data privacy; Genomics; Privacy; Sequential analysis (ID#: 15-3450)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957283&isnumber=6957265
Kang, Yuan J.; Schiffman, Allan M.; Shrager, Jeff, "RAPPD: A Language and Prototype for Recipient-Accountable Private Personal Data," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 49-56, 17-18 May 2014. doi: 10.1109/SPW.2014.16 We often communicate private data in informal settings such as email, where we trust that the recipient shares our assumptions regarding the disposition of this data. Sometimes we informally express our desires in this regard, but there is no formal means in such settings to make our wishes explicit, nor to hold the recipient accountable. Here we describe a system and prototype implementation called Recipient-Accountable Private Personal Data, which lets the originator express his or her privacy desires regarding data transmitted in email, and provides some accountability. Our method only assumes that the recipient is reading the email online, using an email reader that will execute HTML and JavaScript.
Keywords: Data privacy; Electronic mail; IP networks; Law; Medical services; Privacy; Prototypes; accountability; auditing; creative commons; email privacy; privacy; trust; usability; usable privacy (ID#: 15-3451)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957284&isnumber=6957265
Hanaei, Ebrahim Hamad Al; Rashid, Awais, "DF-C2M2: A Capability Maturity Model for Digital Forensics Organisations," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 57-60, 17-18 May 2014. doi: 10.1109/SPW.2014.17 The field of digital forensics has emerged as one of the fastest changing and most rapidly developing investigative specialisations in a wide range of criminal and civil cases. Increasingly there is a requirement from the various legal and judicial authorities throughout the world that any digital evidence presented in criminal and civil cases should meet requirements regarding the acceptance and admissibility of digital evidence, e.g., Daubert or Frye in the US. There is also increasing expectation that digital forensics labs are accredited to ISO 17025 or the US equivalent ASCLD-Lab International requirements. On the one hand, these standards cover general requirements and are not geared specifically towards digital forensics. On the other hand, digital forensics labs are mostly left with costly piecemeal efforts in order to try and address such pressing legal and regulatory requirements. In this paper, we address these issues by proposing DF-C^2M^2, a capability maturity model that enables organisations to evaluate the maturity of their digital forensics capabilities and identify roadmaps for improving them in accordance with business or regulatory requirements. The model has been developed through consultations and interviews with digital forensics experts. The model has been evaluated by using it to assess the digital forensics capability maturity of a lab in a law enforcement agency.
Keywords: Capability maturity model; Conferences; Digital forensics; ISO standards; Law enforcement; ASCLD-Lab; Capability Maturity; Digital Forensics; ISO 17025 (ID#: 15-3452)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957285&isnumber=6957265
Hu, Xin; Wang, Ting; Stoecklin, Marc Ph.; Schales, Douglas L.; Jang, Jiyong; Sailer, Reiner, "Asset Risk Scoring in Enterprise Network with Mutually Reinforced Reputation Propagation," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 61-64, 17-18 May 2014. doi: 10.1109/SPW.2014.18 Cyber security attacks are becoming ever more frequent and sophisticated. Enterprises often deploy several security protection mechanisms, such as anti-virus software, intrusion detection and prevention systems, and firewalls, to protect their critical assets against emerging threats. Unfortunately, these protection systems are typically "noisy", e.g., regularly generating thousands of alerts every day. Plagued by false positives and irrelevant events, it is often neither practical nor cost-effective to analyze and respond to every single alert. The main challenge faced by enterprises is to extract important information from the plethora of alerts and to infer potential risks to their critical assets. A better understanding of risks will facilitate effective resource allocation and prioritization of further investigation. In this paper, we present MUSE, a system that analyzes a large number of alerts and derives risk scores by correlating diverse entities in an enterprise network. Instead of considering a risk as an isolated and static property, MUSE models the dynamics of a risk based on the mutual reinforcement principle. We evaluate MUSE with real-world network traces and alerts from a large enterprise network, and demonstrate its efficacy in risk assessment and flexibility in incorporating a wide variety of data sets.
Keywords: Belief propagation; Bipartite graph; Data mining; Intrusion detection; Malware; Servers; Risk Scoring; mutually reinforced principles; reputation propagation (ID#: 15-3453)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957286&isnumber=6957265
Faria, Rubens Alexandre De; Fonseca, Keiko V.Ono; Schneider, Bertoldo; Nguang, Sing Kiong, "Collusion and Fraud Detection on Electronic Energy Meters - A Use Case of Forensics Investigation Procedures," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 65-68, 17-18 May 2014. doi: 10.1109/SPW.2014.19 Smart meters (gas, electricity, water, etc.) play a fundamental role in the implementation of the Smart Grid concept. Nevertheless, the rollout of smart meters needed to achieve the foreseen benefits of the integrated network of devices is still slow. Among the reasons for the slower pace are a lack of trust in electronic devices and new kinds of fraud based on clever tampering and collusion. These facts have challenged service providers and imposed great revenue losses. This paper presents a use case of forensics investigation procedures applied to detect electricity theft based on tampered electronic devices. The collusion fraud drew our attention because of the amounts (losses) caused to the provider and the technique applied to hide the fraud evidence.
Keywords: Electricity; Energy consumption; Microcontrollers; Radio frequency; Security; Sensors; Switches; electricity measurement fraud; electronic meter; forensics investigation procedure; tampering technique (ID#: 15-3454)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957287&isnumber=6957265
Shulman, Haya; Waidner, Michael, "Towards Forensic Analysis of Attacks with DNSSEC," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 69-76, 17-18 May 2014. doi: 10.1109/SPW.2014.20 DNS cache poisoning is a stepping stone towards advanced (cyber) attacks, and can be used to monitor users' activities, for censorship, to distribute malware and spam, and even to subvert correctness and availability of Internet networks and services. The DNS infrastructure relies on challenge-response defences, which are deemed effective for thwarting attacks by (the common) off-path adversaries. Such defences do not suffice against stronger adversaries, e.g., man-in-the-middle (MitM). However, there seems to be little willingness to adopt systematic, cryptographic mechanisms, since stronger adversaries are not believed to be common. In this work we validate this assumption and show that it is imprecise. In particular, we demonstrate that: (1) attackers can frequently obtain MitM capabilities, and (2) even weaker attackers can subvert DNS security. Indeed, as we show, despite wide adoption of challenge-response defences, cache-poisoning attacks against DNS infrastructure are highly prevalent. We evaluate security of domain registrars and name servers, experimentally, and find vulnerabilities, which expose DNS infrastructure to cache poisoning. We review DNSSEC, the defence against DNS cache poisoning, and argue that not only is it the most suitable mechanism for preventing cache poisoning attacks, but it is also the only proposed defence that enables a-posteriori forensic analysis of attacks. Specifically, DNSSEC provides cryptographic evidence, which can be presented to, and validated by, any third party and can be used in investigations and for detection of attacks even long after the attack took place.
Keywords: Computer crime; Cryptography; Forensics; Internet; Routing; Servers; DNS cache-poisoning; DNSSEC; cryptographic evidence; cyber attacks; digital signatures; security (ID#: 15-3455)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957288&isnumber=6957265
Iedemska, Jane; Stringhini, Gianluca; Kemmerer, Richard; Kruegel, Christopher; Vigna, Giovanni, "The Tricks of the Trade: What Makes Spam Campaigns Successful?," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 77-83, 17-18 May 2014. doi: 10.1109/SPW.2014.21 Spam is a profitable business for cyber criminals, with the revenue of a single spam campaign potentially on the order of millions of dollars. For this reason, a wealth of research has been performed on understanding how spamming botnets operate, as well as what the economic model behind spam looks like. Running a spamming botnet is a complex task: the spammer needs to manage the infected machines, the spam content being sent, and the email addresses to be targeted, among other tasks. In this paper, we try to understand which factors influence the spam delivery process and what characteristics make a spam campaign successful. To this end, we analyzed the data stored on a number of command and control servers of a large spamming botnet, together with the guidelines and suggestions that the botnet creators provide to spammers to improve the performance of their botnet.
Keywords: Databases; Guidelines; Manuals; Mathematical model; Servers; Unsolicited electronic mail; Botnet; Cybercrime; Spam (ID#: 15-3456)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957289&isnumber=6957265
Sarvari, Hamed; Abozinadah, Ehab; Mbaziira, Alex; Mccoy, Damon, "Constructing and Analyzing Criminal Networks," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 84-91, 17-18 May 2014. doi: 10.1109/SPW.2014.22 Analysis of criminal social graph structures can give us valuable insights into how these communities are organized: for example, how large-scale and centralized are these criminal communities currently? While these types of analyses have been performed in the past, we wanted to explore how to construct a large-scale social graph from a smaller set of leaked data that included only the criminals' email addresses. We begin our analysis by constructing a 43,000-node social graph from one thousand publicly leaked criminals' email addresses. This is done by locating Facebook profiles that are linked to these same email addresses and scraping the public social graph from these profiles. We then perform a large-scale analysis of this social graph to identify profiles of high-ranking criminals, criminal organizations, and large-scale communities of criminals. Finally, we perform a manual analysis of these profiles that results in the identification of many criminally focused public groups on Facebook. This analysis demonstrates the amount of information that can be gathered by using limited data leaks.
Keywords: Communities; Electronic mail; Facebook; Joining processes; Manuals; Organizations; analysis; community detection; criminal networks; cybercrime; social graph (ID#: 15-3457)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957290&isnumber=6957265
Grabska, Iwona; Szczypiorski, Krzysztof, "Steganography in Long Term Evolution Systems," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 92-99, 17-18 May 2014. doi: 10.1109/SPW.2014.23 This paper contains a description and analysis of a new steganographic method, called LaTEsteg, designed for LTE (Long Term Evolution) systems. The LaTEsteg uses physical layer padding of packets sent over LTE networks. This method allows users to gain additional data transfer that is invisible to unauthorized parties that are unaware of hidden communication. Three important parameters of the LaTEsteg are defined and evaluated: performance, cost and security.
Keywords: Channel capacity; IP networks; Long Term Evolution; Phase shift keying; Proposals; Protocols; Throughput; 4G; LTE; Steganographic Algorithm; Steganographic Channel; Steganography (ID#: 15-3458)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957291&isnumber=6957265
Lipinski, Bartosz; Mazurczyk, Wojciech; Szczypiorski, Krzysztof, "Improving Hard Disk Contention-Based Covert Channel in Cloud Computing," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 100-107, 17-18 May 2014. doi: 10.1109/SPW.2014.24 Steganographic methods allow the covert exchange of secret data between parties aware of the procedure. The cloud computing environment is a new and emerging target for steganographers, but currently not many solutions have been proposed. This paper proposes CloudSteg, a steganographic method that creates a covert channel based on hard disk contention between two cloud instances that reside on the same physical machine. Experimental results conducted using the open-source cloud environment OpenStack show that CloudSteg is able to achieve a bandwidth of about 0.1 bps, which is about 1000 times higher than the known state of the art.
Keywords: Bandwidth; Cloud computing; Computational modeling; Hard disks; Robustness; Synchronization; cloud computing; covert channel; information hiding; steganography (ID#: 15-3459)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957292&isnumber=6957265
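The contention channel the CloudSteg abstract describes can be illustrated with a toy model: in each agreed time slot the sender either loads the shared disk (bit 1) or stays idle (bit 0), and the receiver thresholds its own measured read latency for that slot. The latency figures and threshold below are invented for illustration; they are not CloudSteg's actual parameters.

```python
def simulate_slot(bit, busy_ms=35.0, idle_ms=5.0):
    """Latency the receiver would observe in a slot where the
    sender transmits `bit` (hypothetical numbers)."""
    return busy_ms if bit else idle_ms

def decode(latencies_ms, threshold_ms=20.0):
    """Map one measured read latency per time slot back to a bit."""
    return [1 if t > threshold_ms else 0 for t in latencies_ms]

message = [1, 0, 1, 1, 0]
received = decode([simulate_slot(b) for b in message])  # == message
```

A real implementation must also handle clock synchronization between the two instances and noise from other tenants' I/O, which is where most of the engineering effort in such channels goes.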
Narang, Pratik; Ray, Subhajit; Hota, Chittaranjan; Venkatakrishnan, Venkat, "PeerShark: Detecting Peer-to-Peer Botnets by Tracking Conversations," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 108-115, 17-18 May 2014. doi: 10.1109/SPW.2014.25 The decentralized nature of Peer-to-Peer (P2P) botnets makes them difficult to detect. Their distributed nature also exhibits resilience against take-down attempts. Moreover, smarter bots are stealthy in their communication patterns, and elude the standard discovery techniques which look for anomalous network or communication behavior. In this paper, we propose PeerShark, a novel methodology to detect P2P botnet traffic and differentiate it from benign P2P traffic in a network. Instead of the traditional 5-tuple 'flow-based' detection approach, we use a 2-tuple 'conversation-based' approach which is port-oblivious, protocol-oblivious and does not require Deep Packet Inspection. PeerShark could also classify different P2P applications with an accuracy of more than 95%.
Keywords: Electronic mail; Feature extraction; Firewalls (computing); IP networks; Internet; Peer-to-peer computing; Ports (Computers); botnet; machine learning; peer-to-peer (ID#: 15-3460)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957293&isnumber=6957265
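The 2-tuple idea above can be sketched as follows: flow records are grouped by the unordered pair of endpoint IPs alone, ignoring ports and protocol, and per-conversation features are accumulated for a downstream classifier. The record fields and feature set here are illustrative assumptions, not PeerShark's actual schema.

```python
from collections import defaultdict

def conversations(flows):
    """Aggregate flow dicts (src, dst, bytes, packets, start, end)
    into port-oblivious conversations keyed by the IP pair."""
    convs = defaultdict(lambda: {'packets': 0, 'bytes': 0,
                                 'start': float('inf'), 'end': 0.0})
    for f in flows:
        key = tuple(sorted((f['src'], f['dst'])))  # unordered pair
        c = convs[key]
        c['packets'] += f['packets']
        c['bytes'] += f['bytes']
        c['start'] = min(c['start'], f['start'])
        c['end'] = max(c['end'], f['end'])
    for c in convs.values():  # derived feature for a classifier
        c['duration'] = c['end'] - c['start']
    return dict(convs)

flows = [
    {'src': '10.0.0.1', 'dst': '10.0.0.2', 'bytes': 1200,
     'packets': 3, 'start': 0.0, 'end': 2.0},
    {'src': '10.0.0.2', 'dst': '10.0.0.1', 'bytes': 800,
     'packets': 2, 'start': 2.5, 'end': 4.0},
]
convs = conversations(flows)
```

Because both directions of traffic collapse into one key, long-lived low-volume conversations (typical of P2P maintenance traffic) become visible even when individual 5-tuple flows look unremarkable.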
Drew, Jake; Moore, Tyler, "Automatic Identification of Replicated Criminal Websites Using Combined Clustering," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 116-123, 17-18 May 2014. doi: 10.1109/SPW.2014.26 To be successful, cyber criminals must figure out how to scale their scams. They duplicate content on new websites, often staying one step ahead of defenders that shut down past schemes. For some scams, such as phishing and counterfeit-goods shops, the duplicated content remains nearly identical. In others, such as advanced-fee fraud and online Ponzi schemes, the criminal must alter content so that it appears different in order to evade detection by victims and law enforcement. Nevertheless, similarities often remain, in terms of the website structure or content, since making truly unique copies does not scale well. In this paper, we present a novel combined clustering method that links together replicated scam websites, even when the criminal has taken steps to hide connections. We evaluate its performance against two collected datasets of scam websites: fake-escrow services and high-yield investment programs (HYIPs). We find that our method more accurately groups similar websites together than does existing general-purpose consensus clustering methods.
Keywords: Clustering algorithms; Clustering methods; HTML; Indexes; Investment; Manuals; Sociology (ID#: 15-3461)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957294&isnumber=6957265
Peersman, Claudia; Schulze, Christian; Rashid, Awais; Brennan, Margaret; Fischer, Carl, "iCOP: Automatically Identifying New Child Abuse Media in P2P Networks," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 124-131, 17-18 May 2014. doi: 10.1109/SPW.2014.27 The increasing levels of child sex abuse (CSA) media being shared in peer-to-peer (P2P) networks pose a significant challenge for law enforcement agencies. Although a number of P2P monitoring tools to detect offender activity in such networks exist, they typically rely on hash value databases of known CSA media. Such an approach cannot detect new or previously unknown media being shared. Yet identifying such new, previously unknown media is a priority for law enforcement: it can be an indicator of recent or ongoing child abuse. Furthermore, originators of such media can be hands-on abusers, and their apprehension can safeguard children from further abuse. The sheer volume of activity on P2P networks, however, makes manual detection virtually infeasible. In this paper, we present a novel approach that combines sophisticated filename and media analysis techniques to automatically flag new, previously unseen CSA media to investigators. The approach has been implemented in the iCOP toolkit. Our evaluation on real case data shows high degrees of accuracy, while hands-on trials with law enforcement officers highlight iCOP's usability and its complementarity to existing investigative workflows.
Keywords: Engines; Feature extraction; Law enforcement; Media; Skin; Streaming media; Visualization; child protection; cyber crime; image classification; paedophilia; peer-to-peer computing; text analysis (ID#: 15-3462)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957295&isnumber=6957265
Gokcen, Yasemin; Foroushani, Vahid Aghaei; Zincir-Heywood, A. Nur, "Can We Identify NAT Behavior by Analyzing Traffic Flows?," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 132-139, 17-18 May 2014. doi: 10.1109/SPW.2014.28 It is shown in the literature that network address translation devices have become a convenient way to hide the source of malicious behaviors. In this research, we explore how far we can push a machine learning (ML) approach to identify such behaviors using only network flows. We evaluate our proposed approach on different traffic data sets against passive fingerprinting approaches and show that the performance of a machine learning approach is very promising even without using any payload (application layer) information.
Keywords: Browsers; Classification algorithms; Computers; Fingerprint recognition; IP networks; Internet; Payloads; Network address translation classification; machine learning; traffic analysis; traffic flows (ID#: 15-3463)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957296&isnumber=6957265
Jaeger, Eric; Levillain, Olivier, "Mind Your Language(s): A Discussion about Languages and Security," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 140-151, 17-18 May 2014. doi: 10.1109/SPW.2014.29 Following several studies conducted by the French Network and Information Security Agency (ANSSI), this paper discusses the question of the intrinsic security characteristics of programming languages. Through illustrations and discussions, it advocates for a different vision of well-known mechanisms and is intended to provide some food for thought regarding languages and development tools.
Keywords: Cryptography; Encapsulation; Java; Software; Standards; compilation; evaluation; programming languages; security; software development; software engineering (ID#: 15-3464)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957297&isnumber=6957265
Volpano, Security and Privacy Workshops (SPW), 2014 IEEE, pp. 152-157, 17-18 May 2014. doi: 10.1109/SPW.2014.30 A fundamental unit of computation is introduced for reactive programming called the LEGO(TM) brick. It is targeted for domains in which JavaScript runs in an attempt to allow a user to build a trustworthy reactive program on demand rather than try to analyze JavaScript. A formal definition is given for snapping bricks together based on the standard product construction for deterministic finite automata.
Keywords: Adders; Automata; Browsers; Delays; Keyboards; Mice; Programming; programming methodology (ID#: 15-3465)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957298&isnumber=6957265
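The "standard product construction" the abstract above refers to can be shown generically: running two deterministic finite automata in lockstep over the same input, with pair states and acceptance requiring both components to accept. The two example automata below are ours, not the paper's bricks.

```python
from itertools import product

def dfa_product(d1, d2):
    """Intersection of two DFAs, each given as
    (states, alphabet, delta, start, accepting) with
    delta a dict keyed by (state, symbol)."""
    s1, alpha, t1, q1, f1 = d1
    s2, _,     t2, q2, f2 = d2
    states = set(product(s1, s2))
    delta = {((p, q), a): (t1[(p, a)], t2[(q, a)])
             for (p, q) in states for a in alpha}
    accepting = {(p, q) for (p, q) in states if p in f1 and q in f2}
    return states, alpha, delta, (q1, q2), accepting

def accepts(dfa, word):
    _, _, delta, state, accepting = dfa
    for a in word:
        state = delta[(state, a)]
    return state in accepting

# Example over {0,1}: D1 accepts an even number of 1s,
# D2 accepts words ending in 0; the product enforces both.
D1 = ({'e', 'o'}, {'0', '1'},
      {('e', '0'): 'e', ('e', '1'): 'o',
       ('o', '0'): 'o', ('o', '1'): 'e'}, 'e', {'e'})
D2 = ({'a', 'b'}, {'0', '1'},
      {('a', '0'): 'b', ('a', '1'): 'a',
       ('b', '0'): 'b', ('b', '1'): 'a'}, 'a', {'b'})
P = dfa_product(D1, D2)
```

In the paper's terms, each brick would contribute one component automaton, and "snapping" two bricks yields the product machine, whose behavior is fully determined by its parts.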
Bangert, Julian; Zeldovich, Nickolai, "Nail: A Practical Interface Generator for Data Formats," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 158-166, 17-18 May 2014. doi: 10.1109/SPW.2014.31 We present Nail, an interface generator that allows programmers to safely parse and generate protocols defined by a parsing-expression grammar. Nail uses a richer set of parser combinators that induce an internal representation, obviating the need to write semantic actions. Nail also provides solutions for parsing common patterns, such as length and offset fields within binary formats, that are hard to process with existing parser generators.
Keywords: Data models; Generators; Grammar; Nails; Protocols; Semantics; Syntactics; Binary formats; LangSec; Offset field; Output; Parsing (ID#: 15-3470)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957299&isnumber=6957265
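The "length field" pattern Nail addresses is the classic case where one parsed value determines how much subsequent input belongs to the same structure. A hand-rolled sketch of that dependency (in Python, not Nail's generated C interface) looks like this:

```python
def parse_counted(data):
    """Parse <n:u8> followed by n payload bytes; return
    (values, rest) or raise on malformed input, rather than
    silently reading past the declared length."""
    if not data:
        raise ValueError("truncated input")
    n, body = data[0], data[1:]
    if len(body) < n:
        raise ValueError("length field exceeds available data")
    return list(body[:n]), body[n:]

values, rest = parse_counted(bytes([3, 10, 20, 30, 99]))
```

Classic parser generators struggle here because the grammar is context-sensitive: the admissible input depends on a previously parsed value, which is exactly the dependency Nail's combinators capture declaratively.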
Petullo, W.Michael; Fei, Wenyuan; Solworth, Jon A.; Gavlin, Pat, "Ethos' Deeply Integrated Distributed Types," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 167-180, 17-18 May 2014. doi: 10.1109/SPW.2014.32 Programming languages have long incorporated type safety, increasing their level of abstraction and thus aiding programmers. Type safety eliminates whole classes of security-sensitive bugs, replacing the tedious and error-prone search for such bugs in each application with verifying the correctness of the type system. Despite their benefits, these protections often end at the process boundary, that is, type safety holds within a program but usually not to the file system or communication with other programs. Existing operating system approaches to bridge this gap require the use of a single programming language or common language runtime. We describe the deep integration of type safety in Ethos, a clean-slate operating system which requires that all program input and output satisfy a recognizer before applications are permitted to further process it. Ethos types are multilingual and runtime-agnostic, and each has an automatically generated unique type identifier. Ethos bridges the type-safety gap between programs by (1) providing a convenient mechanism for specifying the types each program may produce or consume, (2) ensuring that each type has a single, distributed-system-wide recognizer implementation, and (3) inescapably enforcing these type constraints.
Keywords: Kernel; Protocols; Robustness; Runtime; Safety; Security; Semantics; Operating system; language-theoretic security; type system (ID#: 15-3471)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957300&isnumber=6957265
Goodspeed, Travis, "Phantom Boundaries and Cross-Layer Illusions in 802.15.4 Digital Radio," Security and Privacy Workshops (SPW), 2014 IEEE, pp.181,184, 17-18 May 2014. doi: 10.1109/SPW.2014.33 The classic design of protocol stacks, where each layer of the stack receives and unwraps the payload of the next layer, implies that each layer has a parser that accepts Protocol Data Units and extracts the intended Service Data Units from them. The PHY layer plays a special role, because it must create frames, i.e., original PDUs, from a stream of bits or symbols. An important property implicitly expected from these parsers is that SDUs are passed to the next layer only if the encapsulating PDUs from all previous layers were received exactly as transmitted by the sender and were syntactically correct. The Packet-in-packet attack (WOOT 2011) showed that this false assumption could be easily violated and exploited on IEEE 802.15.4 and similar PHY layers; however, it did not challenge the assumption that symbols and bytes recognized by the receiver were as transmitted by the sender. This work shows that even that assumption is wrong: in fact, a valid received frame may share no symbols with the sent one! This property is due to a particular choice of low-level chip encoding of 802.15.4, which enables the attacker to co-opt the receiver's error correction. This case study demonstrates that PHY layer logic is as susceptible to input language manipulation attacks as other layers, or perhaps more so. Consequently, when designing protocol stacks, language-theoretic considerations must be taken into account from the very bottom of the PHY layer; no layer is too low to be considered "mere engineering."
Keywords: Automata; Error correction codes; IEEE 802.15 Standards; Noise; Protocols; Receivers; Security; 802.15.4; LangSec; Packet-in-packet (ID#: 15-3472)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957301&isnumber=6957265
Graham, Robert David; Johnson, Peter C., "Finite State Machine Parsing for Internet Protocols: Faster Than You Think," Security and Privacy Workshops (SPW), 2014 IEEE, pp.185,190, 17-18 May 2014. doi: 10.1109/SPW.2014.34 A parser's job is to take unstructured, opaque data and convert it to a structured, semantically meaningful format. As such, parsers often operate at the border between untrusted data sources (e.g., the Internet) and the soft, chewy center of computer systems, where performance and security are paramount. A firewall, for instance, is precisely a trust-creating parser for Internet protocols, permitting valid packets to pass through and dropping or actively rejecting malformed packets. Despite the prevalence of finite state machines (FSMs) in both protocol specifications and protocol implementations, they have gained little traction in parser code for such protocols. Typical reasons for avoiding the FSM computation model claim poor performance, poor scalability, poor expressibility, and difficult or time-consuming programming. In this research report, we present our motivations for and designs of finite state machines to parse a variety of existing Internet protocols, both binary and ASCII. Our hand-written parsers explicitly optimize around L1 cache hit latency, branch misprediction penalty, and program-wide memory overhead to achieve aggressive performance and scalability targets. Our work demonstrates that such parsers are, contrary to popular belief, sufficiently expressive for meaningful protocols, sufficiently performant for high-throughput applications, and sufficiently simple to construct and maintain. We hope that, in light of other research demonstrating the security benefits of such parsers over more complex, Turing-complete codes, our work serves as evidence that certain "practical" reasons for avoiding FSM-based parsers are invalid.
Keywords: Automata; Internet; Pipelines; Program processors; Protocols; Servers; Switches (ID#: 15-3473)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957302&isnumber=6957265
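To give a flavor of the byte-at-a-time FSM style the authors advocate (this is an illustrative sketch, not their code): the machine below recognizes newline-terminated decimal integers, advancing one state transition per character, so it works unchanged on arbitrarily fragmented input and never buffers or backtracks.

```python
# States of a tiny FSM recognizing a newline-terminated decimal integer.
START, DIGITS, DONE, FAIL = range(4)

def fsm_step(state: int, ch: str) -> int:
    """One transition per input character; DONE and FAIL are absorbing."""
    if state == START:
        return DIGITS if ch.isdigit() else FAIL
    if state == DIGITS:
        if ch.isdigit():
            return DIGITS
        return DONE if ch == "\n" else FAIL
    return state

def parse(stream: str) -> bool:
    """Feed characters through the machine; accept only if it ends in DONE.
    Because all parse state lives in one integer, input may arrive in any
    number of fragments without re-buffering."""
    state = START
    for ch in stream:
        state = fsm_step(state, ch)
    return state == DONE
```

Real protocol FSMs in the paper's style would replace the `if` chains with lookup tables to trade branches for L1-cache-friendly array reads, but the accept/reject behavior is the same.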
Levillain, Olivier, "Parsifal: A Pragmatic Solution to the Binary Parsing Problems," Security and Privacy Workshops (SPW), 2014 IEEE, pp.191, 197, 17-18 May 2014. doi: 10.1109/SPW.2014.35 Parsers are pervasive basic building blocks of software: as soon as a program needs to communicate with another program or to read a file, a parser is involved. However, writing robust parsers can be difficult, as is revealed by the amount of bugs and vulnerabilities related to programming errors in parsers. It is especially true for network analysis tools, which led the network and protocols laboratory of the French Network and Information Security Agency (ANSSI) to write custom tools. One of them, Parsifal, is a generic framework to describe parsers in OCaml, and gave us some insight into binary formats and parsers. After describing our tool, this article presents some use cases and lessons we learned about format complexity, parser robustness and the role played by the language used.
Keywords: Containers; Density estimation robust algorithm; Internet; Protocols; Robustness; Standards; Writing; OCaml; Parsifal; binary parsers; code generation; preprocessor (ID#: 15-3474)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957303&isnumber=6957265
Bogk, Andreas; Schopl, Marco, "The Pitfalls of Protocol Design: Attempting to Write a Formally Verified PDF Parser," Security and Privacy Workshops (SPW), 2014 IEEE, pp.198, 203, 17-18 May 2014. doi: 10.1109/SPW.2014.36 Parsers for complex data formats generally present a big attack surface for input-driven exploitation. In practice, this has been especially true for implementations of the PDF data format, as witnessed by dozens of known vulnerabilities exploited in many real world attacks, with the Acrobat Reader implementation being the main target. In this report, we describe our attempts to use Coq, a theorem prover based on a functional programming language making use of dependent types and the Curry-Howard isomorphism, to implement a formally verified PDF parser. We ended up implementing a subset of the PDF format and proving termination of the combinator-based parser. Noteworthy results include a dependent type representing a list of strictly monotonically decreasing length of remaining symbols to parse, which allowed us to show termination of parser combinators. Also, difficulties showing termination of parsing some features of the PDF format readily translated into denial of service attacks against existing PDF parsers: we came up with a single PDF file that made all the existing PDF implementations we could test enter an endless loop.
Keywords: Indexes; Portable document format; Privacy; Security; Software; Syntactics; Writing (ID#: 15-3475)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957304&isnumber=6957265
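The termination issue the authors exploited can be seen in miniature: a repetition combinator loops forever if its sub-parser can succeed without consuming input. The Python sketch below (hypothetical, not the authors' Coq development) adds the strict-progress check that their dependent type of strictly decreasing remaining-input lengths enforces by construction.

```python
def char(c):
    """Parser for a single expected character; returns (value, rest)."""
    def parser(s):
        if s and s[0] == c:
            return c, s[1:]
        raise ValueError(f"expected {c!r}")
    return parser

def many(p):
    """Repeat p zero or more times, insisting on strict progress each round.
    Without the progress check, a sub-parser that succeeds while consuming
    nothing spins forever -- the same flaw the authors turned into a
    denial-of-service against real PDF parsers."""
    def parser(s):
        out = []
        while True:
            try:
                v, rest = p(s)
            except ValueError:
                return out, s
            if len(rest) >= len(s):  # no symbol consumed: stop, don't loop
                return out, s
            out.append(v)
            s = rest
    return parser
```

In the Coq development the analogous guarantee is a proof obligation rather than a runtime check: a combinator simply cannot be written unless the remaining-symbol count strictly decreases.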
Kompalli, Sarat, "Using Existing Hardware Services for Malware Detection," Security and Privacy Workshops (SPW), 2014 IEEE, pp.204,208, 17-18 May 2014. doi: 10.1109/SPW.2014.49 The paper is divided into two sections. First, we describe our experiments in using hardware-based metrics such as those collected by the BPU and MMU for detection of malware activity at runtime. Second, we sketch a defense-in-depth security model that combines such detection with hardware-aided proof-carrying code and input validation.
Keywords: Hardware; IP networks; Malware; Monitoring; Software; System-on-chip; data security; malware; security in hardware (ID#: 15-3476)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957305&isnumber=6957265
Vanegue, Julien, "The Weird Machines in Proof-Carrying Code," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 209, 213, 17-18 May 2014. doi: 10.1109/SPW.2014.37 We review different attack vectors on Proof-Carrying Code (PCC) related to policy, memory model, machine abstraction, and formal system. We capture the notion of weird machines in PCC to formalize the shadow execution arising in programs when their proofs do not sufficiently capture and disallow the execution of untrusted computations. We suggest a few ideas to improve existing PCC systems so they are more resilient to memory attacks.
Keywords: Abstracts; Computational modeling; Program processors; Registers; Safety; Security; Semantics; FPCC; Machines; PCC; Weird (ID#: 15-3477)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957306&isnumber=6957265
Nurse, Jason R.C.; Buckley, Oliver; Legg, Philip A.; Goldsmith, Michael; Creese, Sadie; Wright, Gordon R.T.; Whitty, Monica, "Understanding Insider Threat: A Framework for Characterising Attacks," Security and Privacy Workshops (SPW), 2014 IEEE, pp.214,228, 17-18 May 2014. doi: 10.1109/SPW.2014.38 The threat that insiders pose to businesses, institutions and governmental organisations continues to be of serious concern. Recent industry surveys and academic literature provide unequivocal evidence to support the significance of this threat and its prevalence. Despite this, however, there is still no unifying framework to fully characterise insider attacks and to facilitate an understanding of the problem, its many components and how they all fit together. In this paper, we focus on this challenge and put forward a grounded framework for understanding and reflecting on the threat that insiders pose. Specifically, we propose a novel conceptualisation that is heavily grounded in insider-threat case studies, existing literature and relevant psychological theory. The framework identifies several key elements within the problem space, concentrating not only on noteworthy events and indicators (technical and behavioural) of potential attacks, but also on attackers (e.g., the motivation behind malicious threats and the human factors related to unintentional ones), and on the range of attacks being witnessed. The real value of our framework is in its emphasis on bringing together and defining clearly the various aspects of insider threat, all based on real-world cases and pertinent literature. This can therefore act as a platform for general understanding of the threat, and also for reflection, modelling past attacks and looking for useful patterns.
Keywords: Companies; Context; Educational institutions; Employment; History; Psychology; Security; attack chain; case studies; insider threat; psychological indicators; technical; threat framework (ID#: 15-3478)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957307&isnumber=6957265
Kammuller, Florian; Probst, Christian W., "Combining Generated Data Models with Formal Invalidation for Insider Threat Analysis," Security and Privacy Workshops (SPW), 2014 IEEE, pp.229,235, 17-18 May 2014. doi: 10.1109/SPW.2014.45 In this paper we revisit the advances made on invalidation policies to explore attack possibilities in organizational models. One aspect that has so far eluded systematic analysis of insider threat is the integration of data into attack scenarios and its exploitation for analyzing the models. We draw from recent insights into the generation of insider data to complement a logic-based mechanical approach. We show how insider analysis can be traced back to the early days of security verification and Lowe's attack on NSPK. The invalidation of policies allows model checking of organizational structures to detect insider attacks. Integration of higher-order logic specification techniques allows the use of data refinement to explore attack possibilities beyond the initial system specification. We illustrate this combined invalidation technique on the classical example of the naughty lottery fairy. Data generation techniques support the automatic generation of insider attack data for research. The data generation is, however, always based on human-generated insider attack scenarios that have to be designed based on the domain knowledge of counter-intelligence experts. Introducing data refinement and invalidation techniques here allows the systematic exploration of such scenarios and exploits data-centric views in insider threat analysis.
Keywords: Analytical models; Computational modeling; Data models; Internet; Protocols; Public key; Insider threats; policies; formal methods (ID#: 15-3479)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957308&isnumber=6957265
Greitzer, Frank L.; Strozer, Jeremy R.; Cohen, Sholom; Moore, Andrew P.; Mundie, David; Cowley, Jennifer, "Analysis of Unintentional Insider Threats Deriving from Social Engineering Exploits," Security and Privacy Workshops (SPW), 2014 IEEE, pp.236, 250, 17-18 May 2014. doi: 10.1109/SPW.2014.39 Organizations often suffer harm from individuals who bear no malice against them but whose actions unintentionally expose the organizations to risk: the unintentional insider threat (UIT). In this paper we examine UIT cases that derive from social engineering exploits. We report on our efforts to collect and analyze data from UIT social engineering incidents to identify possible behavioral and technical patterns and to inform future research and development of UIT mitigation strategies.
Keywords: Computers; Context; Educational institutions; Electronic mail; Organizations; Security; Taxonomy; social engineering; unintentional insider threat (ID#: 15-3480)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957309&isnumber=6957265
Bishop, Matt; Conboy, Heather M.; Phan, Huong; Simidchieva, Borislava I.; Avrunin, George S.; Clarke, Lori A.; Osterweil, Leon J.; Peisert, Sean, "Insider Threat Identification by Process Analysis," Security and Privacy Workshops (SPW), 2014 IEEE, pp.251,264, 17-18 May 2014. doi: 10.1109/SPW.2014.40 The insider threat is one of the most pernicious in computer security. Traditional approaches typically instrument systems with decoys or intrusion detection mechanisms to detect individuals who abuse their privileges (the quintessential "insider"). Such an attack requires that these agents have access to resources or data in order to corrupt or disclose them. In this work, we examine the application of process modeling and subsequent analyses to the insider problem. With process modeling, we first describe how a process works in formal terms. We then look at the agents who are carrying out particular tasks, perform different analyses to determine how the process can be compromised, and suggest countermeasures that can be incorporated into the process model to improve its resistance to insider attack.
Keywords: Analytical models; Drugs; Fault trees; Hazards; Logic gates; Nominations and elections; Software; data exfiltration; elections; insider threat; process modeling; sabotage (ID#: 15-3481)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957310&isnumber=6957265
Sarkar, Anandarup; Kohler, Sven; Riddle, Sean; Ludaescher, Bertram; Bishop, Matt, "Insider Attack Identification and Prevention Using a Declarative Approach," Security and Privacy Workshops (SPW), 2014 IEEE, pp.265,276, 17-18 May 2014. doi: 10.1109/SPW.2014.41 A process is a collection of steps, carried out using data, by either human or automated agents, to achieve a specific goal. The agents in our process are insiders; they have access to different data and annotations on data moving in between the process steps. At various points in a process, they can carry out attacks on the privacy and security of the process through their interactions with different data and annotations, via the steps which they control. These attacks are sometimes difficult to identify as the rogue steps are hidden among the majority of the usual non-malicious steps of the process. We define process models and attack models as data flow based directed graphs. An attack A is successful on a process P if there is a mapping relation from A to P that satisfies a number of conditions. These conditions encode the idea that an attack model needs to have a corresponding similarity match in the process model to be successful. We propose a declarative approach to vulnerability analysis. We encode the match conditions using a set of logic rules that define what a valid attack is. Then we implement an approach to generate all possible ways in which agents can carry out a valid attack A on a process P, thus informing the process modeler of vulnerabilities in P. The agents, in addition to acting by themselves, can also collude to carry out an attack. Once A is found to be successful against P, we automatically identify improvement opportunities in P and exploit them, eliminating ways in which A can be carried out against it.
The identification uses information about which steps in P are most heavily attacked, and tries to find improvement opportunities in them first, before moving on to the less heavily attacked ones. We then evaluate the improved P to check if our improvement is successful. This cycle of process improvement and evaluation iterates until A is completely thwarted in all possible ways.
Keywords: Data models; Diamonds; Impedance matching; Nominations and elections; Process control; Robustness; Security; Declarative Programming; Process Modeling; Vulnerability Analysis (ID#: 15-3482)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957311&isnumber=6957265
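The mapping relation described in the abstract can be approximated by a brute-force subgraph check. The Python sketch below is a deliberate simplification (the paper encodes the match conditions as logic rules in a declarative engine): it asks whether the attack graph's data-flow edges can be embedded into the process graph under some injective mapping of attack steps to process steps.

```python
from itertools import permutations

def attack_matches(attack_edges, process_edges):
    """Return True if every directed edge of the attack graph maps onto an
    edge of the process graph under some injective node mapping. This is a
    brute-force stand-in for the paper's rule-based match conditions and is
    only feasible for small graphs."""
    a_nodes = sorted({n for e in attack_edges for n in e})
    p_nodes = sorted({n for e in process_edges for n in e})
    p_set = set(process_edges)
    for image in permutations(p_nodes, len(a_nodes)):
        mapping = dict(zip(a_nodes, image))
        if all((mapping[u], mapping[v]) in p_set for u, v in attack_edges):
            return True
    return False
```

A process modeler would then break a successful embedding by altering or guarding one of the matched steps, which is the improvement-and-re-evaluation cycle the abstract describes.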
Young, William T.; Memory, Alex; Goldberg, Henry G.; Senator, Ted E., "Detecting Unknown Insider Threat Scenarios," Security and Privacy Workshops (SPW), 2014 IEEE, pp.277,288, 17-18 May 2014. doi: 10.1109/SPW.2014.42 This paper reports results from a set of experiments that evaluate an insider threat detection prototype on its ability to detect scenarios that have not previously been seen or contemplated by the developers of the system. We show the ability to detect a large variety of insider threat scenario instances embedded in real data with no prior knowledge of what scenarios are present or when they occur. We report results of an ensemble-based, unsupervised technique for detecting potential insider threat instances over eight months of real monitored computer usage activity augmented with independently developed, unknown but realistic, insider threat scenarios that robustly achieves results within 5% of the best individual detectors identified after the fact. We explore factors that contribute to the success of the ensemble method, such as the number and variety of unsupervised detectors and the use of prior knowledge encoded in scenario-based detectors designed for known activity patterns. We report results of the ensemble approach over the entire period, and of ablation experiments that remove the scenario-based detectors.
Keywords: Computers; Detectors; Feature extraction; Monitoring; Organizations; Prototypes; Uniform resource locators; anomaly detection; experimental case study; insider threat; unsupervised ensembles (ID#: 15-3483)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957312&isnumber=6957265
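A minimal sketch of the ensemble idea, assuming a simple z-score combination rule (the abstract does not specify how detector outputs are combined): normalize each unsupervised detector's per-user anomaly scores and average them, so that no single detector's numeric scale dominates the ranking.

```python
import statistics

def ensemble_scores(detector_scores):
    """Combine per-user anomaly scores from several unsupervised detectors.
    Each detector's scores are z-normalized (so scales are comparable) and
    then averaged. Illustrative only; the paper's actual combination rule
    is not given in the abstract."""
    combined = {}
    for scores in detector_scores:  # one dict per detector: user -> score
        vals = list(scores.values())
        mu = statistics.mean(vals)
        sd = statistics.pstdev(vals) or 1.0  # guard against zero variance
        for user, s in scores.items():
            combined[user] = combined.get(user, 0.0) + (s - mu) / sd
    n = len(detector_scores)
    return {user: total / n for user, total in combined.items()}
```

Ranking users by the combined score then surfaces candidates for analyst review; the ablation experiments in the paper correspond to dropping some detectors from the input list and re-ranking.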
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.