Biblio

Found 1474 results

Filters: First Letter of Title is D
2017-04-24
Fabre, Arthur, Martinez, Kirk, Bragg, Graeme M., Basford, Philip J., Hart, Jane, Bader, Sebastian, Bragg, Olivia M..  2016.  Deploying a 6LoWPAN, CoAP, Low Power, Wireless Sensor Network: Poster Abstract. Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems CD-ROM. :362–363.

In order to integrate equipment from different vendors, wireless sensor networks need to become more standardized. Using IP as the basis of low-power radio networks, together with application-layer standards designed for this purpose, is one way forward. This research focuses on implementing and deploying a system using Contiki, 6LoWPAN over an 868 MHz radio network, together with CoAP as a standard application-layer protocol. A system was deployed in the Cairngorm mountains in Scotland as an environmental sensor network, measuring streams, temperature profiles in peat, and periglacial features. It was found that RPL provided an effective routing algorithm, and that the use of UDP packets with CoAP proved to be an energy-efficient application layer. This combination of technologies can be very effective in large-area sensor networks.
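For readers unfamiliar with CoAP, the request/response pattern over such a network can be sketched with the Python aiocoap library; this is a minimal client-side sketch, and the node address and resource path are illustrative, not taken from the deployment:

    import asyncio
    from aiocoap import Context, Message, GET

    async def read_sensor():
        # One-shot CoAP GET over UDP, the energy-efficient pattern the paper reports
        protocol = await Context.create_client_context()
        # Illustrative 6LoWPAN node address and resource path
        request = Message(code=GET, uri="coap://[2001:db8::1]/sensors/temperature")
        response = await protocol.request(request).response
        print(response.code, response.payload)

    asyncio.run(read_sensor())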

2017-04-21
Yu Wang, University of Illinois at Urbana-Champaign, Zhenqi Huang, University of Illinois at Urbana-Champaign, Sayan Mitra, University of Illinois at Urbana-Champaign, Geir Dullerud, University of Illinois at Urbana-Champaign.  2017.  Differential Privacy in Linear Distributed Control Systems: Entropy Minimizing Mechanisms and Performance Tradeoffs. IEEE Transactions on Control of Network Systems. 4(1)

In distributed control systems with shared resources, participating agents can improve the overall performance of the system by sharing data about their personal preferences. In this paper, we formulate and study a natural tradeoff arising in these problems between the privacy of the agent's data and the performance of the control system. We formalize privacy in terms of differential privacy of agents' preference vectors. The overall control system consists of N agents with linear discrete-time coupled dynamics, each controlled to track its preference vector. Performance of the system is measured by the mean squared tracking error. We present a mechanism that achieves differential privacy by adding Laplace noise to the shared information in a way that depends on the sensitivity of the control system to the private data. We show that for stable systems the performance cost of using this type of privacy-preserving mechanism grows as O(T/(Nε²)), where T is the time horizon and ε is the privacy parameter. For unstable systems, the cost grows exponentially with time. From an estimation point of view, we establish a lower bound on the entropy of any unbiased estimator of the private data from any noise-adding mechanism that gives ε-differential privacy. We show that the mechanism achieving this lower bound is a randomized mechanism that also uses Laplace noise.
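As a concrete illustration of the noise-adding mechanism described above, here is a minimal sketch of the Laplace mechanism in Python; variable names are illustrative, and the paper calibrates the noise scale to the control system's sensitivity rather than a fixed constant:

    import numpy as np

    def laplace_mechanism(pref: np.ndarray, sensitivity: float, epsilon: float) -> np.ndarray:
        # eps-differentially-private release of a preference vector:
        # noise scale = sensitivity / epsilon, so per-coordinate noise
        # variance is 2 * (sensitivity / epsilon) ** 2, which is where the
        # 1/eps^2 factor in the performance cost comes from
        return pref + np.random.laplace(0.0, sensitivity / epsilon, size=pref.shape)

    noisy = laplace_mechanism(np.array([0.3, -1.2, 0.8]), sensitivity=1.0, epsilon=0.5)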

2017-04-20
Wolf, Flynn.  2016.  Developing a Wearable Tactile Prototype to Support Situational Awareness. Proceedings of the 13th Web for All Conference. :37:1–37:2.

Research towards my dissertation has involved a series of perceptual and accessibility-focused studies concerned with the use of tactile cues for spatial and situational awareness, displayed through head-mounted wearables. These studies were informed by an initial participatory design study of mobile technology multitasking and tactile interaction habits. This research has yielded a number of actionable conclusions regarding the development of tactile interfaces for the head. It endeavors to provide greater insight into the design of advanced tactile alerting for contextual and spatial understanding in assistive applications (e.g., for individuals who are blind or those encountering situational impairments), as well as guidance for developers on assessing the interaction between under-utilized sensory modalities and underlying perceptual and cognitive processes.

Wurzenberger, Markus, Skopik, Florian, Fiedler, Roman, Kastner, Wolfgang.  2016.  Discovering Insider Threats from Log Data with High-Performance Bioinformatics Tools. Proceedings of the 8th ACM CCS International Workshop on Managing Insider Security Threats. :109–112.

Since the number of cyber attacks by insider threats, and the damage caused by them, has been increasing over recent years, organizations are in need of specific security solutions to counter these threats. To limit the damage caused by insider threats, the timely detection of erratic system behavior and malicious activities is of primary importance. We observed a major paradigm shift towards anomaly-focused detection mechanisms, which try to establish a baseline of system behavior – based on system logging data – and report any deviations from this baseline. While these approaches are promising, they usually have to cope with scalability issues. As the amount of log data generated during IT operations grows exponentially, high-performance security solutions are required that can handle this huge amount of data in real time. In this paper, we demonstrate how high-performance bioinformatics tools can be leveraged to tackle this issue, and we show their application to log data for outlier detection, to timely detect anomalous system behavior that points to insider attacks.
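To give a flavor of alignment-based outlier detection on log data, here is a toy sketch; Python's difflib similarity ratio stands in for the high-performance bioinformatics alignment tools the paper actually leverages, and the threshold is invented:

    import difflib

    def max_similarity(line: str, baseline: list) -> float:
        # Best alignment score of a new log line against the known-good corpus
        return max(difflib.SequenceMatcher(None, line, ref).ratio() for ref in baseline)

    def outliers(lines: list, baseline: list, threshold: float = 0.6) -> list:
        # Lines that align poorly with everything seen before are candidate anomalies
        return [ln for ln in lines if max_similarity(ln, baseline) < threshold]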

Sonewar, P. A., Thosar, S. D..  2016.  Detection of SQL injection and XSS attacks in three tier web applications. 2016 International Conference on Computing Communication Control and Automation (ICCUBEA). :1–4.

Web applications are used on a large scale worldwide and handle sensitive personal data of users. With web applications maintaining data ranging from something as simple as a telephone number to something as important as bank account information, security is a prime point of concern. With hackers aiming to break through this security using various attacks, we focus on SQL injection attacks and XSS attacks. A SQL injection attack is a very common attack that manipulates the data passing through the web application to the database servers via web servers in such a way that it alters or reveals database contents. Cross-Site Scripting (XSS) attacks, in contrast, focus on the view of the web application and try to trick users in ways that lead to security breaches. For security, we consider three-tier web applications with static and dynamic behavior. A static and dynamic mapping model is created to detect anomalies in the class of SQL injection and XSS attacks.
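For flavor only, a toy signature-based classifier for the two attack classes is sketched below; the paper's static and dynamic mapping model is a different and more principled approach, and the patterns here are illustrative:

    import re

    SQLI_PATTERNS = [r"(?i)\bunion\b.*\bselect\b", r"(?i)\bor\b\s+1\s*=\s*1", r"--", r";\s*drop\b"]
    XSS_PATTERNS = [r"(?i)<script\b", r"(?i)\bonerror\s*=", r"(?i)javascript:"]

    def classify(param_value: str) -> str:
        # Flag a request parameter that matches a known attack signature
        if any(re.search(p, param_value) for p in SQLI_PATTERNS):
            return "sql-injection"
        if any(re.search(p, param_value) for p in XSS_PATTERNS):
            return "xss"
        return "benign"

    print(classify("name' OR 1=1 --"))   # -> sql-injection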

Bronzino, F., Raychaudhuri, D., Seskar, I..  2016.  Demonstrating Context-Aware Services in the MobilityFirst Future Internet Architecture. 2016 28th International Teletraffic Congress (ITC 28). 01:201–204.

As the number of mobile devices populating the Internet keeps growing at a tremendous pace, context-aware services have gained a lot of traction thanks to the wide set of potential use cases they can be applied to. Environmental sensing applications, emergency services, and location-aware messaging are just a few examples of applications that are expected to increase in popularity in the next few years. The MobilityFirst future Internet architecture, a clean-slate Internet architecture design, provides the necessary abstractions for creating and managing context-aware services. Starting from these abstractions, we design a context services framework based on a set of three fundamental mechanisms: an easy way to specify context based on human-understandable techniques, i.e., the use of names; an architecture-supported management mechanism that allows both convenient deployment of the service and efficient provision of management capabilities; and a native delivery system that reduces the tax on network components and the overhead cost of deploying such applications. In this paper, we present an emergency alert system for vehicles assisting first responders that exploits users' location awareness to support quick and reliable alert messages for interested vehicles. By deploying a demo of the system on a nationwide testbed, we aim to provide a better understanding of the dynamics involved in our designed framework.
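The name-based context abstraction can be pictured with a toy publish/subscribe sketch; this is illustrative only, since MobilityFirst resolves human-readable names to network-level identifiers and delivers messages natively in the network rather than in application code:

    from collections import defaultdict

    class ContextDelivery:
        # Toy name-based delivery: receivers register under human-readable context names
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, context_name, handler):
            self.subscribers[context_name].append(handler)

        def publish(self, context_name, alert):
            for handler in self.subscribers[context_name]:
                handler(alert)

    bus = ContextDelivery()
    bus.subscribe("emergency/route-9/north", print)     # a vehicle's interest
    bus.publish("emergency/route-9/north", "accident ahead - clear lane")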

2017-04-08
Aiping Xiong, Robert W. Proctor, Ninghui Li, Weining Yang.  2017.  Is domain highlighting actually helpful in identifying phishing webpages? Human Factors: The Journal of the Human Factors and Ergonomics Society.

Objective: To evaluate the effectiveness of domain highlighting in helping users identify whether webpages are legitimate or spurious.

Background: As a component of the URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which website they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar. 

Method: Two phishing detection experiments were conducted. Experiment 1 was run online: Participants judged the legitimacy of webpages in two phases. In phase one, participants were to judge the legitimacy based on any information on the webpage, whereas in phase two they were to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of fixations.

Results: Participants differentiated the legitimate and fraudulent webpages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants’ visual attention was attracted by the domain highlighting.

Conclusion: Failure to detect many fraudulent webpages even when the domain was highlighted implies that users lacked knowledge of webpage security cues or how to use those cues.

Application: Potential applications include development of phishing-prevention training incorporating domain highlighting with other methods to help users identify phishing webpages. 

2017-04-03
Wadhawan, Yatin, Neuman, Clifford.  2016.  Defending Cyber-Physical Attacks on Oil Pipeline Systems: A Game-Theoretic Approach. Proceedings of the 1st International Workshop on AI for Privacy and Security. :7:1–7:8.

The security of critical infrastructures such as oil and gas cyber-physical systems is a significant concern in today's world, where malicious activities are more frequent than ever before. On one side we have cyber criminals who compromise cyber infrastructure to control physical processes; on the other, physical criminals who attack the physical infrastructure, motivated to destroy the target or to steal oil from pipelines. Unfortunately, due to limited resources and physical dispersion, it is impossible for the system administrator to protect each target all the time. In this research paper, we tackle the problem of cyber and physical attacks on oil pipeline infrastructure by proposing a Stackelberg Security Game of three players: the system administrator as the leader, and cyber and physical attackers as followers. The novelty of this paper is that we formulate a real-world problem of oil theft using a game-theoretic approach. The game has two different types of targets attacked by two distinct types of adversaries with different motives, who can coordinate to maximize their rewards. The solution to this game assists the system administrator of the oil pipeline cyber-physical system in allocating cyber security controls for the cyber targets and in assigning patrol teams to the pipeline regions efficiently. This paper provides a theoretical framework for formulating and solving the above problem.
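The leader-follower structure can be illustrated with a toy two-target Stackelberg coverage game solved by grid search; the payoff numbers are invented, and the paper's game is richer, with two adversaries who may coordinate:

    import numpy as np

    U_ATT = {"cyber": 6.0, "physical": 8.0}   # attacker gain if the target is uncovered
    DEF_LOSS = 10.0                           # defender loss on a successful attack

    def solve_stackelberg(grid=1001):
        best = None
        for c in np.linspace(0.0, 1.0, grid):          # c = coverage on the cyber target
            cover = {"cyber": c, "physical": 1.0 - c}
            # The follower observes the leader's strategy and best-responds
            target = max(U_ATT, key=lambda t: (1.0 - cover[t]) * U_ATT[t])
            loss = (1.0 - cover[target]) * DEF_LOSS    # leader's expected loss
            if best is None or loss < best[0]:
                best = (loss, c, target)
        return best

    print(solve_stackelberg())   # optimal split of one security resource over two targets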

2017-04-01
Aiping Xiong, Robert W. Proctor, Ninghui Li, Weining Yang.  2017.  Is domain highlighting actually helpful in identifying phishing webpages? Human Factors: The Journal of the Human Factors and Ergonomics Society.

To evaluate the effectiveness of domain highlighting in helping users identify whether Web pages are legitimate or spurious. As a component of the URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which Web site they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar. We conducted two phishing detection experiments. Experiment 1 was run online: Participants judged the legitimacy of Web pages in two phases. In Phase 1, participants were to judge the legitimacy based on any information on the Web page, whereas in Phase 2, they were to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of fixations. Participants differentiated the legitimate and fraudulent Web pages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants’ visual attention was attracted by the highlighted domains. Failure to detect many fraudulent Web pages even when the domain was highlighted implies that users lacked knowledge of Web page security cues or how to use those cues. Potential applications include development of phishing prevention training incorporating domain highlighting with other methods to help users identify phishing Web pages.

2017-03-29
Stan, Oana, Carpov, Sergiu, Sirdey, Renaud.  2016.  Dynamic Execution of Secure Queries over Homomorphic Encrypted Databases. Proceedings of the 4th ACM International Workshop on Security in Cloud Computing. :51–58.

The wide use of cloud computing and of data outsourcing raises important concerns with regard to data security, thus resulting in the necessity of protection mechanisms such as encryption of sensitive data. The recent major theoretical breakthrough of finding the Holy Grail of encryption, i.e., fully homomorphic encryption, guarantees the privacy of queries and their results on encrypted data. However, only a few studies propose a practical performance evaluation of the use of homomorphic encryption schemes to perform database queries. In this paper, we propose and analyse, in the context of a secure framework for a generic database query interpreter, two different methods in which client requests are dynamically executed on homomorphically encrypted data. Dynamic compilation of the requests makes it possible to take advantage of the different optimizations performed during an off-line step on an intermediate code representation, taking the form of boolean circuits, and, moreover, to specialize the execution using runtime information. Also, for the returned encrypted results, we assess the complexity and efficiency of the different protocols proposed in the literature in terms of overall execution time, accuracy, and communication overhead.
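To picture what executing queries as boolean circuits means, the sketch below evaluates an equality predicate (e.g., WHERE age = 42) as bit-level XNOR gates folded by AND; under a fully homomorphic scheme the same gates would operate on ciphertext bits rather than the cleartext bits used here:

    def bits(value: int, width: int) -> list:
        return [(value >> i) & 1 for i in range(width)]

    def equals_circuit(a: list, b: list) -> int:
        # Equality as XNOR per bit, folded with AND; an FHE evaluator would
        # run these gates over encrypted bits without seeing the values
        out = 1
        for x, y in zip(a, b):
            out &= 1 ^ (x ^ y)   # XNOR(x, y)
        return out

    assert equals_circuit(bits(42, 8), bits(42, 8)) == 1
    assert equals_circuit(bits(42, 8), bits(41, 8)) == 0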

White, Martin, Tufano, Michele, Vendome, Christopher, Poshyvanyk, Denys.  2016.  Deep Learning Code Fragments for Code Clone Detection. Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering. :87–98.

Code clone detection is an important problem for software maintenance and evolution. Many approaches consider either structure or identifiers, but none of the existing detection techniques model both sources of information. These techniques also depend on generic, handcrafted features to represent code fragments. We introduce learning-based detection techniques where everything for representing terms and fragments in source code is mined from the repository. Our code analysis supports a framework, which relies on deep learning, for automatically linking patterns mined at the lexical level with patterns mined at the syntactic level. We evaluated our novel learning-based approach for code clone detection with respect to feasibility from the point of view of software maintainers. We sampled and manually evaluated 398 file- and 480 method-level pairs across eight real-world Java systems; 93% of the file- and method-level samples were evaluated to be true positives. Among the true positives, we found pairs mapping to all four clone types. We compared our approach to a traditional structure-oriented technique and found that our learning-based approach detected clones that were either undetected or suboptimally reported by the prominent tool Deckard. Our results affirm that our learning-based approach is suitable for clone detection and a tenable technique for researchers.
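A drastically simplified sketch of embedding-based clone scoring follows; hash-seeded random vectors stand in here for the term embeddings the paper learns from the repository with deep learning, so only the scoring pipeline, not the learning, is represented:

    import hashlib
    import numpy as np

    def token_vec(token: str, dim: int = 64) -> np.ndarray:
        # Deterministic pseudo-embedding; the paper mines real embeddings instead
        seed = int.from_bytes(hashlib.md5(token.encode()).digest()[:4], "little")
        return np.random.default_rng(seed).standard_normal(dim)

    def fragment_vec(code: str) -> np.ndarray:
        v = np.mean([token_vec(t) for t in code.split()], axis=0)
        return v / np.linalg.norm(v)

    def clone_score(a: str, b: str) -> float:
        # Cosine similarity between fragment representations
        return float(fragment_vec(a) @ fragment_vec(b))

    print(clone_score("int sum(int a, int b)", "int add(int x, int y)"))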

Lou, Jian, Vorobeychik, Yevgeniy.  2016.  Decentralization and Security in Dynamic Traffic Light Control. Proceedings of the Symposium and Bootcamp on the Science of Security. :90–92.

Complex traffic networks include a number of controlled intersections and, commonly, multiple districts or municipalities. The result is that the overall traffic control problem is extremely complex computationally. Moreover, given that different municipalities may have distinct, non-aligned interests, traffic light controller design is inherently decentralized, a consideration that is almost entirely absent from the related literature. Both complexity and decentralization have great bearing on the overall quality of the traffic network as well as on its security. We consider both of these issues in a dynamic traffic network. First, we propose an effective local search algorithm to efficiently design system-wide control logic for a collection of intersections. Second, we propose a game-theoretic (Stackelberg game) model of traffic network security in which an attacker can deploy denial-of-service attacks on sensors, and develop a resilient control algorithm to mitigate such threats. Finally, we propose a game-theoretic model of decentralization, and investigate this model both in the context of baseline traffic network design and of resilient design accounting for attacks. Our methods are implemented and evaluated using a simple traffic network scenario in SUMO.
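A bare-bones version of the local search over signal timings might look like the following; the cost callable is a stand-in for the traffic delay the paper measures in SUMO simulation, and the step sizes are invented:

    import random

    def local_search(timings, cost, step=5, iters=200):
        # Hill-climb over per-intersection green-phase durations (seconds)
        best, best_cost = list(timings), cost(timings)
        for _ in range(iters):
            cand = list(best)
            i = random.randrange(len(cand))
            cand[i] = max(10, cand[i] + random.choice([-step, step]))
            c = cost(cand)
            if c < best_cost:
                best, best_cost = cand, c
        return best, best_cost

    # Toy cost: deviation from some ideal timing plan
    print(local_search([30, 30, 30], lambda t: sum((x - 42) ** 2 for x in t)))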

Ibrahim, Ahmad, Sadeghi, Ahmad-Reza, Tsudik, Gene, Zeitouni, Shaza.  2016.  DARPA: Device Attestation Resilient to Physical Attacks. Proceedings of the 9th ACM Conference on Security & Privacy in Wireless and Mobile Networks. :171–182.

As embedded devices (under the guise of "smart-whatever") rapidly proliferate into many domains, they become attractive targets for malware. Protecting them from software and physical attacks becomes both important and challenging. Remote attestation is a basic tool for mitigating such attacks. It allows a trusted party (verifier) to remotely assess the software integrity of a remote, untrusted, and possibly compromised embedded device (prover). Prior remote attestation methods focus on software (malware) attacks in a one-verifier/one-prover setting. Physical attacks on provers are generally ruled out as being either unrealistic or impossible to mitigate. In this paper, we argue that physical attacks must be considered, particularly in the context of many provers, e.g., a network of devices. Assuming that physical attacks require capture and subsequent temporary disablement of the victim device(s), we propose DARPA, a lightweight protocol that takes advantage of absence detection to identify suspected devices. DARPA is resilient against a very strong adversary and imposes minimal additional hardware requirements. We justify and identify DARPA's design goals and evaluate its security and costs.
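The absence-detection idea can be sketched as a heartbeat monitor; this is a software analogue with invented parameters, whereas DARPA itself relies on authenticated heartbeats exchanged and logged among neighboring devices:

    import time

    HEARTBEAT_PERIOD = 30.0   # seconds between expected heartbeats (assumed)
    MISSED_LIMIT = 3          # missed beats before a device is suspected

    last_seen = {}

    def record_heartbeat(device_id):
        last_seen[device_id] = time.monotonic()

    def suspected_devices(now=None):
        # Devices silent long enough to have been captured and disabled
        now = time.monotonic() if now is None else now
        return [d for d, t in last_seen.items()
                if now - t > HEARTBEAT_PERIOD * MISSED_LIMIT]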

Martin, Jeremy, Rye, Erik, Beverly, Robert.  2016.  Decomposition of MAC Address Structure for Granular Device Inference. Proceedings of the 32nd Annual Conference on Computer Security Applications. :78–88.

Common among the wide variety of ubiquitous networked devices in modern use is wireless 802.11 connectivity. The MAC addresses of these devices are visible to a passive adversary, thereby presenting security and privacy threats - even when link or application-layer encryption is employed. While it is well known that the most significant three bytes of a MAC address, the OUI, coarsely identify a device's manufacturer, we seek to better understand the ways in which the remaining low-order bytes are allocated in practice. From a collection of more than two billion 802.11 frames observed in the wild, we extract device and model details for over 285K devices, as leaked by various management frames and discovery protocols. From this rich dataset, we characterize overall device populations and densities, vendor address allocation policies and utilization, and OUI sharing among manufacturers; discover unique models occurring in multiple OUIs; and map contiguous address blocks to specific devices. Our mapping thus permits fine-grained device type and model predictions for unknown devices solely on the basis of their MAC address. We validate our inferences on both ground-truth data and a third-party dataset, where we obtain high accuracy. Our results empirically demonstrate the extant structure of the low-order MAC bytes due to manufacturers' sequential allocation policies, and the security and privacy concerns therein.
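The basic decomposition is easy to express in code; the block boundaries below are hypothetical, standing in for the per-vendor allocation ranges the paper mines from observed frames:

    def decompose_mac(mac: str):
        # Split a MAC into the 3-byte OUI (manufacturer) and the 3-byte NIC-specific suffix
        hexstr = mac.replace(":", "").replace("-", "").lower()
        return hexstr[:6], hexstr[6:]

    oui, suffix = decompose_mac("3c:07:54:12:ab:cd")

    # A model inference then checks whether the suffix falls inside a contiguous
    # block sequentially allocated to one device model (hypothetical bounds):
    in_block = int("120000", 16) <= int(suffix, 16) <= int("12ffff", 16)
    print(oui, suffix, in_block)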

2017-03-20
Ferreira, Gabriel, Malik, Momin, Kästner, Christian, Pfeffer, Jürgen, Apel, Sven.  2016.  Do #Ifdefs Influence the Occurrence of Vulnerabilities? An Empirical Study of the Linux Kernel. Proceedings of the 20th International Systems and Software Product Line Conference. :65–73.

Preprocessors support the diversification of software products with #ifdefs, but also require additional effort from developers to maintain and understand variable code. We conjecture that #ifdefs cause developers to produce more vulnerable code because they are required to reason about multiple features simultaneously and maintain complex mental models of the dependencies of configurable code. We extracted a variational call graph across all configurations of the Linux kernel, and used configuration complexity metrics to compare vulnerable and non-vulnerable functions considering their vulnerability history. Our goal was to learn whether we can observe a measurable influence of configuration complexity on the occurrence of vulnerabilities. Our results suggest, among other things, that vulnerable functions have higher variability than non-vulnerable ones and are also constrained by fewer configuration options. This suggests that developers are more inclined to notice functions that appear in frequently compiled product variants. We aim to raise developers' awareness to address variability more systematically, since configuration complexity is an important, but often ignored, aspect of software product lines.
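A crude file-level stand-in for such configuration-complexity metrics is the set of Kconfig options guarding code, which can be collected as below; the paper works per function, on a variational call graph, rather than per file:

    import re

    COND = re.compile(r"^\s*#\s*(?:if|ifdef|ifndef|elif)\b(.*)$", re.M)
    OPT = re.compile(r"\bCONFIG_\w+\b")

    def config_options(source: str) -> set:
        # Distinct configuration options appearing in preprocessor conditionals
        opts = set()
        for cond in COND.findall(source):
            opts.update(OPT.findall(cond))
        return opts

    sample = "#ifdef CONFIG_NET\nint f(void);\n#elif defined(CONFIG_PCI)\nint g(void);\n#endif\n"
    print(config_options(sample))   # {'CONFIG_NET', 'CONFIG_PCI'}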

Fowler, James E..  2016.  Delta Encoding of Virtual-Machine Memory in the Dynamic Analysis of Malware. :592–592.

Malware is an ever-increasing threat to personal, corporate, and government computing systems alike. Particularly in the corporate and government sectors, the attribution of malware—including the identification of the authorship of malware as well as potentially the malefactor responsible for an attack—is of growing interest. Such malware attribution is often enabled by the fact that malware authors build on the work of others through the use of generators, libraries, and borrowed code. Determining malware phylogeny—the evolutionary history of and the derivative relations between malware—is consequently an endeavor of increasing importance, with a growing focus on the dynamic analysis of malware, which involves executing a malware sample and determining the actions it takes after some period of operation. In most cases, such dynamic analysis occurs in a virtual machine, or "sandbox," in order to confine the malware to an environment in which it can do no harm to real systems. In sandbox-driven dynamic analysis of malware, a virtual machine is typically run starting from some known, malware-free baseline state. The malware is injected into the virtual machine, and the machine is allowed to run for some period of time during which the malware presumably activates. The machine is then suspended, and the current machine memory is dumped to disk. The process may then be repeated for other malware samples, each time starting from the baseline state. Stored in raw form on disk, the dumped memory file is the same size as the virtual-machine memory; for virtual machines running modern operating systems, such memory would likely be no less than 512 MB but could be up to several GBs. If the corresponding memory dumps are to be retained for repeated analysis—as is likely to be required in order to determine a phylogeny for a large database of malware samples—lossless compression of the memory dumps is necessary to prevent explosive disk usage. For example, the VirusShare project maintains a database of over 19 million malware samples; running these in a virtual machine with 512 MB of memory would require about 9 petabytes (PB) of storage to retain the memory dumps. In this paper, we develop a scheme for the lossless compression of memory dumps resulting from the repeated execution of malware samples in a virtual-machine sandbox. Rather than compress each memory dump individually, we capitalize on the fact that memory dumps stem from a known baseline virtual-machine state, and code with respect to this baseline memory. Additionally, to further improve compression efficiency, we exploit the fact that a significant portion of the difference between the baseline memory and that of the currently running machine is the result of the loading of known executable programs and shared libraries. Consequently, we propose delta coding to compress the current virtual-machine memory dump by coding its differences with respect to a predicted memory image, with the latter formed by duplicating the loading of the executables and libraries into the baseline memory, resulting in a significant improvement in compression performance over straightforward delta coding alone. In experimental results for a body of malware samples, the proposed approach outperformed the widely used xdelta3 delta coder by approximately 20% and the popular generic gzip coder by 79%.
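A minimal sketch of baseline-relative delta coding is shown below, using an XOR difference plus generic compression; the paper's scheme goes further, predicting the memory image from the loaded executables and libraries before coding the residual difference:

    import zlib

    def delta_compress(dump: bytes, baseline: bytes) -> bytes:
        # Code the dump as its byte-wise difference from the baseline image;
        # mostly-unchanged pages become runs of zeros that compress well
        n = min(len(dump), len(baseline))
        diff = bytes(a ^ b for a, b in zip(dump[:n], baseline[:n])) + dump[n:]
        return zlib.compress(diff, level=9)

    def delta_decompress(blob: bytes, baseline: bytes) -> bytes:
        diff = zlib.decompress(blob)
        n = min(len(diff), len(baseline))
        return bytes(a ^ b for a, b in zip(diff[:n], baseline[:n])) + diff[n:]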

2017-03-08
Sokol, P., Husak, M., Lipták, F..  2015.  Deploying Honeypots and Honeynets: Issue of Privacy. 2015 10th International Conference on Availability, Reliability and Security. :397–403.

Honeypots and honeynets are popular tools in the areas of network security and network forensics. The deployment and usage of these tools are influenced by a number of technical and legal issues, which need to be carefully considered together. In this paper, we outline the privacy issues of honeypots and honeynets with respect to their technical aspects. The paper discusses the legal framework of privacy, the legal grounds for data processing, and data collection. The analysis of legal issues is based on EU law and is supported by discussions on privacy and related issues. This paper is one of the first to discuss in detail the privacy issues of honeypots and honeynets in accordance with EU law.

Voyiatzis, I., Sgouropoulou, C., Estathiou, C..  2015.  Detecting untestable hardware Trojan with non-intrusive concurrent on line testing. 2015 10th International Conference on Design Technology of Integrated Systems in Nanoscale Era (DTIS). :1–2.

Hardware Trojans are an emerging threat that intrudes into the design and manufacturing cycle of chips, and they have gained much attention lately due to the severity of the problems they pose to the chip supply chain. Typically, hardware Trojans are not detected during the usual manufacturing testing because they are activated as the effect of a rare event. A class of published HTs is based on the geometrical characteristics of the circuit and claims to be undetectable, in the sense that their activation cannot be detected. In this work we study the effect of continuously monitoring the inputs of the module under test with respect to the detection of HTs possibly inserted in the module, either in the design or the manufacturing stage.
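The concurrent input-monitoring idea can be caricatured in software as follows; in the work itself the monitor is non-intrusive hardware observing the module's inputs, and the rarity threshold here is invented:

    from collections import Counter

    class InputMonitor:
        # Count input vectors applied to the module; rarely seen vectors are
        # candidate Trojan triggers worth recording for off-line analysis
        def __init__(self, rare_limit: int = 2):
            self.counts = Counter()
            self.rare_limit = rare_limit

        def observe(self, input_vector: int) -> bool:
            self.counts[input_vector] += 1
            return self.counts[input_vector] <= self.rare_limit   # True -> still rare

    monitor = InputMonitor()
    if monitor.observe(0b10110011):
        print("rare input vector - log it for analysis")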

Ma, T., Zhang, H., Qian, J., Liu, S., Zhang, X., Ma, X..  2015.  The Design of Brand Cosmetics Anti-counterfeiting System Based on RFID Technology. 2015 International Conference on Network and Information Systems for Computers. :184–189.

Digital authentication security technology is widely used by current cosmetics brands as a key anti-counterfeiting technology, yet this technology is prone to the "false security", "hard security" and "non-security" phenomena. This paper researches the current distribution channels and sales methods of cosmetics brands, analyses the brands' demand for an RFID-based anti-counterfeiting security system, and then proposes a security system based on RFID technology for brand cosmetics. The system is based on a typical distributed RFID tracking and tracing system, the most widely used such system: the EPC system. The security system described in this paper is a visual information management system for luxury cosmetics brands. It can determine the source of a product in a timely and effective manner, track and trace the product's logistics information, and prevent fake and gray goods from entering the normal supply chain channels.
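The track-and-trace record keeping at the heart of such a system might be schematized as follows; the field names are invented, and real EPCIS events carry a much richer vocabulary:

    from dataclasses import dataclass, field

    @dataclass
    class TraceEvent:
        location: str
        timestamp: str
        handler: str

    @dataclass
    class TaggedProduct:
        epc: str                 # Electronic Product Code stored on the RFID tag
        events: list = field(default_factory=list)

        def record(self, event):
            self.events.append(event)

        def origin(self):
            # First recorded event identifies the product's source
            return self.events[0] if self.events else None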

Casola, V., Benedictis, A. D., Rak, M., Villano, U..  2015.  DoS Protection in the Cloud through the SPECS Services. 2015 10th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC). :677–682.

Security in cloud environments is always considered an issue, due to the lack of control over leased resources. In this paper, we present a solution that offers security-as-a-service by relying on Security Service Level Agreements (Security SLAs) as a means to represent the security features to be granted. In particular, we focus on a security mechanism that is automatically configured and activated in an as-a-service fashion in order to protect cloud resources against DoS attacks. The activities reported in this paper are part of a wider work carried out in the FP7-ICT programme project SPECS, which aims at building a framework offering Security-as-a-Service using an SLA-based approach. The proposed approach is founded on the adoption of SPECS Services to negotiate, enforce, and monitor suitable security metrics, chosen by cloud customers, negotiated with the provider, and included in a signed Security SLA.

Buda, A., Främling, K., Borgman, J., Madhikermi, M., Mirzaeifar, S., Kubler, S..  2015.  Data supply chain in Industrial Internet. 2015 IEEE World Conference on Factory Communication Systems (WFCS). :1–7.

The Industrial Internet promises to radically change and improve many industries' daily business activities, from simple data collection and processing to context-driven, intelligent and pro-active support of workers' everyday tasks and life. The present paper first provides insight into a typical Industrial Internet application architecture, then highlights one fundamental contradiction that arises: "Who owns the data is often not capable of analyzing it". This statement is explained by imagining a visionary data supply chain that would realize some of the Industrial Internet promises. To concretely implement such a system, recent standards published by The Open Group are presented, and we highlight the characteristics that make them suitable for Industrial Internet applications. Finally, we discuss comparable solutions and conclude with new business use cases.

Mahajan, S., Katti, J., Walunj, A., Mahalunkar, K..  2015.  Designing a database encryption technique for database security solution with cache. 2015 IEEE International Advance Computing Conference (IACC). :357–360.

A database is a vast collection of data which helps us to collect, retrieve, organize and manage data in an efficient and effective manner. Databases are critical assets: they store client details, financial information, personal files, company secrets and other data necessary for business. Today, people depend more and more on corporate data for decision making, management of customer service, supply chain management, and so on. Any loss, corruption, or unavailability of data may seriously affect business performance. Database security should provide protected access to the contents of a database and should preserve the integrity, availability, consistency, and quality of the data. This paper describes an architecture based on placing an elliptic curve cryptography (ECC) module inside the database management software (DBMS), just above the database cache. Using this method, only selected parts of the database can be encrypted instead of the whole database. This architecture allows us to achieve very strong data security using ECC and to increase performance using the cache.
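A sketch of encrypting a single sensitive column value with an ECIES-style hybrid construction is given below, using the Python cryptography package; the info label and field choice are illustrative, and the paper's cache integration and key management are not shown:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def encrypt_field(plaintext: bytes, recipient_pub) -> dict:
        # Ephemeral ECDH -> HKDF -> AES-GCM: encrypt only the selected field
        eph = ec.generate_private_key(ec.SECP256R1())
        shared = eph.exchange(ec.ECDH(), recipient_pub)
        key = HKDF(hashes.SHA256(), 32, None, b"db-field").derive(shared)
        nonce = os.urandom(12)
        return {
            "eph_pub": eph.public_key().public_bytes(Encoding.X962, PublicFormat.UncompressedPoint),
            "nonce": nonce,
            "ciphertext": AESGCM(key).encrypt(nonce, plaintext, None),
        }

    recipient = ec.generate_private_key(ec.SECP256R1())
    record = encrypt_field(b"4111-1111-1111-1111", recipient.public_key())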

Chriskos, P., Zoidi, O., Tefas, A., Pitas, I..  2015.  De-identifying facial images using projections on hyperspheres. 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG). 04:1–6.

A major issue that arises from mass visual media distribution in modern video sharing, social media and cloud services is the issue of privacy. Malicious users can use these services to track the actions of certain individuals and/or groups, thus violating their privacy. As a result, the need arises to hinder automatic facial image identification in images and videos. In this paper we propose a method for de-identifying facial images. Contrary to most de-identification methods, this method manipulates facial images so that humans can still recognize the individual or individuals in an image or video frame, but common automatic identification algorithms fail to do so. This is achieved by projecting the facial images onto a hypersphere. The conducted experiments verify that this method is effective in reducing classification accuracy to under 10%. Furthermore, in the resulting images the subject can still be identified by human viewers.
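In vector form the projection itself is a one-liner, sketched here under the assumption that facial images are flattened to feature vectors; the paper's contribution lies in how the hypersphere is chosen so that human recognizability survives while classifiers fail:

    import numpy as np

    def project_to_hypersphere(x: np.ndarray, center: np.ndarray, radius: float) -> np.ndarray:
        # Move the image vector onto the sphere of the given center and radius
        d = x - center
        return center + radius * d / np.linalg.norm(d)

    img = np.random.rand(64 * 64)                       # toy flattened image
    deid = project_to_hypersphere(img, np.full(img.shape, 0.5), radius=3.0)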