Biblio

Found 3679 results

Filters: First Letter Of Last Name is C  [Clear All Filters]
2017-08-02
Chabanne, Hervé, Keuffer, Julien, Lescuyer, Roch.  2016.  Study of a Verifiable Biometric Matching. Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security. :183–184.

In this paper, we apply verifiable computing techniques to biometric matching. The purpose of verifiable computation is to give the result of a computation along with a proof that the calculations were correctly performed. We adapt the sumcheck protocol and present a system that performs verifiable biometric matching in the case of fast border control. This is a work in progress, and we focus on verifying an inner product. We then give some experimental results of its implementation. Verifiable computation here strengthens the authentication phase by bringing into the process a proof that the biometric verification has been correctly performed.
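As an illustration of the verification step this abstract focuses on, the following is a minimal sumcheck for an inner product over a prime field. This is an illustrative sketch only, not the authors' border-control protocol: the field modulus, the honest-prover setup, and the function names are assumptions.

```python
import random

P = 2**61 - 1                # prime field modulus (an illustrative choice)
INV2 = pow(2, P - 2, P)      # modular inverse of 2, used for interpolation

def mle_eval(vals, point):
    """Evaluate the multilinear extension of vals (length 2^m) at point in F_P^m."""
    cur = [v % P for v in vals]
    for r in point:
        cur = [((1 - r) * cur[2*i] + r * cur[2*i + 1]) % P
               for i in range(len(cur) // 2)]
    return cur[0]

def sumcheck_inner_product(a, b, rng):
    """Prover and verifier of the sumcheck protocol, run in one loop, reducing
    the claim S = <a, b> to a single evaluation of each multilinear extension.
    Returns (claimed_sum, accepted)."""
    m = len(a).bit_length() - 1           # len(a) must be a power of two
    A = [x % P for x in a]
    B = [x % P for x in b]
    claim = sum(x * y for x, y in zip(A, B)) % P   # honest prover's claim
    cur, rs = claim, []
    for _ in range(m):
        half = len(A) // 2
        # prover: evaluate the quadratic round polynomial g(t) at t = 0, 1, 2
        g = []
        for t in (0, 1, 2):
            tot = 0
            for j in range(half):
                av = ((1 - t) * A[2*j] + t * A[2*j + 1]) % P
                bv = ((1 - t) * B[2*j] + t * B[2*j + 1]) % P
                tot = (tot + av * bv) % P
            g.append(tot)
        # verifier: round consistency check, then a random challenge
        if (g[0] + g[1]) % P != cur:
            return claim, False
        r = rng.randrange(P)
        rs.append(r)
        # next claim: g(r) by Lagrange interpolation through (0,g0),(1,g1),(2,g2)
        cur = (g[0] * (r - 1) * (r - 2) % P * INV2
               - g[1] * r * (r - 2)
               + g[2] * r * (r - 1) % P * INV2) % P
        A = [((1 - r) * A[2*j] + r * A[2*j + 1]) % P for j in range(half)]
        B = [((1 - r) * B[2*j] + r * B[2*j + 1]) % P for j in range(half)]
    # final check: one multilinear-extension evaluation of each input vector
    return claim, cur == mle_eval(a, rs) * mle_eval(b, rs) % P
```

Soundness comes from the random challenges: a prover that misreports the sum is caught except with probability on the order of m/P.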

2017-05-17
Carrara, Brent, Adams, Carlisle.  2016.  A Survey and Taxonomy Aimed at the Detection and Measurement of Covert Channels. Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security. :115–126.

New viewpoints of covert channels are presented in this work. First, the origin of covert channels is traced back to access control and a new class of covert channel, air-gap covert channels, is presented. Second, we study the design of covert channels and provide novel insights that differentiate the research area of undetectable communication from that of covert channels. Third, we argue that secure systems can be characterized as fixed-source systems or continuous-source systems, i.e., systems whose security is compromised if their design allows a covert channel to communicate a small, fixed amount of information or communicate information at a sufficiently high, continuous rate, respectively. Consequently, we challenge the traditional method for measuring covert channels, which is based on Shannon capacity, and propose that a new measure, steganographic capacity, be used to accurately assess the risk posed by covert channels, particularly those affecting fixed-source systems. Additionally, our comprehensive review of covert channels has led us to the conclusion that important properties of covert channels have not been captured in previous taxonomies. We, therefore, present novel extensions to existing taxonomies to more accurately characterize covert channels.

2017-05-19
Xia, Lixue, Tang, Tianqi, Huangfu, Wenqin, Cheng, Ming, Yin, Xiling, Li, Boxun, Wang, Yu, Yang, Huazhong.  2016.  Switched by Input: Power Efficient Structure for RRAM-based Convolutional Neural Network. Proceedings of the 53rd Annual Design Automation Conference. :125:1–125:6.

Convolutional Neural Network (CNN) is a powerful technique widely used in the computer vision area, but it demands far more computation and memory than traditional solutions. The emerging metal-oxide resistive random-access memory (RRAM) and RRAM crossbar have shown great potential for neuromorphic applications with high energy efficiency. However, the interfaces between analog RRAM crossbars and digital peripheral functions, namely Analog-to-Digital Converters (ADCs) and Digital-to-Analog Converters (DACs), consume most of the area and energy of an RRAM-based CNN design due to the large amount of intermediate data in CNN. In this paper, we propose an energy-efficient structure for RRAM-based CNN. Based on an analysis of the data distribution, a quantization method is proposed to transfer the intermediate data into 1 bit and eliminate DACs. An energy-efficient structure using input data as selection signals is proposed to reduce the ADC cost for merging results of multiple crossbars. The experimental results show that the proposed method and structure can save 80% of area and more than 95% of energy while maintaining the same or comparable classification accuracy of CNN on MNIST.
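The DAC-elimination idea described in the abstract can be illustrated in a few lines. This is a toy sketch: the median threshold and the plain-Python crossbar model are illustrative assumptions, not the paper's circuit or quantization rule.

```python
import statistics

def binarize(activations, threshold=None):
    """Quantize intermediate data to 1 bit. The paper derives its threshold
    from the data distribution; the median here is an illustrative stand-in."""
    t = statistics.median(activations) if threshold is None else threshold
    return [1 if a >= t else 0 for a in activations]

def crossbar_column_mac(bits, weights):
    """With 1-bit inputs, a crossbar column's multiply-accumulate collapses
    to summing the weights of the rows whose input bit is 1: the input acts
    as a row-select signal, so no DAC is needed at the interface."""
    return sum(w for b, w in zip(bits, weights) if b)
```

For example, the input vector [0, 1, 1, 1] applied to weights [2, 3, 4, 5] simply selects and sums the last three weights.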

2017-11-27
Checkoway, Stephen, Maskiewicz, Jacob, Garman, Christina, Fried, Joshua, Cohney, Shaanan, Green, Matthew, Heninger, Nadia, Weinmann, Ralf-Philipp, Rescorla, Eric, Shacham, Hovav.  2016.  A Systematic Analysis of the Juniper Dual EC Incident. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :468–479.

In December 2015, Juniper Networks announced multiple security vulnerabilities stemming from unauthorized code in ScreenOS, the operating system for their NetScreen VPN routers. The more sophisticated of these vulnerabilities was a passive VPN decryption capability, enabled by a change to one of the elliptic curve points used by the Dual EC pseudorandom number generator. In this paper, we describe the results of a full independent analysis of the ScreenOS randomness and VPN key establishment protocol subsystems, which we carried out in response to this incident. While Dual EC is known to be insecure against an attacker who can choose the elliptic curve parameters, Juniper had claimed in 2013 that ScreenOS included countermeasures against this type of attack. We find that, contrary to Juniper's public statements, the ScreenOS VPN implementation has been vulnerable since 2008 to passive exploitation by an attacker who selects the Dual EC curve point. This vulnerability arises due to apparent flaws in Juniper's countermeasures as well as a cluster of changes that were all introduced concurrently with the inclusion of Dual EC in a single 2008 release. We demonstrate the vulnerability on a real NetScreen device by modifying the firmware to install our own parameters, and we show that it is possible to passively decrypt an individual VPN session in isolation without observing any other network traffic. We investigate the possibility of passively fingerprinting ScreenOS implementations in the wild. This incident is an important example of how guidelines for random number generation, engineering, and validation can fail in practice.

2018-05-16
Nowzari, C., Cortes, J.  2016.  Team-triggered coordination for real-time control of networked cyberphysical systems. 61:34-47.

This paper studies the real-time implementation of distributed controllers on networked cyberphysical systems. We build on the strengths of event- and self-triggered control to synthesize a unified approach, termed team-triggered, where agents make promises to one another about their future states and are responsible for warning each other if they later decide to break them. The information provided by these promises allows individual agents to autonomously schedule information requests in the future and sets the basis for maintaining desired levels of performance at lower implementation cost. We establish provably correct guarantees for the distributed strategies that result from the proposed approach and examine their robustness against delays, packet drops, and communication noise. The results are illustrated in simulations of a multi-agent formation control problem.

2016-04-10
Zielinska, Olga A., Welk, Allaire K., Murphy-Hill, Emerson, Mayhorn, Christopher B.  2016.  A temporal analysis of persuasion principles in phishing emails. Human Factors and Ergonomics Society 60th Annual Meeting.

Eight hundred eighty-seven phishing emails from Arizona State University, Brown University, and Cornell University were assessed by two reviewers for Cialdini’s six principles of persuasion: authority, social proof, liking/similarity, commitment/consistency, scarcity, and reciprocation. A correlational analysis of email characteristics by year revealed that the persuasion principles of commitment/consistency and scarcity have increased over time, while the principles of reciprocation and social proof have decreased over time. Authority and liking/similarity revealed mixed results with certain characteristics increasing and others decreasing. Results from this study can inform user training of phishing emails and help cybersecurity software to become more effective. 

2017-03-27
Batselier, Kim, Chen, Zhongming, Liu, Haotian, Wong, Ngai.  2016.  A Tensor-based Volterra Series Black-box Nonlinear System Identification and Simulation Framework. Proceedings of the 35th International Conference on Computer-Aided Design. :17:1–17:7.

Tensors are a multi-linear generalization of matrices to their d-way counterparts, and are receiving intense interest recently due to their natural representation of high-dimensional data and the availability of fast tensor decomposition algorithms. Given the input-output data of a nonlinear system/circuit, this paper presents a nonlinear model identification and simulation framework built on top of Volterra series and its seamless integration with tensor arithmetic. By exploiting partially-symmetric polyadic decompositions of sparse Toeplitz tensors, the proposed framework permits a pleasantly scalable way to incorporate high-order Volterra kernels. Such an approach largely eludes the curse of dimensionality and allows computationally fast modeling and simulation beyond weakly nonlinear systems. The black-box nature of the model also hides structural information of the system/circuit and encapsulates it in terms of compact tensors. Numerical examples are given to verify the efficacy, efficiency and generality of this tensor-based modeling and simulation framework.

2017-10-13
Crockett, Eric, Peikert, Chris.  2016.  Λολ: Functional Lattice Cryptography. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :993–1005.

This work describes the design, implementation, and evaluation of Λολ, a general-purpose software framework for lattice-based cryptography. The Λολ framework has several novel properties that distinguish it from prior implementations of lattice cryptosystems, including the following. Generality, modularity, concision: Λολ defines a collection of general, highly composable interfaces for mathematical operations used across lattice cryptography, allowing for a wide variety of schemes to be expressed very naturally and at a high level of abstraction. For example, we implement an advanced fully homomorphic encryption (FHE) scheme in as few as 2–5 lines of code per feature, via code that very closely matches the scheme's mathematical definition. Theory affinity: Λολ is designed from the ground-up around the specialized ring representations, fast algorithms, and worst-case hardness proofs that have been developed for the Ring-LWE problem and its cryptographic applications. In particular, it implements fast algorithms for sampling from theory-recommended error distributions over arbitrary cyclotomic rings, and provides tools for maintaining tight control of error growth in cryptographic schemes. Safety: Λολ has several facilities for reducing code complexity and programming errors, thereby aiding the correct implementation of lattice cryptosystems. In particular, it uses strong typing to statically enforce—i.e., at compile time—a wide variety of constraints among the various parameters. Advanced features: Λολ exposes the rich hierarchy of cyclotomic rings to cryptographic applications. We use this to give the first-ever implementation of a collection of FHE operations known as "ring switching," and also define and analyze a more efficient variant that we call "ring tunneling." 
Lastly, this work defines and analyzes a variety of mathematical objects and algorithms for the recommended usage of Ring-LWE in cyclotomic rings, which we believe will serve as a useful knowledge base for future implementations.

2017-05-19
Carter, Lemuria, McBride, Maranda.  2016.  Texting While Driving Among Teens: Exploring User Perceptions to Identify Policy Recommendations. Proceedings of the 17th International Digital Government Research Conference on Digital Government Research. :375–378.

Texting while driving has emerged as a significant threat to citizen safety. In this study, we utilize general deterrence theory (GDT), protection motivation theory, and personality traits to evaluate texting while driving (TWD) compliance intentions among teenage drivers. This paper presents the results of our pilot study. We administered an online survey to 105 teenage and young adult drivers. The potential implications for research, practice, and policy are discussed.

2017-03-07
Zhang, Jiao, Ren, Fengyuan, Shu, Ran, Cheng, Peng.  2016.  TFC: Token Flow Control in Data Center Networks. Proceedings of the Eleventh European Conference on Computer Systems. :23:1–23:14.

Services in modern data center networks pose growing performance demands. However, widely present traffic patterns, such as micro-bursts, highly concurrent flows, and on-off flow transmission, degrade the performance of transport protocols. In this work, a clean-slate explicit transport control mechanism, called Token Flow Control (TFC), is proposed for data center networks to achieve high link utilization, ultra-low latency, fast convergence, and rare packet drops. TFC uses tokens to represent the link bandwidth resource and defines the concept of effective flows to stand for consumers. The total tokens are explicitly allocated to each consumer every time slot. TFC excludes in-network buffer space from the flow pipeline and thus achieves zero queueing. Besides, a packet delay function is added at switches to prevent packet drops under highly concurrent flows. The performance of TFC is evaluated using both experiments on a small real testbed and large-scale simulations. The results show that TFC achieves high throughput, fast convergence, near-zero queuing, and rare packet loss in various scenarios.

2017-08-02
Chu, Pin-Yu, Tseng, Hsien-Lee.  2016.  A Theoretical Framework for Evaluating Government Open Data Platform. Proceedings of the International Conference on Electronic Governance and Open Society: Challenges in Eurasia. :135–142.

Regarding Information and Communication Technologies (ICTs) in the public sector, electronic governance was the first concept to emerge, and it has been recognized as an important issue in government's outreach to citizens since the early 1990s. The most important recent development of e-governance is Open Government Data, which provides citizens with the opportunity to freely access government data, conduct value-added applications, provide creative public services, and participate in different kinds of democratic processes. Open Government Data is expected to enhance the quality and efficiency of government services, strengthen democratic participation, and create benefits for the public and enterprises. The success of Open Government Data hinges on its accessibility, the quality of data, security policy, and platform functions in general. This article presents a robust assessment framework that not only provides a valuable understanding of the development of Open Government Data but also provides an effective feedback mechanism for mid-course corrections. We further apply the framework to evaluate the Open Government Data platform of the central government, on which the open data of nine major government agencies are analyzed. Our research results indicate that the Financial Supervisory Commission performs better than other agencies, especially in terms of accessibility. The Financial Supervisory Commission mostly provides 3-star or above dataset formats, and the quality of its metadata is well established. However, most of the data released by government agencies are regulations, reports, operations and other administrative data, which are not immediately applicable. Overall, government agencies should continuously improve the quantity and quality of Open Government Data, and strengthen the discussion and linkage functions of their platforms as well as the quality of datasets.
Aside from consolidating collaborations and interactions with open data communities, government agencies should improve the awareness and ability of personnel to manage and apply open data. As the acceptance of open data among personnel improves, the quantity and quality of Open Government Data will improve as well.

2017-05-17
Shrivastava, Aviral, Derler, Patricia, Baboud, Ya-Shian Li, Stanton, Kevin, Khayatian, Mohammad, Andrade, Hugo A., Weiss, Marc, Eidson, John, Chandhoke, Sundeep.  2016.  Time in Cyber-physical Systems. Proceedings of the Eleventh IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis. :4:1–4:10.

Many modern cyber-physical systems (CPS), especially industrial automation systems, require the actions of multiple computational systems to be performed at much higher rates and more tightly synchronized than is possible with ad hoc designs. Time is the common entity that computing and physical systems in CPS share, and correct interfacing of that is essential to flawless functionality of a CPS. Fundamental research is needed on ways to synchronize clocks of computing systems to a high degree, and on design methods that enable building blocks of CPS to perform actions at specified times. To realize the potential of CPS in the coming decades, suitable ways to specify distributed CPS applications are needed, including their timing requirements, ways to specify the timing of the CPS components (e.g. sensors, actuators, computing platform), timing analysis to determine if the application design is possible using the components, confident top-down design methodologies that can ensure that the system meets its timing requirements, and ways and methodologies to test and verify that the system meets the timing requirements. Furthermore, strategies for securing timing need to be carefully considered at every CPS design stage and not simply added on. This paper exposes these challenges of CPS development, points out limitations of previous approaches, and provides some research directions towards solving these challenges.

2017-08-22
Jarrah, Hazim, Chong, Peter, Sarkar, Nurul I., Gutierrez, Jairo.  2016.  A Time-Free Comparison-Based System-Level Fault Diagnostic Model for Highly Dynamic Networks. Proceedings of the 11th International Conference on Queueing Theory and Network Applications. :12:1–12:6.

This paper considers the problem of system-level fault diagnosis in highly dynamic networks. The existing fault diagnostic models deal mainly with static faults and have limited capabilities to handle dynamic networks. These fault diagnostic models are based on timers that work on a simple timeout mechanism to identify the node status, and often make simplistic assumptions for system implementations. To overcome the above problems, we propose a time-free comparison-based diagnostic model. Unlike the traditional models, the proposed model does not rely on timers and is more suitable for use in dynamic network environments. We also develop a novel comparison-based fault diagnosis protocol for identifying and diagnosing dynamic faults. The performance of the protocol has been analyzed and its correctness has been proved.

2018-05-25
Ratliff, L., Dowling, C., Mazumdar, E., Zhang, B.  2016.  To Observe or Not to Observe: Queuing Game Framework for Urban Parking. Proc. 55th IEEE Conference on Decision and Control. :5286–5291.
2017-05-16
Bandyopadhyay, Bortik, Fuhry, David, Chakrabarti, Aniket, Parthasarathy, Srinivasan.  2016.  Topological Graph Sketching for Incremental and Scalable Analytics. Proceedings of the 25th ACM International on Conference on Information and Knowledge Management. :1231–1240.

We propose a novel, scalable, and principled graph sketching technique based on minwise hashing of local neighborhoods. For an n-node graph with e edges (e ≫ n), we incrementally maintain in real time a minwise neighbor-sampled subgraph using k hash functions in O(n x k) memory, the limit being user-configurable by the parameter k. Symmetrization and similarity-based techniques can recover from these data structures a significant portion of the original graph. We present a theoretical analysis of the minwise sampling strategy and also derive unbiased estimators for important graph properties such as triangle count and neighborhood overlap. We perform an extensive empirical evaluation of our graph sketch and its derivatives on a wide variety of real-world graph data sets drawn from different application domains using important large network analysis algorithms: local and global clustering coefficient, PageRank, and local graph sparsification. With bounded memory, the quality of results using the sketch representation is competitive against baselines which use the full graph, and the computational performance is often better. Our framework is flexible and configurable to be leveraged by numerous other graph analytics algorithms, potentially reducing the information mining time on large streamed graphs for a variety of applications.
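A minimal version of a neighborhood min-hash sketch might look like the following. This is illustrative only; the linear hash family, the plain-dict graph representation, and the function names are assumptions, not the paper's implementation.

```python
import random

def build_sketch(adj, k, seed=0):
    """Keep k min-hash values of each node's neighbor set (a toy version of a
    minwise neighborhood sketch; O(n x k) memory as in the abstract)."""
    rng = random.Random(seed)
    p = 2**31 - 1
    # k random hash functions h(u) = (a*u + b) mod p (universal-hashing assumption)
    hashes = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(k)]
    return {v: [min((a * u + b) % p for u in nbrs) for a, b in hashes]
            for v, nbrs in adj.items()}

def estimated_overlap(sketch, u, v):
    """Unbiased estimate of the Jaccard similarity of two neighborhoods:
    the fraction of hash functions on which the minima agree."""
    k = len(sketch[u])
    return sum(1 for x, y in zip(sketch[u], sketch[v]) if x == y) / k
```

Increasing k tightens the estimate at the cost of memory, which is the user-configurable trade-off the abstract describes.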

2017-10-19
Cerf, Sophie, Robu, Bogdan, Marchand, Nicolas, Boutet, Antoine, Primault, Vincent, Mokhtar, Sonia Ben, Bouchenak, Sara.  2016.  Toward an Easy Configuration of Location Privacy Protection Mechanisms. Proceedings of the Posters and Demos Session of the 17th International Middleware Conference. :11–12.

The widespread adoption of Location-Based Services (LBSs) has come with controversy about privacy. While leveraging location information leads to improving services through geo-contextualization, it raises privacy concerns as new knowledge can be inferred from location records, such as home/work places, habits or religious beliefs. To overcome this problem, several Location Privacy Protection Mechanisms (LPPMs) have been proposed in the literature in recent years. However, every mechanism comes with its own configuration parameters that directly impact the privacy guarantees and the resulting utility of protected data. In this context, it can be difficult for a non-expert system designer to choose appropriate configuration parameters to use according to the expected privacy and utility. In this paper, we present a framework enabling the easy configuration of LPPMs. To achieve that, our framework performs an offline, in-depth automated analysis of LPPMs to provide the formal relationship between their configuration parameters and both privacy and utility metrics. This framework is modular: by using different metrics, a system designer is able to fine-tune her LPPM according to her expected privacy and utility guarantees (i.e., the guarantee itself and the level of this guarantee). To illustrate the capability of our framework, we analyse Geo-Indistinguishability (a well-known differentially private LPPM) and we provide the formal relationship between its ε configuration parameter and two privacy and utility metrics.
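For concreteness, Geo-Indistinguishability's planar Laplace mechanism, whose ε parameter is the one being tuned, can be sampled as follows. This is a sketch of the standard polar-coordinates construction; the bisection inversion of the radial CDF is an implementation choice here, not the framework's code.

```python
import math
import random

def planar_laplace_noise(eps, rng):
    """Draw a noise vector (dx, dy) from the planar Laplace density
    proportional to exp(-eps * r), the mechanism behind
    geo-indistinguishability. Smaller eps means more noise."""
    theta = rng.uniform(0.0, 2.0 * math.pi)   # direction: uniform angle

    def radial_cdf(r):
        # CDF of the radial marginal of the planar Laplace distribution
        return 1.0 - (1.0 + eps * r) * math.exp(-eps * r)

    p = rng.random()
    hi = 1.0
    while radial_cdf(hi) < p:                 # bracket the quantile
        hi *= 2.0
    lo = 0.0
    for _ in range(60):                       # bisection inversion
        mid = (lo + hi) / 2.0
        if radial_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    r = (lo + hi) / 2.0
    return r * math.cos(theta), r * math.sin(theta)
```

A protected location is then the true location plus this noise; the expected displacement radius is 2/eps, which is exactly the kind of parameter-to-utility relationship the framework formalizes.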

2018-05-27
Fallahzadeh, R., Aminikhanghahi, S., Gibson, A., Cook, D.  2016.  Toward personalized and context-aware prompting for smartphone-based intervention. 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC).
2017-04-20
Gomes, T., Salgado, F., Pinto, S., Cabral, J., Tavares, A..  2016.  Towards an FPGA-based network layer filter for the Internet of Things edge devices. 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA). :1–4.

In the near future, billions of new smart devices will connect to the big network of the Internet of Things, playing a key role in our daily life. Enabling IPv6 on low-power, resource-constrained devices leads research to focus on novel approaches that aim to improve the efficiency, security and performance of the 6LoWPAN adaptation layer. This work-in-progress paper proposes a hardware-based Network Packet Filtering (NPF) and an IPv6 link-local address calculator which is able to filter the received IPv6 packets, offering nearly 18% overhead reduction. The goal is to obtain a System-on-Chip implementation that can be deployed in future IEEE 802.15.4 radio modules.
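The IPv6 link-local address calculation mentioned in the abstract follows the modified EUI-64 rule of RFC 4291. A software sketch of what such hardware computes (illustrative only, assuming a 48-bit MAC string as input):

```python
def ipv6_link_local(mac):
    """Derive the IPv6 link-local address from a 48-bit MAC address using the
    modified EUI-64 method (RFC 4291): insert ff:fe between the two halves of
    the MAC and flip the universal/local bit of the first byte."""
    b = [int(x, 16) for x in mac.split(":")]
    b[0] ^= 0x02                               # flip the U/L bit
    eui = b[:3] + [0xFF, 0xFE] + b[3:]         # 64-bit interface identifier
    groups = [(eui[i] << 8) | eui[i + 1] for i in range(0, 8, 2)]
    return "fe80::" + ":".join(format(g, "x") for g in groups)
```

For example, the MAC 00:1b:44:11:3a:b7 yields fe80::21b:44ff:fe11:3ab7.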

2017-05-18
Wang, Huangxin, Li, Fei, Chen, Songqing.  2016.  Towards Cost-Effective Moving Target Defense Against DDoS and Covert Channel Attacks. Proceedings of the 2016 ACM Workshop on Moving Target Defense. :15–25.

Traditionally, network and system configurations are static. Attackers have plenty of time to exploit a system's vulnerabilities, and thus they are able to choose when to launch attacks wisely to maximize the damage. An unpredictable system configuration can significantly raise the bar for attackers to conduct successful attacks. In recent years, moving target defense (MTD) has been advocated for this purpose. An MTD mechanism aims to introduce dynamics to the system by changing its configuration continuously over time, which we call adaptations. Though promising, dynamic system reconfiguration introduces overhead for the applications currently running in the system. It is critical to determine the right time to conduct adaptations and to balance the overhead incurred against the security levels guaranteed. This problem is known as the MTD timing problem. Little prior work has been done to investigate the right time for making adaptations. In this paper, we take the first step to both theoretically and experimentally study the timing problem in moving target defenses. For a broad family of attacks including DDoS attacks and cloud covert channel attacks, we model this problem as a renewal reward process and propose an optimal algorithm for deciding the right time to make adaptations with the objective of minimizing the long-term cost rate. In our experiments, both DDoS attacks and cloud covert channel attacks are studied. Simulations based on real network traffic traces are conducted and we demonstrate that our proposed algorithm outperforms known adaptation schemes.
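The renewal-reward view of the timing problem can be made concrete with a toy model. This is entirely illustrative: the linear damage model, the parameter names, and the grid search are assumptions, not the paper's optimal algorithm.

```python
def cost_rate(T, move_cost, probe_time, loss_rate):
    """Long-run cost per unit time when the system reconfigures every T units.
    Renewal-reward: expected cost per cycle divided by the cycle length. The
    attacker needs probe_time to re-learn the configuration; afterwards damage
    accrues at loss_rate (a linear toy model)."""
    damage = loss_rate * max(0.0, T - probe_time)
    return (move_cost + damage) / T

def best_period(move_cost, probe_time, loss_rate, candidates):
    """Pick the adaptation period with the smallest long-run cost rate."""
    return min(candidates,
               key=lambda T: cost_rate(T, move_cost, probe_time, loss_rate))
```

Even this toy model exhibits the trade-off the abstract describes: adapting too often pays the reconfiguration cost needlessly, while adapting too rarely lets attack damage accumulate.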

2017-09-05
Queiroz, Rodrigo, Berger, Thorsten, Czarnecki, Krzysztof.  2016.  Towards Predicting Feature Defects in Software Product Lines. Proceedings of the 7th International Workshop on Feature-Oriented Software Development. :58–62.

Defect-prediction techniques can enhance the quality assurance activities for software systems. For instance, they can be used to predict bugs in source files or functions. In the context of a software product line, such techniques could ideally be used for predicting defects in features or combinations of features, which would allow developers to focus quality assurance on the error-prone ones. In this preliminary case study, we investigate how defect prediction models can be used to identify defective features using machine-learning techniques. We adapt process metrics and evaluate and compare three classifiers using an open-source product line. Our results show that the technique can be effective. Our best scenario achieves an accuracy of 73% in predicting features as defective or clean using a Naive Bayes classifier. Based on the results, we discuss directions for future work.
