Biblio

2015-01-12
Omar, Cyrus, Kurilova, Darya, Nistor, Ligia, Chung, Benjamin, Potanin, Alex, Aldrich, Jonathan.  2014.  Safely Composable Type-Specific Languages. European Conference on Object-Oriented Programming (ECOOP), 2014.

Programming languages often include specialized syntax for common datatypes (e.g. lists) and some also build in support for specific specialized datatypes (e.g. regular expressions), but user-defined types must use general-purpose syntax. Frustration with this causes developers to use strings, rather than structured data, with alarming frequency, leading to correctness, performance, security, and usability issues. Allowing library providers to modularly extend a language with new syntax could help address these issues. Unfortunately, prior mechanisms either limit expressiveness or are not safely composable: individually unambiguous extensions can still cause ambiguities when used together. We introduce type-specific languages (TSLs): logic associated with a type that determines how the bodies of generic literals, able to contain arbitrary syntax, are parsed and elaborated, hygienically. The TSL for a type is invoked only when a literal appears where a term of that type is expected, guaranteeing noninterference. We give evidence supporting the applicability of this approach and formally specify it with a bidirectionally typed elaboration semantics for the Wyvern programming language.
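
As a rough illustration of the TSL idea (and not Wyvern's actual elaboration semantics), the Python sketch below dispatches the parsing of a literal body on the type expected at the use site; the URL and Regex types and their parsers are invented for the example.

import re

class URL:
    def __init__(self, scheme, host, path):
        self.scheme, self.host, self.path = scheme, host, path

class Regex:
    def __init__(self, pattern):
        self.pattern = re.compile(pattern)

def parse_url(body: str) -> URL:
    # Hypothetical TSL parser for URL literals.
    scheme, rest = body.split("://", 1)
    host, _, path = rest.partition("/")
    return URL(scheme, host, "/" + path)

# The "TSL" of a type: how a generic literal body is parsed when that type is expected.
TSL = {URL: parse_url, Regex: lambda body: Regex(body)}

def check_literal(body: str, expected_type):
    # Checking-mode rule: a generic literal only makes sense against an expected
    # type, so its body is handed to that type's TSL; otherwise it is rejected.
    try:
        parser = TSL[expected_type]
    except KeyError:
        raise TypeError(f"no TSL registered for {expected_type.__name__}")
    return parser(body)

u = check_literal("https://example.org/docs", URL)    # parsed by URL's TSL
r = check_literal(r"\d{4}-\d{2}-\d{2}", Regex)        # parsed by Regex's TSL
print(u.host, bool(r.pattern.match("2014-07-01")))    # example.org True

Because the parser is chosen by the expected type alone, two independently defined TSLs never compete for the same literal, which is the non-interference property the abstract refers to.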

2018-05-23
[Anonymous].  2014.  A Safety Argument Strategy for PCA Closed-Loop Systems: A Preliminary Proposal. Workshop on Medical Cyber Physical Systems.
M. Pajic, Z. Jiang, O. Sokolsky, I. Lee, R. Mangharam.  2014.  Safety-critical Medical Device Development using the UPP2SF Model Translation Tool. ACM Transactions on Embedded Computing Systems. 13(4s).
2015-05-05
Baek, J., Vu, Q., Liu, J., Huang, X., Xiang, Y..  2014.  A secure cloud computing based framework for big data information management of smart grid. IEEE Transactions on Cloud Computing. PP:1-1.

Smart grid is a technological innovation that improves the efficiency, reliability, economics, and sustainability of electricity services. It plays a crucial role in modern energy infrastructure. The main challenges of smart grids, however, are how to manage different types of front-end intelligent devices, such as power assets and smart meters, efficiently, and how to process the huge amount of data received from these devices. Cloud computing, a technology that provides computational resources on demand, is a good candidate to address these challenges since it has several good properties such as energy saving, cost saving, agility, scalability, and flexibility. In this paper, we propose a secure cloud computing based framework for big data information management in smart grids, which we call “Smart-Frame.” The main idea of our framework is to build a hierarchical structure of cloud computing centers to provide different types of computing services for information management and big data analysis. In addition to this structural framework, we present a security solution based on identity-based encryption, signature, and proxy re-encryption to address critical security issues of the proposed framework.

2015-05-06
Odelu, Vanga, Das, Ashok Kumar, Goswami, Adrijit.  2014.  A Secure Effective Key Management Scheme for Dynamic Access Control in a Large Leaf Class Hierarchy. Inf. Sci. 269:270–285.

Lo et al. (2011) proposed an efficient key assignment scheme for access control in a large leaf class hierarchy, where alterations to leaf classes are more frequent than to non-leaf classes. Their scheme is based on a public-key cryptosystem and a hash function, in which operations such as modular exponentiation are far more costly than symmetric-key encryptions and decryptions and hash computations. Their scheme performs better than previously proposed schemes. However, in this paper, we show that Lo et al.’s scheme fails to preserve the forward security property: a security class can still derive the secret keys of its successor classes even after it has been deleted from the hierarchy. We propose a new key management scheme for dynamic access control in a large leaf class hierarchy, which makes use of a symmetric-key cryptosystem and a one-way hash function. We show that our scheme requires significantly lower storage and computational overheads than Lo et al.’s scheme and other related schemes. Through informal and formal security analysis, we further show that our scheme is secure against possible attacks, including attacks on forward security. In addition, our scheme handles dynamic access control more efficiently than Lo et al.’s scheme and other related schemes. Thus, higher security along with low storage and computational costs makes our scheme more suitable for practical applications than other schemes.
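
The flavour of key management built from a one-way hash can be sketched in a few lines of Python; this is the textbook derive-down construction (a class derives its descendants' keys, never its ancestors'), not Odelu et al.'s actual scheme, and the class identifiers are invented.

import hashlib

def derive_key(parent_key: bytes, child_id: str) -> bytes:
    # One-way derivation: a class can derive its descendants' keys, but the
    # hash cannot be inverted to walk back up toward an ancestor's key.
    return hashlib.sha256(parent_key + child_id.encode()).digest()

root_key = b"\x00" * 32                    # secret of the root security class (placeholder)
k_dept = derive_key(root_key, "dept-A")    # key of an intermediate class
k_leaf = derive_key(k_dept, "leaf-A3")     # key of a leaf class

# The root recomputes any leaf key on demand; a leaf learns nothing above it.
assert derive_key(derive_key(root_key, "dept-A"), "leaf-A3") == k_leaf
print(k_leaf.hex()[:16])

Forward security in the abstract's sense then amounts to re-keying the affected subtree when a class is removed, so that the removed class's old key no longer derives any current key.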

2014-09-17
Durbeck, Lisa J. K., Athanas, Peter M., Macias, Nicholas J..  2014.  Secure-by-construction Composable Componentry for Network Processing. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :27:1–27:2.

Techniques commonly used for analyzing streaming video, audio, SIGINT, and network transmissions at less-than-streaming rates, such as data decimation and ad-hoc sampling, can miss underlying structure, trends, and specific events held in the data [3]. This work presents a secure-by-construction approach [7] for upper-end data streams with rates from 10 to 100 Gigabits per second. The secure-by-construction approach strives to produce system security through the composition of individually secure hardware and software components. The proposed network processor can be used not only at data centers but also within networks and onboard embedded systems at the network periphery for a wide range of tasks, including preprocessing and data cleansing, signal encoding and compression, complex event processing, flow analysis, and other tasks related to collecting and analyzing streaming data. Our design employs a four-layer scalable hardware/software stack that can lead to inherently secure, easily constructed, specialized high-speed stream processing. This work addresses the following contemporary problems: (1) There is a lack of hardware/software systems providing stream processing and data stream analysis operating at the target data rates; for high-rate streams the implementation options are limited: all-software solutions cannot attain the target rates [1], GPUs and GPGPUs are also infeasible because they were not designed for I/O at 10-100 Gbps and have asymmetric resources for input and output and thus cannot be pipelined [4, 2], custom chip-based solutions are costly and inflexible to changes, and FPGA-based solutions are historically hard to program [6]. (2) There is a distinct advantage to utilizing high-bandwidth or line-speed analytics to reduce time-to-discovery of information, particularly analytics that can be pipelined together to conduct a series of processing tasks or data tests without impeding data rates. (3) There are potentially significant network infrastructure cost savings possible from compact and power-efficient analytic support deployed at the network periphery, on the data source or one hop away. (4) There is a need for agile deployment in response to changing objectives. (5) There is an opportunity to constrain designs to use only secure components to achieve their specific objectives. We address these five problems in our stream processor design to provide secure, easily specified, low-latency, low-power 10-100 Gbps in-line processing on top of a commodity high-end FPGA-based hardware accelerator network processor. With a standard interface, a user can snap together various filter blocks, like Legos™, to form a custom processing chain. The overall design is a four-layer solution in which the structurally lowest layer provides the vast computational power to process line-speed streaming packets, and the uppermost layer provides the agility to easily shape the system to the properties of a given application. Current work has focused on the design of the two lowest layers, highlighted in the design detail in Figure 1. The two layers shown in Figure 1 are the embeddable portion of the design; these layers, operating at up to 100 Gbps, capture both the low- and high-frequency components of a signal or stream, analyze them directly, and pass the lower-frequency components, or residues, to the all-software upper layers, Layers 3 and 4; they also optionally supply the data-reduced output up to Layers 3 and 4 for additional processing.

Layer 1 is analogous to a systolic array of processors on which simple low-level functions or actions are chained in series [5]. Examples of tasks accomplished at the lowest layer are: (a) check whether Field 3 of the packet is greater than 5, (b) count the number of X.75 packets, or (c) select individual fields from data packets. Layer 1 provides the lowest-latency, highest-throughput processing, analysis, and data reduction, formulating raw facts from the stream; Layer 2, also accelerated in hardware and running at full network line rate, combines selected facts from Layer 1, forming a first level of information kernels. Layer 2 comprises a number of combiners intended to integrate facts extracted from Layer 1 for presentation to Layer 3. Still resident in FPGA hardware and hardware-accelerated, a Layer 2 combiner consists of state logic and soft-core microprocessors. Layer 3 runs in software on a host machine and is essentially the bridge to the embeddable hardware; this layer exposes an API for the consumption of information kernels to create events and manage state. The generated events and state are also made available to an additional software Layer 4, which supplies an interface to traditional software-based systems. As shown in the design detail, network data transitions systolically through Layer 1, through a series of lightweight processing filters that extract and/or modify packet contents. All filters have a similar interface: streams enter from the left, exit to the right, and relevant facts are passed upward to Layer 2. The output at the end of the Layer 1 chain shown in Figure 1 can be (a) left unconnected (for purely monitoring activities), (b) redirected into the network (for bent-pipe operations), or (c) passed to another identical processor for extended processing on a given stream (scalability).
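
A software analogue of the Layer 1/Layer 2 interface described above (the real design is FPGA-resident and runs at line rate) might look like the following Python sketch; the packet fields, predicates, and fact labels are invented.

from collections import Counter
from typing import Callable, Iterable

Packet = dict   # e.g. {"field3": 7, "proto": "X.75"}

def make_filter(predicate: Callable[[Packet], bool], label: str):
    # Layer-1-style filter: forwards every packet unchanged and reports a
    # labelled fact upward whenever the predicate holds.
    def run(stream: Iterable[Packet], facts: list):
        for pkt in stream:
            if predicate(pkt):
                facts.append((label, pkt))
            yield pkt
    return run

def chain(filters, stream, facts):
    # Snap filters together in series; the stream enters one end and exits the other.
    for f in filters:
        stream = f(stream, facts)
    return stream

facts = []
layer1 = [
    make_filter(lambda p: p.get("field3", 0) > 5, "field3_gt_5"),
    make_filter(lambda p: p.get("proto") == "X.75", "x75_packet"),
]
packets = [{"field3": 7, "proto": "X.75"}, {"field3": 2, "proto": "IP"}]
out = list(chain(layer1, iter(packets), facts))      # the stream itself passes through untouched

# A Layer-2-style combiner integrates the facts (here it merely counts them by label).
print(Counter(label for label, _ in facts))          # Counter({'field3_gt_5': 1, 'x75_packet': 1})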

Yu, Xianqing, Ning, Peng, Vouk, Mladen A..  2014.  Securing Hadoop in Cloud. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :26:1–26:2.

Hadoop is a map-reduce implementation that rapidly processes data in parallel. The cloud provides reliability, flexibility, scalability, elasticity, and cost savings to customers. Moving Hadoop into the cloud can be beneficial to Hadoop users. However, Hadoop has two vulnerabilities that can dramatically impact its security in a cloud: its overloaded authentication key, and the lack of fine-grained access control at the data access level. We propose and develop a security enhancement for cloud-based Hadoop.

2015-04-30
Wei, Lifei, Zhu, Haojin, Cao, Zhenfu, Dong, Xiaolei, Jia, Weiwei, Chen, Yunlu, Vasilakos, Athanasios V..  2014.  Security and Privacy for Storage and Computation in Cloud Computing. Inf. Sci. 258:371–386.

Cloud computing is emerging as a new computing paradigm that aims to provide reliable, customized, and quality-of-service-guaranteed computation environments for cloud users. Applications and databases are moved to large centralized data centers, called the cloud. Due to resource virtualization, global replication and migration, and the physical absence of data and machines in the cloud, the stored data and the computation results may not be well managed and fully trusted by cloud users. Most previous work on cloud security focuses on storage security rather than also taking computation security into consideration. In this paper, we propose a privacy-cheating discouragement and secure computation auditing protocol, SecCloud, which is the first protocol to bridge secure storage and secure computation auditing in the cloud and to achieve privacy-cheating discouragement through designated-verifier signatures, batch verification, and probabilistic sampling techniques. A detailed analysis is given to obtain an optimal sampling size that minimizes the cost. Another major contribution of this paper is a practical security-aware cloud computing experimental environment, SecHDFS, built as a test bed to implement SecCloud. Experimental results demonstrate the effectiveness and efficiency of the proposed SecCloud.
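
The sampling-size question that SecCloud optimizes can be made concrete with the standard back-of-the-envelope calculation below; it ignores the paper's cost model and simply asks how many results an auditor must check to catch a given cheating fraction with high probability.

import math

def sample_size(f: float, P: float) -> int:
    # Smallest n with 1 - (1 - f)^n >= P: the chance of sampling at least one
    # incorrect result reaches P when a fraction f of all results is incorrect.
    return math.ceil(math.log(1 - P) / math.log(1 - f))

for f in (0.01, 0.05, 0.10):
    print(f"cheating fraction {f:.0%}: sample {sample_size(f, 0.99)} results "
          f"for 99% detection probability")
# 1% -> 459 samples, 5% -> 90, 10% -> 44, independent of the total number of results.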

2015-05-01
Y. Seifi, S. Suriadi, E. Foo, C. Boyd.  2014.  Security properties analysis in a TPM-based protocol. International Journal of Security and Networks. 9(2):85–103.

Security protocols are designed to provide security properties (goals). They achieve their goals using cryptographic primitives such as key agreement or hash functions. Security analysis tools are used to verify whether a security protocol achieves its goals. The properties analysed by special-purpose tools are predefined ones such as secrecy (confidentiality), authentication, or non-repudiation. There are also security goals defined by the user in systems with security requirements; analysis of such properties is possible with general-purpose analysis tools such as coloured Petri nets (CPN). This research analyses two security properties defined in a protocol based on the trusted platform module (TPM). The analysed protocol, proposed by Delaune, uses TPM capabilities and secrets in order to open only one of two submitted secrets to a recipient.

2020-01-21
Azimi, Mahdi, Sami, Ashkan, Khalili, Abdullah.  2014.  A Security Test-Bed for Industrial Control Systems. Proceedings of the 1st International Workshop on Modern Software Engineering Methods for Industrial Automation. :26–31.

Industrial Control Systems (ICS) such as Supervisory Control And Data Acquisition (SCADA), Distributed Control Systems (DCS), and Distributed Automation Systems (DAS) control and monitor critical infrastructures. In recent years, the proliferation of cyber-attacks on ICS has revealed that a large number of security vulnerabilities exist in such systems. Many security solutions have been proposed to remove these vulnerabilities and improve the security of ICS. However, to the best of our knowledge, none of them has presented or developed a security test-bed, which is vital for evaluating the security of ICS tools and products. In this paper, a test-bed is proposed for evaluating the security of industrial applications by providing different metrics for static testing, dynamic testing, and network testing in industrial settings. Using these metrics and the results of the three tests, industrial applications can be compared with each other from a security point of view. Experimental results on several real-world applications indicate that the proposed test-bed can be successfully employed to evaluate and compare the security level of industrial applications.

2015-05-06
Macedonio, Damiano, Merro, Massimo.  2014.  A Semantic Analysis of Key Management Protocols for Wireless Sensor Networks. Sci. Comput. Program. 81:53–78.

Gorrieri and Martinelli’s timed Generalized Non-Deducibility on Compositions (tGNDC) schema is a well-known general framework for the formal verification of security protocols in a concurrent scenario. We generalise the tGNDC schema to verify wireless network security protocols. Our generalisation relies on a simple timed broadcasting process calculus whose operational semantics is given in terms of a labelled transition system, which is used to derive a standard simulation theory. We apply our tGNDC framework to perform a security analysis of three well-known key management protocols for wireless sensor networks: μTESLA, LEAP+ and LiSP.

2018-05-27
Weicong Ding, Prakash Ishwar, Venkatesh Saligrama, W. Clem Karl.  2014.  Sensing-aware kernel SVM. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2014, Florence, Italy, May 4-9, 2014. :2947–2951.
2014-10-24
Yu, Tingting, Srisa-an, Witawas, Rothermel, Gregg.  2014.  SimRT: An Automated Framework to Support Regression Testing for Data Races. Proceedings of the 36th International Conference on Software Engineering. :48–59.

Concurrent programs are prone to various classes of difficult-to-detect faults, of which data races are particularly prevalent. Prior work has attempted to increase the cost-effectiveness of approaches for testing for data races by employing race detection techniques, but to date, no work has considered cost-effective approaches for re-testing for races as programs evolve. In this paper we present SimRT, an automated regression testing framework for use in detecting races introduced by code modifications. SimRT employs a regression test selection technique, focused on sets of program elements related to race detection, to reduce the number of test cases that must be run on a changed program to detect races that occur due to code modifications, and it employs a test case prioritization technique to improve the rate at which such races are detected. Our empirical study of SimRT reveals that it is more efficient and effective for revealing races than other approaches, and that its constituent test selection and prioritization components each contribute to its performance.
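
The two steps SimRT combines, race-aware regression test selection followed by prioritization, can be illustrated with the toy Python sketch below; the coverage map, changed elements, and shared-access sets are invented here, whereas the tool derives them by program analysis.

# (1) Selection: re-run only tests that cover changed elements relevant to races
#     (e.g., shared-memory accesses); (2) prioritization: run first the tests
#     covering the most such elements. All names below are hypothetical.
coverage = {                       # test -> program elements it exercises
    "t1": {"Account.deposit", "Account.balance"},
    "t2": {"Logger.write"},
    "t3": {"Account.balance", "Account.withdraw", "Lock.release"},
}
changed_elements = {"Account.balance", "Lock.release"}                  # touched by the commit
shared_access_elements = {"Account.balance", "Lock.release", "Logger.write"}

relevant = changed_elements & shared_access_elements
selected = {t for t, cov in coverage.items() if cov & relevant}         # t2 is skipped
ordered = sorted(selected, key=lambda t: len(coverage[t] & relevant), reverse=True)
print(ordered)                     # ['t3', 't1']: t3 covers two relevant elements, t1 one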

2015-11-23
Craig Buchanan, University of Illinois at Urbana-Champaign.  2014.  Simulation Debugging and Visualization in the Möbius Modeling Framework. Department of Electrical and Computer Engineering. M.S.

Large and complex models can be difficult to analyze using static analysis results from current tools, including the Möbius modeling framework, which provides a powerful, formalism-independent, discrete-event simulator that outputs static results such as execution traces. The Möbius Simulation Debugger and Visualization (MSDV) feature adds user interaction to running simulations to provide a more transparent view into the dynamics of the models under consideration. This thesis discusses the details of the design and implementation of this feature in the Möbius modeling environment. Also, a case study is presented to demonstrate the new capabilities provided by the feature.

2015-05-01
Chen, L.M., Hsiao, S.-W., Chen, M.C., Liao, W..  2014.  Slow-Paced Persistent Network Attacks Analysis and Detection Using Spectrum Analysis. IEEE Systems Journal. PP:1-12.

A slow-paced persistent attack, such as a slow worm or bot, can bewilder detection systems by slowing down its activity. Detecting such attacks with traditional anomaly detection techniques may yield high false alarm rates. In this paper, we frame our problem as detecting slow-paced persistent attacks from a time series obtained from a network trace. We focus on time series spectrum analysis to identify peculiar spectral patterns that may represent the occurrence of a persistent activity in the time domain. We propose a method to adaptively detect slow-paced persistent attacks in a time series and evaluate the proposed method by conducting experiments using both synthesized traffic and real-world traffic. The results show that the proposed method is capable of detecting slow-paced persistent attacks even in a noisy environment mixed with legitimate traffic.
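
The core intuition, that a slow periodic activity invisible in the time domain surfaces as a peak in the traffic spectrum, can be demonstrated with a few lines of NumPy; the synthetic traffic below is not the paper's data, and the paper's detector is adaptive rather than a single FFT.

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(3600.0)                                   # one hour of per-second byte counts
background = rng.poisson(lam=200, size=t.size).astype(float)
probe = 30 * (np.sin(2 * np.pi * t / 120) > 0.99)       # short burst roughly every 120 s
series = background + probe                             # the probe is hard to see in the time domain

spectrum = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(series.size, d=1.0)

peak = freqs[1:][np.argmax(spectrum[1:])]               # skip the DC bin
print(f"dominant period ~ {1 / peak:.0f} s")            # ~120 s, the probe's pace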

2015-01-11
P. Gao, H. Miao, J.S. Baras.  2014.  Social Network Ad Allocation via Hyperbolic Embedding. Proceedings of the 53rd IEEE Conference on Decision and Control.
2015-05-01
Keivanloo, Iman, Rilling, Juergen.  2014.  Software Trustworthiness 2.0-A Semantic Web Enabled Global Source Code Analysis Approach. J. Syst. Softw. 89:33–50.

There has been an ongoing trend toward collaborative software development using open and shared source code published in large software repositories on the Internet. While traditional source code analysis techniques perform well in single-project contexts, new types of source code analysis techniques are emerging that focus on global source code analysis challenges. In this article, we discuss how the Semantic Web can become an enabling technology providing a standardized, formal, and semantically rich representation for modeling and analyzing large global source code corpora. Furthermore, inference services and other services provided by Semantic Web technologies can be used to support a variety of core source code analysis techniques, such as semantic code search, call graph construction, and clone detection. In this paper, we introduce SeCold, the first publicly available online linked-data source code dataset for software engineering researchers and practitioners. Along with its dataset, SeCold also provides some Semantic Web enabled core services to support the analysis of Internet-scale source code repositories. We illustrate through several examples how this linked data, combined with Semantic Web technologies, can be harvested for different source code analysis tasks to support software trustworthiness. For the case studies, we combine both our linked dataset and Semantic Web enabled source code analysis services with knowledge extracted from StackOverflow, a crowdsourcing website. Through these case studies, we demonstrate that our approach is not only capable of crawling, processing, and scaling to traditional types of structured data (e.g., source code), but also supports emerging non-structured data sources, such as crowdsourced information (e.g., StackOverflow.com), to support a global source code analysis context.
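
Querying a linked-data code corpus looks roughly like the rdflib/SPARQL sketch below (requires the rdflib package); the ex: vocabulary and the tiny in-memory graph are invented for illustration and do not reflect SeCold's actual ontology.

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/code#")               # hypothetical vocabulary
g = Graph()
g.add((EX.ParserA_parse, EX.calls, EX.Lexer_next))
g.add((EX.ParserB_parse, EX.calls, EX.Lexer_next))
g.add((EX.ParserA_parse, EX.definedIn, EX.ProjectAlpha))
g.add((EX.ParserB_parse, EX.definedIn, EX.ProjectBeta))

# A "global" call-graph question: which methods, in which projects, call Lexer_next?
q = """
PREFIX ex: <http://example.org/code#>
SELECT ?caller ?project WHERE {
  ?caller ex:calls ex:Lexer_next ;
          ex:definedIn ?project .
}
"""
for caller, project in g.query(q):
    print(caller.split("#")[-1], "defined in", project.split("#")[-1])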

2015-11-23
[Anonymous].  2014.  Solving Complex Path Conditions through Heuristic Search on Induced Polytopes. 22nd ACM SIGSOFT Symposium on Foundations of Software Engineering.

Test input generators using symbolic and concolic execution must solve path conditions to systematically explore a program and generate high coverage tests. However, path conditions may contain complicated arithmetic constraints that are infeasible to solve: a solver may be unavailable, solving may be computationally intractable, or the constraints may be undecidable. Existing test generators either simplify such constraints with concrete values to make them decidable, or rely on strong but incomplete constraint solvers. Unfortunately, simplification yields coarse approximations whose solutions rarely satisfy the original constraint. Moreover, constraint solvers cannot handle calls to native library methods. We show how a simple combination of linear constraint solving and heuristic search can overcome these limitations. We call this technique Concolic Walk. On a corpus of 11 programs, an instance of our Concolic Walk algorithm using tabu search generates tests with two- to three-times higher coverage than simplification-based tools while being up to five-times as efficient. Furthermore, our algorithm improves the coverage of two state-of-the-art test generators by 21% and 32%. Other concolic and symbolic testing tools could integrate our algorithm to solve complex path conditions without having to sacrifice any of their own capabilities, leading to higher overall coverage.
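
The two-phase idea, satisfy the linear part of the path condition first and then walk the induced polytope guided by a fitness function for the non-linear part, can be sketched in Python as below; the constraints, neighbourhood, and tabu policy are simplified stand-ins for the paper's algorithm.

import itertools, random

linear = [lambda v: v["x"] + v["y"] <= 12,               # the induced polytope:
          lambda v: v["x"] >= 0, lambda v: v["y"] >= 0]  # x + y <= 12, x >= 0, y >= 0

def nonlinear_violation(v):
    return abs(v["x"] * v["y"] - 24)                     # non-linear goal: x * y == 24

def in_polytope(v):
    return all(c(v) for c in linear)

def concolic_walk(start, steps=1000, seed=0):
    rng = random.Random(seed)
    point, tabu = dict(start), set()
    for _ in range(steps):
        if nonlinear_violation(point) == 0:
            return point                                  # whole path condition satisfied
        neighbours = [dict(point, **{var: point[var] + d})
                      for var, d in itertools.product(("x", "y"), (-1, 1))]
        neighbours = [n for n in neighbours
                      if in_polytope(n) and tuple(sorted(n.items())) not in tabu]
        if not neighbours:                                # dead end: restart inside the polytope
            point = {"x": rng.randint(0, 6), "y": rng.randint(0, 6)}
            continue
        point = min(neighbours, key=nonlinear_violation)  # greedy, tabu-avoiding step
        tabu.add(tuple(sorted(point.items())))
    return None

print(concolic_walk({"x": 0, "y": 0}))                    # {'x': 6, 'y': 4} satisfies both parts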

2018-05-27
Delaram Motamedvaziri, Mohammad Hossein Rohban, Venkatesh Saligrama.  2014.  Sparse signal recovery under Poisson statistics for online marketing applications. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2014, Florence, Italy, May 4-9, 2014. :4953–4957.