Biblio

Rohit Kumar, David A. Castañón, Erhan Baki Ermis, Venkatesh Saligrama.  2010.  A new algorithm for outlier rejection in particle filters. 13th Conference on Information Fusion, FUSION 2010, Edinburgh, UK, July 26-29, 2010. :1–7.
Manqi Zhao, Venkatesh Saligrama.  2010.  Noisy filtered sparse processes: Reconstruction and compression. Proceedings of the 49th IEEE Conference on Decision and Control, CDC 2010, December 15-17, 2010, Atlanta, Georgia, USA. :2930–2935.
Javier Gil-Quijano, Nicolas Sabouret.  2010.  Prediction of Humans' Activity for Learning the Behaviors of Electrical Appliances in an Intelligent Ambient Environment. Proceedings of the 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology - Volume 02. :283–286.

In this paper we propose a mechanism for predicting domestic human activity in a smart-home context. We use those predictions to adapt the behavior of home appliances whose impact on the environment is delayed (for example, heating). The behaviors of the appliances are built by a reinforcement learning mechanism. We compare the behavior built by the learning approach with both a merely reactive behavior and a state-remanent behavior.
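
The paper identifies its learning mechanism only as reinforcement learning; a minimal tabular Q-learning sketch (toy environment, all state and action names hypothetical) illustrates how an appliance controller of this kind could be trained:

```python
import random

# Hypothetical toy model: a heater chooses on/off given a coarse
# temperature state; the reward favours comfort.
STATES = ["cold", "comfortable", "hot"]
ACTIONS = ["on", "off"]

def step(state, action):
    """Toy environment dynamics (illustrative only)."""
    if action == "on":
        next_state = {"cold": "comfortable", "comfortable": "hot", "hot": "hot"}[state]
        reward = 1.0 if next_state == "comfortable" else -1.0
    else:
        next_state = {"hot": "comfortable", "comfortable": "cold", "cold": "cold"}[state]
        reward = 1.0 if next_state == "comfortable" else -0.5
    return next_state, reward

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        for _ in range(10):
            a = rng.choice(ACTIONS) if rng.random() < eps else max(ACTIONS, key=lambda x: q[(s, x)])
            s2, r = step(s, a)
            # Standard Q-learning update rule
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
    return q

q = train()
# The learned policy should heat when cold and stop heating when hot.
```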

Peter Jones, Venkatesh Saligrama, Sanjoy K. Mitter.  2010.  Probabilistic Belief Revision with Structural Constraints. Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010. Proceedings of a meeting held 6-9 December 2010, Vancouver, British Columbia, Canada. :1036–1044.
Peter Jones, Sanjoy K. Mitter, Venkatesh Saligrama.  2010.  Revision of marginal probability assessments. 13th Conference on Information Fusion, FUSION 2010, Edinburgh, UK, July 26-29, 2010. :1–8.
Santiago Escobar, Catherine Meadows, Jose Meseguer, Sonia Santiago.  2010.  Sequential Protocol Composition in Maude-NPA. 15th European Conference on Research in Computer Security (ESORICS 2010).

Protocols do not work alone, but together, one protocol relying on another to provide needed services. Many of the problems in cryptographic protocols arise when such composition is done incorrectly or is not well understood. In this paper we discuss an extension to the Maude-NPA syntax and operational semantics to support dynamic sequential composition of protocols, so that protocols can be specified separately and composed when desired. This allows one to reason about many different compositions with minimal changes to the specification. Moreover, we show that, by a simple protocol transformation, we are able to analyze and verify this dynamic composition in the current Maude-NPA tool. We prove soundness and completeness of the protocol transformation with respect to the extended operational semantics, and illustrate our results on some examples.

Shuchin Aeron, Sandip Bose, Henri-Pierre Valero, Venkatesh Saligrama.  2010.  Sparsity penalized reconstruction framework for broadband dispersion extraction. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2010, 14-19 March 2010, Sheraton Dallas Hotel, Dallas, Texas, USA. :2638–2641.
Venkatesh Saligrama, Janusz Konrad, Pierre-Marc Jodoin.  2010.  Video Anomaly Identification. IEEE Signal Processing Magazine. 27:18–33.
M. Z. I. Sarkar, T. Ratnarajah.  2010.  Information-theoretic security in wireless multicasting. International Conference on Electrical and Computer Engineering (ICECE 2010). :53–56.
In this paper, a wireless multicast scenario is considered in which the transmitter sends a common message to a group of client receivers through a quasi-static Rayleigh fading channel in the presence of an eavesdropper. The communication between the transmitter and each client receiver is said to be secure if the eavesdropper is unable to decode any information. On the basis of an information-theoretic formulation of the confidential communication between the transmitter and a group of client receivers, we define the expected secrecy sum-mutual information in terms of the secure outage probability and provide a complete characterization of the maximum transmission rate at which the eavesdropper is unable to decode any information. Moreover, we find the probability of non-zero secrecy mutual information and present an analytical expression for the ergodic secrecy multicast mutual information of the proposed model.
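
The paper's characterization is analytical; a Monte-Carlo sketch (SNR values hypothetical, single receiver and single eavesdropper for simplicity) illustrates one of the quantities it studies, the probability of non-zero secrecy mutual information over quasi-static Rayleigh fading:

```python
import math
import random

def prob_nonzero_secrecy(snr_main_db=10.0, snr_eve_db=5.0, trials=100_000, seed=1):
    """Estimate P(C_s > 0), where the secrecy rate is
    C_s = log2(1 + g_m * SNR_m) - log2(1 + g_e * SNR_e)
    and g_m, g_e are unit-mean exponential fading power gains
    (the squared envelope of Rayleigh fading)."""
    rng = random.Random(seed)
    snr_m = 10 ** (snr_main_db / 10)
    snr_e = 10 ** (snr_eve_db / 10)
    hits = 0
    for _ in range(trials):
        g_m = rng.expovariate(1.0)  # |h_main|^2
        g_e = rng.expovariate(1.0)  # |h_eve|^2
        cs = math.log2(1 + g_m * snr_m) - math.log2(1 + g_e * snr_e)
        if cs > 0:
            hits += 1
    return hits / trials
```

For this single-eavesdropper case the estimate should approach the known closed form SNR_m / (SNR_m + SNR_e), about 0.76 for the default 10 dB vs. 5 dB parameters.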
T. Zhang, P. Zhao.  2010.  Insider Threat Identification System Model Based on Rough Set Dimensionality Reduction. 2010 Second World Congress on Software Engineering. 2:111–114.
Insider threats cause great damage to the security of information systems, and traditional security methods struggle to counter them, so insider attack identification plays an important role in insider threat detection. Monitoring users' abnormal behavior is an effective way to detect impersonation, and this method is applied here to insider threat identification. A database of user behavior attributes is built on a weight-adjustable feedback tree-augmented Bayes network; because the data are massive, rough-set-based dimensionality reduction is used to establish a process model of each user's behavior attributes. A minimum-risk Bayes decision can then effectively identify the real identity of a user whose behavior departs from the characteristic model.
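
The final step of the abstract, the minimum-risk Bayes decision, can be sketched for the two-class case (loss values hypothetical; the posterior would come from the paper's Bayes-network model):

```python
def min_risk_decision(p_insider, loss_false_alarm=1.0, loss_miss=5.0):
    """Minimum-risk Bayes decision between 'normal' and 'insider'.

    p_insider: posterior probability that the observed behaviour is an
    impersonation. Declaring 'normal' risks a miss (cost loss_miss);
    declaring 'insider' risks a false alarm (cost loss_false_alarm).
    Choose the label with the lower expected risk."""
    risk_say_normal = loss_miss * p_insider
    risk_say_insider = loss_false_alarm * (1.0 - p_insider)
    return "insider" if risk_say_normal > risk_say_insider else "normal"
```

With these costs, the decision flips to "insider" once the posterior exceeds 1/6, showing how an asymmetric loss lowers the alarm threshold relative to a plain maximum-posterior rule.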
Wang Xiao, Mi Hong, Wang Wei.  2010.  Inner edge detection of PET bottle opening based on the Balloon Snake. 2010 2nd International Conference on Advanced Computer Control. 4:56–59.

Edge detection of the bottle opening is a primary stage of a machine-vision-based bottle opening detection system. This paper uses the Balloon Snake to extract the opening from PET (polyethylene terephthalate) bottle images sampled on rotating bottle-blowing production lines. It first uses a grayscale weighted average to compute the centroid as the Snake's initial position, and then extracts the opening based on energy minimization. Experiments show that, compared with conventional edge detection and center location methods, the Balloon Snake is robust and easily steps over weak noise points. The edge it extracts is more complete and continuous, which helps the system judge the opening correctly.
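
The initialization step, a grayscale-weighted average, is simple enough to sketch directly (plain lists of pixel intensities stand in for an image; the Snake evolution itself is not shown):

```python
def weighted_centroid(image):
    """Intensity-weighted centroid of a grayscale image, the kind of
    grayscale weighted average the paper uses to seed the Balloon Snake.
    `image` is a list of rows of pixel intensities; returns (cx, cy)."""
    total = sum(v for row in image for v in row)
    if total == 0:
        raise ValueError("blank image has no centroid")
    cy = sum(y * v for y, row in enumerate(image) for v in row) / total
    cx = sum(x * v for row in image for x, v in enumerate(row)) / total
    return cx, cy
```

Seeding the Snake at this centroid lets the balloon force inflate the contour outward until it locks onto the opening's inner edge.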

Schwartz, E.J., Avgerinos, T., Brumley, D..  2010.  All You Ever Wanted to Know about Dynamic Taint Analysis and Forward Symbolic Execution (but Might Have Been Afraid to Ask). Security and Privacy (SP), 2010 IEEE Symposium on. :317-331.

Dynamic taint analysis and forward symbolic execution are quickly becoming staple techniques in security analyses. Example applications of dynamic taint analysis and forward symbolic execution include malware analysis, input filter generation, test case generation, and vulnerability discovery. Despite the widespread usage of these two techniques, there has been little effort to formally define the algorithms and summarize the critical issues that arise when these techniques are used in typical security contexts. The contributions of this paper are two-fold. First, we precisely describe the algorithms for dynamic taint analysis and forward symbolic execution as extensions to the run-time semantics of a general language. Second, we highlight important implementation choices, common pitfalls, and considerations when using these techniques in a security context.
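
The core taint-propagation rule the paper formalizes (the result of an operation is tainted iff any operand is tainted) can be sketched with a hypothetical mini-interpreter; the class and function names below are illustrative, not the paper's notation:

```python
class Tainted:
    """A value paired with a taint bit."""
    def __init__(self, value, taint=False):
        self.value = value
        self.taint = taint

def t_const(v):
    """Program constants start out untainted."""
    return Tainted(v, taint=False)

def t_input(v):
    """Values from untrusted input sources start out tainted."""
    return Tainted(v, taint=True)

def t_add(a, b):
    """Binary operation: the result is tainted iff any operand is."""
    return Tainted(a.value + b.value, a.taint or b.taint)

# Example: untrusted input flows through arithmetic toward a sink.
x = t_input(40)
y = t_add(x, t_const(2))
# y.value == 42 and y.taint is True: the sink sees a tainted result.
```

Real systems apply this rule at the instruction level and must also decide policy questions the paper highlights, such as whether taint propagates through memory addressing or control flow.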

Parno, B., McCune, J.M., Perrig, A.  2010.  Bootstrapping Trust in Commodity Computers. Security and Privacy (SP), 2010 IEEE Symposium on. :414-429.

Trusting a computer for a security-sensitive task (such as checking email or banking online) requires the user to know something about the computer's state. We examine research on securely capturing a computer's state, and consider the utility of this information both for improving security on the local computer (e.g., to convince the user that her computer is not infected with malware) and for communicating a remote computer's state (e.g., to enable the user to check that a web server will adequately protect her data). Although the recent "Trusted Computing" initiative has drawn both positive and negative attention to this area, we consider the older and broader topic of bootstrapping trust in a computer. We cover issues ranging from the wide collection of secure hardware that can serve as a foundation for trust, to the usability issues that arise when trying to convey computer state information to humans. This approach unifies disparate research efforts and highlights opportunities for additional work that can guide real-world improvements in computer security.

Bursztein, E., Bethard, S., Fabry, C., Mitchell, J.C., Jurafsky, D..  2010.  How Good Are Humans at Solving CAPTCHAs? A Large Scale Evaluation. Security and Privacy (SP), 2010 IEEE Symposium on. :399-413.

Captchas are designed to be easy for humans but hard for machines. However, most recent research has focused only on making them hard for machines. In this paper, we present what is, to the best of our knowledge, the first large-scale evaluation of captchas from the human perspective, with the goal of assessing how much friction captchas present to the average user. For the purpose of this study we asked workers from Amazon's Mechanical Turk and an underground captcha-breaking service to solve more than 318,000 captchas issued from the 21 most popular captcha schemes (13 image schemes and 8 audio schemes). Analysis of the resulting data reveals that captchas are often difficult for humans, with audio captchas being particularly problematic. We also find some demographic trends indicating, for example, that non-native speakers of English are slower in general and less accurate on English-centric captcha schemes. Evidence from a week's worth of eBay captchas (14,000,000 samples) suggests that the solving accuracies found in our study are close to real-world values, and that improving audio captchas should become a priority, as nearly 1% of all captchas are delivered as audio rather than images. Finally, our study also reveals that it is more effective for an attacker to use Mechanical Turk to solve captchas than an underground service.

Sommer, R., Paxson, V..  2010.  Outside the Closed World: On Using Machine Learning for Network Intrusion Detection. Security and Privacy (SP), 2010 IEEE Symposium on. :305-316.

In network intrusion detection research, one popular strategy for finding attacks is monitoring a network's activity for anomalies: deviations from profiles of normality previously learned from benign traffic, typically identified using tools borrowed from the machine learning community. However, despite extensive academic research one finds a striking gap in terms of actual deployments of such systems: compared with other intrusion detection approaches, machine learning is rarely employed in operational "real world" settings. We examine the differences between the network intrusion detection problem and other areas where machine learning regularly finds much more success. Our main claim is that the task of finding attacks is fundamentally different from these other applications, making it significantly harder for the intrusion detection community to employ machine learning effectively. We support this claim by identifying challenges particular to network intrusion detection, and provide a set of guidelines meant to strengthen future research on anomaly detection.

Bau, J., Bursztein, E., Gupta, D., Mitchell, J..  2010.  State of the Art: Automated Black-Box Web Application Vulnerability Testing. Security and Privacy (SP), 2010 IEEE Symposium on. :332-345.

Black-box web application vulnerability scanners are automated tools that probe web applications for security vulnerabilities. In order to assess the current state of the art, we obtained access to eight leading tools and carried out a study of: (i) the class of vulnerabilities tested by these scanners, (ii) their effectiveness against target vulnerabilities, and (iii) the relevance of the target vulnerabilities to vulnerabilities found in the wild. To conduct our study we used a custom web application vulnerable to known and projected vulnerabilities, and previous versions of widely used web applications containing known vulnerabilities. Our results show the promise and effectiveness of automated tools, as a group, and also some limitations. In particular, "stored" forms of Cross Site Scripting (XSS) and SQL Injection (SQLI) vulnerabilities are not currently found by many tools. Because our goal is to assess the potential of future research, not to evaluate specific vendors, we do not report comparative data or make any recommendations about purchase of specific tools.

Kessel, Ronald.  2010.  The positive force of deterrence: Estimating the quantitative effects of target shifting. 2010 International WaterSide Security Conference. :1–5.
The installation of a protection system can provide protection by either deterring or stopping an attacker. Both modes of effectiveness, deterring and stopping, are uncertain. Some have guessed that deterrence plays a much bigger role than stopping force. The force of deterrence should therefore be of considerable interest, especially if its effect could be estimated and incorporated into a larger risk analysis and business case for developing and buying new systems, but nowhere has it been estimated quantitatively. The effect of one type of deterrence, namely influencing an attacker's choice of targets (target shifting: biasing an attacker away from some targets toward others), is assessed quantitatively here using a game-theoretic approach. It is shown that its positive effects are significant. It acts as a force multiplier of an order of magnitude or more, even for low-performance security countermeasures whose effectiveness may be compromised somewhat, of necessity, in order to keep the number of false alarms serviceably low. The analysis furthermore implies that there are certain minimum levels of stopping performance that a protection system should provide in order to avoid attracting the choice of attackers under deterrence. Nothing in the analysis argues for complacency in security. Developers must still design the best affordable systems. The analysis enters into the middle ground of security, between no protection and impossibly perfect protection. It counters the criticisms that some raise about lower-level, affordable, sustainable measures that security providers naturally gravitate toward. Although these measures might in some places be defeated in ways that a non-expert can imagine, they are not for that reason irresponsible or to be dismissed. Their effectiveness can be much greater than it first appears.
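
The order-of-magnitude force-multiplier claim can be illustrated with a toy model (not the paper's actual game-theoretic formulation; all parameters hypothetical): n identical targets, one of which installs a countermeasure that stops an attack with probability p_stop, and an attacker who either picks uniformly (no deterrence) or rationally shifts to an unprotected target:

```python
def successful_attacks_on_site(p_stop, n_targets, attacker_shifts):
    """Expected successful attacks on the protected site per attack attempt.

    Toy model: n_targets identical sites, one protected. A non-deterred
    attacker picks a site uniformly at random; a deterred attacker
    shifts the attack to an unprotected site."""
    if attacker_shifts:
        return 0.0  # the attack lands elsewhere
    return (1.0 - p_stop) / n_targets

def force_multiplier(p_stop, n_targets):
    """Protection gained via target shifting relative to stopping force alone."""
    baseline = 1.0 / n_targets  # successful attacks with no countermeasure
    gain_stop_only = baseline - successful_attacks_on_site(p_stop, n_targets, False)
    gain_with_shift = baseline  # deterrence removes all attacks on this site
    return gain_with_shift / gain_stop_only
```

In this model a countermeasure that stops only 10% of attacks among 10 targets yields a multiplier of 10, consistent with the abstract's point that even low-performance measures can be far more effective than their stopping power alone suggests.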