Biblio

Found 7524 results

Filters: Keyword is Metrics
2021-04-08
Tyagi, H., Vardy, A.  2015.  Universal Hashing for Information-Theoretic Security. Proceedings of the IEEE. 103:1781–1795.
The information-theoretic approach to security entails harnessing the correlated randomness available in nature to establish security. It uses tools from information theory and coding and yields provable security, even against an adversary with unbounded computational power. However, the feasibility of this approach in practice depends on the development of efficiently implementable schemes. In this paper, we review a special class of practical schemes for information-theoretic security that are based on 2-universal hash families. Specific cases of secret key agreement and wiretap coding are considered, and general themes are identified. The scheme presented for wiretap coding is modular and can be implemented easily by including an extra preprocessing layer over the existing transmission codes.
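
As a small illustration of the class of schemes reviewed here (a sketch, not the authors' construction), the snippet below uses the classic universal hash family h_{a,b}(x) = ((a·x + b) mod p) mod 2^m as a privacy-amplification step that compresses a partially secret bit string into a shorter key; the prime, the parameters, and the key sizes are arbitrary choices for the example.

    # Hedged sketch: universal hashing as privacy amplification. The prime and
    # sizes below are illustrative only.
    import secrets

    P = (1 << 61) - 1                      # prime larger than the input domain

    def hash_family_member(a, b, m):
        """Return h_{a,b}: x -> ((a*x + b) mod P) mod 2^m."""
        return lambda x: ((a * x + b) % P) % (1 << m)

    def privacy_amplify(raw_key_bits, m):
        """Compress raw bits (partially known to an eavesdropper) into m output bits."""
        x = int(raw_key_bits, 2)
        a = secrets.randbelow(P - 1) + 1   # a != 0
        b = secrets.randbelow(P)
        return (a, b), format(hash_family_member(a, b, m)(x), f"0{m}b")

    seed, key = privacy_amplify("1011001110001111" * 3, m=16)
    print("public seed:", seed)
    print("extracted 16-bit key:", key)

The seed (a, b) can be communicated publicly; the universality of the family is what limits how much an eavesdropper can learn about the extracted key.
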
2017-05-18
Haitzer, Thomas, Navarro, Elena, Zdun, Uwe.  2015.  Architecting for Decision Making About Code Evolution. Proceedings of the 2015 European Conference on Software Architecture Workshops. :52:1–52:7.

During software evolution, it is important to evolve not only the source code but also its architecture, to prevent architecture drift and architecture erosion. This is a complex activity, especially for large software projects with multiple development teams that might be located in different countries or on different continents. To ease this kind of evolution, we have developed a domain-specific language for making decisions about the evolution. It supports the definition of architectural changes based on multiple implementation tasks that can have temporal dependencies among them. Then, by means of a model-to-model transformation, we automatically create a constraint model that we use to generate, with the Alloy model analyzer, the possible alternative decisions for executing the implementation tasks. The tight integration with architecture abstractions enables architects to automatically check the changes related to an implementation task against the architecture description. This helps keep architecture and code in sync, avoiding drift and erosion.
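
As a rough sketch of the underlying decision problem (the paper's DSL and its Alloy-based transformation are not reproduced here), the snippet below enumerates the execution orders of a few hypothetical implementation tasks that respect their temporal dependencies, which is the kind of alternative the generated constraint model is used to explore.

    # Illustrative only: hypothetical tasks and dependencies; the paper uses a
    # DSL plus an Alloy constraint model rather than brute-force enumeration.
    from itertools import permutations

    tasks = ["split-module", "introduce-facade", "migrate-clients", "remove-legacy"]
    depends_on = {                         # task -> tasks that must finish first
        "introduce-facade": {"split-module"},
        "migrate-clients": {"introduce-facade"},
        "remove-legacy": {"split-module"},
    }

    def respects_dependencies(order):
        pos = {t: i for i, t in enumerate(order)}
        return all(pos[d] < pos[t] for t, deps in depends_on.items() for d in deps)

    for order in filter(respects_dependencies, permutations(tasks)):
        print(" -> ".join(order))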

2021-04-08
Cao, Z., Deng, H., Lu, L., Duan, X.  2014.  An information-theoretic security metric for future wireless communication systems. 2014 XXXIth URSI General Assembly and Scientific Symposium (URSI GASS). :1–4.
Quantitative analysis of security properties in wireless communication systems is an important issue; it helps us get a comprehensive view of security and can be used to compare the security performance of different systems. This paper analyzes the security of future wireless communication systems from an information-theoretic point of view and proposes an overall security metric. We demonstrate that the proposed metric is more reasonable than some existing metrics, is highly sensitive to some basic parameters, and is helpful for fine-grained tuning of security performance.
Liu, S., Hong, Y., Viterbo, E..  2014.  On measures of information theoretic security. 2014 IEEE Information Theory Workshop (ITW 2014). :309–310.
While information-theoretic security is stronger than computational security, it has long been considered impractical. In this work, we provide new insights into the design of practical information-theoretic cryptosystems. First, from a theoretical point of view, we give a brief introduction to existing information-theoretic security criteria, such as the notions of Shannon's perfect/ideal secrecy in cryptography and the concept of strong secrecy in coding theory. Second, from a practical point of view, we propose the concept of ideal secrecy outage and define an outage probability. Finally, we show how this probability can be made arbitrarily small in a practical cryptosystem.
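
A toy illustration of the first criterion mentioned above, Shannon's perfect secrecy (this is not the paper's outage analysis): for a one-time pad over a small alphabet with a uniformly random key, the mutual information I(M;C) between message and ciphertext is zero, which the short computation below checks numerically.

    # Sketch: verify I(M;C) = 0 for a one-time pad over 3-bit messages.
    from collections import Counter
    from math import log2

    N = 8                                   # 3-bit message/key/ciphertext alphabet
    p_m = {m: 1.0 / N for m in range(N)}    # uniform message prior (any prior works)
    joint = Counter()
    for m in range(N):
        for k in range(N):                  # uniformly random key
            joint[(m, m ^ k)] += p_m[m] / N

    p_c = Counter()
    for (m, c), p in joint.items():
        p_c[c] += p

    mi = sum(p * log2(p / (p_m[m] * p_c[c])) for (m, c), p in joint.items())
    print(f"I(M;C) = {mi:.12f} bits")       # ~0: the ciphertext reveals nothing
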
2014-10-24
Hibshi, Hanan, Slavin, Rocky, Niu, Jianwei, Breaux, Travis D.  2014.  Rethinking Security Requirements in RE Research.

As information security became an increasing concern for software developers and users, requirements engineering (RE) researchers brought new insight to security requirements. Security requirements aim to address security at the early stages of system design while accommodating the complex needs of different stakeholders. Meanwhile, other research communities, such as usable privacy and security, have also examined these requirements, with the specialized goal of making security more usable for stakeholders ranging from product owners to system users and administrators. In this paper we report results from a literature survey comparing security requirements research from RE conferences with the Symposium on Usable Privacy and Security (SOUPS). We report similarities between the two research areas, such as common goals, technical definitions, research problems, and directions. Further, we clarify the differences between these two communities to understand how they can leverage each other’s insights. From our analysis, we recommend new directions in security requirements research, mainly to expand the meaning of security requirements in RE to reflect the technological advancements that the broader field of security is experiencing. These recommendations to encourage cross-collaboration with other communities are not limited to the security requirements area; in fact, we believe they can be generalized to other areas of RE.

2015-10-11
Subramani, Shweta.  2014.  Security Profile of Fedora. Computer Science. MS:105.

The process of software development and evolution has proven difficult to improve. For example, well-documented security issues such as SQL injection (SQLi) still, after more than a decade, top most vulnerability lists. Quantitative security process and quality metrics are often subdued due to lack of time and resources. Security problems are hard to quantify and even harder to predict or relate to any process improvement activity. The goal of this thesis is to assess the usefulness of “classical” software reliability engineering (SRE) models in the context of open source software security, the conditions under which they may be useful, and the information that they can provide with respect to the security quality of a software product. We start with security problem reports for the open source Fedora series of software releases. We illustrate how one can learn from the normal operational profile about the non-operational processes related to security problems. One aspect is classification of security problems based on the human traits that contribute to the injection of problems into code, whether due to poor practices or limited knowledge (epistemic errors), or due to random accidental events (aleatoric errors). Knowing the distribution aids in the development of an attack profile. In the case of Fedora, the distribution of security problems found post-release was consistent across four different releases of the software. The security problem discovery rate appears to be roughly constant but much lower than the initial non-security problem discovery rate. Previous work has shown that non-operational testing can help accelerate and focus the problem discovery rate and that it can be successfully modeled. We find that some classical reliability models can be used with success to estimate the residual number of security problems, and through that provide a measure of the security characteristics of the software. We propose an agile software testing process that combines operational and non-operational (or attack-related) testing with the intent of finding more security problems faster.
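
The following is a hedged sketch of the kind of classical SRE modelling the thesis assesses, with made-up data rather than the Fedora reports: a Goel-Okumoto style mean-value function mu(t) = a(1 - e^(-b t)) is fitted to hypothetical cumulative security-problem counts by a coarse grid search, and the residual number of problems is then estimated as a - mu(T).

    # Illustrative fit of an exponential (Goel-Okumoto) reliability growth model;
    # the monthly counts are invented for the example.
    from math import exp

    months = list(range(1, 13))
    cumulative = [3, 6, 8, 11, 12, 14, 15, 17, 17, 18, 19, 19]

    def mu(t, a, b):
        return a * (1.0 - exp(-b * t))

    best = None
    for a10 in range(150, 400):             # coarse grid: a in [15.0, 40.0)
        for b1000 in range(10, 500, 5):     # and b in [0.010, 0.500)
            a, b = a10 / 10.0, b1000 / 1000.0
            sse = sum((mu(t, a, b) - y) ** 2 for t, y in zip(months, cumulative))
            if best is None or sse < best[0]:
                best = (sse, a, b)

    _, a, b = best
    residual = a - mu(months[-1], a, b)
    print(f"fitted a={a:.1f}, b={b:.3f}; estimated residual security problems ~ {residual:.1f}")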

2015-01-09
Liang Zhang, Dave Choffnes, Tudor Dumitras, Dave Levin, Alan Mislove, Aaron Schulman, Christo Wilson.  2014.  Analysis of SSL Certificate Reissues and Revocations in the Wake of Heartbleed.

Central to the secure operation of a public key infrastructure (PKI) is the ability to revoke certificates. While much of users' security rests on this process taking place quickly, in practice, revocation typically requires a human to decide to reissue a new certificate and revoke the old one. Thus, having a proper understanding of how often systems administrators reissue and revoke certificates is crucial to understanding the integrity of a PKI. Unfortunately, this is typically difficult to measure: while it is relatively easy to determine when a certificate is revoked, it is difficult to determine whether and when an administrator should have revoked.

In this paper, we use a recent widespread security vulnerability as a natural experiment. Publicly announced in April 2014, the Heartbleed OpenSSL bug potentially (and undetectably) revealed servers' private keys. Administrators of servers that were susceptible to Heartbleed should have revoked their certificates and reissued new ones, ideally as soon as the vulnerability was publicly announced.

Using a set of all certificates advertised by the Alexa Top 1 Million domains over a period of six months, we explore the patterns of reissuing and revoking certificates in the wake of Heartbleed. We find that over 73% of vulnerable certificates had yet to be reissued and over 87% had yet to be revoked three weeks after Heartbleed was disclosed. Moreover, our results show a drastic decline in revocations on the weekends, even immediately following the Heartbleed announcement. These results are an important step in understanding the manual processes on which users rely for secure, authenticated communication.
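
As a much-simplified sketch of the measurement logic (the study's methodology is considerably more careful), the snippet below classifies hypothetical certificate observations as reissued when the latest observed certificate's notBefore date falls after the Heartbleed disclosure, and as revoked when a revocation was observed.

    # Illustrative only: the domains, dates, and revocation flags are invented.
    from datetime import date

    HEARTBLEED_DISCLOSED = date(2014, 4, 7)

    observations = [    # (domain, notBefore of latest observed cert, revocation seen)
        ("example-a.com", date(2014, 4, 20), True),
        ("example-b.com", date(2013, 11, 2), False),
        ("example-c.com", date(2014, 5, 3), False),
    ]

    reissued = [d for d, nb, _ in observations if nb > HEARTBLEED_DISCLOSED]
    revoked = [d for d, _, rev in observations if rev]
    print("reissued after disclosure:", reissued)
    print("revoked:", revoked)
    print(f"still not reissued: {len(observations) - len(reissued)} of {len(observations)}")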

2015-05-04
Hessami, A.  2014.  A framework for characterisation of complex systems and system of systems. World Automation Congress (WAC), 2014. :346–354.

The objective of this paper is to explore the current notions of systems and “System of Systems” and establish the case for quantitative characterization of their structural, behavioural and contextual facets that will pave the way for further formal development (mathematical formulation). This is partly driven by stakeholder needs and perspectives, and also responds to the necessity to attribute and communicate the properties of a system more succinctly, meaningfully and efficiently. The systematic quantitative characterization framework proposed will endeavour to extend the notion of emergence that allows the definition of appropriate metrics in the context of a number of systems ontologies. The general characteristics and information content of the ontologies relevant to systems and systems of systems will be specified but not developed at this stage. The current supra-system, system and sub-system hierarchy is also explored for the formalisation of a standard notation in order to depict relative scale and order and to avoid seemingly arbitrary attributions.

2022-04-20
Qingxue, Meng, Jiajun, Lin.  2014.  The Modeling and Simulation of Vehicle Distance Control Based on Cyber-Physical System. 2014 IEEE 7th Joint International Information Technology and Artificial Intelligence Conference. :341–345.
With the advent of mass motorization, traffic systems have become increasingly congested. How to make the traffic system more efficient while also taking safety into account, namely by building an intelligent transportation system, has become a topic of broad societal interest. The vehicle distance control system studied in this paper is an important function of an intelligent transportation system. By introducing cyber-physical system (CPS) technology and setting up a system model, the vehicles are made to maintain a preset safety distance, which not only helps improve the effective utilization of the traffic system but also helps avoid collisions caused by speed changes. Finally, Simulink is used to simulate and analyze the performance of the system; the results show that the model can effectively cope with distance changes caused by speed changes and ensure that the vehicles return to the preset safety distance within a short period of time.
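
A toy discrete-time sketch of the distance-keeping behaviour described above (not the paper's CPS model or its Simulink simulation): the following vehicle adjusts its speed from the gap error and the relative speed, so the spacing settles back to the preset safety distance after the leader slows down; the gains, speeds, and distances are invented.

    # Illustrative PD-style follower; all parameters are hypothetical.
    DT, SAFE_GAP = 0.1, 30.0                 # time step [s], target gap [m]
    KP, KD = 0.5, 1.0                        # gap gain, relative-speed gain

    lead_pos, follow_pos = 100.0, 70.0
    lead_v, follow_v = 20.0, 20.0

    for step in range(300):                  # 30 s of simulated time
        lead_v = 15.0 if step >= 50 else 20.0            # leader brakes at t = 5 s
        gap = lead_pos - follow_pos
        follow_v += (KP * (gap - SAFE_GAP) + KD * (lead_v - follow_v)) * DT
        lead_pos += lead_v * DT
        follow_pos += follow_v * DT
        if step % 50 == 0:
            print(f"t={step * DT:4.1f}s  gap={gap:6.2f} m  follower speed={follow_v:5.2f} m/s")
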
2021-03-04
Sun, H., Liu, L., Feng, L., Gu, Y. X.  2014.  Introducing Code Assets of a New White-Box Security Modeling Language. 2014 IEEE 38th International Computer Software and Applications Conference Workshops. :116–121.

This paper argues for a new conceptual modeling language for White-Box (WB) security analysis. In the WB security domain, an attacker may have access to the inner structure of an application or even the entire binary code. It then becomes fairly easy for attackers to inspect, reverse engineer, and tamper with the application using the information they obtain. The basis of this paper is the 14 patterns developed by a leading provider of software protection technologies and solutions. We provide a part of a new modeling language named i-WBS (White-Box Security) to better describe WB security problems. The essence of the WB security problem is code security, so the new modeling language focuses on code more than ever before. In this way, developers who are not security experts can easily understand what they really need to protect.

2023-03-31
Shrivastva, Krishna Mohan Pd, Rizvi, M.A., Singh, Shailendra.  2014.  Big Data Privacy Based on Differential Privacy a Hope for Big Data. 2014 International Conference on Computational Intelligence and Communication Networks. :776–781.
In the information age, data from electronic and information and communication technology devices and processes, such as sensors, the cloud, individual archives, social networks, internet activities, and enterprises, are growing exponentially. The most challenging issue is how to effectively manage this large and heterogeneous data; big data is the term used for it. Due to its extraordinary scale, privacy and security are among the critical challenges of big data. At every stage of managing big data there is a chance that private information may be disclosed. Many techniques have been suggested and implemented for privacy preservation of large data sets, such as anonymization-based and encryption-based approaches, but due to the distinguishing characteristics of big data (large volume, high velocity, and unstructured data) these techniques are not fully suitable. In this paper we analyze and discuss how an existing approach, differential privacy, is suitable for big data. We first introduce differential privacy and then analyze its suitability for big data.
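
As a minimal sketch of the building block the paper analyses (not the authors' big-data system), the snippet below applies the Laplace mechanism to a count query: noise with scale sensitivity/epsilon is added to the true count, which suffices for epsilon-differential privacy of that single query; the records and the epsilon value are hypothetical.

    # Laplace mechanism sketch; records and epsilon are illustrative only.
    import random

    def laplace(scale):
        # the difference of two i.i.d. exponentials is Laplace-distributed
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    def private_count(records, predicate, epsilon):
        true_count = sum(1 for r in records if predicate(r))
        sensitivity = 1.0      # adding or removing one record changes a count by at most 1
        return true_count + laplace(sensitivity / epsilon)

    ages = [23, 37, 41, 29, 52, 61, 34, 45]
    print("noisy count of ages > 40:", round(private_count(ages, lambda a: a > 40, epsilon=0.5), 2))
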
2017-05-18
Ananth, Prabhanjan, Gupta, Divya, Ishai, Yuval, Sahai, Amit.  2014.  Optimizing Obfuscation: Avoiding Barrington's Theorem. Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security. :646–658.

In this work, we seek to optimize the efficiency of secure general-purpose obfuscation schemes. We focus on the problem of optimizing the obfuscation of Boolean formulas and branching programs – this corresponds to optimizing the "core obfuscator" from the work of Garg, Gentry, Halevi, Raykova, Sahai, and Waters (FOCS 2013), and all subsequent works constructing general-purpose obfuscators. This core obfuscator builds upon approximate multilinear maps, where efficiency in proposed instantiations is closely tied to the maximum number of "levels" of multilinearity required. The most efficient previous construction of a core obfuscator, due to Barak, Garg, Kalai, Paneth, and Sahai (Eurocrypt 2014), required the maximum number of levels of multilinearity to be O(ℓs^3.64), where s is the size of the Boolean formula to be obfuscated, and ℓ is the number of input bits to the formula. In contrast, our construction only requires the maximum number of levels of multilinearity to be roughly ℓs, or only s when considering a keyed family of formulas, namely a class of functions of the form f_z(x) = phi(z, x) where phi is a formula of size s. This results in significant improvements in both the total size of the obfuscation and the running time of evaluating an obfuscated formula. Our efficiency improvement is obtained by generalizing the class of branching programs that can be directly obfuscated. This generalization allows us to achieve a simple simulation of formulas by branching programs while avoiding the use of Barrington's theorem, on which all previous constructions relied. Furthermore, the ability to directly obfuscate general branching programs (without bootstrapping) allows us to efficiently apply our construction to natural function classes that are not known to have polynomial-size formulas.

2021-04-08
Claycomb, W. R., Huth, C. L., Phillips, B., Flynn, L., McIntire, D.  2013.  Identifying indicators of insider threats: Insider IT sabotage. 2013 47th International Carnahan Conference on Security Technology (ICCST). :1–5.
This paper describes results of a study seeking to identify observable events related to insider sabotage. We collected information from actual insider threat cases, created chronological timelines of the incidents, identified key points in each timeline such as when attack planning began, measured the time between key events, and looked for specific observable events or patterns that insiders held in common that may indicate insider sabotage is imminent or likely. Such indicators could be used by security experts to potentially identify malicious activity at or before the time of attack. Our process included critical steps such as identifying the point of damage to the organization as well as any malicious events prior to zero hour that enabled the attack but did not immediately cause harm. We found that nearly 71% of the cases we studied had either no observable malicious action prior to attack, or had one that occurred less than one day prior to attack. Most of the events observed prior to attack were behavioral, not technical, especially those occurring earlier in the case timelines. Of the observed technical events prior to attack, nearly one third involved installation of software onto the victim organizations' IT systems.
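
A hedged sketch of the timeline analysis described above, using entirely invented case data: for each case, the lead time between the earliest observable malicious event and the attack ("zero hour") is measured, and cases offering less than one day of warning are counted.

    # Illustrative only: case identifiers and timestamps are fabricated.
    from datetime import datetime, timedelta

    cases = {   # case id -> (first observable malicious event, attack time)
        "case-01": (datetime(2012, 3, 1, 9, 0), datetime(2012, 3, 1, 17, 30)),
        "case-02": (datetime(2012, 6, 10, 8, 0), datetime(2012, 6, 14, 12, 0)),
        "case-03": (None, datetime(2012, 9, 2, 3, 15)),   # nothing observed before the attack
    }

    short_warning = 0
    for cid, (first_event, attack) in cases.items():
        if first_event is None or attack - first_event < timedelta(days=1):
            short_warning += 1
            print(f"{cid}: little or no observable warning before the attack")
        else:
            print(f"{cid}: {attack - first_event} of potential warning time")

    print(f"{short_warning}/{len(cases)} cases with < 1 day of observable warning")
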
2022-04-20
Zhang, Kailong, Li, Jiwei, Lu, Zhou, Luo, Mei, Wu, Xiao.  2013.  A Scene-Driven Modeling Reconfigurable Hardware-in-Loop Simulation Environment for the Verification of an Autonomous CPS. 2013 5th International Conference on Intelligent Human-Machine Systems and Cybernetics. 1:446–451.
Cyber-Physical Systems (CPS) are a new evolutionary form of embedded systems. Because they merge computation and physical processes, traditional verification and simulation methods have recently been challenged. After analyzing the state of the art of related research, a new simulation environment is studied according to the characteristics of a particular autonomous cyber-physical system, the Unmanned Aerial Vehicle, and is designed to be scene-driven, model-based, and reconfigurable. In this environment, a novel CPS-in-loop architecture, which can support simulations under different customized scenes, is first studied to ensure openness and flexibility. As another foundation, dynamics models of the CPS and atmospheric models of the relevant sensors are introduced to simulate the motion of the CPS and changes in its posture. On this basis, reconfigurable scene-driven mechanisms based on hybrid events are devised. Different scenes can then be configured according to specific verification requirements, and each scene is decomposed into a spatio-temporal event sequence and scheduled by a scene executor. With this environment, not only the posture of the CPS but also the autonomy of its behavior can be verified and observed, which is meaningful for the design of such autonomous CPS.
Jun, Shen, Cuibo, Yu.  2013.  The Study on the Self-Similarity and Simulation of CPS Traffic. 2013 IEEE 11th International Conference on Dependable, Autonomic and Secure Computing. :215–219.
Characterizing CPS traffic is one of the key techniques for Cyber-Physical Systems (CPS). Deep research into CPS network traffic characteristics can help to better plan and design CPS networks. A brief overview of the key concepts of CPS is first presented. CPS application scenarios are then analyzed in detail and classified. The characteristics of CPS traffic are analyzed theoretically for the different application scenarios and are finally verified using NS-2 simulation.
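
One common way to test a traffic trace for self-similarity (not necessarily the authors' method) is to estimate the Hurst parameter; the sketch below uses the aggregated-variance method on a synthetic, uncorrelated trace, for which H should come out near 0.5, whereas long-range-dependent traffic would give H closer to 1.

    # Aggregated-variance Hurst estimate on a synthetic trace (illustrative only).
    import random
    from math import log
    from statistics import mean, pvariance

    trace = [random.expovariate(1.0) for _ in range(10000)]   # packets per time slot

    points = []
    for m in (1, 2, 4, 8, 16, 32, 64):                        # aggregation levels
        blocks = [mean(trace[i:i + m]) for i in range(0, len(trace) - m + 1, m)]
        points.append((log(m), log(pvariance(blocks))))

    # least-squares slope of log(variance) versus log(m); slope = 2H - 2
    xs, ys = zip(*points)
    xbar, ybar = mean(xs), mean(ys)
    slope = sum((x - xbar) * (y - ybar) for x, y in points) / sum((x - xbar) ** 2 for x in xs)
    print(f"estimated Hurst parameter H = {1 + slope / 2:.2f}")
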
2021-04-08
Mundie, D. A., Perl, S., Huth, C. L.  2013.  Toward an Ontology for Insider Threat Research: Varieties of Insider Threat Definitions. 2013 Third Workshop on Socio-Technical Aspects in Security and Trust. :26–36.
The lack of standardization of the terms insider and insider threat has been a noted problem for researchers in the insider threat field. This paper describes the investigation of 42 different definitions of the terms insider and insider threat, with the goal of better understanding the current conceptual model of insider threat and facilitating communication in the research community.
2018-02-21
Ivars, Eugene, Armands, Vadim.  2013.  Alias-free compressed signal digitizing and recording on the basis of Event Timer. 2013 21st Telecommunications Forum Telfor (TELFOR). :443–446.

Specifics of an alias-free digitizer application for compressed digitizing and recording of wideband signals are considered. Signal sampling in this case is performed on the basis of picosecond-resolution event timing; the digitizer is actually a subsystem of the Event Timer A033-ET, and the specific events that are detected and then timed are the crossings of the signal and a reference sine wave. The approach used to develop this subsystem is described and some results of experimental studies are given.

2017-11-03
Dietrich, Christian J., Rossow, Christian, Pohlmann, Norbert.  2013.  Exploiting Visual Appearance to Cluster and Detect Rogue Software. Proceedings of the 28th Annual ACM Symposium on Applied Computing. :1776–1783.

Rogue software, such as Fake A/V and ransomware, tricks users into paying without providing anything in return. We show that using a perceptual hash function and hierarchical clustering, more than 213,671 screenshots of executed malware samples can be grouped into subsets of structurally similar images, reflecting image clusters of one malware family or campaign. Based on the clustering results, we show that ransomware campaigns favor prepaid payment methods such as ukash, paysafecard and moneypak, while Fake A/V campaigns use credit cards for payment. Furthermore, especially given the low A/V detection rates of current rogue software – sometimes even as low as 11% – our screenshot analysis approach could serve as a complementary last line of defense.
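
A rough sketch of the pipeline's two ingredients, perceptual hashing and clustering (the study works on real screenshots and uses hierarchical clustering): below, an average hash is computed for tiny synthetic grayscale images and near-duplicates are grouped by Hamming distance; the image sizes and threshold are arbitrary choices.

    # Illustrative only: 8x8 synthetic "screenshots", average hash, greedy
    # single-linkage grouping by Hamming distance.
    import random

    def average_hash(img):                       # img: 8x8 grayscale values
        pixels = [p for row in img for p in row]
        avg = sum(pixels) / len(pixels)
        return tuple(1 if p > avg else 0 for p in pixels)

    def hamming(h1, h2):
        return sum(a != b for a, b in zip(h1, h2))

    base = [[random.randint(0, 255) for _ in range(8)] for _ in range(8)]
    variant = [row[:] for row in base]
    variant[0][0] = min(variant[0][0] + 40, 255)     # same "campaign", tiny change
    other = [[random.randint(0, 255) for _ in range(8)] for _ in range(8)]

    hashes = [average_hash(img) for img in (base, variant, other)]

    clusters, THRESHOLD = [], 8
    for i, h in enumerate(hashes):
        for cluster in clusters:
            if any(hamming(h, hashes[j]) <= THRESHOLD for j in cluster):
                cluster.append(i)
                break
        else:
            clusters.append([i])
    print("clusters of screenshot indices:", clusters)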

2020-03-09
Niu, Yukun, Tan, Xiaobin, Zhou, Zifei, Zheng, Jiangyu, Zhu, Jin.  2013.  Privacy Protection Scheme in Smart Grid Using Rechargeable Battery. Proceedings of the 32nd Chinese Control Conference. :8825–8830.

A user's private home energy-use information can be obtained by analyzing the user's electrical load data in the smart grid, and this is an area of concern. A rechargeable battery may be used in the home network to protect the user's privacy. In this paper, the battery can neither charge nor discharge, and the battery power is adjustable; at the same time, we model the real user's electrical load information, the battery power information, and the electrical power recorded by the smart meter, all of which are processed in a discrete way. We then put forward a heuristic algorithm which can make the rate of information leakage lower than that of existing solutions. We use statistical methods to protect the user's privacy; theoretical analysis and examples show that our solution makes the scenario design more reasonable and is more effective than existing solutions at avoiding privacy leakage.
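
The sketch below illustrates the general battery-masking idea in its simplest form, not the paper's heuristic algorithm: the battery absorbs the difference between the real load and the next coarse discrete level, so the meter only records that level and the fine-grained appliance signature is hidden; the load values and level size are hypothetical.

    # Illustrative only: loads in watts, meter quantised to multiples of 500 W.
    LEVEL = 500

    real_load = [120, 340, 980, 1450, 600, 90]     # per time slot
    metered, battery = [], []
    for load in real_load:
        reported = -(-load // LEVEL) * LEVEL       # round up to the next level
        battery.append(reported - load)            # battery charges with the difference
        metered.append(reported)

    print("real load   :", real_load)
    print("metered     :", metered)
    print("battery +/- :", battery)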

2021-02-08
Geetha, C. R., Basavaraju, S., Puttamadappa, C.  2013.  Variable load image steganography using multiple edge detection and minimum error replacement method. 2013 IEEE Conference on Information Communication Technologies. :53–58.

This paper proposes a steganography method using digital images, in which the data to be secured is embedded into a digital image. Studies of the human visual system show that changes at image edges are largely imperceptible to the human eye. Therefore, edge detection is used in steganography to increase the data-hiding capacity by embedding more data in edge pixels. If the number of edge pixels can be increased, the amount of data that can be hidden in the image also increases; to this end, multiple edge detection is employed, carried out with a more sophisticated operator such as the Canny operator. To compensate for the decrease in PSNR caused by the increased amount of hidden data, the Minimum Error Replacement (MER) method is used. Thus the main goals of image steganography, i.e., security with high embedding capacity and good visual quality, are achieved. To extract the data, the original image and the embedding ratio are needed; extraction is done by applying multiple edge detection to the original image and extracting the data according to the embedding ratio.
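
A much-simplified sketch of edge-based LSB embedding (a plain gradient threshold stands in for the Canny operator, and the MER step is omitted): message bits replace the least significant bits of pixels whose local gradient exceeds a threshold and, as noted in the abstract, extraction reuses the edge positions derived from the original cover image; the image and threshold are synthetic.

    # Illustrative only: tiny synthetic grayscale image with a sharp vertical edge.
    import random

    W, H, THRESHOLD = 16, 16, 30
    img = [[(40 if x < W // 2 else 200) + random.randint(-5, 5) for x in range(W)]
           for _ in range(H)]

    def edge_pixels(image):
        for y in range(H):
            for x in range(1, W):
                if abs(image[y][x] - image[y][x - 1]) > THRESHOLD:
                    yield (y, x)

    message_bits = [int(b) for b in "0100100001101001"]      # "Hi" in ASCII
    stego = [row[:] for row in img]
    positions = list(edge_pixels(img))[:len(message_bits)]
    for (y, x), bit in zip(positions, message_bits):
        stego[y][x] = (stego[y][x] & ~1) | bit               # overwrite the LSB

    recovered = [stego[y][x] & 1 for (y, x) in positions]    # needs the cover's edge map
    print("embedded in", len(positions), "edge pixels; recovered OK:", recovered == message_bits)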

2021-04-08
Colbaugh, R., Glass, K., Bauer, T.  2013.  Dynamic information-theoretic measures for security informatics. 2013 IEEE International Conference on Intelligence and Security Informatics. :45–49.
Many important security informatics problems require consideration of dynamical phenomena for their solution; examples include predicting the behavior of individuals in social networks and distinguishing malicious and innocent computer network activities based on activity traces. While information theory offers powerful tools for analyzing dynamical processes, to date the application of information-theoretic methods in security domains has focused on static analyses (e.g., cryptography, natural language processing). This paper leverages information-theoretic concepts and measures to quantify the similarity of pairs of stochastic dynamical systems, and shows that this capability can be used to solve important problems which arise in security applications. We begin by presenting a concise review of the information theory required for our development, and then address two challenging tasks: 1.) characterizing the way influence propagates through social networks, and 2.) distinguishing malware from legitimate software based on the instruction sequences of the disassembled programs. In each application, case studies involving real-world datasets demonstrate that the proposed techniques outperform standard methods.
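
As a hedged sketch of the second task's general idea (not the paper's exact dynamical measure), the snippet below models each program's disassembled instruction stream as a first-order Markov chain over opcodes and compares programs by an average Kullback-Leibler divergence between the smoothed transition distributions; the opcode sequences are invented.

    # Illustrative only: toy opcode alphabet and sequences.
    from collections import Counter, defaultdict
    from math import log2

    ALPHABET = ["mov", "push", "call", "add", "ret", "xor", "jmp"]

    def transition_probs(seq, smoothing=0.5):
        counts = defaultdict(Counter)
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
        return {a: {b: (counts[a][b] + smoothing) /
                       (sum(counts[a].values()) + smoothing * len(ALPHABET))
                    for b in ALPHABET}
                for a in ALPHABET}

    def avg_kl(p, q):
        """Mean KL divergence between corresponding rows of two transition matrices."""
        return sum(p[a][b] * log2(p[a][b] / q[a][b])
                   for a in ALPHABET for b in ALPHABET) / len(ALPHABET)

    benign = "mov push call mov add ret mov push call ret".split()
    suspect = "xor jmp xor jmp call xor jmp xor jmp xor".split()
    print(f"divergence(benign, suspect) = {avg_kl(transition_probs(benign), transition_probs(suspect)):.2f}")
    print(f"divergence(benign, benign)  = {avg_kl(transition_probs(benign), transition_probs(benign)):.2f}")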