Biblio

Found 256 results

Filters: Keyword is Complexity theory
2019-11-25
Abdulwahab, Waleed Khalid, Kadhim, Abdulkareem Abdulrahman.  2018.  Comparative Study of Channel Coding Schemes for 5G. 2018 International Conference on Advanced Science and Engineering (ICOASE). :239–243.
In this paper we look into 5G requirements for channel coding and review candidate channel coding schemes for 5G. A comparative study is presented for possible channel coding candidates of 5G, covering Convolutional, Turbo, Low Density Parity Check (LDPC), and Polar codes. Polar codes with Successive Cancellation List (SCL) decoding using a small list length (such as 8) appear to be a promising choice for short message lengths (≤128 bits) due to their error performance and relatively low complexity. Adopting non-binary LDPC can also provide good performance at the expense of increased complexity, but with better spectral efficiency. Considering implementation, polar codes with SCL-based decoding algorithms require a small area and low power consumption compared to LDPC codes. For larger message lengths (≥256 bits), turbo codes can provide better performance at low coding rates (<1/2).
2019-11-12
Wei, Shengjun, Zhong, Hao, Shan, Chun, Ye, Lin, Du, Xiaojiang, Guizani, Mohsen.  2018.  Vulnerability Prediction Based on Weighted Software Network for Secure Software Building. 2018 IEEE Global Communications Conference (GLOBECOM). :1-6.

To build secure communications software, Vulnerability Prediction Models (VPMs) are used to predict vulnerable software modules in the software system before software security testing. At present, many software security metrics have been proposed to design a VPM. In this paper, we predict vulnerable classes in a software system by establishing the system's weighted software network. The metrics are obtained from the nodes' attributes in the weighted software network. We design and implement a crawler tool to collect all public security vulnerabilities in Mozilla Firefox. Based on these data, the prediction model is trained and tested. The results show that the VPM based on the weighted software network performs well in accuracy, precision, and recall; compared to other studies, prediction performance is greatly improved in precision and recall.

2019-10-23
Zieger, Andrej, Freiling, Felix, Kossakowski, Klaus-Peter.  2018.  The β-Time-to-Compromise Metric for Practical Cyber Security Risk Estimation. 2018 11th International Conference on IT Security Incident Management & IT Forensics (IMF). :115-133.

To manage cybersecurity risks in practice, a simple yet effective method to assess such risks for individual systems is needed. With time-to-compromise (TTC), McQueen et al. (2005) introduced such a metric that measures the expected time that a system remains uncompromised given a specific threat landscape. Unlike other approaches that require complex system modeling to proceed, TTC combines simplicity with expressiveness and therefore has evolved into one of the most successful cybersecurity metrics in practice. We revisit TTC and identify several mathematical and methodological shortcomings, which we address by embedding all aspects of the metric in the continuous domain and by making it possible to incorporate information about vulnerability characteristics and other cyber threat intelligence into the model. We propose β-TTC, a formal extension of TTC which includes information from CVSS vectors as well as a continuous attacker skill based on a β-distribution. We show that our new metric (1) remains simple enough for practical use and (2) gives more realistic predictions than the original TTC by using data from a modern and productively used vulnerability database of a national CERT.
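The paper's exact formulas are not reproduced in the abstract, but the core idea lends itself to a minimal sketch: draw attacker skill from a Beta distribution and integrate a skill-to-compromise-time mapping against it. The shape parameters and the mapping below are illustrative assumptions, not the authors' values.

# Minimal sketch of a Beta-distributed attacker-skill model; the Beta
# parameters and the skill-to-TTC mapping are assumptions for illustration.
from scipy import stats

a, b = 2.0, 5.0                        # hypothetical Beta(a, b) skill distribution
skill = stats.beta(a, b)

def ttc_given_skill(s, base_days=30.0):
    # Assumed monotone mapping: more skilled attackers compromise faster.
    return base_days * (1.0 - s)

# Expected time-to-compromise, marginalized over attacker skill.
expected_ttc = skill.expect(ttc_given_skill)
print(f"expected TTC ~ {expected_ttc:.1f} days")   # 30 * (1 - a/(a+b)) ~= 21.4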

2019-10-22
Deb Nath, Atul Prasad, Bhunia, Swarup, Ray, Sandip.  2018.  ArtiFact: Architecture and CAD Flow for Efficient Formal Verification of SoC Security Policies. 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). :411–416.
Verification of security policies represents one of the most critical, complex, and expensive steps of modern SoC design validation. SoC security policies are typically implemented as part of functional design flow, with a diverse set of protection mechanisms sprinkled across various IP blocks. An obvious upshot is that their verification requires comprehension and analysis of the entire system, representing a scalability bottleneck for verification tools. The scale and complexity of industrial SoCs are far beyond the analysis capacity of state-of-the-art formal tools; even simulation-based security verification is severely limited in effectiveness because of the need to exercise subtle corner-cases across the entire system. We address this challenge by developing a novel security architecture that accounts for verification needs from the ground up. Our framework, ArtiFact, provides an alternative architecture for security policy implementation that exploits a flexible, centralized, infrastructure IP and enables scalable, streamlined verification of these policies. With our architecture, verification of system-level security policies reduces to analysis of this single IP and its interfaces, enabling off-the-shelf formal tools to successfully verify these policies. We introduce a CAD flow that supports both formal and dynamic (simulation-based) verification, and is built on top of such off-the-shelf tools. Our approach reduces verification time by over 62X and bug detection time by 34X for illustrative policies.
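The abstract explains the architectural idea: funnel policy enforcement through a single infrastructure IP so that only that IP and its interfaces need verifying. A toy software sketch of that centralization idea, with a hypothetical default-deny policy table and block names, not ArtiFact itself:

# Toy illustration of the centralization idea behind ArtiFact (hypothetical
# policy table and block names; not the authors' architecture or CAD flow):
# every inter-IP access is routed through one policy engine, so analysis
# can focus on this single component and its interfaces.
POLICY = {("cpu", "crypto"): True, ("crypto", "debug_if"): False}

def allowed(src_block, dst_block):
    return POLICY.get((src_block, dst_block), False)   # default-deny

assert allowed("cpu", "crypto")
assert not allowed("crypto", "debug_if")
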
2019-10-15
Zhang, F., Deng, Z., He, Z., Lin, X., Sun, L..  2018.  Detection of Shilling Attack in Collaborative Filtering Recommender System by PCA and Data Complexity. 2018 International Conference on Machine Learning and Cybernetics (ICMLC). 2:673–678.

Collaborative filtering (CF) recommender systems have been widely used for their good performance in personalized recommendation, but CF recommender systems are vulnerable to shilling attacks, in which shilling attack profiles are injected into the system by attackers to affect recommendations. Designing robust recommender systems and proposing attack detection methods are the main research directions for handling shilling attacks; among the detection methods, unsupervised PCA is particularly effective in experiments, but it suffers when no information about the number of shilling attack profiles is available. In this paper, a new unsupervised detection method that combines PCA and data complexity is proposed to detect shilling attacks. In the proposed method, PCA is used to select suspected attack profiles, and data complexity is used to pick out the authentic profiles from the suspected ones. Compared with traditional PCA, the proposed method performs well without needing to determine the number of shilling attack profiles in advance.
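The abstract does not spell out the PCA selection rule; a minimal sketch of one common variant (flagging users with the smallest loadings on the leading principal components of the rating matrix) follows, with synthetic data and an assumed suspect count standing in for the paper's data-complexity filter:

# Sketch of the PCA step only (synthetic ratings; the data-complexity filter
# that removes authentic profiles from the suspects is not reproduced here).
import numpy as np

rng = np.random.default_rng(0)
R = rng.integers(0, 6, size=(200, 50)).astype(float)   # 200 users x 50 items

Rc = R - R.mean(axis=0)                     # center each item's ratings
U, S, Vt = np.linalg.svd(Rc, full_matrices=False)
mass = (U[:, :3] ** 2).sum(axis=1)          # user mass on first 3 components

k = 10                                      # assumed size of the suspect set
suspects = np.argsort(mass)[:k]             # low-loading users look colluded
print("suspected attack profiles:", suspects)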

2019-09-26
Nelmiawati, Arifandi, W..  2018.  A Seamless Secret Sharing Scheme Implementation for Securing Data in Public Cloud Storage Service. 2018 International Conference on Applied Engineering (ICAE). :1-5.

Public cloud data storage services are considered a potential low-cost alternative for storing digital data in the short term. They are offered by different providers on the Internet, and some providers offer limited free plans for users who are starting with the service. However, data security concerns arise when the stored data are considered a valuable asset. This study explores the usage of secret sharing schemes, Rabin's IDA and Shamir's SSA, to implement a tool called dCloud for protecting files stored in public cloud storage in a seamless way. It addresses data security by hiding its complexities from ordinary non-technical users. The secret key is automatically generated by dCloud in a secure random way for Rabin's IDA. Shamir's SSA completes the process by dispersing the key into each of Rabin's IDA output files. Moreover, the hash value of the original file is added to each of those output files to confirm the integrity of the file during reconstruction. In addition, an authentication key, stored in a local secure key-store, is used to communicate with all of the defined service providers during storage and reconstruction. By having a key to access the key-store, an ordinary non-technical user is able to use dCloud to store and retrieve a targeted file within the defined public cloud storage services securely.
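dCloud's implementation is not shown in the abstract; as a self-contained sketch of the key-dispersal building block, here is a minimal Shamir (k-of-n) secret sharing over a prime field, where the field size and share counts are illustrative choices:

# Minimal Shamir (k-of-n) sketch over a prime field, illustrating how a key
# can be dispersed across providers and reconstructed (not dCloud's code).
import random

P = 2**127 - 1                          # a Mersenne prime as the field modulus

def split(secret, n=5, k=3):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 shares suffice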

2019-06-28
Chen, G., Wang, D., Li, T., Zhang, C., Gu, M., Sun, J..  2018.  Scalable Verification Framework for C Program. 2018 25th Asia-Pacific Software Engineering Conference (APSEC). :129-138.

Software verification has been well applied in safety-critical areas and has shown the ability to provide better quality assurance for modern software. However, as the lines of code and complexity of software systems increase, the scalability of verification becomes a challenge. In this paper, we present an automatic software verification framework, TSV, to address the scalability issues: (i) extended structural abstraction and property-guided program slicing to solve large-scale program verification problems, saving time and memory without losing accuracy; (ii) automatic selection of different verification methods according to the program and property context to improve verification efficiency. For evaluation, we compare TSV's different configurations with existing C program verifiers on open benchmarks. We found that TSV with auto-selection performs better than with bounded model checking only or with extended structural abstraction only. Compared to existing tools such as CBMC and CPAChecker, it achieves a 10%-20% improvement in accuracy and a 50%-90% improvement in memory consumption.
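The selection heuristics are not given in the abstract; purely as an illustration of the auto-selection idea, a sketch with made-up thresholds, property categories and backend names:

# Hedged sketch of method auto-selection (thresholds, categories and backend
# names are invented for illustration; TSV's actual rules are not in the
# abstract).
def select_backend(loc, has_unbounded_loops, property_kind):
    if loc < 5_000 and not has_unbounded_loops:
        return "bounded model checking"       # small, bounded: BMC is cheap
    if property_kind == "reachability":
        return "structural abstraction + property-guided slicing"
    return "predicate abstraction"

print(select_backend(120_000, True, "reachability"))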

2019-05-20
Hu, W., Ardeshiricham, A., Gobulukoglu, M. S., Wang, X., Kastner, R..  2018.  Property Specific Information Flow Analysis for Hardware Security Verification. 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD). :1-8.

Hardware information flow analysis detects security vulnerabilities resulting from unintended design flaws, timing channels, and hardware Trojans. These information flow models are typically generated in a general way, which includes a significant amount of redundancy that is irrelevant to the specified security properties. In this work, we propose a property specific approach for information flow security. We create information flow models tailored to the properties to be verified by performing a property specific search to identify security critical paths. This helps find suspicious signals that require closer inspection and quickly eliminates portions of the design that are free of security violations. Our property specific trimming technique reduces the complexity of the security model; this accelerates security verification and restricts potential security violations to a smaller region which helps quickly pinpoint hardware security vulnerabilities.
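The abstract describes generating flow models tailored to properties; as a generic illustration of the underlying label-propagation machinery (in the spirit of gate-level information flow tracking, not the authors' property-specific tool), consider the precise taint rule for an AND gate:

# Toy gate-level taint propagation (generic IFT flavor, not the paper's
# property-specific models). Each wire carries a value bit and a taint bit.
def and_ift(a, a_t, b, b_t):
    out = a & b
    # A tainted input can affect the output only if the other input is 1
    # (or itself tainted) -- the standard precise label rule for AND.
    out_t = (a_t & b) | (b_t & a) | (a_t & b_t)
    return out, out_t

# Secret on input a: its taint reaches the output only when b = 1.
for b in (0, 1):
    print(f"b={b}:", and_ift(1, 1, b, 0))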

2019-05-08
Balogun, A. M., Zuva, T..  2018.  Criminal Profiling in Digital Forensics: Assumptions, Challenges and Probable Solution. 2018 International Conference on Intelligent and Innovative Computing Applications (ICONIC). :1–7.

Cybercrime has been regarded, understandably, as a consequent compromise that follows the advent and perceived success of computer and internet technologies. Equally affecting the privacy, trust, finances and welfare of wealthy and low-income individuals and organizations, this menace has shown no indication of slowing down. Reports across the world have consistently shown exponential increases in the number and costs of cyber-incidents, and, more worryingly, low conviction rates of cybercriminals, over the years. Stakeholders increasingly explore ways to keep up with containing cyber-incidents by devising tools and techniques to increase the overall efficiency of investigations, but the gap keeps getting wider. However, criminal profiling - an investigative technique that has been proven to provide accurate and valuable directions to traditional crime investigations - has not seen widespread application, including a formal methodology, in cybercrime investigations due to difficulties in its seamless transference. This paper, in a bid to address this problem, seeks to preliminarily identify the exact benefits criminal profiling has brought to successful traditional crime investigations and the benefits it can translate to cybercrime investigations, to identify the challenges posed by the cyber-scene to its implementation in cybercrime investigations, and to proffer a practicable solution.

2019-05-01
Vagin, V. V., Butakova, N. G..  2019.  Mathematical Modeling of Group Authentication Based on Isogeny of Elliptic Curves. 2019 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :1780–1785.

In this paper, we consider ways of organizing group authentication, as well as the features of constructing isogenies of elliptic curves. The work includes the study of isogeny graphs and their application in post-quantum systems. A hierarchical group authentication scheme has been developed using transformations based on the search for isogenies of elliptic curves.

2019-04-01
Li, Z., Liao, Q..  2018.  CAPTCHA: Machine or Human Solvers? A Game-Theoretical Analysis. 2018 5th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2018 4th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). :18–23.
CAPTCHAs have become a ubiquitous defense used to protect open web resources from being exploited at scale. Traditionally, attackers have developed automatic programs known as CAPTCHA solvers to bypass the mechanism. With the availability of cheap labor in developing countries, hackers now also have the option of using human solvers. In this research, we develop a game-theoretical framework to model the interactions between the defender and the attacker regarding the design of and countermeasures against the CAPTCHA system. With the results of the equilibrium analysis, both parties can determine the optimal allocation of software-based or human-based CAPTCHA solvers. Counterintuitively, instead of following the traditional wisdom of making CAPTCHAs harder and harder, it may be in the best interest of the defender to make CAPTCHAs easier. We further suggest a welfare-improving CAPTCHA business model involving decentralized cryptocurrency computation.
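The payoff structure is not given in the abstract; a toy 2x2 game with invented numbers can still show why "easier" can be a best response once human solvers blunt the effect of difficulty:

# Toy 2x2 game (all payoffs hypothetical, not the paper's model): the
# defender picks CAPTCHA difficulty, the attacker picks a solver type.
# Human solvers are barely hurt by harder CAPTCHAs, while hardness adds
# friction (-4) for legitimate users on the defender's side.
atk = {("easy", "machine"): 5, ("easy", "human"): 3,
       ("hard", "machine"): 1, ("hard", "human"): 3}
dfd = {("easy", "machine"): -5, ("easy", "human"): -3,
       ("hard", "machine"): -5, ("hard", "human"): -7}

for d in ("easy", "hard"):
    br = max(("machine", "human"), key=lambda s: atk[(d, s)])
    print(f"{d}: attacker plays {br}, defender payoff {dfd[(d, br)]}")
# easy -> machine, payoff -5; hard -> human, payoff -7: easy wins here.
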
2019-02-14
Leemaster, J., Vai, M., Whelihan, D., Whitman, H., Khazan, R..  2018.  Functionality and Security Co-Design Environment for Embedded Systems. 2018 IEEE High Performance Extreme Computing Conference (HPEC). :1-5.

For decades, embedded systems, ranging from intelligence, surveillance, and reconnaissance (ISR) sensors to electronic warfare and electronic signal intelligence systems, have been an integral part of U.S. Department of Defense (DoD) mission systems. These embedded systems are increasingly the targets of deliberate and sophisticated attacks. Developers thus need to focus equally on functionality and security in both hardware and software development. For critical missions, these systems must be entrusted to perform their intended functions, prevent attacks, and even operate with resilience under attacks. The processor in a critical system must thus provide not only a root of trust, but also a foundation to monitor mission functions, detect anomalies, and perform recovery. We have developed a Lincoln Asymmetric Multicore Processing (LAMP) architecture, which mitigates adversarial cyber effects with separation and cryptography and provides a foundation to build a resilient embedded system. We will describe a design environment that we have created to enable the co-design of functionality and security for mission assurance.

2019-02-08
Mertoguno, S., Craven, R., Koller, D., Mickelson, M..  2018.  Reducing Attack Surface via Executable Transformation. 2018 IEEE Cybersecurity Development (SecDev). :138-138.

Modern software development and deployment practices encourage complexity and bloat while unintentionally sacrificing efficiency and security. A major driver in this is the overwhelming emphasis on programmers' productivity. The constant demands to speed up development while reducing costs have forced a series of individual decisions and approaches throughout software engineering history that have led to this point. The current state-of-the-practice in the field is a patchwork of architectures and frameworks, packed full of features in order to appeal to: the greatest number of people, obscure use cases, maximal code reuse, and minimal developer effort. The Office of Naval Research (ONR) Total Platform Cyber Protection (TPCP) program seeks to de-bloat software binaries late in the life-cycle with little or no access to the source code or the development process.

Sisiaridis, D., Markowitch, O..  2018.  Reducing Data Complexity in Feature Extraction and Feature Selection for Big Data Security Analytics. 2018 1st International Conference on Data Intelligence and Security (ICDIS). :43-48.

Feature extraction and feature selection are the first tasks in the pre-processing of input logs for detecting cybersecurity threats and attacks with data mining techniques from the field of Artificial Intelligence. When it comes to the analysis of heterogeneous data derived from different sources, these tasks are found to be time-consuming and difficult to manage efficiently. In this paper, we present an approach for handling feature extraction and feature selection utilizing machine learning algorithms for security analytics of heterogeneous data derived from different network sensors. The approach is implemented in Apache Spark, using its Python API, PySpark.
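The abstract names PySpark but no concrete pipeline; a minimal sketch of the two tasks with pyspark.ml follows, where the log schema, column names and selector choice are assumptions, not the authors':

# Minimal pyspark.ml sketch (hypothetical log schema; not the authors'
# pipeline): assemble raw fields into a feature vector, then keep the
# features most associated with the label.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler, ChiSqSelector

spark = SparkSession.builder.appName("feature-prep").getOrCreate()
df = spark.createDataFrame(
    [(0.1, 3.0, 12.0, 0), (0.9, 1.0, 80.0, 1)],
    ["pkt_rate", "conn_dur", "bytes", "label"])

assembled = VectorAssembler(
    inputCols=["pkt_rate", "conn_dur", "bytes"],
    outputCol="features").transform(df)                 # feature extraction

selector = ChiSqSelector(numTopFeatures=2, featuresCol="features",
                         labelCol="label", outputCol="selected")
selector.fit(assembled).transform(assembled) \
        .select("selected", "label").show()             # feature selection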

2019-01-16
Hasslinger, G., Ntougias, K., Hasslinger, F., Hohlfeld, O..  2018.  Comparing Web Cache Implementations for Fast O(1) Updates Based on LRU, LFU and Score Gated Strategies. 2018 IEEE 23rd International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD). :1–7.
To be applicable to high user request workloads, web caching strategies benefit from low implementation and update effort. In this regard, the Least Recently Used (LRU) replacement principle is a simple and widely used method. Despite its popularity, LRU has deficits in the achieved hit rate performance and cannot consider transport and network optimization criteria for selecting content to be cached. As a result, many alternatives have been proposed in the literature, which improve cache performance at the cost of higher complexity. In this work, we evaluate the implementation complexity and runtime performance of LRU, Least Frequently Used (LFU), and score-based strategies in the class of fast O(1) updates with constant effort per request. We implement Window LFU (W-LFU) within this class and show that O(1) update effort can be achieved. We further compare the fast update schemes Score Gated LRU and the new Score Gated Polling (SGP). SGP is simpler than LRU and provides full flexibility for arbitrary score assessment per data object as an information basis for performance optimization regarding network cost and quality measures.
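As a concrete reference point for the O(1)-update class, here is a compact LRU sketch with an ordered hash map; a score-gated variant (an assumption about the general flavor, not the paper's exact scheme) would additionally consult a per-object score before admitting on a miss:

# O(1)-update LRU cache sketch using an ordered hash map. Both the hit
# (move to MRU position) and the miss (evict LRU, admit) paths are O(1).
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def request(self, key):
        if key in self.store:                  # hit: refresh recency
            self.store.move_to_end(key)
            return True
        if len(self.store) >= self.capacity:   # miss: evict oldest entry
            self.store.popitem(last=False)
        self.store[key] = True                 # admit requested object
        return False

cache = LRUCache(2)
print([cache.request(k) for k in ["a", "b", "a", "b", "c", "a"]])
# -> [False, False, True, True, False, False]
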
2018-12-10
Pulparambil, S., Baghdadi, Y., Al-Hamdani, A., Al-Badawi, M..  2018.  Service Design Metrics to Predict IT-Based Drivers of Service Oriented Architecture Adoption. 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–7.

The key factors for deploying successful services center on the service design practices adopted by an enterprise. The design-level information should be validated, and measures are required to quantify the structural attributes. Metrics at this stage support an early discovery of design flaws and help designers predict the capabilities of service oriented architecture (SOA) adoption. In this work, we take a deeper look at how we can forecast the key SOA capabilities of infrastructure efficiency and service reuse from service designs modeled with the SOA modeling language. The proposed approach defines metrics based on the structural and domain-level similarity of service operations. The proposed metrics are analytically validated with respect to software engineering metric properties. Moreover, a tool has been developed to automate the proposed approach, and the results indicate that the metrics predict the SOA capabilities at the service design stage. This work can be further extended to predict the business-based capabilities of SOA adoption, such as flexibility and agility.
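The metric definitions themselves are not in the abstract; as a hedged illustration of "similarity of service operations", here is a Jaccard-style score over operation signatures of the kind such a reuse metric could aggregate (the services and signatures are invented):

# Hedged illustration only (hypothetical service designs; the paper defines
# its own metrics): a reuse indicator aggregated from pairwise similarity
# of operation signatures.
from itertools import combinations

services = {
    "Billing":  {("createInvoice", "Order"), ("getInvoice", "Id")},
    "Payments": {("createInvoice", "Order"), ("refund", "Id")},
    "Shipping": {("track", "Id"), ("dispatch", "Order")},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

pairs = [(s, t, jaccard(services[s], services[t]))
         for s, t in combinations(services, 2)]
mean_sim = sum(sim for _, _, sim in pairs) / len(pairs)
print(pairs)
print(f"mean pairwise operation similarity = {mean_sim:.2f}")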

Volz, V., Majchrzak, K., Preuss, M..  2018.  A Social Science-based Approach to Explanations for (Game) AI. 2018 IEEE Conference on Computational Intelligence and Games (CIG). :1–2.

The current AI revolution provides us with many new, but often very complex, algorithmic systems. This complexity not only limits understanding but also acceptance of, e.g., deep learning methods. In recent years, explainable AI (XAI) has been proposed as a remedy. However, this research is rarely supported by publications on explanations from the social sciences. We suggest a bottom-up approach to explanations for (game) AI, starting from a baseline definition of understandability informed by the concept of limited human working memory. We detail our approach and demonstrate its application to two games from the GVGAI framework. Finally, we discuss our vision of how additional concepts from the social sciences can be integrated into our proposed approach and how the results can be generalised.

2018-11-14
Krishna, M. B., Rodrigues, J. J. P. C..  2017.  Two-Phase Incentive-Based Secure Key System for Data Management in Internet of Things. 2017 IEEE International Conference on Communications (ICC). :1–6.

Distributed secure data management in the Internet of Things (IoT) is characterized by authentication and privacy policies to preserve data integrity. Multi-phase security and privacy policies ensure confidentiality and trust between users and service providers. In this regard, we present a novel Two-phase Incentive-based Secure Key (TISK) system for distributed data management in IoT. The proposed system classifies the IoT user nodes and assigns low-level and high-level security keys for data transactions. Low-level secure keys are generic light-weight keys used by the data collector nodes and data aggregator nodes for trusted transactions. The TISK phase-I Generic Service Manager (GSM-C) module verifies the IoT devices based on their self-trust incentive and server-trust incentive levels. High-level secure keys are dedicated special-purpose keys utilized by data manager nodes and data expert nodes for authorized transactions. The TISK phase-II Dedicated Service Manager (DSM-C) module verifies the certificates issued by the GSM-C module and further issues high-level secure keys to data manager nodes and data expert nodes for specific-purpose transactions. Simulation results indicate that the proposed TISK system reduces the key complexity and key cost to ensure distributed secure data management in an IoT network.
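The abstract outlines the two phases without cryptographic detail; a skeletal sketch of the issuance flow, in which the trust thresholds, key sizes, and certificate format are all placeholders rather than the TISK protocol:

# Skeletal two-phase key issuance (placeholder trust thresholds, key sizes
# and certificate format; not the actual TISK protocol or its cryptography).
import secrets

class GSMC:                                   # phase I: generic service manager
    def issue_low_level(self, node, self_trust, server_trust):
        if self_trust > 0.5 and server_trust > 0.5:    # assumed thresholds
            return {"node": node, "level": "low",
                    "key": secrets.token_hex(16)}      # generic light-weight key

class DSMC:                                   # phase II: dedicated service manager
    def issue_high_level(self, cert):
        if cert and cert["level"] == "low":   # verify the phase-I certificate
            return {**cert, "level": "high",
                    "key": secrets.token_hex(32)}      # dedicated key

cert = GSMC().issue_low_level("aggregator-7", 0.8, 0.9)
print(DSMC().issue_high_level(cert))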

2018-09-05
Zhang, H., Lou, F., Fu, Y., Tian, Z..  2017.  A Conditional Probability Computation Method for Vulnerability Exploitation Based on CVSS. 2017 IEEE Second International Conference on Data Science in Cyberspace (DSC). :238–241.
Computing the probability of vulnerability exploitation in Bayesian attack graphs (BAGs) is a key process in network security assessment. The conditional probability of vulnerability exploitation can be obtained from the exploitability score of NIST's Common Vulnerability Scoring System (CVSS). However, the method that N. Poolsappasit et al. proposed for computing this conditional probability can be used only with CVSS metric version 2.0, not with the other two versions. In this paper, we present two methods for computing the conditional probability based on CVSS's other two metric versions, version 1.0 and version 3.0, respectively. Based on the CVSS, the conditional probability computation for vulnerability exploitation is completed by combining our methods with that of N. Poolsappasit et al.
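For context, the v2.0 building block the paper starts from can be sketched directly: CVSS v2 defines exploitability = 20 x AV x AC x Au (maximum of roughly 10), and a common mapping of the kind attributed to Poolsappasit et al. scales it to [0, 1] as the conditional probability. The scaling step is the hedged part; the metric constants are from the CVSS v2 specification.

# CVSS v2 exploitability constants (from the v2 spec) and one common
# scaling to a conditional probability; the v1.0/v3.0 variants that the
# paper contributes are not reproduced here.
AV = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
AC = {"high": 0.35, "medium": 0.61, "low": 0.71}
AU = {"multiple": 0.45, "single": 0.56, "none": 0.704}

def p_exploit(av, ac, au):
    exploitability = 20 * AV[av] * AC[ac] * AU[au]   # max ~= 10.0
    return min(exploitability / 10.0, 1.0)

print(p_exploit("network", "low", "none"))   # ~1.0: remote and easy to exploit
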
Teusner, R., Matthies, C., Giese, P..  2017.  Should I Bug You? Identifying Domain Experts in Software Projects Using Code Complexity Metrics. 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS). :418–425.
In any sufficiently complex software system there are experts who have a deeper understanding of parts of the system than others. However, it is not always clear who these experts are and which particular parts of the system they can provide help with. We propose a framework to elicit the expertise of developers and recommend experts by analyzing complexity measures over time. Furthermore, teams can detect those parts of the software for which currently no, or only few, experts exist and take preventive actions to keep the collective code knowledge and ownership high. We employed the developed approach at a medium-sized company. The results were evaluated with a survey comparing the perceived and the computed expertise of developers. We show that aggregated code metrics can be used to identify experts for different software components. The identified experts were rated as acceptable candidates by developers in over 90% of all cases.
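The aggregation scheme is the authors'; as a generic hedged sketch of the idea, complexity deltas mined from commits can be attributed to authors and ranked per component (the data and scoring rule below are invented):

# Hedged sketch (invented commit data and scoring rule, not the paper's
# framework): attribute complexity changes to authors and rank candidate
# experts per software component.
from collections import defaultdict

commits = [("alice", "parser", +12), ("bob", "parser", +3),
           ("alice", "parser", -4), ("bob", "ui", +9), ("carol", "ui", +1)]

work = defaultdict(float)
for author, component, delta in commits:
    work[(author, component)] += abs(delta)   # count work done, not growth

def experts(component, top=2):
    scores = {a: s for (a, c), s in work.items() if c == component}
    return sorted(scores, key=scores.get, reverse=True)[:top]

print("parser experts:", experts("parser"))   # -> ['alice', 'bob']
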
Gaikwad, V. S., Gandle, K. S..  2017.  Ideal complexity cryptosystem with high privacy data service for cloud databases. 2017 1st International Conference on Intelligent Systems and Information Management (ICISIM). :267–270.

Data storage in the cloud should come with high safety and confidentiality. It is the accountability of the cloud service provider to guarantee the availability and security of client data. Various alternatives exist for storage services, but confidentiality and complexity solutions for database as a service are still not satisfactory. The proposed system gives an alternative solution for database as a service that integrates the benefits of different services along with advanced encryption techniques. It yields the possibility of applying concurrency on encrypted data. This alternative supports connecting dispersed clients while eliminating the intermediate proxy, by which simplicity can be acquired. The performance of the proposed system is evaluated on the basis of theoretical analyses.

2018-08-23
Arellanes, D., Lau, K..  2017.  DX-MAN: A Platform for Total Compositionality in Service-Oriented Architectures. 2017 IEEE 7th International Symposium on Cloud and Service Computing (SC2). :283–286.

Current software platforms for service composition are based on orchestration, choreography or hierarchical orchestration. However, such approaches support only partial compositionality, thereby increasing the complexity of SOA development. In this paper, we propose DX-MAN, a platform that supports total compositionality. We describe the main concepts of DX-MAN with the help of a case study based on the popular MusicCorp.

Salah, H., Eltoweissy, M..  2017.  Towards Collaborative Trust Management. 2017 IEEE 3rd International Conference on Collaboration and Internet Computing (CIC). :198–208.

Current technologies, including cloud computing, social networking, mobile applications and crowd and synthetic intelligence, coupled with the explosion in storage and processing power, are evolving massive-scale marketplaces for a wide variety of resources and services. They are also enabling unprecedented forms and levels of collaboration among human and machine entities. In this new era, trust remains the keystone of success in any relationship between two or more parties. A primary challenge is to establish and manage trust in environments where massive numbers of consumers, providers and brokers are largely autonomous with vastly diverse requirements, capabilities, and trust profiles. Most contemporary trust management solutions are oblivious to diversities in trustors' requirements and contexts, utilize direct or indirect experiences as the only form of trust computation, employ hardcoded trust computations and marginally consider collaboration in trust management. We surmise the need for a reference architecture for trust management to guide the development of a wide spectrum of trust management systems. In our previous work, we presented a preliminary reference architecture for trust management which provides customizable and reconfigurable trust management operations to accommodate varying levels of diversity and trust personalization. In this paper, we present a comprehensive taxonomy for trust management and extend our reference architecture to feature collaboration as a first-class object. Our goal is to promote the development of new collaborative trust management systems, where various trust management operations would involve collaborating entities. Using the proposed architecture, we implemented a collaborative personalized trust management system. Simulation results demonstrate the effectiveness and efficiency of our system.

2018-06-20
Shafiq, Z., Liu, A..  2017.  A graph theoretic approach to fast and accurate malware detection. 2017 IFIP Networking Conference (IFIP Networking) and Workshops. :1–9.

Due to the unavailability of signatures for previously unknown malware, non-signature malware detection schemes typically rely on analyzing program behavior. Prior behavior-based non-signature malware detection schemes are either easily evadable by obfuscation or are very inefficient in terms of storage space and detection time. In this paper, we propose GZero, a graph-theoretic approach for fast and accurate non-signature malware detection at end hosts. GZero is effective while being efficient in terms of both storage space and detection time. We conducted experiments on a large set of both benign software and malware. Our results show that GZero achieves more than a 99% detection rate and a false positive rate of less than 1%, with less than 1 second of average scan time per program, and is relatively robust to obfuscation attacks. Due to its low overheads, GZero can complement existing malware detection solutions at end hosts.