Biblio
Smartphones nowadays are customized to help users with their daily tasks, such as storing important data or making transactions over the internet. Given the sensitivity of the data involved, authentication mechanisms such as fixed-text passwords, PINs, or unlock patterns are used to safeguard these data against intruders. However, these mechanisms are exposed to security threats such as cracking or shoulder surfing. To enhance mobile and information security, this study aimed to develop free-form handwriting gesture user authentication for smartphones. It also sought to discover the static and dynamic handwriting features that significantly influence the recognition of a legitimate user. In the experiment, thirty (30) individuals were asked to draw or swipe their desired free-form security pattern with a fingertip ten (10) times. These patterns were then cleaned and processed, and seven (7) static and eleven (11) dynamic handwriting features were extracted from them. Using the Neural Network classifier of the RapidMiner data mining tool, these features were used to develop, validate, and test a model for user authentication. The model showed a very promising recognition rate of 96.67%. The model was further tested through a prototype and still gave a very satisfactory result.
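To make the pipeline concrete, here is a minimal Python sketch of how static and dynamic features might be extracted from fingertip traces and fed to a neural network classifier. The specific features and the scikit-learn model are illustrative assumptions; the paper's seven static and eleven dynamic features and its RapidMiner configuration are not detailed in the abstract.

```python
# Minimal sketch of gesture-feature extraction for free-form pattern
# authentication, assuming each sample is a list of (x, y, t) touch points.
import math
from sklearn.neural_network import MLPClassifier

def extract_features(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    ts = [p[2] for p in points]
    # Static features: overall geometry of the drawn pattern.
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    path_len = sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
                   for i in range(len(points) - 1))
    # Dynamic features: timing and speed of the drawing motion.
    duration = ts[-1] - ts[0]
    avg_speed = path_len / duration if duration > 0 else 0.0
    return [width, height, path_len, duration, avg_speed]

def train_authenticator(samples, labels):
    """Train a small neural network on feature vectors; labels are user IDs."""
    X = [extract_features(s) for s in samples]
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
    clf.fit(X, labels)
    return clf
```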
More and more systems use mobile devices to perform sensing tasks, which increases the risk of leaking personal privacy and data. Data hiding is one of the important means of information security. Although many data hiding algorithms aim at greater hiding capacity or higher PSNR, few can control PSNR effectively while guaranteeing hiding capacity. In this paper we propose PSNR-Controllable Data Hiding (PCDH), a novel encoding scheme for data hiding with controllable PSNR based on LSB substitution. In PCDH, we use a remainder calculation to encode the hidden information and embed the secret information in the last x LSBs of every pixel. Theoretical proof shows that this method bounds the deviation of the stego image from the cover image and controls PSNR by adjusting the parameters of the remainder calculation. We then design encoding and decoding algorithms of low computational complexity. Experimental results show that PCDH can keep PSNR within a given range while ensuring high hiding capacity. In addition, it resists some steganalysis techniques well. Compared with other algorithms, PCDH achieves a better trade-off among PSNR, hiding capacity, and computational complexity.
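A minimal sketch of the underlying x-LSB substitution and the PSNR measurement it is traded against, assuming 8-bit grayscale pixels. PCDH's actual remainder-based encoding and its PSNR-control parameters are not specified in the abstract, so the embed function below is generic LSB replacement.

```python
# Hide a bit string in the last x LSBs of each pixel, then measure PSNR.
import math

def embed(pixels, bits, x):
    """Replace the x least significant bits (the remainder mod 2^x) of each
    pixel with the next x bits of the secret message."""
    stego, i = [], 0
    for p in pixels:
        if i < len(bits):
            chunk = int(bits[i:i + x].ljust(x, "0"), 2)
            p = p - (p % (1 << x)) + chunk
            i += x
        stego.append(p)
    return stego

def psnr(cover, stego):
    mse = sum((c - s) ** 2 for c, s in zip(cover, stego)) / len(cover)
    return float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)

cover = [120, 121, 119, 200, 201, 202]
stego = embed(cover, "101101", 2)
print(psnr(cover, stego))  # smaller x -> smaller distortion -> higher PSNR
```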
South African lead-users' predilection to tinker with and innovate mobile banking services is driven by various constructs. Advanced technologies have made mobile banking services easy to use, attractive, and beneficial. While this is welcome news to many, there are concerns that when lead-users tinker with these services, information security risks are exacerbated. The aim of this article is to present an insightful understanding of the demand-side predilections of South Africa's lead-users in such contexts. We assimilate the theory of Usage Control (UCON), the Technology Acceptance Model (TAM), and the Theory of Perceived Risk (TPR) to explain predilections over technology. We demonstrate that constructs derived from these theories can explain the general demand-side predilection to tinker with mobile banking services, and we test this with a quantitative approach. Data were collected from a sample of South African banking lead-users operating in the Gauteng province and analysed with the help of a software package. We found, unexpectedly, that lead-users' predilection to tinker with mobile banking services was inhibited by perceived risk. Moreover, male lead-users were more dominant in the tinkering process than female lead-users. The implications of this are discussed and explained in the main body of the work.
Searchable encryption is a developing information security technique that enables users to search over encrypted data by keyword without first decrypting it. Over the last decade, many researchers have worked in the field of searchable encryption and have proposed a series of efficient search schemes over encrypted cloud data. It is time to survey this field and derive a comprehensive framework from the individual contributions. This paper focuses on searchable encryption schemes in the cloud. We first summarize the general model and threat model of searchable encryption schemes, and then present the privacy-preserving issues in these schemes. In addition, we compare the efficiency and security of semantic search and preferred search in detail. Finally, we discuss some open issues and future research challenges.
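As a concrete illustration of the general model the survey summarizes, the following Python sketch shows a bare-bones symmetric searchable index: the data owner indexes documents under keyword tokens, and the server matches search trapdoors without ever seeing plaintext keywords. The HMAC-based construction and key handling are illustrative assumptions, not any particular surveyed scheme.

```python
# Keyword tokens via HMAC: the server stores and matches opaque tokens only.
import hmac, hashlib

KEY = b"owner-secret-key"  # shared between data owner and authorized users

def token(keyword):
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).hexdigest()

# Data owner: index documents by keyword tokens before outsourcing.
index = {}
for doc_id, words in {"doc1": ["cloud", "privacy"], "doc2": ["cloud"]}.items():
    for w in words:
        index.setdefault(token(w), []).append(doc_id)

# User: send a trapdoor; the server matches tokens, never learning "cloud".
print(index.get(token("cloud"), []))  # ['doc1', 'doc2']
```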
With the advancement of technology, the world has not only become a better place to live in but has also lost the privacy and security of shared data. Information in any form is never safe from unauthorized access. In this paper we propose an approach for protecting data using visual cryptography. Text displayed in sixteen-segment form is broken into two shares that reveal no information about the original images. This process yields satisfactory results in statistical and structural tests.
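A minimal Python sketch of a two-share splitting, assuming digital XOR reconstruction rather than the physical stacking of classical visual cryptography; the paper's exact share construction for sixteen-segment text is not described in the abstract.

```python
# Split a binary (black/white) bitmap into two noise-like shares.
import random

def make_shares(secret_bits):
    share1 = [random.randint(0, 1) for _ in secret_bits]   # pure noise
    share2 = [s ^ r for s, r in zip(secret_bits, share1)]  # also noise alone
    return share1, share2

def reconstruct(share1, share2):
    return [a ^ b for a, b in zip(share1, share2)]

secret = [1, 0, 1, 1, 0, 0, 1, 0]  # one row of a segment-display bitmap
s1, s2 = make_shares(secret)
assert reconstruct(s1, s2) == secret  # either share alone reveals nothing
```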
The Polish Power System is becoming increasingly dependent on information and communication technologies, which exposes it to cyberattacks, including evolved and highly sophisticated threats such as Advanced Persistent Threats and Distributed Denial of Service attacks. The most exposed components are SCADA systems in substations and Distributed Control Systems in power plants. In addressing this situation, the usual cyber security technologies are a prerequisite, but not sufficient. With the rapidly evolving cyber threat landscape, partnerships and information sharing have become critical. However, anonymity concerns may make the relevant stakeholders reluctant to exchange sensitive information about security incidents. This paper presents a multi-agent architecture for the Polish Power System that addresses these anonymity concerns.
During an advanced persistent threat (APT), an attacker group usually establishes more than one C&C server, and these C&C servers change their domain names and corresponding IP addresses over time to evade anti-virus software and intrusion prevention systems. For this reason, discovering and catching C&C sites is a major challenge in information security. Based on our observations and deductions, malware tends to contain a fixed user-agent string, and the connection behavior generated by malware differs from that of a benign service or a normal user. This paper proposes a new method that combines filtering and clustering to detect C&C servers with a relatively high coverage rate. The experiments reveal that the proposed method can successfully detect C&C servers and that it provides an important clue for detecting APTs.
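The filtering stage might look like the following Python sketch, which groups outbound requests by user-agent string and flags agents that appear on few hosts yet beacon at machine-like regular intervals. The field names and thresholds are assumptions, and the paper's clustering stage is not reproduced.

```python
# Flag fixed user-agent strings with beacon-like connection behavior.
from collections import defaultdict
from statistics import pstdev

def suspicious_agents(requests, max_hosts=2, max_jitter=5.0):
    by_agent = defaultdict(list)  # user agent -> list of (host, timestamp)
    for host, agent, ts in requests:
        by_agent[agent].append((host, ts))
    flagged = []
    for agent, conns in by_agent.items():
        hosts = {h for h, _ in conns}
        times = sorted(t for _, t in conns)
        gaps = [b - a for a, b in zip(times, times[1:])]
        # Rare agent seen on few hosts, connecting at near-constant intervals.
        if len(hosts) <= max_hosts and len(gaps) >= 2 and pstdev(gaps) < max_jitter:
            flagged.append(agent)
    return flagged
```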
Objective: The overarching goal is to convey the concept of science of security and the contributions that a scientifically based, human factors approach can make to this interdisciplinary field. Background: Rather than a piecemeal approach to solving cybersecurity problems as they arise, the U.S. government is mounting a systematic effort to develop an approach grounded in science. Because humans play a central role in security measures, research on security-related decisions and actions grounded in principles of human information processing and decision making is crucial to this interdisciplinary effort. Method: We describe the science of security and the role that human factors can play in it, and use two examples of research in cybersecurity (detection of phishing attacks and selection of mobile applications) to illustrate the contribution of a scientific, human factors approach. Results: In these research areas, we show that systematic information-processing analyses of the decisions that users make and the actions they take provide a basis for integrating the human component of security science. Conclusion: Human factors specialists should utilize their foundation in the science of applied information processing and decision making to contribute to the science of cybersecurity.
Hashing algorithms are used extensively in information security and digital forensics applications. This paper presents an efficient parallel algorithm for hash computation. It is a modification of the SHA-1 algorithm for faster parallel implementation in applications such as digital signatures and data preservation in digital forensics. The algorithm uses a recursive hash to break the chain dependencies of the standard hash function. We discuss the theoretical foundation for the work, including the collision probability and the performance implications. The algorithm is implemented with the OpenMP API, and experiments were performed on machines with multicore processors. The results show a performance gain of more than a factor of 3 when running on the 8-core configuration of the machine.
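The structure of such a recursive hash can be sketched as follows: the input is split into blocks, the blocks are hashed independently (the parallelizable step that breaks SHA-1's chaining), and a root hash is computed over the concatenated digests. This Python illustration conveys the idea only; it is not the paper's OpenMP implementation and yields a different digest than plain SHA-1.

```python
# Block-parallel recursive hashing over SHA-1 leaves.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def recursive_sha1(data, block_size=1 << 20, workers=8):
    # Split into independent blocks; hashlib releases the GIL on large inputs,
    # so the leaf hashes can run in parallel threads.
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        digests = list(pool.map(lambda b: hashlib.sha1(b).digest(), blocks))
    return hashlib.sha1(b"".join(digests)).hexdigest()  # root hash

print(recursive_sha1(b"x" * (4 << 20)))
```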
Skype is nowadays a typical choice for VoIP service and is well known for its broad range of features, including voice calls, instant messaging, file transfer, and video conferencing. Given its wide adoption, it is essential from an ISP's viewpoint to identify Skype flows in order to optimize network performance and forecast future needs. In general, however, a host is likely to run multiple network applications simultaneously, which makes it much harder to classify each and every Skype flow in mixed traffic exactly. In particular, current techniques usually focus on host-level identification and cannot identify Skype traffic at the flow level. In this paper, we first reveal the unique sequence signatures of Skype UDP flows and then implement a practical online system named SkyTracer for precise Skype traffic identification. To the best of our knowledge, this is the first work to utilize strong sequence signatures for early identification of Skype traffic. The experimental results show that SkyTracer achieves very high accuracy in fine-grained identification of Skype traffic.
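A sequence-signature check of this kind can be sketched as follows, assuming a flow is represented as a list of UDP payloads in arrival order. The actual Skype byte patterns used by SkyTracer are not given in the abstract, so the signature below is a placeholder.

```python
# Match a (packet index, payload offset, expected bytes) sequence signature
# against the first packets of a UDP flow; placeholder byte values.
SIGNATURE = [
    (0, 2, b"\x02"),  # hypothetical marker in the first packet
    (1, 2, b"\x02"),  # hypothetical marker in the second packet
]

def matches_signature(flow_payloads):
    for pkt_idx, offset, expected in SIGNATURE:
        if pkt_idx >= len(flow_payloads):
            return False
        payload = flow_payloads[pkt_idx]
        if payload[offset:offset + len(expected)] != expected:
            return False
    return True  # early decision after inspecting only the first few packets
```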
As most modern encryption algorithms have been fully or partially broken, the world of information security is looking in new directions to protect the data it transmits. DNA computing has been identified as a possible technology for cryptography that may bring new hope for hybrid and unbreakable algorithms. Several DNA computing algorithms have been proposed for cryptography, cryptanalysis, and steganography problems, and they have proven very powerful in these areas. This paper gives an architectural framework for encryption and digital signature generation using DNA cryptography. To analyze performance, the original plaintext size, the key size, and the encryption and decryption times are examined; experiments on plaintexts with different contents are also performed to test the robustness of the program.
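The DNA-encoding step common to such schemes can be sketched as follows: bytes are mixed with a key stream and written as nucleotides under the usual mapping A=00, C=01, G=10, T=11. The key mixing shown is an illustrative assumption; the paper's full encryption and signature pipeline is not reproduced.

```python
# Encode key-mixed bytes as a DNA strand, two bits per nucleotide.
from itertools import cycle

NUC = "ACGT"  # A=00, C=01, G=10, T=11

def dna_encrypt(plaintext: bytes, key: bytes) -> str:
    out = []
    for b, k in zip(plaintext, cycle(key)):
        x = b ^ k  # simple key mixing before encoding
        for shift in (6, 4, 2, 0):
            out.append(NUC[(x >> shift) & 0b11])
    return "".join(out)

def dna_decrypt(strand: str, key: bytes) -> bytes:
    data = bytearray()
    for i in range(0, len(strand), 4):
        x = 0
        for ch in strand[i:i + 4]:
            x = (x << 2) | NUC.index(ch)
        data.append(x)
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

assert dna_decrypt(dna_encrypt(b"secret", b"key"), b"key") == b"secret"
```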
Homeland Security (HS) is a growing field of study in the U.S. today, generally covering risk management, terrorism studies, policy development, and other related topics. Information security threats to both the public and private sectors are growing in intensity, frequency, and severity, and are a very real threat to the security of the nation. While there are many models for information security education at all levels of higher education, these programs are invariably offered as technical courses of study, and such curricula are generally not well suited to HS students. As a result, information systems and cyber security principles are underrepresented in the typical HS program. The authors propose a course of study in cyber security designed to capitalize on the intellectual strengths of students in this discipline and to be consistent with the broad suite of professional needs in this discipline.
The need for cyber security professionals continues to grow, and education systems are responding in a variety of ways. The US government has weighed in with two efforts: NICE, led by NIST, and the CAE program, jointly led by NSA and DHS. Industry has unfilled needs, and the CAE program is changing to meet both NICE and industry needs. This paper analyzes these efforts and examines several critical, yet unaddressed, issues facing school programs as they adapt to new criteria and guidelines. Technical issues are easy to enumerate, yet it is the programmatic and student success factors that will define successful programs.
Complexity is ever increasing within our information environment and organisations, as interdependent dynamic relationships within sociotechnical systems result in high variety and uncertainty from a lack of information or control. A net-centric approach is a strategy to improve information value, enabling stakeholders to extend their reach to additional data sources, share Situational Awareness (SA), synchronise effort, and optimise resource use to deliver maximum (or proportionate) effect in support of goals. This paper takes a systems perspective to understand the dynamics within a net-centric information system. It presents the first stages of the Soft Systems Methodology (SSM): developing a conceptual model of the human activity system and a system dynamics model of system behaviour, which will inform future research into a net-centric approach to information security. Our model supports the net-centric hypothesis that participation in an information-sharing community extends information reach and improves organisational SA, allowing proactive action to mitigate vulnerabilities and reduce overall risk within the community. The system dynamics model gives organisations tools to better understand the value of a net-centric approach, a framework to determine their own maturity, and a means to evaluate strategic relationships with collaborative communities.
This paper presents a novel electrocardiogram (ECG) compression method for e-health applications, adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: in the first stage, AFD performs efficient lossy compression with high fidelity; in the second, SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated on 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an average compression ratio (CR) of 17.6-44.5 and a percentage root-mean-square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing compression performance into a previously unexploited region. As such, this paper provides an attractive candidate ECG compression method for pervasive e-health applications.
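For reference, the two metrics quoted above are conventionally defined as follows; the paper's exact normalization for PRD may differ, and this is the common convention for the MIT-BIH database:

```latex
% x[n] is the original ECG signal, \hat{x}[n] its reconstruction, N the length.
\[
  \mathrm{CR} = \frac{\text{bits in original record}}{\text{bits in compressed record}},
  \qquad
  \mathrm{PRD}\,(\%) = 100\,
  \sqrt{\frac{\sum_{n=1}^{N}\bigl(x[n]-\hat{x}[n]\bigr)^{2}}
             {\sum_{n=1}^{N} x[n]^{2}}}.
\]
```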
One of the primary challenges when developing or implementing a security framework for any particular environment is determining the efficacy of the implementation. Does the implementation address all of the potential vulnerabilities in the environment, or are there still unaddressed issues? Further, if there is a choice between two frameworks, what objective measure can be used to compare them? To address these questions, we propose using attack graph analysis to map the attack surface of the environment and identify the most likely avenues of attack. We show that with this technique we can quantify the baseline state of an application and compare it to the attack surface after implementation of a security framework, while simultaneously allowing comparison between frameworks in the same environment or of a single framework across multiple applications.
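A minimal sketch of this comparison using the networkx library: nodes are system states, edges are exploit steps, and the attack surface is summarized by the entry-to-asset paths before and after hardening. The node names and the removed edge are illustrative assumptions, not the paper's model.

```python
# Compare attack surfaces as counts of entry-to-asset attack paths.
import networkx as nx

def attack_paths(graph, entry, target):
    return list(nx.all_simple_paths(graph, entry, target))

baseline = nx.DiGraph()
baseline.add_edges_from([
    ("internet", "web_app"), ("web_app", "db"),
    ("internet", "vpn"), ("vpn", "db"),
])

hardened = baseline.copy()
hardened.remove_edge("web_app", "db")  # the framework blocks this exploit step

# Fewer entry-to-asset paths after hardening quantifies the improvement.
print(len(attack_paths(baseline, "internet", "db")),   # 2
      len(attack_paths(hardened, "internet", "db")))   # 1
```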
Summary form only given. This presentation investigates several issues in operating system security. The general problems of OS security are addressed, including why the security aspects of the OS should be considered and when a secure OS is needed. We also delve into secure OS design, focusing on covert channel analysis. The specific operating systems under consideration include Windows and Android.
Efficient encryption and decryption of data is one of the challenging aspects of modern computer science. This paper introduces a new cryptographic algorithm that achieves a higher level of security by hiding the meaning of a message in unprintable characters. The main aim is to make the encrypted message decidedly unprintable through several rounds of ASCII conversion and a cyclic mathematical function. The original message is divided into packets, and a binary matrix is formed for each packet to produce the unprintable encrypted message by bringing the ASCII value of each character below 32. Similarly, several ASCII conversions and the inverse cyclic mathematical function are used to decrypt the unprintable encrypted message. The final encrypted message, obtained after three rounds of encryption, is unprintable text, giving the algorithm a higher level of security without increasing the size of the data or losing any data.
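The output property can be illustrated with the following Python sketch, in which each key-mixed byte is split into two values below 32 so that every ciphertext character is an unprintable ASCII control character. Note that, unlike the paper's scheme, this illustrative split doubles the ciphertext size; the packet/binary-matrix construction and the cyclic function are not reproduced.

```python
# Produce ciphertext whose every byte is in the unprintable range (< 32).
from itertools import cycle

def encrypt(text: str, key: bytes) -> bytes:
    out = bytearray()
    for ch, k in zip(text.encode("ascii"), cycle(key)):
        x = ch ^ k
        out += bytes([x >> 4, x & 0x0F])  # both halves are < 32
    return bytes(out)

def decrypt(blob: bytes, key: bytes) -> str:
    data = bytearray()
    for i, k in zip(range(0, len(blob), 2), cycle(key)):
        data.append(((blob[i] << 4) | blob[i + 1]) ^ k)
    return data.decode("ascii")

c = encrypt("Attack at dawn", b"key")
assert all(b < 32 for b in c) and decrypt(c, b"key") == "Attack at dawn"
```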
This paper presents the design and implementation of an information flow tracking framework based on code rewriting to prevent sensitive information leaks in browsers, combining ideas from taint analysis and information flow analysis. Our system has two main stages. First, it abstracts the semantics of JavaScript code and converts it into a general intermediate representation based on the JavaScript abstract syntax tree. Second, the abstract intermediate representation is executed by a special taint engine to analyze tainted information flow. Our approach ensures fine-grained isolation for both the confidentiality and the integrity of information. We have implemented a proof-of-concept prototype named JSTFlow and deployed it as a browser proxy that rewrites web applications at runtime. The experimental results show that JSTFlow can guarantee the security of sensitive data and detect XSS attacks with about 3x performance overhead. Because it does not require any modifications to the target system, our system is readily deployable in practice.
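The kind of taint propagation such an engine performs can be sketched in a few lines of Python: values from untrusted sources carry a taint flag that survives string operations, and sinks reject tainted data. This models only the analysis idea, not JSTFlow's JavaScript rewriting or its intermediate representation.

```python
# Minimal taint-propagation model: source -> operations -> sink.
class Tainted(str):
    """A string value marked as originating from an untrusted source."""

def source(value):  # e.g. document.location or user input
    return Tainted(value)

def concat(a, b):   # taint propagates through string operations
    result = str(a) + str(b)
    return Tainted(result) if isinstance(a, Tainted) or isinstance(b, Tainted) else result

def sink(value):    # e.g. a network send or eval()
    if isinstance(value, Tainted):
        raise ValueError("tainted data reached a sink")
    return value

payload = concat("id=", source("attacker-controlled"))
try:
    sink(payload)
except ValueError as e:
    print(e)  # the taint flowed from source to sink and was blocked
```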
With the rapid development of information technology, information security management is ever more important. The OpenSSL security incident showed that the current hierarchical structure of security management has distinct disadvantages: software and hardware facilities need to enforce security management on their own implementations of crucial basic protocols in order to ease, to a certain extent, the security threats against those facilities. This article expounds cross-layer security management and enumerates five contributory factors behind the core problems that such management faces.
Wireless information security generates shared secret keys from reciprocal channel dynamics. Current solutions are mostly based on temporal per-frame measurements of signal strength and suffer from a low key generation rate (KGR), a large channel-probing budget, and poor secrecy if a channel does not vary significantly over time. This paper designs a cross-layer solution that measures noise-free per-symbol channel dynamics across both the time and frequency domains and derives keys from these highly fine-grained per-symbol reciprocal channel measurements. The solution has three merits: (1) the per-symbol granularity improves the volume of available uncorrelated channel measurements, and hence the KGR, by orders of magnitude over the per-frame granularity of conventional solutions; (2) it exploits subtle channel fluctuations in the frequency domain and so does not force users to move to create sufficient temporal variation, as conventional solutions require; and (3) it measures noise-free channel response, which suppresses key bit disagreement between trusted users. As a result, the proposed solution improves security performance by orders of magnitude over conventional solutions in every aspect. The performance has been evaluated both on a GNU SDR testbed and in a local GNU Radio simulator. The cross-layer solution generates a KGR of 24.07 bits per probing frame on the testbed and 19 bits in simulation, whereas conventional optimal solutions achieve a KGR of at most one or two bits per probing frame. It also has a low key bit disagreement ratio while maintaining a high entropy rate. The derived keys show strong independence, with correlation coefficients mostly below 0.05. Furthermore, it is shown empirically that any slight physical change, e.g., a small rotation of an antenna, results in fundamentally different cross-layer frequency measurements, which implies the strong secrecy and high efficiency of the proposed solution.
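The quantization step at the heart of such schemes can be sketched as follows: both users threshold their (ideally identical) reciprocal channel samples against the median to obtain matching key bits. The sample values below are fabricated for illustration, and the paper's per-symbol cross time/frequency measurement and any reconciliation step are not modeled.

```python
# Median-threshold quantization of reciprocal channel measurements.
from statistics import median

def quantize(samples):
    th = median(samples)
    return "".join("1" if s > th else "0" for s in samples)

alice = [0.91, 0.12, 0.77, 0.31, 0.88, 0.05, 0.64, 0.29]
bob   = [0.90, 0.14, 0.75, 0.33, 0.86, 0.07, 0.66, 0.27]  # reciprocal + noise

key_a, key_b = quantize(alice), quantize(bob)
print(key_a == key_b)  # True here; residual disagreements need reconciliation
```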
The growing popularity and development of data mining technologies bring serious threats to the security of individuals' sensitive information. An emerging research topic in data mining, known as privacy-preserving data mining (PPDM), has been extensively studied in recent years. The basic idea of PPDM is to modify the data in such a way that data mining algorithms can be performed effectively without compromising the security of the sensitive information contained in the data. Current studies of PPDM mainly focus on how to reduce the privacy risk brought by data mining operations, while in fact unwanted disclosure of sensitive information may also happen in the processes of data collection, data publishing, and information (i.e., data mining result) delivery. In this paper, we view the privacy issues related to data mining from a wider perspective and investigate various approaches that can help to protect sensitive information. In particular, we identify four different types of users involved in data mining applications, namely the data provider, data collector, data miner, and decision maker. For each type of user, we discuss their privacy concerns and the methods that can be adopted to protect sensitive information. We briefly introduce the basics of the related research topics, review state-of-the-art approaches, and present some preliminary thoughts on future research directions. Besides exploring the privacy-preserving approaches for each type of user, we also review game-theoretical approaches proposed for analyzing the interactions among the different users in a data mining scenario, each of whom has his own valuation of the sensitive information. By differentiating the responsibilities of the different users with respect to the security of sensitive information, we aim to provide some useful insights into the study of PPDM.
For the first time in the history of humanity, more than half of the population now lives in big cities. This scenario has raised concerns about the systems that provide basic services to citizens. Moreover, those systems now have the responsibility to empower citizens with information and value that may aid them in daily decisions about education, transport, health, and more. This environment creates a set of interconnected services that can support a whole new range of solutions, often referred to as a System of Systems. In the context of a smart city, new information security challenges arise; these concerns go beyond privacy, extending to situations where the entire environment could be affected by issues other than a mere breach of data confidentiality. This paper discusses and proposes nine security issues that can arise in a smart city environment and that encompass more than just violations of citizens' privacy.
The need to protect big data, particularly data relating to the information security (IS) maintenance (ISM) of an enterprise's IT infrastructure, is shown. Worldwide experience of addressing big data ISM issues is briefly summarized, and a big data protection problem statement is formulated. An infrastructure for big data ISM is proposed. New application areas for big data IT, once ISM issues are addressed, are listed in conclusion.