Biblio
The Chinese Remainder Theorem (CRT) is a spatial-domain method that is most often applied in watermarking, a related data-hiding technique, where it is used to improve security and imperceptibility. CRT is rarely studied in work on image steganography: steganography research focuses more on increasing imperceptibility, embedding payload, and message security, so methods such as LSB remain popular to this day. CRT and LSB share some properties, such as similar default payload capacity, and both are spatial-domain methods that can produce stego images of good imperceptibility. CRT, however, is far superior in terms of security, which is why it is also widely used in cryptographic algorithms. Two ways to increase imperceptibility in image steganography are edge detection and spread-spectrum embedding. This research proposes a combination of edge-detection techniques and spread-spectrum embedding on top of the CRT method to produce an imperceptible and secure image steganography method. The test results show that the proposed combination increases the imperceptibility of CRT-based steganography as measured by the SSIM metric.
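For intuition about the mechanics, the following minimal Python sketch shows CRT residue reconstruction plus a toy residue-parity embedding rule. It is not the paper's scheme: the modulus, the parity convention, and the nearest-value search are illustrative assumptions.

```python
# Illustrative sketch (not the paper's exact scheme): CRT reconstruction,
# and embedding one message bit per pixel by forcing the parity of the
# pixel's residue modulo m1.

def crt_reconstruct(r1, r2, m1, m2):
    """Recover x (mod m1*m2) from r1 = x % m1 and r2 = x % m2,
    assuming gcd(m1, m2) == 1 (Chinese Remainder Theorem)."""
    inv = pow(m1, -1, m2)                      # modular inverse (Python 3.8+)
    return (r1 + m1 * ((r2 - r1) * inv % m2)) % (m1 * m2)

def embed_bit(pixel, bit, m1=5):
    """Nudge the pixel to the nearest value whose residue parity is the bit."""
    for delta in range(m1):
        for cand in (pixel + delta, pixel - delta):
            if 0 <= cand <= 255 and (cand % m1) % 2 == bit:
                return cand
    return pixel

def extract_bit(pixel, m1=5):
    return (pixel % m1) % 2

stego = embed_bit(200, 1)          # 200 -> 201, since 201 % 5 == 1 (odd)
assert extract_bit(stego) == 1
assert crt_reconstruct(201 % 5, 201 % 7, 5, 7) == 201 % 35
```

The search moves a pixel by at most m1 - 1 grey levels, which is why such residue-based embedding can keep distortion, and hence imperceptibility loss, small.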
The understanding of measured jitter is improved in three ways. First, it is shown that measured jitter is governed not only by written-in jitter and the reader resolution along the cross-track direction, but also by remanence noise in the vicinity of transitions and the down-track reader resolution. Second, a novel data-analysis scheme is introduced that allows an unambiguous separation of these two contributions. Third, based on data analyses incorporating the first two findings together with micro-magnetic simulations, we identify and explain the root causes of jitter variations with write current (WC) (write field), WC overshoot amplitude (write-field rise time), and linear disk velocity measured for heat-assisted magnetic recording.
Traditional network routing protocols are highly static and single-path, which gives an attacker significant advantages. Attacks on a network fall into two kinds: active attacks and passive attacks. Existing solutions are based on replication or detection, which can deal with active attacks but are helpless against passive attacks. In this paper, we adopt the theory of network coding to fragment data in Software-Defined Networks and propose a network coding-based resilient multipath routing scheme. First, we present a new metric, the expected eavesdropping ratio, to measure resilience in the presence of passive attacks. Then, we formulate the network coding-based resilient multipath routing problem as an integer-programming optimization problem using the expected eavesdropping ratio. Since the problem is NP-hard, we design a Simulated Annealing-based algorithm to solve it efficiently. Simulation results demonstrate that the proposed algorithms improve defense performance against passive attacks by about 20% compared with baseline algorithms.
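The following hedged Python sketch shows a Simulated Annealing search over multipath selections. The simplified objective, the fraction of single-link eavesdropping scenarios in which the attacker sees at least k of the n coded fragments, is an illustrative stand-in for the paper's expected eavesdropping ratio, not its exact formulation.

```python
import math, random

def eer(selected_paths, all_links, k):
    """Simplified 'expected eavesdropping ratio': fraction of single tapped
    links that expose at least k of the n coded fragments (illustrative)."""
    bad = sum(1 for e in all_links
              if sum(e in p for p in selected_paths) >= k)
    return bad / len(all_links)

def anneal(candidates, all_links, n, k, steps=5000, t0=1.0, cooling=0.999):
    state = random.sample(candidates, n)
    cost = eer(state, all_links, k)
    best, best_cost, t = state[:], cost, t0
    for _ in range(steps):
        neighbor = state[:]
        pool = [c for c in candidates if c not in neighbor]
        if not pool:
            break
        neighbor[random.randrange(n)] = random.choice(pool)  # swap one path
        new_cost = eer(neighbor, all_links, k)
        # Accept improvements always; worse moves with Boltzmann probability.
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / t):
            state, cost = neighbor, new_cost
        if cost < best_cost:
            best, best_cost = state[:], cost
        t *= cooling
    return best, best_cost

# Toy topology: each candidate path is a set of directed links (u, v).
paths = [frozenset({(0, 1), (1, 5)}), frozenset({(0, 2), (2, 5)}),
         frozenset({(0, 3), (3, 5)}), frozenset({(0, 1), (1, 3), (3, 5)})]
links = sorted({l for p in paths for l in p})
print(anneal(paths, links, n=2, k=2))   # link-disjoint pairs reach cost 0
```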
The goal of this document is to provide knowledge of security for Industrial Control Systems (ICS), such as supervisory control and data acquisition (SCADA) systems, which are implemented in power transmission networks, power stations, power distribution grids, and other large infrastructures that affect large numbers of people and the security of nations. A distinction between IT security and ICS security is drawn to differentiate the two disciplines. To help avoid intrusion into and destruction of industrial plants, some recommendations are given for preserving their security.
IIoT devices are sourced from many different countries and contain many components, including hardware, software, and firmware. Each of these devices and components has a supply chain that can be compromised at many points, including by the manufacturer, the software libraries, the shippers, the distributors, and more.
In today's interconnected world, universities recognize the importance of protecting their information assets from internal and external threats. As potential insider threats to information security, employees are often called the weakest link, and both employees and organizations should be aware of this rising challenge. Understanding staff perception of compliance behaviour is critical for universities wanting to leverage their staff capabilities to mitigate information security risks. This research therefore seeks insight into staff perception based on factors adopted from several theories, using the proposed constructs of "perceived" practices/policies and "perceived" intention to comply. Drawing on General Deterrence Theory, Protection Motivation Theory, the Theory of Planned Behaviour, and Information Reinforcement, within the context of Palestinian universities, this paper integrates staff awareness of Information Security Policy (ISP) countermeasures as antecedents to "perceived" influencing factors (perceived sanctions, perceived rewards, perceived coping appraisal, and perceived information reinforcement). The empirical study follows a quantitative research approach, using a survey as the data collection method and questionnaires as the research instrument. Partial least squares structural equation modelling is used to inspect the reliability and validity of the measurement model and to test the hypotheses of the structural model. The research covers ISP awareness among staff and seeks to assert that information security is the responsibility of all academic and administrative staff from all departments. Overall, our pilot study findings seem promising, and we found strong support for our theoretical model.
Cloud computing is an important part of modern technology. The usefulness of the cloud is increasing day by day, and more and more security problems are arising along with it. Two of the major threats to the cloud are improper authentication and multi-tenancy. According to specialists, multi-tenancy has both pros and cons. Security protocols are available, but it is difficult to claim that these protocols are perfect and ensure complete protection. The purpose of this paper is to propose an integrated model that provides better cloud security for authentication and multi-tenancy. Multi-tenancy means sharing of resources and virtualization among clients; since it allows multiple users to access the same resources simultaneously, there is a high probability of confidential data being accessed without proper privileges. Our model includes the Kerberos authentication protocol to enhance authentication security. During our research on Kerberos we found some flaws in its encryption method, which have been noted in a couple of IEEE conference papers; considering this complication, we have chosen Elliptic Curve Cryptography. To mitigate the risks arising from multi-tenancy, we also propose a Resource Allocation Manager Unit, a Control Database, and a Resource Allocation Map; this part of the model manages resource allocation for the users.
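As a minimal sketch of the elliptic-curve primitive involved, the following Python example performs an ECDH key agreement with the pyca/cryptography package; wiring such a key into the Kerberos ticket exchange, as the paper proposes, is not shown, and the curve and HKDF label are illustrative choices.

```python
# Minimal ECDH key-agreement sketch with the pyca/cryptography package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an ephemeral key pair on the P-256 curve.
client_priv = ec.generate_private_key(ec.SECP256R1())
server_priv = ec.generate_private_key(ec.SECP256R1())

# After exchanging public keys, both sides derive the same shared secret.
client_shared = client_priv.exchange(ec.ECDH(), server_priv.public_key())
server_shared = server_priv.exchange(ec.ECDH(), client_priv.public_key())
assert client_shared == server_shared

# Stretch the raw secret into a 256-bit session key (label is illustrative).
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"kerberos-session").derive(client_shared)
```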
In the future, Highly Automated Vehicles (HAVs) in mixed traffic will have to resolve interactions with human-operated traffic. A particular problem for HAVs is the detection of human states that influence safety, critical decisions, and human driving behavior. We demonstrate the value proposition of neurophysiological sensors and driver models for optimizing the performance of HAVs under safety constraints in mixed-traffic applications.
Sybil attacks, wherein a network is subverted by forging node identities, remain an open issue in wireless sensor networks (WSNs). This paper proposes a scheme, called Location and Communication ID (LCID) based detection, which employs the residual energy, communication ID, and location information of sensor nodes to prevent Sybil attacks. Moreover, LCID takes into account the resource-constrained nature of WSNs and enhances energy conservation through hierarchical routing. Sybil nodes are purged before cluster formation to ensure that only legitimate nodes participate in clustering and data communication. Cluster-head (CH) selection is based on the average energy of the entire network to load-balance energy consumption: LCID selects a node as CH only if its residual energy is greater than the average network energy. Furthermore, the workload of CHs is distributed equally among sensor nodes; a CH, once selected, cannot be selected again for 1/p rounds, where p is the CH selection probability. Simulation results demonstrate that, compared to an eminent scheme, LCID achieves a higher Sybil attack detection ratio, longer network lifetime, higher packet reception rate at the base station (BS), lower energy consumption, and lower packet loss ratio.
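A hedged sketch of the CH-selection rule described above follows: a node is eligible if its residual energy exceeds the network average and it has not served as CH within the last 1/p rounds. Field names are illustrative, and the Sybil filtering (ID/location checks) is assumed to have happened beforehand.

```python
import random

P = 0.1                      # CH selection probability -> cooldown of 1/P rounds
COOLDOWN = int(1 / P)

def select_cluster_heads(nodes, current_round):
    avg_energy = sum(n["energy"] for n in nodes) / len(nodes)
    heads = []
    for n in nodes:
        rested = current_round - n["last_ch_round"] >= COOLDOWN
        if n["energy"] > avg_energy and rested and random.random() < P:
            n["last_ch_round"] = current_round   # start the cooldown
            heads.append(n)
    return heads

nodes = [{"id": i, "energy": random.uniform(0.2, 1.0),
          "last_ch_round": -COOLDOWN} for i in range(100)]
print([n["id"] for n in select_cluster_heads(nodes, current_round=0)])
```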
The confidentiality of data stored in embedded and handheld devices has become an urgent necessity more than ever before. Encrypting sensitive data is a well-known technique for preserving confidentiality, but it comes at a cost that can heavily impact a device's processing resources. The multicore processors that equip current embedded devices open a new opportunity to enhance data confidentiality while maintaining suitable device performance. Encrypting the complete storage area, known as Full Disk Encryption (FDE), can still be challenging, especially with newly emerging massive storage systems. Alternatively, since the most sensitive user data resides in persistent databases, it is more efficient to focus on securing SQLite databases through encryption, SQLite being the most common RDBMS in handheld and embedded systems. This paper addresses the problem of ensuring data protection in embedded and mobile devices while maintaining suitable device performance by mitigating the impact of encryption. We present a design for a parallel database encryption system, called SQLite-XTS, which encrypts data stored in databases transparently on-the-fly without any user intervention. To maintain proper device performance, the system takes advantage of the commodity multicore processors available in most embedded and mobile devices.
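For illustration, here is a hedged Python sketch of XTS-mode encryption of a single database page, the primitive that the name SQLite-XTS suggests; the transparent SQLite hooks and the multicore work-splitting of the proposed system are not shown, and the use of the page number as tweak is an assumption.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)        # XTS uses two AES-256 keys (64 bytes total)

def encrypt_page(page_no: int, plaintext: bytes) -> bytes:
    # The page number serves as the 128-bit XTS tweak, so identical
    # plaintext encrypts differently on different pages.
    tweak = page_no.to_bytes(16, "little")
    enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return enc.update(plaintext) + enc.finalize()

def decrypt_page(page_no: int, ciphertext: bytes) -> bytes:
    tweak = page_no.to_bytes(16, "little")
    dec = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
    return dec.update(ciphertext) + dec.finalize()

page = b"\x00" * 4096       # a typical SQLite page size
assert decrypt_page(7, encrypt_page(7, page)) == page
```

Because each page is encrypted independently under its own tweak, pages can be dispatched to different cores in parallel, which is the performance angle the paper targets.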
Global networks such as energy grids, transportation networks, and financial IT infrastructure are crucial for the wealth of modern societies, so reliable and resilient control of these infrastructures has gained much attention in recent years. Typical approaches to ensuring their stable operation follow two contradictory paradigms: complexity-reducing and complexity-increasing measures. Whereas the former are supposed to encapsulate interdependencies and decision-making processes, and typically reduce transparency as a side effect, the latter strengthen the role of the human actor in these systems by increasing transparency to allow for well-informed decision-making. In this paper, we discuss these two paradigms and show why intra-actor conflicts arise from adding complexity and reducing transparency at the same time. We outline a research agenda to model the effect of these conflicts, using the example of energy systems and current transparency-enhancing technologies such as distributed ledger technology.
Cloud computing is the most suitable environment for collaboration among multiple organizations via its multi-tenancy architecture. However, due to the distributed management of policies within these collaborations, the policies may contain several anomalies, such as conflicts and redundancies, which can lead to both safety and availability problems. On the other hand, current cloud computing solutions do not offer verification tools for managing access control policies. In this paper, we propose a cloud policy verification service (CPVS) that facilitates users' management of their own security policies within the Openstack cloud environment. Specifically, the proposed cloud service offers a policy verification approach that dynamically chooses the adequate policy using Aspect-Oriented Finite State Machines (AO-FSM), where pointcuts and advices are used to adopt Domain-Specific Language (DSL) state-machine artifacts. The pointcuts define state patterns representing anomalies (e.g., conflicts) that may occur in a security policy, while the advices define the actions applied at the selected pointcuts to remove the anomalies. To demonstrate the efficiency of our approach, we provide time and space complexities. The approach was implemented as a middleware service within the Openstack cloud environment, and the implementation results show that the middleware can detect and resolve different policy anomalies in an efficient manner.
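A heavily simplified toy illustration of the pointcut/advice idea follows: a pointcut is a predicate matching an anomalous pattern while scanning policy rules, and an advice rewrites the policy at the match. The AO-FSM/DSL machinery of the paper is abstracted away, and the deny-overrides resolution strategy is an assumption for the example.

```python
Rule = tuple  # (subject, resource, action, effect) -- illustrative shape

def conflict_pointcut(a: Rule, b: Rule) -> bool:
    # Same subject/resource/action but contradictory effects.
    return a[:3] == b[:3] and a[3] != b[3]

def deny_overrides_advice(policy, a, b):
    # Resolve the conflict by dropping the allow rule (one common strategy).
    loser = a if a[3] == "allow" else b
    return [r for r in policy if r != loser]

def verify(policy):
    i = 0
    while i < len(policy):
        for j in range(i + 1, len(policy)):
            if conflict_pointcut(policy[i], policy[j]):
                policy = deny_overrides_advice(policy, policy[i], policy[j])
                break                 # rescan from the same position
        else:
            i += 1
    return policy

policy = [("alice", "vm1", "boot", "allow"), ("alice", "vm1", "boot", "deny")]
print(verify(policy))  # -> [('alice', 'vm1', 'boot', 'deny')]
```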
Several operational and economic factors impact the patching decisions of critical infrastructures. The constraints imposed by such factors could prevent organizations from fully remedying all of the vulnerabilities that expose their (critical) assets to risk. Therefore, an involved decision maker (e.g., a security officer) has to strategically decide how to allocate possible remediation efforts so as to minimize the inherent security risk. This, however, involves comparative judgments to prioritize risks and remediation actions. Throughout this work, security risk is quantified using the security metric Time-To-Compromise (TTC). Our main contribution is a generic TTC estimator that comparatively assesses the security posture of computer networks, taking into account interdependencies between network components, different adversary skill levels, and the characteristics of known and zero-day vulnerabilities. The presented estimator relies on a stochastic TTC model and Monte Carlo simulation (MCS) techniques to account for input data variability and inherent prediction uncertainties.
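A hedged Monte Carlo sketch of such a TTC estimate follows: the attacker traverses a chain of components, and the time to compromise each one is drawn from a distribution whose mean shrinks with attacker skill. The distributions and parameters here are illustrative placeholders, not the paper's calibrated model.

```python
import random, statistics

def sample_component_ttc(mean_days, skill):
    # Higher skill (0..1) scales the expected effort down (assumed form).
    return random.expovariate(1.0 / (mean_days * (1.0 - 0.5 * skill)))

def sample_path_ttc(path_means, skill):
    # The attack path is a chain: compromise times add up.
    return sum(sample_component_ttc(m, skill) for m in path_means)

def estimate_ttc(path_means, skill, runs=100_000):
    samples = [sample_path_ttc(path_means, skill) for _ in range(runs)]
    return statistics.mean(samples), statistics.quantiles(samples, n=100)[94]

path = [2.0, 10.0, 5.0]      # mean days per component on the attack path
mean_ttc, p95 = estimate_ttc(path, skill=0.7)
print(f"mean TTC ~ {mean_ttc:.1f} days, 95th percentile ~ {p95:.1f} days")
```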
Perpetrators utilize various network reconnaissance techniques to discover vulnerabilities and prepare their attacks. Port scanning can be leveraged to determine open ports, available services, and even the running operating systems along with their versions. Even though these techniques are effective, their aggressiveness in pursuit of information can leave an apparent sign of attack, observable by the variety of security controls deployed at the network perimeter of an organization. However, not all such attacks can be stopped, nor can the corresponding security controls defend against insiders. In this paper, we tackle the problem of reconnaissance detection with a different approach: we utilize the rich information that is kept in memory (RAM). We observe that packets sent or received stay in memory for a while. Our results show that inspecting memory for signs of attack is beneficial. Furthermore, correlating contents obtained from different memories empowers the investigation process and helps reach conclusions.
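To make the idea concrete, here is a hedged Python sketch that scans a raw memory image for remnants of IPv4/TCP headers and flags a source appearing in SYN packets to many distinct ports, classic port-scan residue. Real memory forensics requires proper carving and validation; the heuristic, the threshold, and the dump file name are all illustrative.

```python
import re, struct
from collections import defaultdict

# Heuristic: IPv4 header with version/IHL byte 0x45 and protocol byte 6 (TCP)
# nine bytes later. Coarse and prone to false positives; illustrative only.
IPV4_TCP = re.compile(rb"\x45.{8}\x06", re.DOTALL)

def scan_dump(image: bytes, threshold: int = 100):
    syn_ports = defaultdict(set)
    for m in IPV4_TCP.finditer(image):
        ip = image[m.start(): m.start() + 40]
        if len(ip) < 40:
            continue
        src = ".".join(map(str, ip[12:16]))          # IPv4 source address
        dport = struct.unpack(">H", ip[22:24])[0]    # TCP destination port
        flags = ip[33]                               # TCP flags (20-byte IP hdr)
        if flags & 0x02 and not flags & 0x10:        # SYN set, ACK clear
            syn_ports[src].add(dport)
    return {s: p for s, p in syn_ports.items() if len(p) >= threshold}

with open("memdump.raw", "rb") as f:   # hypothetical memory image
    suspects = scan_dump(f.read())
print(suspects or "no port-scan residue found")
```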
Recently, Vehicular Cloud Computing (VCC) has become an attractive solution that supports vehicles' computing and storage service requests. This computing paradigm ensures reduced energy consumption and low traffic congestion. Additionally, VCC has emerged as a promising technology that provides a virtual platform for processing data using vehicles as infrastructure or centralized data servers. However, vehicles are deployed in open environments where they are vulnerable to various types of attacks. Furthermore, traditional cryptographic algorithms fail to ensure security once their keys are compromised. To ensure a secure vehicular platform, this paper introduces decoy technology (DT) and user behavior profiling (UBP) as an alternative solution for data security, privacy, and trust in vehicular cloud servers, using a fog computing architecture. In the case of malicious behavior, our mechanism shows high efficiency by delivering decoy files in such a way that the intruder is unable to differentiate between original and decoy files.
Cooperative Intelligent Transport Systems (C-ITS) are expected to play an important role in our lives. They will improve traffic safety and revolutionize the driving experience. However, these benefits are counterbalanced by possible attacks that threaten not only the vehicle's security, but also passengers' lives. One of the most common is the Sybil attack, which is all the more dangerous because it can be the starting point of many other attacks in C-ITS. This paper proposes a distributed approach that detects Sybil attacks using traffic flow theory. The key idea is that each vehicle monitors its neighbourhood in order to detect a potential Sybil attack, by comparing the vehicle's accurately measured speed with a speed estimated through V2V communications with vehicles in the vicinity. The estimated speed is derived from the traffic-flow fundamental diagram of the portion of road where the vehicles are moving. The detection algorithm is validated through extensive simulations conducted with the well-known NS3 network simulator coupled with the SUMO traffic simulator.
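As a hedged sketch of the detection idea, the example below uses the classical Greenshields fundamental diagram, v = v_f (1 - k / k_jam): each vehicle estimates local density k from V2V beacons, derives the model-predicted speed, and raises an alarm when its own measured speed deviates too much (forged Sybil beacons inflate k and skew the estimate). All parameters are illustrative, and the paper's exact diagram may differ.

```python
V_FREE = 120.0        # free-flow speed, km/h (illustrative)
K_JAM = 150.0         # jam density, veh/km/lane (illustrative)
COMM_RANGE_KM = 0.3   # one-directional V2V range
ALARM_RATIO = 0.3     # tolerated relative deviation

def estimated_speed(num_neighbors: int, lanes: int = 2) -> float:
    # Density from beacon count over the covered road stretch.
    density = num_neighbors / (2 * COMM_RANGE_KM * lanes)   # veh/km/lane
    return max(0.0, V_FREE * (1.0 - density / K_JAM))       # Greenshields

def sybil_suspected(measured_speed: float, num_neighbors: int) -> bool:
    predicted = estimated_speed(num_neighbors)
    return abs(measured_speed - predicted) > ALARM_RATIO * V_FREE

# Moving at 95 km/h while beacons suggest near-jam density is suspicious.
print(sybil_suspected(measured_speed=95.0, num_neighbors=150))  # True
```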
Several assessment techniques and methodologies exist for analyzing the security of an application dynamically. However, they either focus on a particular product or are mainly concerned with the assessment process rather than confidence in the product's security. Most crucially, they tend to assess the security of a target application as a standalone artifact, without assessing its host infrastructure. Such assessments can undervalue the overall security posture, since the infrastructure becomes crucial when it hosts a critical application. We present an ontology-based security model that aims to provide the necessary knowledge, including network settings, application configurations, testing techniques and tools, and security metrics, to evaluate the security aptitude of a critical application in the context of its hosting infrastructure. The objective is to integrate current good practices and standards in security testing and virtualization to furnish an on-demand, test-ready virtual target infrastructure on which to execute the critical application, and to initiate a context-aware and quantifiable security assessment process in an automated manner. Furthermore, we present a security assessment architecture that shows how the ontology can be integrated into a standard process.
One method to increase bit density in magnetic memory devices is to use multi-state structures, such as a ferromagnetic nanoring with multiple domain walls (DWs), to encode information. However, there is a competition between decreasing the ring size in order to more densely pack bits and increasing it to make multiple DWs stable. This paper examines the effects of ring geometry, specifically inner and outer diameters (ODs), on the formation of 360° DWs. By sequentially increasing the strength of an applied circular magnetic field, we examine how DWs form under the applied field and whether they remain when the field is returned to zero. We examine the relationships between field strength, number of walls initially formed, and the stability of these walls at zero field for different ring geometries. We demonstrate that there is a lower limit of 200 nm to the ring diameter for the formation of any 360° DWs under an applied field, and that a high number of 360° DWs are stable at remanence only for narrow rings with large ODs.
Phishing attacks have reached record volumes in recent years, and modern phishing websites are growing in sophistication, employing diverse cloaking techniques to avoid detection by security infrastructure. In this paper, we present PhishFarm: a scalable framework for methodically testing the resilience of anti-phishing entities and browser blacklists to attackers' evasion efforts. We use PhishFarm to deploy 2,380 live phishing sites (on new, unique, and previously-unseen .com domains), each using one of six different HTTP request filters based on real phishing kits. We reported subsets of these sites to 10 distinct anti-phishing entities and measured both the occurrence and the timeliness of native blacklisting in major web browsers, to gauge the effectiveness of the protection ultimately extended to victim users and organizations. Our experiments revealed shortcomings in current infrastructure that allow some phishing sites to go unnoticed by the security community while remaining accessible to victims. We found that simple cloaking techniques representative of real-world attacks, including those based on geolocation, device type, or JavaScript, were effective in reducing the likelihood of blacklisting by over 55% on average. We also discovered that blacklisting did not function as intended in popular mobile browsers (Chrome, Safari, and Firefox), which left users of these browsers particularly vulnerable to phishing attacks. Following disclosure of our findings, anti-phishing entities are now better able to detect and mitigate several cloaking techniques (including those that target mobile users), and blacklisting has become more consistent between desktop and mobile platforms, but work remains for anti-phishing entities to ensure that users are adequately protected. Our PhishFarm framework is designed for continuous monitoring of the ecosystem and can be extended to test future state-of-the-art evasion techniques used by malicious websites.
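For intuition, here is a hedged sketch of the kind of server-side request filter the study emulates: content is served only to requests that resemble the intended victim population, so blacklist crawlers coming from elsewhere see an error instead. The header checks, country value, and responses are illustrative, not the filters from the actual phishing kits studied.

```python
MOBILE_HINTS = ("iPhone", "Android", "Mobile")
TARGET_COUNTRY = "US"   # assumed geolocation of the victim population

def serve_content(user_agent: str, geo_country: str) -> str:
    # Cloaking decision: mobile victims in the targeted region see the page;
    # everyone else (including security crawlers) gets a 404.
    is_mobile = any(h in user_agent for h in MOBILE_HINTS)
    if is_mobile and geo_country == TARGET_COUNTRY:
        return "200 deceptive-page"
    return "404 not-found"

print(serve_content("Mozilla/5.0 (iPhone; ...)", "US"))   # 200
print(serve_content("Googlebot/2.1", "US"))               # 404
```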
Unlike faults in classical systems, faults in Cyber-Physical Systems are often caused by the system's interaction with its physical environment and social context, rendering these faults harder to diagnose. To complicate matters further, knowledge about the behavior and failure modes of a system is often scattered across different models. We show how three of those models, namely attack trees, fault trees, and timed failure propagation graphs, can be converted into Halpern-Pearl causal models, combined into a single holistic causal model, and analyzed with actual-causality reasoning to detect and explain unwanted events. Halpern-Pearl models have several advantages over their source models: in particular, they allow for modeling preemption, consider the non-occurrence of events, and can incorporate additional domain knowledge. Furthermore, such holistic models allow for analysis across model boundaries, enabling detection and explanation of events that lie beyond any single model. Our contribution delineates a semi-automatic process to (1) convert different models into Halpern-Pearl causal models, (2) combine these models into a single holistic model, and (3) reason about system failures. We illustrate our approach with an Unmanned Aerial Vehicle case study.
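The following toy Python sketch runs a heavily simplified but-for test over a structural causal model; it omits the contingency sets of the full Halpern-Pearl definition, and the UAV-flavored variables are invented for illustration.

```python
# Structural equations over boolean variables, listed in causal order.
EQUATIONS = {
    "pos_corrupted": lambda v: v["gps_spoofed"] or v["sensor_fault"],
    "crash":         lambda v: v["pos_corrupted"] and not v["failsafe_on"],
}

def evaluate(exogenous, interventions=None):
    """Compute all variables, holding intervened variables fixed."""
    v = dict(exogenous, **(interventions or {}))
    for var, eq in EQUATIONS.items():
        if var not in (interventions or {}):
            v[var] = eq(v)
    return v

def but_for_cause(candidate, effect, exogenous):
    actual = evaluate(exogenous)
    if not (actual[candidate] and actual[effect]):
        return False                     # both must actually occur
    flipped = evaluate(exogenous, {candidate: not actual[candidate]})
    return not flipped[effect]           # effect vanishes without the cause

world = {"gps_spoofed": True, "sensor_fault": False, "failsafe_on": False}
print(but_for_cause("gps_spoofed", "crash", world))   # True
print(but_for_cause("sensor_fault", "crash", world))  # False (did not occur)
```

The full Halpern-Pearl definition additionally searches over contingencies (freezing other variables at alternate values), which is what lets it handle preemption, one of the advantages the paper highlights.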
Older adults (65+) are becoming primary users of emerging smart systems, especially in health care. However, these technologies are often not designed for older users and can pose serious privacy and security concerns due to their novelty, complexity, and propensity to collect and communicate vast amounts of sensitive information. Efforts to address such concerns must build on an in-depth understanding of older adults' perceptions of and preferences about data privacy and security for these technologies, and must account for variance in physical and cognitive abilities. In semi-structured interviews with 46 older adults, we identified a range of complex privacy and security attitudes and needs specific to this population, along with common threat models, misconceptions, and mitigation strategies. Our work adds depth to current models of how older adults' limited technical knowledge, experience, and age-related declines in ability amplify vulnerability to certain risks; we found that health, living situation, and finances play a notable role as well. We also found that older adults often experience usability issues or technical uncertainties in mitigating those risks, and that managing privacy and security concerns frequently consists of limiting or avoiding technology use. We recommend educational approaches and usable technical protections that build on seniors' preferences.
Privacy preservation has recently become the most prominent concern in big data analytics and cloud computing. Every organization collects personal data from users, actively or passively, and publishing this data for research and other analytics without removing Personally Identifiable Information (PII) leads to privacy breaches. Existing anonymization techniques fail to maintain the balance between data privacy and data utility. To provide a trade-off between user privacy and data utility, a Mondrian-based k-anonymity approach is proposed; to protect the privacy of high-dimensional data, a Deep Neural Network (DNN) based framework is proposed as well. Experimental results show that the proposed approach mitigates information loss without compromising privacy.
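A hedged sketch of greedy Mondrian k-anonymity over numeric quasi-identifiers follows (the DNN component for high-dimensional data is not shown): recursively split the widest attribute at its median while both halves keep at least k rows, then generalize each final partition to attribute ranges.

```python
def mondrian(rows, qi_indices, k):
    def spans(part):
        return {i: max(r[i] for r in part) - min(r[i] for r in part)
                for i in qi_indices}

    def split(part):
        # Try attributes from widest span down; cut at the median if allowed.
        for i in sorted(spans(part), key=spans(part).get, reverse=True):
            part = sorted(part, key=lambda r: r[i])
            mid = len(part) // 2
            left, right = part[:mid], part[mid:]
            if len(left) >= k and len(right) >= k:
                return split(left) + split(right)
        return [part]                      # no allowable cut: leaf partition

    anonymized = []
    for part in split(rows):
        ranges = {i: (min(r[i] for r in part), max(r[i] for r in part))
                  for i in qi_indices}     # generalize QIs to (min, max)
        for r in part:
            anonymized.append([ranges[i] if i in qi_indices else r[i]
                               for i in range(len(r))])
    return anonymized

rows = [[25, 47906, "flu"], [27, 47906, "cold"], [41, 47907, "flu"],
        [43, 47907, "cold"], [52, 47908, "flu"], [55, 47908, "cold"]]
for row in mondrian(rows, qi_indices=(0, 1), k=2):
    print(row)   # every (age, zip) range now covers at least 2 records
```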
A disaster is an unexpected event in a system's lifetime, caused by nature or even by human error. Disaster recovery of information technology is an area of information security concerned with protecting data against unsatisfactory events. It involves a set of procedures and tools for returning an organization to a state of normality after the occurrence of a disastrous event, so organizations need to have a good disaster recovery plan in place. Many strategies exist both for traditional disaster recovery and for cloud-based disaster recovery. This paper focuses on using cloud-based disaster recovery strategies instead of traditional techniques, since cloud-based disaster recovery has proved its efficiency in restoring the continuity of services faster and at lower cost than traditional approaches. The paper introduces a proposed model for virtual private disaster recovery in the cloud based on two metrics: the recovery time objective and the recovery point objective. The proposed model has been evaluated by experts in the field of information technology, and the results show that the model addresses security and business continuity concerns and enables faster recovery from a disaster that could face an organization. The paper also reviews cloud computing services and outlines the main benefits of cloud-based disaster recovery.
In this paper, we develop a statistical framework for image steganography in which the cover and stego messages are modeled as multivariate Gaussian random variables. By minimizing the detection error of an optimal detector within the adopted statistical model, we propose a novel Gaussian embedding method. Furthermore, we extend the formulation to cost-based steganography, resulting in a universal embedding scheme that works with embedding costs as well as variance estimators. Experimental results show that the proposed approach avoids embedding in smooth regions and significantly improves the security of state-of-the-art methods such as HILL, MiPOD, and S-UNIWARD.
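For orientation, here is a hedged LaTeX sketch of the kind of optimization behind such Gaussian embedding, in MiPOD-style notation; the paper's exact formulation may differ. Pixel residuals are modeled as independent Gaussians with variances \sigma_i^2, and embedding changes pixel i by plus or minus one with probability \beta_i each:

```latex
% Minimize the detectability of the optimal detector (its deflection is
% approximated by the Fisher-information-weighted sum of squared change
% rates) subject to delivering a payload of m bits:
\[
\min_{\beta} \; \sum_i \beta_i^2 I_i,
\qquad I_i = \frac{2}{\sigma_i^4},
\qquad \text{s.t.} \quad \sum_i H(\beta_i) = m,
\]
\[
H(\beta_i) = -2\beta_i \log_2 \beta_i - (1 - 2\beta_i)\log_2(1 - 2\beta_i).
\]
```

Since I_i blows up as \sigma_i shrinks, the optimizer assigns near-zero change rates to low-variance (smooth) pixels, which is exactly the "avoids embedding in smooth regions" behavior reported above.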
Due to the importance of securing electronic transactions, many cryptographic protocols have been employed, most depending on keys distributed between the intended parties. On classical computers, the security of these protocols depends on the mathematical complexity of the encoding functions and on the length of the key. However, existing classical algorithms are fully breakable given enough computational power, which quantum machines can provide. Moving to quantum computation, the field of security shifts into a new area of cryptographic solutions: quantum cryptography. The era of quantum computers is at its beginning, and there are few practical implementations and evaluations of quantum protocols. This paper therefore describes a well-known quantum key distribution protocol, BB84, and provides a practical implementation of it on IBM QX software. The implementation showed differences between BB84's theoretically expected results and the practical results, so the paper provides a statistical analysis of the experiments, comparing the standard deviations of the results. Using the BB84 protocol, the existence of a third-party eavesdropper can be detected; thus, calculations of the probability of detecting or not detecting third-party eavesdropping are provided and again compared to the theoretical expectation. The calculations show that the greater the number of qubits, the higher the probability of detecting an eavesdropper.
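A hedged plain-Python BB84 simulation follows (not the paper's IBM QX implementation): random bases, sifting, an optional intercept-resend eavesdropper, and the error check. With n compared sifted bits, an intercept-resend eavesdropper escapes detection with probability (3/4)^n, which matches the paper's point that more qubits raise the detection probability.

```python
import random

def bb84(n_qubits=256, eve=False):
    alice_bits  = [random.randint(0, 1) for _ in range(n_qubits)]
    alice_bases = [random.randint(0, 1) for _ in range(n_qubits)]  # 0=Z, 1=X
    bob_bases   = [random.randint(0, 1) for _ in range(n_qubits)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eve:  # intercept-resend: Eve measures in a random basis
            e_basis = random.randint(0, 1)
            bit = bit if e_basis == a_basis else random.randint(0, 1)
            a_basis = e_basis        # the resent qubit carries Eve's basis
        # Bob's result is definite only if his basis matches the state's.
        bob_bits.append(bit if b_basis == a_basis else random.randint(0, 1))

    # Sifting: keep positions where Alice's and Bob's bases agree.
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(a != b for a, b in sifted)
    return len(sifted), errors

print("no Eve  :", bb84(eve=False))   # errors should be 0
print("with Eve:", bb84(eve=True))    # ~25% of sifted bits flip
```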



