Biblio
By applying power usage statistics from smart meters, users are able to save energy in their homes or control smart appliances via home automation systems. However, owing to security and privacy concerns, it is recommended that smart meters (SMs) should not communicate directly with smart appliances. In this paper, we propose a design for a smart meter gateway (SMGW), together with a two-phase authentication mechanism and key management scheme, to link the smart grid with smart appliances. With the placement of the SMGW, we can reduce the design complexity of SMs as well as strengthen security.
With the rapid development of the smart grid, smart meters are deployed at energy consumers' premises to collect real-time usage data. Although such a communication model can help the control center of the energy producer to improve the efficiency and reliability of electricity delivery, it also leads to some security issues. For example, this real-time data involves the customers' privacy. Attackers may violate this privacy to plan break-ins, or they may tamper with the transmitted data for their own benefit. For this reason, many data aggregation schemes have been proposed for privacy preservation. However, few of them address both data aggregation and fine-grained access control to improve data utility. In this paper, we propose a data aggregation scheme based on an attribute decision tree. Security analysis illustrates that our scheme achieves data integrity, data privacy preservation, and fine-grained data access control. Experimental results show that our scheme is more efficient than existing schemes.
Anomaly detection for cyber-security defence has garnered much attention in recent years, providing an orthogonal approach to traditional signature-based detection systems. Anomaly detection relies on building probability models of normal computer network behaviour and detecting deviations from the model. Most data sets used for cyber-security have a mix of user-driven events and automated network events, which most often appears as polling behaviour. Separating these automated events from those caused by human activity is essential to building good statistical models for anomaly detection. This article presents a changepoint detection framework for identifying automated network events appearing as periodic subsequences of event times. The opening event of each subsequence is interpreted as a human action which then generates an automated, periodic process. Difficulties arising from the presence of duplicate and missing data are addressed. The methodology is demonstrated using authentication data from Los Alamos National Laboratory's enterprise computer network.
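As a rough illustration of separating automated polling from human-driven events (this is not the paper's changepoint framework; the function name and the coefficient-of-variation threshold are assumptions made here for the sketch):

```python
import numpy as np

def looks_automated(event_times, cv_threshold=0.1):
    """Heuristic: a subsequence of event times is flagged as automated/periodic
    when its inter-arrival gaps are nearly constant (low coefficient of variation)."""
    gaps = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    if len(gaps) < 3:
        return False  # too few events to judge periodicity
    cv = gaps.std() / gaps.mean()  # coefficient of variation of the gaps
    return cv < cv_threshold

# Example: 60-second polling started by a single human action vs. irregular human activity
polling = [0, 60.2, 119.8, 180.1, 240.0, 299.9]
human   = [0, 35, 41, 200, 207, 390]
print(looks_automated(polling))  # True  -> model as an automated, periodic process
print(looks_automated(human))    # False -> keep as human-driven events
```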
We use model-based testing techniques to detect logical vulnerabilities in implementations of the Wi-Fi handshake. This reveals new fingerprinting techniques, multiple downgrade attacks, and Denial of Service (DoS) vulnerabilities. Stations use the Wi-Fi handshake to securely connect with wireless networks. In this handshake, mutually supported capabilities are determined, and fresh pairwise keys are negotiated. As a result, a proper implementation of the Wi-Fi handshake is essential in protecting all subsequent traffic. To detect the presence of erroneous behaviour, we propose a model-based technique that generates a set of representative test cases. These tests cover all states of the Wi-Fi handshake, and explore various edge cases in each state. We then treat the implementation under test as a black box, and execute all generated tests. Determining whether a failed test introduces a security weakness is done manually. We tested 12 implementations using this approach, and discovered irregularities in all of them. Our findings include fingerprinting mechanisms, DoS attacks, and downgrade attacks where an adversary can force usage of the insecure WPA-TKIP cipher. Finally, we explain how one of our downgrade attacks highlights incorrect claims made in the 802.11 standard.
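A minimal sketch of the model-based idea: enumerate states from a small handshake model, pair each with edge-case message mutations, and run every pair against the black-box implementation. The state names, mutation list, and the send_frame/observe helpers are assumptions for illustration, not the authors' tool.

```python
from itertools import product

# Toy model of Wi-Fi handshake stages and a few edge-case mutations per stage.
STATES = ["probe", "authenticate", "associate", "4way-msg1", "4way-msg3"]
MUTATIONS = ["valid", "replayed", "unexpected-order", "downgraded-cipher", "malformed-ie"]

def generate_tests():
    """Cover every (state, mutation) pair: each test drives the handshake to a state,
    injects the mutated frame, and records how the implementation reacts."""
    return [{"state": s, "mutation": m} for s, m in product(STATES, MUTATIONS)]

def run_test(test, send_frame, observe):
    """Black-box execution: send_frame/observe wrap the real device under test."""
    send_frame(test["state"], test["mutation"])
    return {"test": test, "response": observe()}

if __name__ == "__main__":
    tests = generate_tests()
    print(f"{len(tests)} representative test cases")  # 25 in this toy model
```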
This paper presents the authors' preliminary framework for the drivers of Smart Governance. The research question of this study is: what are the drivers for Smart Governance to achieve evidence-based policy-making? The framework suggests that, in order to create a smart governance model, data governance and collaborative governance are the main drivers. These pillars are supported by the legal framework, normative factors, principles and values, methods, data assets or human resources, and IT infrastructure. These aspects will guide a real-time evaluation process at all levels of the policy cycle, towards the implementation of evidence-based policies.
In recent years, the issue of cyber security has become ever more prevalent in the analysis and design of electrical cyber-physical systems (ECPSs). In this paper, we present the TrueTime Network Library for modeling the framework of ECPSs and focus on the vulnerability analysis of ECPSs under DoS attacks. A model predictive control algorithm is used to control the ECPS under disturbances or attacks. The performance of decentralized and distributed control strategies is compared on the simulation platform. It is shown that DoS attacks on data-collecting sensors and on control-instruction actuators influence the system differently.
Deep Learning has recently become hugely popular in machine learning for its ability to solve end-to-end learning systems, in which the features and the classifiers are learned simultaneously, providing significant improvements in classification accuracy in the presence of highly-structured and large databases. Its success is due to a combination of recent algorithmic breakthroughs, increasingly powerful computers, and access to significant amounts of data. Researchers have also considered privacy implications of deep learning. Models are typically trained in a centralized manner with all the data being processed by the same training algorithm. If the data is a collection of users' private data, including habits, personal pictures, geographical positions, interests, and more, the centralized server will have access to sensitive information that could potentially be mishandled. To tackle this problem, collaborative deep learning models have recently been proposed where parties locally train their deep learning structures and only share a subset of the parameters in the attempt to keep their respective training sets private. Parameters can also be obfuscated via differential privacy (DP) to make information extraction even more challenging, as proposed by Shokri and Shmatikov at CCS'15. Unfortunately, we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper. In particular, we show that a distributed, federated, or decentralized deep learning approach is fundamentally broken and does not protect the training sets of honest participants. The attack we developed exploits the real-time nature of the learning process that allows the adversary to train a Generative Adversarial Network (GAN) that generates prototypical samples of the targeted training set that was meant to be private (the samples generated by the GAN are intended to come from the same distribution as the training data). Interestingly, we show that record-level differential privacy applied to the shared parameters of the model, as suggested in previous work, is ineffective (i.e., record-level DP is not designed to address our attack).
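For context, here is a toy numpy sketch of the selective, differentially-private parameter sharing that such collaborative schemes use and that the paper's GAN attack exploits; the fraction shared, noise scale, and model size are assumptions, and no attack is implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

def share_update(local_grad, fraction=0.1, noise_sigma=0.01):
    """Each party uploads only the largest-magnitude fraction of its gradient,
    perturbed with Gaussian noise (a crude stand-in for record-level DP)."""
    k = max(1, int(fraction * local_grad.size))
    idx = np.argsort(np.abs(local_grad))[-k:]          # top-k coordinates
    noisy = local_grad[idx] + rng.normal(0, noise_sigma, k)
    return idx, noisy

def aggregate(global_params, updates, lr=0.1):
    """Server applies every party's sparse, noisy update to the shared model."""
    for idx, vals in updates:
        global_params[idx] -= lr * vals
    return global_params

params = np.zeros(100)
updates = [share_update(rng.normal(size=100)) for _ in range(3)]  # 3 parties
params = aggregate(params, updates)
print(params.nonzero()[0].size, "coordinates touched this round")
```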
Nowadays, cloud technology is a new paradigm of computing that attracts more computer users, government agencies, and businesses. Cloud technology brings many advantages, particularly ever-present services that anyone can access over the Internet. With cloud computing, a company no longer needs the physical servers, hardware, networks, and Internet services that would otherwise support its computer systems. One of the core services offered by cloud technology is storing data in remote storage space. In the last few years, data storage has been recognized as an important problem in information technology. Cloud data storage raises a set of significant policy issues, including privacy, anonymity, security, government surveillance, telecommunication capacity, liability, and reliability, among others. Although cloud technology provides many benefits, security is the most significant issue between the customer and the cloud. Cloud computing typically serves many kinds of customers, such as academia, enterprises, and ordinary users, who have various incentives to move to the cloud. If the clients of the cloud are academic, security affects computing performance, and for this type of client cloud providers need to find a way to combine performance and security. In this paper the most significant issue is security, but viewed from a different perspective: high performance might not be as critical for other clients as it is for academia. We design an efficient, secure, and verifiable outsourcing protocol for outsourced data, and we develop an extended quadratic programming (QP) protocol for storing and outsourcing data securely. To verify the correctness of the result returned by the cloud, we validate it using the Karush-Kuhn-Tucker (KKT) conditions, which are necessary and sufficient for the optimal solution.
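A small numpy sketch of the verification idea: for a QP of the form min ½xᵀQx + cᵀx subject to Ax ≤ b, the client checks the KKT conditions on the solution (x, λ) returned by the cloud. The concrete matrices and tolerance below are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def kkt_holds(Q, c, A, b, x, lam, tol=1e-6):
    """Verify a cloud-returned solution of  min 1/2 x^T Q x + c^T x  s.t.  A x <= b."""
    stationarity = np.allclose(Q @ x + c + A.T @ lam, 0, atol=tol)
    primal_feas  = np.all(A @ x - b <= tol)
    dual_feas    = np.all(lam >= -tol)
    complementarity = np.allclose(lam * (A @ x - b), 0, atol=tol)
    return stationarity and primal_feas and dual_feas and complementarity

# Tiny example: min 1/2(x1^2 + x2^2) s.t. -x1 <= -1 (i.e. x1 >= 1); optimum is x = (1, 0).
Q = np.eye(2); c = np.zeros(2)
A = np.array([[-1.0, 0.0]]); b = np.array([-1.0])
x = np.array([1.0, 0.0]); lam = np.array([1.0])
print(kkt_holds(Q, c, A, b, x, lam))  # True: accept the outsourced result
```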
Until recently, IT security received limited attention within the scope of Process Control Systems (PCS). In the past, PCS consisted of isolated, specialized components running closed process control applications, where hardware was placed in physically secured locations and connections to remote network infrastructures were forbidden. Nowadays, industrial communications are fully exploiting the plethora of features and novel capabilities deriving from the adoption of commodity off the shelf (COTS) hardware and software. Nonetheless, the reliance on COTS for remote monitoring, configuration and maintenance also exposed PCS to significant cyber threats. In light of these issues, this paper presents the steps for the design, verification and implementation of a lightweight remote attestation protocol. The protocol is aimed at providing a secure software integrity verification scheme that can be readily integrated into existing industrial applications. The main novelty of the designed protocol is that it encapsulates key elements for the protection of both participating parties (i.e., verifier and prover) against cyber attacks. The protocol is formally verified for correctness with the help of the Scyther model checking tool. The protocol implementation and experimental results are provided for a Phoenix-Contact industrial controller, which is widely used in the automation of gas transportation networks in Romania.
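A minimal HMAC-based challenge-response sketch of software-integrity attestation, not the paper's formally verified protocol; the pre-shared key provisioning and the measurement function are assumptions made for the example.

```python
import hmac, hashlib, os

SHARED_KEY = b"pre-shared attestation key"   # assumed provisioned out of band

def measure(firmware: bytes) -> bytes:
    """Prover-side measurement: hash of the currently running firmware image."""
    return hashlib.sha256(firmware).digest()

def prover_respond(nonce: bytes, firmware: bytes) -> bytes:
    """Bind the fresh verifier nonce to the measurement so responses cannot be replayed."""
    return hmac.new(SHARED_KEY, nonce + measure(firmware), hashlib.sha256).digest()

def verifier_check(nonce: bytes, response: bytes, reference_firmware: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, nonce + measure(reference_firmware), hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = os.urandom(16)                       # fresh challenge from the verifier
good = prover_respond(nonce, b"genuine controller image")
print(verifier_check(nonce, good, b"genuine controller image"))   # True
print(verifier_check(nonce, good, b"tampered controller image"))  # False
```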
A typical IoT node operates in a specific scenario and executes a fixed program. To make nodes suitable for more scenarios, this paper introduces an IoT node that can change its program at any time and has intelligent, dynamically reconfigurable features. A transport protocol is then proposed that enables the node to work in different scenarios and execute the corresponding program. Finally, the design is implemented in Verilog and verified on an FPGA. The results show that the protocol is feasible and offers a novel approach for the IoT.
True random numbers play an important role in modern digital transactions. In order to achieve secure authentication, true random numbers are generated as security keys which are highly unpredictable and non-repetitive. True random number generators are used mainly in the field of cryptography to generate random cryptographic keys for secure data transmission. The proposed work aims at the generation of true random numbers based on a CMOS Boolean Chaotic Oscillator. As part of this work, an ASIC implementation of the CMOS Boolean Chaotic Oscillator is modelled and simulated using the Cadence Virtuoso tool based on 45 nm CMOS technology. In addition, a prototype model has been implemented with circuit components and analysed using the NI ELVIS platform. The strength of the generated random numbers was ensured by the NIST (National Institute of Standards and Technology) Test Suite, and the ASIC approach was validated by analysing parameters such as frequency, delay, and power.
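As an illustration of the kind of check the NIST suite applies, the frequency (monobit) test from SP 800-22 fits in a few lines of Python; the example bit string below is made up.

```python
import math

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test: p >= 0.01 is treated as passing."""
    n = len(bits)
    s = sum(1 if b == 1 else -1 for b in bits)   # map 0 -> -1, 1 -> +1 and sum
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

bits = [1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
p = monobit_p_value(bits)
print(f"p-value = {p:.4f}", "PASS" if p >= 0.01 else "FAIL")
```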
Ransomware has become a very significant cyber threat. The basic idea of ransomware was presented in the form of a cryptovirus in 1995. However, it was considered merely a conceptual topic for over a decade after that. In 2017, ransomware became a reality, with several famous cases of ransomware having compromised important computer systems worldwide. For example, the damage caused by CryptoLocker and WannaCry is huge, as well as global. They encrypt victims' files and require payment from the user to decrypt them. Because they utilize public key cryptography, the key for recovery cannot be found in the footprint of the ransomware on the victim's system. Therefore, once infected, the system cannot be recovered without paying for restoration. Various methods to deal with this threat have been developed by antivirus researchers and experts in network security. However, it is believed that cryptographic defense is infeasible because recovering a victim's files is computationally as difficult as breaking a public key cryptosystem. Quite recently, various approaches to protect the crypto-API of an OS from malicious codes have been proposed. Most ransomware generate encryption keys using the random number generation service provided by the victim's OS. Thus, if a user can control all random numbers generated by the system, then he/she can recover the random numbers used by the ransomware for the encryption key. In this paper, we propose a dynamic ransomware protection method that replaces the random number generator of the OS with a user-defined generator. As the proposed method causes the virus program to generate keys based on the output from the user-defined generator, it is possible to recover an infected file system by reproducing the keys the attacker used to perform the encryption.
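A toy sketch of the idea: if all "random" bytes handed to programs come from a user-controlled deterministic generator, the defender can replay the same stream and rebuild any encryption key derived from it. The class name, seed handling, and counter scheme are assumptions for illustration, not the paper's OS-level mechanism.

```python
import hashlib

class UserControlledRNG:
    """Deterministic stand-in for the OS random number service: the same seed and
    call sequence yield the same output, so key material handed out can be replayed."""
    def __init__(self, seed: bytes):
        self.seed = seed
        self.counter = 0

    def get_random_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            out += hashlib.sha256(self.seed + self.counter.to_bytes(8, "big")).digest()
            self.counter += 1
        return out[:n]

# Ransomware unknowingly derives its encryption key from the controlled generator...
infected = UserControlledRNG(seed=b"secret kept by the defender")
ransomware_key = infected.get_random_bytes(16)

# ...so the defender later reproduces the identical key stream and can decrypt.
replay = UserControlledRNG(seed=b"secret kept by the defender")
print(replay.get_random_bytes(16) == ransomware_key)  # True
```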
Recommender systems have become ubiquitous in online applications where companies personalize the user experience based on explicit or inferred user preferences. Most modern recommender systems concentrate on finding relevant items for each individual user. In this paper, we describe the problem of directed edge recommendations where the system recommends the best item that a user can gift, share or recommend to another user that he/she is connected to. We propose algorithms that utilize the preferences of both the sender and the recipient by integrating individual user preference models (e.g., based on items each user purchased for themselves) with models of sharing preferences (e.g., gift purchases for others) into the recommendation process. We compare our work to group recommender systems and social network edge labeling, showing that incorporating the task context leads to more accurate recommendations.
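A minimal sketch of blending sender and recipient preference models into one directed-edge score; the weights and the toy score dictionaries are assumptions, not the paper's algorithms.

```python
def edge_score(item, sender_pref, recipient_pref, sharing_pref, w=(0.2, 0.5, 0.3)):
    """Score an item for the directed edge sender -> recipient by mixing the sender's
    own tastes, the recipient's tastes, and past gifting/sharing behaviour."""
    ws, wr, wg = w
    return (ws * sender_pref.get(item, 0.0)
            + wr * recipient_pref.get(item, 0.0)
            + wg * sharing_pref.get(item, 0.0))

def recommend_gift(items, sender_pref, recipient_pref, sharing_pref, k=3):
    return sorted(items,
                  key=lambda i: edge_score(i, sender_pref, recipient_pref, sharing_pref),
                  reverse=True)[:k]

catalog   = ["novel", "headphones", "board game", "coffee beans"]
sender    = {"novel": 0.9, "coffee beans": 0.4}
recipient = {"board game": 0.8, "headphones": 0.6}
gifting   = {"coffee beans": 0.7, "board game": 0.5}
print(recommend_gift(catalog, sender, recipient, gifting))  # ['board game', 'headphones', 'coffee beans']
```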
The present age of digital information has produced a heterogeneous online environment that makes it a formidable task for an ordinary user to search for and locate the required online resources in a timely manner. Recommender systems were introduced to alleviate this information overload problem. However, the majority of recommendation algorithms focus on the accuracy of the recommendations, leaving out other important aspects of a good recommendation such as diversity and serendipity. This results in low coverage, and long-tail items are often left out of the recommendations as well. In this paper, we present and explore a recommendation technique that ensures that diversity, accuracy and serendipity are all factored into the recommendations. The proposed algorithm performed comparatively well against other algorithms in the literature.
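One common way to trade accuracy against diversity is maximal-marginal-relevance style re-ranking; the sketch below is a generic illustration of that idea (the similarity function, λ value, and toy data are assumptions, not the authors' algorithm).

```python
def rerank(candidates, relevance, similarity, lam=0.7, k=5):
    """Greedy MMR-style re-ranking: at each step pick the item that balances
    relevance against similarity to what has already been recommended."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def mmr(item):
            max_sim = max((similarity(item, s) for s in selected), default=0.0)
            return lam * relevance[item] - (1 - lam) * max_sim
        best = max(pool, key=mmr)
        selected.append(best)
        pool.remove(best)
    return selected

# Toy data: genre overlap as similarity, predicted rating as relevance.
genres = {"A": {"action"}, "B": {"action"}, "C": {"drama"}, "D": {"comedy"}}
relevance = {"A": 0.95, "B": 0.94, "C": 0.80, "D": 0.70}
sim = lambda x, y: len(genres[x] & genres[y]) / len(genres[x] | genres[y])
print(rerank(genres.keys(), relevance, sim, k=3))  # ['A', 'C', 'D'] -- a more diverse top-3
```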
The mitigation of insider threats against databases is a challenging problem as insiders often have legitimate access privileges to sensitive data. Therefore, conventional security mechanisms, such as authentication and access control, may be insufficient for the protection of databases against insider threats and need to be complemented with techniques that support real-time detection of access anomalies. The existing real-time anomaly detection techniques consider anomalies in references to the database entities and the amounts of accessed data. However, they are unable to track the access frequencies. According to recent security reports, an increase in the access frequency by an insider is an indicator of a potential data misuse and may be the result of malicious intents for stealing or corrupting the data. In this paper, we propose techniques for tracking users' access frequencies and detecting anomalous related activities in real-time. We present detailed algorithms for constructing accurate profiles that describe the access patterns of the database users and for matching subsequent accesses by these users to the profiles. Our methods report and log mismatches as anomalies that may need further investigation. We evaluated our techniques on the OLTP-Benchmark. The results of the evaluation indicate that our techniques are very effective in the detection of anomalies.
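A simplified sketch of the frequency-tracking idea: build a per-user profile of accesses per time window from training data, then flag windows whose counts are far above the profile. The window granularity and the threshold are assumptions, not the paper's detailed algorithms.

```python
from statistics import mean, stdev

def build_profile(training_windows):
    """training_windows: per-window access counts observed for one user (e.g. per table)."""
    return {"mean": mean(training_windows), "std": stdev(training_windows)}

def is_anomalous(window_count, profile, k=3.0):
    """Flag a window whose access count exceeds the profile by more than k standard deviations."""
    return window_count > profile["mean"] + k * profile["std"]

# Example: a clerk normally touches the customers table ~20 times per hour.
history = [18, 22, 19, 21, 20, 23, 17, 20]
profile = build_profile(history)
for hour, count in enumerate([21, 19, 250]):   # the third hour looks like bulk data theft
    if is_anomalous(count, profile):
        print(f"hour {hour}: {count} accesses -> log as anomaly for investigation")
```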
A significant advance in magnetic field management in a fully assembled superconducting radiofrequency cryomodule has been achieved and is reported here. Demagnetization of the entire cryomodule after assembly is a crucial step toward the goal of average magnetic flux density less than 0.5 μT at the location of the superconducting radio frequency cavities. An explanation of the physics of demagnetization and experimental results are presented.
The start-up value of an SRAM cell is unique, random, and unclonable as it is determined by the inherent process mismatch between transistors. These properties make SRAM an attractive circuit for generating encryption keys. The primary challenge for SRAM based key generation, however, is the poor stability when the circuit is subject to random noise, temperature and voltage changes, and device aging. Temporal majority voting (TMV) and bit masking were used in previous works to identify and store the location of unstable or marginally stable SRAM cells. However, TMV requires a long test time and significant hardware resources. In addition, the number of repetitive power-ups required to find the most stable cells is prohibitively high. To overcome the shortcomings of TMV, we propose a novel data remanence based technique to detect SRAM cells with the highest stability for reliable key generation. This approach requires only two remanence tests: writing `1' (or `0') to the entire array and momentarily shutting down the power until a few cells flip. We exploit the fact that the cells that are easily flipped are the most robust cells when written with the opposite data. The proposed method is more effective in finding the most stable cells in a large SRAM array than a TMV scheme with 1,000 power-up tests. Experimental studies show that the 256-bit key generated from a 512 kbit SRAM using the proposed data remanence method is 100% stable under different temperatures, power ramp up times, and device aging.
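A toy simulation of the selection logic, not the silicon measurements: after writing all '1's and briefly removing power, the few cells that flip to '0' are taken as the cells most strongly biased toward '0', and thus the most stable to enroll; the array size, bias model, and flip counts are made up.

```python
import random

random.seed(1)
N = 4096                                             # toy SRAM array size (bits)
bias = [random.uniform(-1, 1) for _ in range(N)]     # >0: cell prefers '1', <0: prefers '0'

def remanence_test(written_value, flip_budget=128):
    """Write `written_value` everywhere, cut power briefly, and return the indices of
    the cells that flipped -- i.e. the cells most strongly biased toward the opposite value."""
    ranked = sorted(range(N), key=lambda i: bias[i], reverse=(written_value == 0))
    return set(ranked[:flip_budget])                 # strongest opposite-bias cells flip first

stable_zero_cells = remanence_test(written_value=1)  # flipped 1 -> 0: robust '0' cells
stable_one_cells  = remanence_test(written_value=0)  # flipped 0 -> 1: robust '1' cells
key_cells = sorted(stable_zero_cells | stable_one_cells)[:256]
key_bits  = [1 if c in stable_one_cells else 0 for c in key_cells]
print(len(key_bits), "key bits enrolled from the most stable cells")
```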
Digital signatures have become a crucial requirement in communication and digital messaging. Digital messages are information that is highly vulnerable to manipulation by irresponsible parties. Digital signatures seek to maintain two security aspects that cryptography aims for: integrity and non-repudiation. This research aims to apply the MAC address together with AES-128 and SHA-2 (256 bit) for digital signatures. Using the MAC address in AES-128 can improve the security of the digital signature because of its uniqueness on every computer, which can randomize the traditional AES process. SHA-2 (256 bit) provides unique randomized strings at reasonable speed. As a result, the proposed digital signature can be implemented and works well on many platforms.
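A rough sketch of how a MAC-address-derived key, AES-128, and SHA-256 could be combined for this purpose; the abstract does not specify the exact construction, so the key derivation and CBC-mode choice here are assumptions, and the third-party cryptography package is assumed to be installed.

```python
import hashlib, os, uuid
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

def mac_derived_key() -> bytes:
    """Derive a 128-bit AES key from this machine's MAC address (unique per computer)."""
    mac = uuid.getnode().to_bytes(6, "big")
    return hashlib.sha256(mac).digest()[:16]

def sign(message: bytes) -> bytes:
    """Compute the SHA-256 digest of the message and encrypt it with AES-128 under the
    MAC-derived key, so only a holder of that key can produce or verify the tag."""
    digest = hashlib.sha256(message).digest()
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(digest) + padder.finalize()
    enc = Cipher(algorithms.AES(mac_derived_key()), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()

print(sign(b"transfer 100 credits to alice").hex())
```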
This research aims to design a hardware random number generator running on wireless identification and sensing platform (WISP), which is a lightweight Internet of things device. The accelerometer sensor on WISP is used as the entropy source. This entropy source is post-processed with de-biasing and extraction methods to provide more uniformly distributed results that can be used in the authentication protocols between a radio frequency identification (RFID) tag and an RFID reader. The obtained random number outputs are tested using the well-known NIST random number test suite. It is seen that the numbers pass all the tests in the NIST randomness test suite.
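The abstract does not name the exact de-biasing step, but von Neumann de-biasing is the classic choice; a sketch under that assumption, with the accelerometer read-out faked as a biased bit list.

```python
def von_neumann_debias(raw_bits):
    """Classic de-biasing: read bits in pairs, emit 0 for '01', 1 for '10', drop '00'/'11'.
    The output is unbiased as long as successive raw bits are independent."""
    out = []
    for b1, b2 in zip(raw_bits[::2], raw_bits[1::2]):
        if b1 != b2:
            out.append(b1)
    return out

# Stand-in for LSBs sampled from the WISP accelerometer (heavily biased toward 1 here).
raw = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1]
print(von_neumann_debias(raw))   # [1, 0, 1, 0] -- shorter, but unbiased, bit stream
```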
Mixed-Criticality Systems (MCS) are real-time systems characterized by two or more distinct levels of criticality. In MCS, it is imperative that high-critical flows meet their deadlines while low-critical flows can tolerate some delays. Sharing resources between flows in a Network-On-Chip (NoC) can lead to different unpredictable latencies and subsequently complicate the implementation of MCS in many-core architectures. This paper proposes a new virtual channel router designed for MCS deployed over NoCs. The first objective of this router is to reduce the worst-case communication latency of high-critical flows. The second aim is to improve the network use rate and reduce the communication latency for low-critical flows. The proposed router, called DAS (Double Arbiter and Switching router), jointly uses Wormhole and Store And Forward techniques for low and high-critical flows respectively. Simulations with a cycle-accurate SystemC NoC simulator show that, with a 15% network use rate, the communication delay of high-critical flows is reduced by 80% while the communication delay of low-critical flows is increased by 18% compared to usual solutions based on routers with multiple virtual channels.
Decoy Routing, the use of routers (rather than end hosts) as proxies, is a new direction in anti-censorship research. Decoy Routers (DRs), placed in Autonomous Systems, proxy traffic from users; so the adversary, e.g. a censorious government, attempts to avoid them. It is quite difficult to place DRs so the adversary cannot route around them – for example, we need the cooperation of 850 ASes to contain China alone [1]. In this paper, we consider a different approach. We begin by noting that DRs need not intercept all the network paths from a country, just those leading to Overt Destinations, i.e. unfiltered websites hosted outside the country (usually popular ones, so that client traffic to the OD does not make the censor suspicious). Our first question is – How many ASes are required for installing DRs to intercept a large fraction of paths from e.g. China to the top-n websites (as per Alexa)? How does this number grow with n? To our surprise, the same few (≈ 30) ASes intercept over 90% of paths to the top n sites worldwide, for n = 10, 20...200 and also to other destinations. Investigating further, we find that this result fits perfectly with the hierarchical model of the Internet [2]; our first contribution is to demonstrate with real paths that the number of ASes required for a world-wide DR framework is small (≈ 30). Further, censor nations' attempts to filter traffic along the paths transiting these 30 ASes will not only block their own citizens, but others residing in foreign ASes. Our second contribution in this paper is to consider the details of DR placement: not just in which ASes DRs should be placed to intercept traffic, but exactly where in each AS. We find that even with our small number of ASes, we still need a total of about 11,700 DRs. We conclude that, even though a DR system involves far fewer ASes than previously thought, it is still a major undertaking. For example, the current routers cost over 10.3 billion USD, so if Decoy Routing at line speed requires all-new hardware, the cost alone would make such a project unfeasible for most actors (but not for major nation states).
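A toy sketch of the path-coverage question: greedily pick the AS that lies on the most not-yet-intercepted client-to-OD paths until a target fraction is covered. The example AS paths and the target fraction are made up; the study itself works from measured routes.

```python
def greedy_dr_ases(paths, target=0.9):
    """paths: list of AS paths (lists of AS numbers) from censored clients to overt
    destinations. Returns a small AS set whose members intercept >= target of the paths."""
    chosen, remaining = [], list(paths)
    while len(remaining) > len(paths) * (1 - target):
        counts = {}
        for p in remaining:
            for asn in p:
                counts[asn] = counts.get(asn, 0) + 1
        best = max(counts, key=counts.get)          # AS appearing on the most uncovered paths
        chosen.append(best)
        remaining = [p for p in remaining if best not in p]
    return chosen

paths = [
    [4134, 3356, 15169], [4134, 1299, 16509], [4837, 3356, 15169],
    [4134, 3356, 32934], [4837, 174, 16509],  [4134, 1299, 13335],
]
print(greedy_dr_ases(paths))   # [4134, 4837] -- two transit ASes cover all toy paths
```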