Bibliography
With the rapid development of Wireless Sensor Networks (WSNs), energy efficiency is no longer the only concern: Quality of Service (QoS) support and the validity of packet transmission must also be considered under some circumstances. In this paper, after summarizing the advantages and defects of the LEACH protocol, and combining a trust evaluation mechanism with energy and QoS control, a trust-based QoS routing algorithm is put forward. Firstly, energy control and coverage scale are adopted to keep the load balanced in the cluster-head selection phase. Secondly, a trust evaluation mechanism is designed to increase the credibility of the network in the node-clustering stage. Finally, in the information transmission period, verification and ACK mechanisms are employed to guarantee the validity of data transmission. The improved protocol can not only prolong nodes' life expectancy, but also increase the credibility of information transmission and reduce packet loss. Compared to typical routing algorithms in sensor networks, the new algorithm has better performance.
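As a concrete reference point for the cluster-head selection phase described above, the following Python sketch implements the classic LEACH election threshold and adds a residual-energy weighting; the weighting factor is our illustrative assumption, not necessarily the paper's exact formula.

```python
import random

def leach_threshold(p: float, r: int) -> float:
    """Classic LEACH threshold T(n) = p / (1 - p * (r mod 1/p))."""
    period = round(1 / p)              # rounds between cluster-head duties
    return p / (1 - p * (r % period))

def elect_cluster_heads(nodes, p, r, e_max):
    """One election round. The residual-energy weighting is our
    illustrative stand-in for the paper's energy control idea."""
    period = round(1 / p)
    heads = []
    for n in nodes:
        if r - n['last_ch_round'] < period:   # served recently: excluded
            continue
        t = leach_threshold(p, r) * (n['energy'] / e_max)
        if random.random() < t:
            n['last_ch_round'] = r
            heads.append(n['id'])
    return heads

nodes = [{'id': i, 'energy': random.uniform(0.3, 1.0),
          'last_ch_round': -10**6} for i in range(100)]
print(elect_cluster_heads(nodes, p=0.05, r=0, e_max=1.0))
```

Weighting the threshold by residual energy makes energy-rich nodes more likely to serve as cluster heads, which is the load-balancing intent of the abstract's first step.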
The cloud computing paradigm provides an alternative and economical service for resource-constrained clients to perform large-scale data computation. Since large matrix determinant computation (DC) is ubiquitous in the fields of science and engineering, a first step is taken in this paper to design a protocol that enables clients to securely, verifiably, and efficiently outsource DC to a malicious cloud. The main idea for protecting privacy is to apply transformations to the original matrix to obtain an encrypted matrix, which is sent to the cloud, and then to transform the result returned from the cloud to recover the correct determinant of the original matrix. Afterwards, a randomized Monte Carlo verification algorithm with one-sided error is introduced, and its suitability for designing inexpensive result-verification algorithms for secure outsourcing is demonstrated. In addition, it is analytically shown that the proposed protocol simultaneously fulfills the goals of correctness, security, robust cheating resistance, and high efficiency. Extensive theoretical analysis and experimental evaluation also show its high efficiency and immediate practicability. It is hoped that the proposed protocol can shed light on the design of other secure outsourcing protocols, and inspire companies and working groups to build the all-inclusive scientific-computation outsourcing software systems that clients demand. Such systems could be profitable by providing large-scale scientific computation services to many potential clients.
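One plausible instantiation of the masking-plus-verification pattern described above (not necessarily the authors' exact transformation): the client hides the matrix with random diagonal and permutation factors, the cloud returns an LU factorization, and a Freivalds-style Monte Carlo check gives one-sided-error verification.

```python
import numpy as np
import scipy.linalg as la

rng = np.random.default_rng(0)

def perm_sign(p):
    """Sign of a permutation via cycle decomposition."""
    seen, sign = [False] * len(p), 1
    for i in range(len(p)):
        if not seen[i]:
            j, clen = i, 0
            while not seen[j]:
                seen[j], j, clen = True, p[j], clen + 1
            sign *= -1 if clen % 2 == 0 else 1
    return sign

def mask(A):
    """Client: B = D1 * (row/col-permuted A) * D2 hides A; `factor`
    lets the client undo the masking of the determinant."""
    n = A.shape[0]
    p1, p2 = rng.permutation(n), rng.permutation(n)
    d1 = rng.uniform(1, 2, n) * rng.choice([-1, 1], n)
    d2 = rng.uniform(1, 2, n) * rng.choice([-1, 1], n)
    B = d1[:, None] * A[p1][:, p2] * d2[None, :]
    return B, perm_sign(p1) * perm_sign(p2) * d1.prod() * d2.prod()

def freivalds(B, P, L, U, rounds=30):
    """Monte Carlo check of the returned factorization: a wrong answer
    survives with probability at most 2**-rounds (one-sided error)."""
    for _ in range(rounds):
        x = rng.integers(0, 2, B.shape[0]).astype(float)
        if not np.allclose(P @ (L @ (U @ x)), B @ x):
            return False
    return True

A = rng.standard_normal((40, 40))
B, factor = mask(A)
P, L, U = la.lu(B)                     # done by the (untrusted) cloud
assert freivalds(B, P, L, U)           # cheap client-side verification
det_A = np.linalg.det(P) * np.diag(U).prod() / factor
print(det_A, np.linalg.det(A))         # the two values agree
```

The verification is much cheaper than recomputing the determinant: each Freivalds round costs only a few matrix-vector products.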
Wireless security has been an active research area for over a decade. Many studies of wireless security use cryptographic tools, but traditional cryptographic tools are normally based on computational assumptions, which may turn out to be invalid in the future. Consequently, it is very desirable to build cryptographic tools that do not rely on computational assumptions. In this paper, we focus on a crucial cryptographic tool, namely 1-out-of-2 oblivious transfer. This tool plays a central role in cryptography because a cryptographic protocol can be built for any polynomial-time computable function using it. We present a novel 1-out-of-2 oblivious transfer protocol based on wireless channel characteristics, which does not rely on any computational assumption. We also illustrate the potential broad applications of this protocol with two examples, one on private communications and the other on privacy-preserving password verification. We have fully implemented the protocol on wireless devices and conducted experiments in real environments to evaluate it. Our experimental results demonstrate that it has reasonable efficiency.
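The protocol itself derives its guarantees from wireless channel characteristics, which cannot be reproduced here; the sketch below instead shows the standard way 1-out-of-2 OT is obtained once correlated randomness is available (Beaver-style precomputation), with a trusted dealer standing in for the channel purely for illustration.

```python
import secrets

def dealer():
    """Correlated randomness: sender gets (r0, r1), receiver gets a
    random bit d and r_d. In the paper this role is played by the
    wireless channel; the dealer here is an illustrative stand-in."""
    r0, r1 = secrets.token_bytes(16), secrets.token_bytes(16)
    d = secrets.randbelow(2)
    return (r0, r1), (d, (r0, r1)[d])

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# --- one 1-out-of-2 OT on 16-byte messages ---
(r0, r1), (d, rd) = dealer()
m0, m1 = b'secret message 0', b'secret message 1'
b = 1                                   # receiver's choice bit

e = b ^ d                               # receiver -> sender (hides b)
f0 = xor(m0, (r0, r1)[e])               # sender -> receiver
f1 = xor(m1, (r0, r1)[1 ^ e])
mb = xor((f0, f1)[b], rd)               # receiver recovers m_b only
assert mb == (m0, m1)[b]
```

The sender sees only the uniformly random bit `e`, so it learns nothing about `b`; the receiver lacks the pad for the other message, so it learns nothing about `m_{1-b}`. No computational assumption is involved.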
Long Term Evolution (LTE) networks designed by the 3rd Generation Partnership Project (3GPP) represent a widespread technology. LTE is distinguished by high data rates, minimal delay, and capacity due to its scalable bandwidth and flexibility. With the rapid and widespread adoption of LTE networks and the growing use of data/video transmission and Internet applications in general, the challenges of securing and speeding up data communication in such networks have also increased. Authentication in LTE networks is a very important process because most attacks are mounted during this stage: attackers try to get authenticated, then exhaust the network resources and prevent legitimate users from accessing network services. The LTE AKA protocol, called the Evolved Packet System AKA (EPS-AKA) protocol, builds on the Extensible Authentication Protocol-Authentication and Key Agreement (EAP-AKA) to secure LTE networks. However, it still suffers from various vulnerabilities such as disclosure of the user identity, computational overhead, Man-In-The-Middle (MITM) attacks, and authentication delay. In this paper, an Efficient EPS-AKA protocol (EEPS-AKA) is proposed to overcome these problems. The proposed protocol is based on the Simple Password Exponential Key Exchange (SPEKE) protocol. Compared to previously proposed methods, our method is faster, since it uses a secret-key method, which is faster than certificate-based methods. In addition, the size of messages exchanged between the User Equipment (UE) and the Home Subscriber Server (HSS) is reduced, which effectively reduces authentication delay and storage overhead. The Automated Validation of Internet Security Protocols and Applications (AVISPA) tool is used to provide formal verification. Results show that the proposed EEPS-AKA is efficient and secure against active and passive attacks.
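For reference, a minimal SPEKE key-agreement sketch in Python follows; the toy 12-bit safe prime is insecure and purely illustrative (real deployments need a large safe prime), and the message formats of EEPS-AKA are not modeled.

```python
import hashlib
import secrets

# Toy safe prime p = 2q + 1 with q = 1559; illustration only.
P = 3119
Q = (P - 1) // 2

def speke_generator(password: bytes) -> int:
    """g = H(password)^2 mod p maps the shared secret into the
    prime-order-q subgroup of quadratic residues."""
    h = int.from_bytes(hashlib.sha256(password).digest(), 'big')
    return pow(h % P, 2, P)

def keypair(g):
    x = 2 + secrets.randbelow(Q - 2)   # ephemeral private exponent
    return x, pow(g, x, P)             # (private, public)

password = b'shared-secret'
g = speke_generator(password)
a, ga = keypair(g)                     # UE side
b, gb = keypair(g)                     # HSS side
k_ue = hashlib.sha256(str(pow(gb, a, P)).encode()).digest()
k_hss = hashlib.sha256(str(pow(ga, b, P)).encode()).digest()
assert k_ue == k_hss                   # both sides derive the same key
```

Because the generator itself is derived from the shared secret, an eavesdropper who does not know the password cannot mount an offline dictionary attack on the exchanged values alone.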
Remote data integrity checking is of crucial importance in cloud storage. It enables clients to verify whether their outsourced data is kept intact without downloading the whole data set. In some application scenarios, clients have to store their data on multi-cloud servers. At the same time, the integrity checking protocol must be efficient in order to save the verifier's cost. Motivated by these two points, we propose a novel remote data integrity checking model: ID-DPDP (identity-based distributed provable data possession) in multi-cloud storage. The formal system model and security model are given. Based on bilinear pairings, a concrete ID-DPDP protocol is designed. The proposed ID-DPDP protocol is provably secure under the hardness assumption of the standard CDH (computational Diffie-Hellman) problem. In addition to the structural advantage of eliminating certificate management, our ID-DPDP protocol is also efficient and flexible. Based on the client's authorization, the proposed ID-DPDP protocol can realize private verification, delegated verification, and public verification.
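The paper's construction is pairing-based and cannot be reproduced from the abstract; the sketch below instead illustrates the general spot-checking pattern of remote integrity checking, verifying a randomly challenged block against a small client-side digest using a Merkle tree.

```python
import hashlib
import secrets

H = lambda *parts: hashlib.sha256(b''.join(parts)).digest()

def merkle_tree(blocks):
    """Bottom-up Merkle tree; returns the list of levels, leaves first."""
    level = [H(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]          # pad odd levels
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, i):
    """Server: authentication path for the challenged leaf i."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[i ^ 1], i & 1))       # (sibling, cur_is_right)
        i //= 2
    return path

def verify(root, block, path):
    """Client keeps only `root`; checks one spot-checked block."""
    h = H(block)
    for sib, cur_is_right in path:
        h = H(sib, h) if cur_is_right else H(h, sib)
    return h == root

blocks = [secrets.token_bytes(64) for _ in range(10)]
levels = merkle_tree(blocks)
root = levels[-1][0]                       # tiny client-side state
i = secrets.randbelow(len(blocks))         # random challenge
assert verify(root, blocks[i], prove(levels, i))
```

As in ID-DPDP, the verifier's cost is logarithmic in the data size rather than proportional to downloading the whole data set.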
Multichannel sensor systems are widely used in condition monitoring for effective failure prevention of critical equipment or processes. However, loss of sensor readings due to malfunctions of sensors and/or communication has long been a hurdle to reliable operations of such integrated systems. Moreover, asynchronous data sampling and/or limited data transmission are usually seen in multiple sensor channels. To reliably perform fault diagnosis and prognosis in such operating environments, a data recovery method based on functional principal component analysis (FPCA) can be utilized. However, traditional FPCA methods are not robust to outliers and their capabilities are limited in recovering signals with strongly skewed distributions (i.e., lack of symmetry). This paper provides a robust data-recovery method based on functional data analysis to enhance the reliability of multichannel sensor systems. The method not only considers the possibly skewed distribution of each channel of signal trajectories, but is also capable of recovering missing data for both individual and correlated sensor channels with asynchronous data that may be sparse as well. In particular, grand median functions, rather than classical grand mean functions, are utilized for robust smoothing of sensor signals. Furthermore, the relationship between the functional scores of two correlated signals is modeled using multivariate functional regression to enhance the overall data-recovery capability. An experimental flow-control loop that mimics the operation of coolant-flow loop in a multimodular integral pressurized water reactor is used to demonstrate the effectiveness and adaptability of the proposed data-recovery method. The computational results illustrate that the proposed method is robust to outliers and more capable than the existing FPCA-based method in terms of the accuracy in recovering strongly skewed signals. In addition, turbofan engine data are also analyzed to verify the capability of the proposed method in recovering non-skewed signals.
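A minimal numpy sketch of one ingredient of such a method: recovery of a missing channel by regressing functional principal component scores of one channel on another, with a pointwise median as the robust center. The simulated data and model choices are our assumptions; the paper's handling of sparse, asynchronous observations is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 100)

# Simulate correlated trajectories for two sensor channels.
n = 200
z = rng.standard_normal((n, 2))
ch1 = np.sin(2*np.pi*t) + z[:, :1]*np.cos(2*np.pi*t) + 0.1*z[:, 1:]*t
ch2 = (np.cos(2*np.pi*t) + 0.8*z[:, :1]*np.sin(np.pi*t)
       + 0.05*rng.standard_normal((n, len(t))))

def fpc(X, k=2):
    """Functional principal components around a grand median function."""
    mu = np.median(X, axis=0)                  # robust center
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k], (X - mu) @ Vt[:k].T     # center, modes, scores

mu1, phi1, s1 = fpc(ch1)
mu2, phi2, s2 = fpc(ch2)

# Functional regression between the two channels' score vectors.
W, *_ = np.linalg.lstsq(s1, s2, rcond=None)

# Recover a "missing" channel-2 trajectory from channel 1 alone.
x1 = ch1[0]
z1 = (x1 - mu1) @ phi1.T
x2_hat = mu2 + (z1 @ W) @ phi2
print(np.sqrt(np.mean((x2_hat - ch2[0])**2)))  # recovery RMSE
```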
A key characteristic of simultaneous fault diagnosis is that the features extracted from the original patterns are strongly dependent. This paper proposes a new Bayesian classifier model, which removes the fundamental assumption of naive Bayes, namely the independence among features. In our model, optimal bandwidth selection is applied to estimate the class-conditional probability density function (p.d.f.), which is the essential part of joint p.d.f. estimation. Three well-known indices, i.e., classification accuracy, area under the ROC curve, and probability mean square error, are used to measure the performance of our model in simultaneous fault diagnosis. Simulations show that our model is significantly superior to traditional ones when dependence exists among features.
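A minimal sketch of the idea: class-conditional joint densities estimated by multivariate kernel density estimation, so no independence among features is assumed; Scott's rule stands in here for the paper's optimal bandwidth selection.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

class KDEBayes:
    """Bayes classifier with class-conditional joint p.d.f.s estimated
    by multivariate KDE (bandwidth: Scott's rule, a stand-in for the
    paper's optimal bandwidth selection)."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.kdes = [gaussian_kde(X[y == c].T) for c in self.classes]
        self.priors = [np.mean(y == c) for c in self.classes]
        return self

    def predict(self, X):
        # Posterior is proportional to prior times class-conditional density.
        post = np.stack([p * k(X.T) for k, p in zip(self.kdes, self.priors)])
        return self.classes[np.argmax(post, axis=0)]

# Two classes whose features are strongly dependent (|corr| = 0.9).
X0 = rng.multivariate_normal([0, 0], [[1, .9], [.9, 1]], 300)
X1 = rng.multivariate_normal([1, 1], [[1, -.9], [-.9, 1]], 300)
X = np.vstack([X0, X1]); y = np.array([0]*300 + [1]*300)
print((KDEBayes().fit(X, y).predict(X) == y).mean())  # training accuracy
```

On such data a naive Bayes model, which factors the joint density into per-feature marginals, would discard exactly the correlation structure that separates the classes.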
This paper proposes a service operator-aware trust scheme (SOTS) for resource matchmaking across multiple clouds. By analyzing the built-in relationships between the users, the broker, and the service resources, this paper proposes a middleware framework for trust management that can effectively reduce user burden and improve system dependability. Based on multidimensional resource service operators, we model the problem of trust evaluation as a process of multi-attribute decision-making, and develop an adaptive trust evaluation approach based on information entropy theory. This adaptive approach can overcome the limitations of traditional trust schemes, in which the trusted operators are weighted manually or subjectively. As a result, using SOTS, the broker can efficiently and accurately prepare the most trusted resources in advance, and thus provide more dependable resources to users. Our experiments yield interesting and meaningful observations that can facilitate the effective utilization of SOTS in a large-scale multi-cloud environment.
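The entropy-based weighting can be illustrated with the standard entropy weight method; the attribute matrix below is made up, and this is a sketch of the adaptive weighting idea rather than SOTS itself.

```python
import numpy as np

def entropy_weights(R):
    """Objective attribute weights from information entropy.
    R: m x n matrix of m candidate resources scored on n trust
    attributes (larger is better). Attributes whose scores vary more
    across candidates carry more information and get larger weights."""
    P = R / R.sum(axis=0)                       # column-normalize
    with np.errstate(divide='ignore', invalid='ignore'):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(len(R))     # entropy per attribute
    d = 1 - e                                   # degree of divergence
    return d / d.sum()

# 4 resources x 3 service operators (e.g., availability, latency
# score, reputation) -- illustrative values only.
R = np.array([[0.9, 0.7, 0.8],
              [0.6, 0.9, 0.7],
              [0.8, 0.8, 0.9],
              [0.5, 0.6, 0.6]])
w = entropy_weights(R)
trust = R @ w                 # aggregate trust score per resource
print(w, trust.argmax())      # weights and most trusted resource
```

Because the weights are computed from the score matrix itself, no manual or subjective weighting of the operators is needed, which is the limitation the adaptive approach targets.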
In this paper, we propose SAFE (Security Aware FlexRay scheduling Engine), which provides a problem definition and a design framework for FlexRay static-segment scheduling that addresses the new challenge of security. From a high-level specification of the application, the architecture and communication middleware are synthesized to satisfy security requirements, in addition to extensibility, costs, and end-to-end latencies. The proposed design process is applied to two industrial case studies consisting of a set of active safety functions and an X-by-wire system, respectively.
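A toy greedy sketch of the underlying static-segment assignment problem: periodic frames are placed into slots of a 64-cycle FlexRay communication matrix with cycle multiplexing. SAFE's security-aware objectives (e.g., MAC overhead, extensibility, latency) are not modeled here.

```python
# Greedy first-fit assignment of periodic frames to FlexRay static
# slots with cycle multiplexing over a 64-cycle matrix (toy model).
CYCLES = 64

def schedule(frames, num_slots):
    """frames: list of (name, repetition) with repetition a power of
    two <= 64; returns {name: (slot, base_cycle, repetition)}."""
    used = [set() for _ in range(num_slots)]   # occupied cycles per slot
    out = {}
    for name, rep in sorted(frames, key=lambda f: f[1]):
        for slot in range(num_slots):
            for base in range(rep):
                cycles = set(range(base, CYCLES, rep))
                if not cycles & used[slot]:    # no collision in this slot
                    used[slot] |= cycles
                    out[name] = (slot, base, rep)
                    break
            if name in out:
                break
        else:
            raise ValueError(f'no feasible slot for {name}')
    return out

frames = [('brake', 1), ('steer', 2), ('door', 16), ('diag', 64)]
print(schedule(frames, num_slots=2))
```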
Mobile apps running on smartphones and tablet PCs offer a new possibility to enhance the work of engineers because they provide easy-to-use, touchscreen-based handling and can be used anytime and anywhere. Introducing mobile apps in the engineering domain is difficult because the IT environment is heterogeneous and engineering-specific challenges arise in app development, e.g., large amounts of data and high security requirements. There is a need for an engineering-specific middleware to facilitate and standardize app development. However, neither such a middleware nor a holistic set of requirements for its development yet exists. Therefore, we propose a design method which offers a systematic procedure to develop Mobile Engineering-Application Middleware.
This paper presents a middleware solution to secure data and the network in e-healthcare systems. E-healthcare systems are a primary security concern because their sensor devices are deployed in easily accessible areas. Furthermore, they often interact closely with the physical environment and the surrounding people, and such exposure increases security vulnerabilities when the security of information sharing among different healthcare organizations is improperly managed. Hence, healthcare-specific security standards such as authentication, data integrity, system security, and Internet security are used to ensure the security and privacy of patients' information. This paper discusses security threats on e-healthcare systems in which an attacker can access both data and the network using a masquerade attack. Moreover, an efficient and cost-effective middleware solution is discussed for the delivery of secure services.
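A generic sketch of the kind of message authentication that defeats masquerade and replay at the middleware boundary, using an HMAC over the device identity, a monotonic counter, and the payload; the enrollment model and field layout are our assumptions, not the paper's API.

```python
import hashlib
import hmac
import os

KEY = os.urandom(32)   # shared between sensor and middleware at enrollment

def seal(device_id: str, payload: bytes, counter: int) -> bytes:
    """Sensor side: bind identity, payload, and a monotonically
    increasing counter so a masquerading or replaying node is rejected."""
    msg = device_id.encode() + counter.to_bytes(8, 'big') + payload
    return msg + hmac.new(KEY, msg, hashlib.sha256).digest()

_last = {}
def open_sealed(blob: bytes, id_len: int):
    """Middleware side: verify the tag, then check counter freshness."""
    msg, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).digest()):
        raise ValueError('bad tag: possible masquerade')
    device = msg[:id_len].decode()
    counter = int.from_bytes(msg[id_len:id_len + 8], 'big')
    if counter <= _last.get(device, -1):
        raise ValueError('stale counter: possible replay')
    _last[device] = counter
    return device, msg[id_len + 8:]

blob = seal('ecg-07', b'hr=72', counter=1)
print(open_sealed(blob, id_len=6))
```

Without the key, an attacker cannot forge a valid tag for a spoofed device identity, and replaying a captured message fails the counter check.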
A Cyber-Physical System (CPS) integrates physical devices (i.e., sensors) with cyber (i.e., informational) components to form a context-sensitive system that responds intelligently to dynamic changes in real-world situations. Such a system has wide applications in scenarios such as traffic control, battlefield surveillance, and environmental monitoring. A core element of CPS is the collection and assessment of information from noisy, dynamic, and uncertain physical environments integrated with many types of cyber-space resources. The potential of this integration is unbounded. To achieve this potential, the raw data acquired from the physical world must be transformed into usable knowledge in real time. Therefore, CPS brings a new dimension to knowledge discovery because of the emerging synergism of the physical and the cyber. The various properties of the physical world must be addressed in information management and knowledge discovery. This paper discusses the problem of mining sensor data in CPS: with a large number of wireless sensors deployed in a designated area, the task is real-time detection of intruders that enter the area based on noisy sensor data. The IntruMine framework is introduced to discover intruders from untrustworthy sensor data. IntruMine first analyzes the trustworthiness of sensor data, then detects the intruders' locations, and finally verifies the detections based on a graph model of the relationships between sensors and intruders.
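A toy stand-in for the detection step: fusing noisy sensor readings into an intruder-location estimate while discounting low-trust sensors. IntruMine's graph model and verification step are not reproduced here.

```python
import numpy as np

def locate(sensors, readings, trust):
    """Trust-weighted centroid estimate of an intruder position.
    sensors: (k, 2) positions; readings: (k,) signal strengths;
    trust: (k,) in [0, 1]. Low-trust reports are down-weighted."""
    w = trust * readings
    return (w[:, None] * sensors).sum(0) / w.sum()

rng = np.random.default_rng(3)
truth = np.array([5.0, 5.0])
sensors = rng.uniform(0, 10, (30, 2))
dist = np.linalg.norm(sensors - truth, axis=1)
readings = 1 / (1 + dist**2) + 0.01 * rng.standard_normal(30)
readings = readings.clip(min=0)
trust = np.ones(30); trust[:5] = 0.05       # five low-trust sensors...
readings[:5] = 1.0                          # ...reporting bogus peaks
print(locate(sensors, readings, trust))     # near (5, 5) despite lies
```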
Sharing data with client-side encryption requires key management. Selecting an appropriate key management protocol for a given scenario is hard, since the interdependency between scenario parameters and the resource consumption of a protocol is often only known for artificial, simplified scenarios. In this paper, we explore the resource consumption of systems that offer sharing of encrypted data within real-world scenarios, which are typically complex and determined by many parameters. For this purpose, we first collect empirical data that represents real-world scenarios by monitoring large-scale services within our organization. We then use this data to parameterize a resource consumption model that is based on the key graph generated by each key management protocol. Our preliminary simulation runs indicate that this key-graph-based model can be used to estimate the resource consumption of real-world systems for sharing encrypted data.
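A back-of-the-envelope sketch of a key-graph-based resource model: counting rekey messages for a balanced logical key hierarchy (LKH) as an approximation. Actual protocols and workloads would be parameterized from the monitored traces, which this sketch does not attempt.

```python
import math

def lkh_rekey_messages(n: int, degree: int = 2) -> int:
    """Approximate rekey-message count when one member leaves a
    balanced LKH key tree of n members: every key on the leaver's
    path is replaced, and each replacement is encrypted for the
    remaining children, i.e., roughly degree * log_degree(n)."""
    depth = math.ceil(math.log(n, degree))
    return degree * depth - 1

# Compare the key-graph cost model against naive pairwise rekeying,
# where a leave forces n - 1 fresh key messages.
for n in (100, 10_000, 1_000_000):
    print(n, lkh_rekey_messages(n), n - 1)
```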
Recent advances in Wireless Sensor Networks have given rise to many application areas in healthcare, such as the new field of Wireless Body Area Networks (WBANs). The health status of humans can be tracked and monitored using wearable and non-wearable sensor devices. Security in WBANs is very important to guarantee and protect patients' personal sensitive data, and establishing secure communication between BAN sensors and external users is key to addressing prevalent security and privacy concerns. In this paper, we propose a secure and efficient key management scheme based on the ECC algorithm to protect patients' medical information in healthcare systems. Our scheme is divided into three phases: setup, registration, and verification with key exchange. We use an identification code, namely the SIM card number on the patient's smartphone, together with a private key generated by the legal user rather than by a third party. Also, to resist replay attacks, we include a counter in every authenticated message exchange.
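A minimal sketch of ECC-based key establishment with counter-protected message authentication, using ECDH plus HKDF from the `cryptography` package; the curve, the KDF, and the way the SIM number is bound into key derivation are our illustrative choices, not the paper's exact scheme.

```python
from cryptography.hazmat.primitives import hashes, hmac
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates its own ECC key pair (no third party involved).
sk_patient = ec.generate_private_key(ec.SECP256R1())
sk_server = ec.generate_private_key(ec.SECP256R1())

def derive(sk, peer_pub, sim_number: bytes) -> bytes:
    """ECDH shared secret, bound to the SIM identification code via
    the HKDF `info` label (our way of mimicking the paper's binding)."""
    shared = sk.exchange(ec.ECDH(), peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b'wban-key|' + sim_number).derive(shared)

SIM = b'8988211234567890123'
k1 = derive(sk_patient, sk_server.public_key(), SIM)
k2 = derive(sk_server, sk_patient.public_key(), SIM)
assert k1 == k2                        # both ends share a session key

def mac(key, counter: int, payload: bytes) -> bytes:
    """Including the counter under the MAC defeats replayed messages."""
    h = hmac.HMAC(key, hashes.SHA256())
    h.update(counter.to_bytes(8, 'big') + payload)
    return h.finalize()

print(mac(k1, 1, b'bp=120/80').hex()[:16])
```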
A routing protocol in a mobile ad hoc network (MANET) should be secure against both outside attackers, which do not hold valid security credentials, and inside attackers, which are compromised nodes in the network. Outside attackers can be prevented with the help of an efficient key management protocol and cryptography. However, preventing inside attackers requires an accompanying intrusion detection system (IDS). In this paper, we propose a novel secure routing with integrated localized key management (SR-LKM) protocol, which aims to prevent both inside and outside attackers. The localized key management mechanism is not dependent on any routing protocol. Thus, unlike many other existing schemes, the protocol does not suffer from the key management - secure routing interdependency problem. The key management mechanism is lightweight, as it optimizes the use of public key cryptography with the help of a novel neighbor-based handshaking and Least Common Multiple (LCM) based broadcast key distribution mechanism. The protocol is storage scalable and its efficiency is confirmed by results obtained from simulation experiments.
The Internet of Things (IoT) is becoming reality, but its restrictions become obvious as we try to connect solutions of different vendors and communities. Apart from communication protocols, appropriate identity management mechanisms are crucial for a growing IoT. The recently founded Identities of Things Discussion Group within the Kantara Initiative will work on open issues and solutions to manage “Identities of Things” as an enabler for a fast-growing ecosystem.
Cloud computing is an emerging paradigm shifting the shape of computing models from a technology to a utility. However, security, privacy, and trust are among the issues that can subvert the benefits, and hence the wide deployment, of cloud computing. With the introduction of omnipresent mobile-based clients, the ubiquity of the model increases, suggesting still higher integration into daily life. Nonetheless, the security issues rise to a higher degree as well. The constrained input methods for credentials and the vulnerable wireless communication links are among the factors giving rise to serious security issues. To strengthen access control over cloud resources, organizations now commonly acquire Identity Management Systems (IdM). This paper shows that the most popular IdM, namely OAuth, has many weaknesses in its authorization architecture when operating in the scope of Mobile Cloud Computing. In particular, we find two major issues in the current IdM. First, if the IdM system is compromised through malicious code, it allows a hacker to obtain authorization for all the protected resources hosted on a cloud. Second, all the communication links among client, cloud, and IdM carry the complete authorization token, which can allow a hacker, through traffic interception at any communication link, illegitimate access to protected resources. We also suggest a solution to the reported problems, and justify our arguments with experimentation and mathematical modeling.
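The token-interception issue can be illustrated, together with one possible mitigation direction, by contrasting a bearer token with a sender-constrained (proof-of-possession) request signature; this sketch is our illustration, not necessarily the solution the authors propose.

```python
import hashlib
import hmac
import os
import time

token = os.urandom(16).hex()   # bearer token: whoever holds it wins
pop_key = os.urandom(32)       # proof-of-possession key, never sent on
                               # the wire (registered with the server at
                               # token issuance -- a simplification here)

def pop_request(method, uri, body=b''):
    """Client signs each request, binding it to method, URI, and time."""
    ts = str(int(time.time()))
    sig = hmac.new(pop_key, f'{method}|{uri}|{ts}'.encode() + body,
                   hashlib.sha256).hexdigest()
    return {'token': token, 'ts': ts, 'sig': sig}

def server_check(req, method, uri, body=b''):
    expect = hmac.new(pop_key, f'{method}|{uri}|{req["ts"]}'.encode() + body,
                      hashlib.sha256).hexdigest()
    return hmac.compare_digest(req['sig'], expect)

req = pop_request('GET', '/files/report.pdf')
assert server_check(req, 'GET', '/files/report.pdf')
# An eavesdropper who intercepts the token and signature cannot
# redirect them to a different resource:
assert not server_check(req, 'GET', '/admin')
```

With a plain bearer token, the first intercepted request would have granted access to every protected resource; binding requests to a key the attacker never sees removes that single point of failure.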
Hash tables form a core component of many algorithms as well as network devices. Because of their large size, they often require a combined memory model, in which some of the elements are stored in a fast memory (for example, cache or on-chip SRAM) while others are stored in much slower memory (namely, the main memory or off-chip DRAM). This makes the implementation of real-life hash tables particularly delicate, as a suboptimal choice of the hashing scheme parameters may result in a higher average query time, and therefore in a lower throughput. In this paper, we focus on multiple-choice hash tables. Given the number of choices, we study the tradeoff between the load of a hash table and its average lookup time. The problem is solved by analyzing an equivalent problem: the expected maximum matching size of a random bipartite graph with a fixed left-side vertex degree. Given two choices, we provide exact results for any finite system, and also deduce asymptotic results as the fast memory size increases. In addition, we further consider other variants of this problem and model the impact of several parameters. Finally, we evaluate the performance of our models on Internet backbone traces, and illustrate the impact of the memories speed difference on the choice of parameters. In particular, we show that the common intuition of entirely avoiding slow memory accesses by using highly efficient schemes (namely, with many fast-memory choices) is not always optimal.
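A toy model of the tradeoff studied above: a multiple-choice hash table whose first choices index buckets in fast memory and whose remaining choice falls in slow memory, with a crude per-access cost ratio. The bucket counts and the 10x slow-memory penalty are arbitrary assumptions.

```python
class MultiChoiceTable:
    """d-choice hashing over a combined memory model: d_fast choices in
    fast memory, d_slow in slow memory; an item goes to the least
    loaded candidate bucket, preferring fast memory on ties."""
    def __init__(self, fast_buckets, slow_buckets, d_fast=2, d_slow=1):
        self.fast = [[] for _ in range(fast_buckets)]
        self.slow = [[] for _ in range(slow_buckets)]
        self.d_fast, self.d_slow = d_fast, d_slow

    def _candidates(self, key):
        for i in range(self.d_fast):
            yield self.fast, hash((key, 'f', i)) % len(self.fast)
        for i in range(self.d_slow):
            yield self.slow, hash((key, 's', i)) % len(self.slow)

    def insert(self, key):
        # min() keeps the first minimum, so fast memory wins ties.
        mem, idx = min(self._candidates(key), key=lambda c: len(c[0][c[1]]))
        mem[idx].append(key)

    def lookup(self, key):
        cost = 0
        for mem, idx in self._candidates(key):
            cost += 1 if mem is self.fast else 10   # slow-access penalty
            if key in mem[idx]:
                return cost
        return cost

t = MultiChoiceTable(64, 256)
for k in range(300):
    t.insert(k)
print(sum(t.lookup(k) for k in range(300)) / 300)  # average lookup cost
```

Varying `d_fast`, the bucket counts, and the penalty reproduces the qualitative effect the paper quantifies: past a point, adding fast-memory choices raises the load there and stops paying off.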
High-speed IP address lookup is essential to achieve wire-speed packet forwarding in Internet routers. Ternary content addressable memory (TCAM) technology has been adopted to solve the IP address lookup problem because of its ability to perform fast parallel matching. However, the applicability of TCAMs is limited by cost and power dissipation issues. Various algorithms and hardware architectures have been proposed to perform IP address lookup using ordinary memories such as SRAMs or DRAMs without TCAMs. Among these, we focus on two efficient algorithms providing high-speed IP address lookup: the parallel multiple-hashing (PMH) algorithm and the binary search on levels algorithm. This paper shows how effectively an on-chip Bloom filter can improve these algorithms. A performance evaluation using actual backbone routing data with 15,000-220,000 prefixes shows that, by adding a Bloom filter, the complicated hardware for parallel access is removed without a search performance penalty in the parallel multiple-hashing algorithm, and search speed is improved by 30-40 percent in the binary search on levels algorithm.
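The role of the on-chip Bloom filter can be sketched as follows: the filter answers cheap membership queries over the stored prefixes, so an expensive off-chip table probe is issued only when a match is possible. The prefix set and probe order below are illustrative.

```python
import hashlib

class Bloom:
    def __init__(self, m_bits=2**16, k=4):
        self.m, self.k, self.bits = m_bits, k, bytearray(m_bits // 8)

    def _idx(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f'{i}|{item}'.encode()).digest()
            yield int.from_bytes(h[:8], 'big') % self.m

    def add(self, item):
        for j in self._idx(item):
            self.bits[j // 8] |= 1 << (j % 8)

    def maybe_contains(self, item):
        # No false negatives; rare false positives.
        return all(self.bits[j // 8] & (1 << (j % 8)) for j in self._idx(item))

# On-chip Bloom filter guarding off-chip prefix-table probes.
table = {'10.0.0.0/8', '10.1.0.0/16', '192.168.1.0/24'}
bloom = Bloom()
for p in table:
    bloom.add(p)

probes = 0
for cand in ('10.1.2.0/24', '10.1.0.0/16', '10.0.0.0/8'):  # longest first
    if bloom.maybe_contains(cand):       # cheap on-chip check
        probes += 1                      # expensive off-chip access
        if cand in table:
            print('longest match:', cand)
            break
print('off-chip probes:', probes)
```

Because the filter never yields false negatives, correctness is preserved; false positives only cost an occasional wasted probe.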
Security of a computer system has traditionally been related to the security of the software or the information being processed, while the underlying hardware used for information processing has been considered trusted. The emergence of hardware Trojan attacks violates this root of trust. These attacks, in the form of malicious modifications of electronic hardware at different stages of its life cycle, pose major security concerns in the electronics industry. An adversary can mount such an attack with the objective of causing operational failure or leaking secret information from inside a chip (e.g., the key in a cryptographic chip) during field operation. The global economic trend of increased reliance on untrusted entities in the hardware design and fabrication process is rapidly increasing the vulnerability to such attacks. In this paper, we analyze the threat of hardware Trojan attacks; present attack models, types, and scenarios; discuss different forms of protection approaches, both proactive and reactive; and describe emerging attack modes, defenses, and future research pathways.
The detectability of malicious circuitry on FPGAs with varying placement properties has yet to be investigated. The authors utilize a Xilinx Virtex-II Pro target platform to insert a sequential denial-of-service Trojan into an existing AES design by manipulating a Xilinx-specific intermediate file format prior to bitstream generation. Thereby, an attacker has no need to acquire access to the hardware description language representation of a potential target architecture. Using a side-channel analysis setup for electromagnetic emanation (EM) measurements, they evaluate the detectability of different Trojan designs with varying location and logic-distribution properties. The authors successfully distinguish the malicious designs from the genuine ones and provide information on how the location and distribution properties of the Trojan logic affect its detectability. To the best of their knowledge, this is the first practically conducted Trojan detection using localized EM measurements.
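A synthetic sketch of the statistical core of such an evaluation: a pointwise Welch's t-test over two populations of traces flags the samples where a (here simulated) Trojan changes the emanation. The authors' actual setup uses measured localized EM traces, not this simulated data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulated EM traces: the Trojan adds a small extra switching
# component in a narrow sample window (purely synthetic).
T, N = 500, 200
genuine = rng.standard_normal((N, T))
trojan = rng.standard_normal((N, T))
trojan[:, 240:250] += 0.6            # weak Trojan leakage window

# Pointwise Welch's t-test between the two trace populations; large
# |t| marks sample points where the designs measurably differ.
t_stat, _ = stats.ttest_ind(genuine, trojan, axis=0, equal_var=False)
suspects = np.where(np.abs(t_stat) > 4.5)[0]
print(suspects)                      # indices cluster in 240..249
```

The 4.5 threshold is the conventional leakage-assessment cutoff; how sharply the suspect window stands out is what varying the Trojan's location and logic distribution would modulate in practice.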
The strong development of the Internet of Things (IoT) is dramatically changing traditional perceptions of the current Internet towards an integrated vision of smart objects interacting with each other. While many technological challenges have been solved in recent years through the extension and adaptation of wireless technologies, security and privacy remain the main barriers to IoT deployment on a broad scale. In this emerging paradigm, typical scenarios handle particularly sensitive data, and any leakage of information could severely damage the privacy of users. This paper provides a concise description of some of the major challenges in these areas that still need to be overcome in the coming years for full acceptance by all the IoT stakeholders involved. In addition, we propose a distributed capability-based access control mechanism built on public key cryptography in order to cope with some of these challenges. Specifically, our solution is based on the design of a lightweight token used for access to CoAP resources, and an optimized implementation of the Elliptic Curve Digital Signature Algorithm (ECDSA) inside the smart object. The results obtained from our experiments demonstrate the feasibility of the proposal and show promise for covering more complex scenarios in the future, as well as for application in specific IoT use cases.
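A minimal sketch of an ECDSA-signed capability token and its on-device check, using the `cryptography` package; the token fields and encoding are illustrative assumptions, not the paper's format, and the paper's optimized in-object ECDSA implementation is not reproduced.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The issuer signs a compact capability; the constrained device only
# needs the issuer's public key and one ECDSA verification per request.
issuer_key = ec.generate_private_key(ec.SECP256R1())

token = json.dumps({'sub': 'nurse-42', 'res': '/coap/sensors/temp',
                    'act': ['GET'], 'exp': 1900000000},
                   separators=(',', ':')).encode()
sig = issuer_key.sign(token, ec.ECDSA(hashes.SHA256()))

def device_check(token, sig, pub, uri, method):
    """Smart-object side: verify the signature, then match the
    requested resource and action against the granted capability."""
    try:
        pub.verify(sig, token, ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return False
    cap = json.loads(token)
    return cap['res'] == uri and method in cap['act']

pub = issuer_key.public_key()
print(device_check(token, sig, pub, '/coap/sensors/temp', 'GET'))   # True
print(device_check(token, sig, pub, '/coap/sensors/temp', 'PUT'))   # False
```

Because authorization travels inside the signed token, the access decision is fully distributed: the smart object needs no online connection to a central policy server.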
Signcryption is a cryptographic primitive that simultaneously realizes the functions of both public key encryption and digital signature in a logically single step, at a cost significantly lower than that of the traditional “signature then encryption” approach. Recently, an efficient certificateless signcryption scheme without bilinear pairings was proposed by Zhu et al., claimed to be secure under the assumptions that the computational Diffie-Hellman problem and the discrete logarithm problem are hard. Although some security arguments were provided to show that the scheme is secure, in this paper we find that the signcryption construction due to Zhu et al. is not as secure as claimed. Specifically, we describe an adversary that can break the IND-CCA2 security of the scheme without any Unsigncryption query. Moreover, we demonstrate that the scheme is insecure against key replacement attacks by describing a concrete attack approach.
This session reports on a workshop convened by the ACM Education Board with funding by the US National Science Foundation and invites discussion from the community on the workshop findings. The topic, curricular directions for cybersecurity, is one that resonates in many departments considering how best to prepare graduates to face the challenges of security issues in employment and future research. The session will include presentation of the workshop context and conclusions, but will be open to participant discussion. This will be the first public presentation of the results of the workshop and the first opportunity for significant response.
Smart government is possible only if the security of sensitive data can be assured. The more knowledgeable government officials and citizens are about cybersecurity, the better the chances that government data is not compromised or abused. In this paper, we present two systems under development that aim at improving cybersecurity education. First, we are creating a taxonomy of cybersecurity topics that provides links to relevant educational and research material. Second, we are building a portal that serves as a platform for users to discuss the security of websites. These sources can be linked together, which helps to strengthen the knowledge of government officials and citizens with regard to cybersecurity issues, a central concern for open government initiatives.