Biblio
Security has become a vital component of today's technology. People wish to safeguard their valuable items in bank lockers, and with advancing technology most banks have replaced manual lockers with digital ones. Although numerous biometric approaches exist, many are not robust. In this work we propose a new approach to personal biometric identification based on features extracted from the ECG.
Security in mobile handsets of telecommunication standards such as GSM, Project 25 and TETRA is very important, especially when governments and military forces use these handsets and telecommunication devices. Although telecommunication can be made quite secure through encryption, coding, tunneling and exclusive channels, attackers create new ways to bypass them without the knowledge of the legitimate user. In this paper we introduce a new, simple and economical circuit that warns the user when a message is not encrypted, whether because of manipulation by attackers or accidental damage. The circuit consumes very little power and is designed to keep telecommunication devices both secure and user-friendly. Warning the user enables proper use of the device without wasting time and energy on fault detection.
Compressive sensing (CS) is a novel technology for sparse signal acquisition at a sub-Nyquist sampling rate but with relatively high resolution. Photonics-assisted CS has attracted much attention recently due to the wide bandwidth offered by photonics. This paper discusses approaches to realizing photonics-assisted CS.
The transmission of data over a common transmission medium has revolutionized information sharing, moving it from the personal desktop to cloud computing. At the same time, the risk of information theft by third parties operating on the same channel has grown in proportion. This risk can be reduced with a suitable encryption algorithm: the transmitted data are encrypted before being placed on the common channel, and the authenticated user decrypts them using the public or private key, preventing theft by unauthenticated users. In this work we propose an encryption algorithm that uses ASCII codes to encrypt the plain text. A common key, shared by sender and receiver, is used to encrypt and decrypt the text for secure communication.
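A minimal sketch of how such an ASCII-based shared-key scheme might look is given below. The paper's exact transformation is not specified in the abstract, so the shift-by-key-character rule, the function names and the sample key are illustrative assumptions only.

```python
# Illustrative sketch of an ASCII-code, shared-key cipher (not the paper's exact algorithm).
# Each plaintext character's ASCII value is shifted by the ASCII value of the
# corresponding key character (key repeated as needed), modulo 256.

def encrypt(plaintext: str, key: str) -> list[int]:
    return [(ord(p) + ord(key[i % len(key)])) % 256
            for i, p in enumerate(plaintext)]

def decrypt(ciphertext: list[int], key: str) -> str:
    return "".join(chr((c - ord(key[i % len(key)])) % 256)
                   for i, c in enumerate(ciphertext))

if __name__ == "__main__":
    shared_key = "s3cret"                      # common key held by sender and receiver
    cipher = encrypt("secure communication", shared_key)
    assert decrypt(cipher, shared_key) == "secure communication"
```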
This paper proposes a fast and robust procedure for sensing and reconstruction of sparse or compressible magnetic resonance images based on compressive sampling theory. The algorithm starts with incoherent undersampling of the k-space data of the image using a random matrix. The undersampled data are sparsified using the Haar transform. The Haar transform coefficients of the k-space data are then reconstructed using the Orthogonal Matching Pursuit (OMP) algorithm. The reconstructed coefficients are inverse transformed into k-space data and then into the image in the spatial domain. Finally, a median filter is used to suppress the recovery noise artifacts. Experimental results show that the proposed procedure greatly reduces the image data acquisition time without significantly reducing the image quality. The results also show that the error in the reconstructed image is reduced by median filtering.
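The sensing-and-recovery stages described above can be illustrated on a 1-D toy problem: a sparse signal, a random (incoherent) measurement matrix, and OMP recovery. The sketch below is only an illustration under assumed dimensions and sparsity; the paper works on 2-D k-space data with a Haar sparsifying transform and a final median filter, which are not reproduced here.

```python
# Toy 1-D illustration of random undersampling + Orthogonal Matching Pursuit.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                 # signal length, number of measurements, sparsity (assumed)

x = np.zeros(n)                      # sparse "transform-domain" signal
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random incoherent sensing matrix
y = A @ x                                      # undersampled measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(A, y)
x_hat = omp.coef_

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```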
In data analysis, it is always a difficult task to strike a balance between privacy and the usefulness of the data. Because of the demand for individual privacy, data are obscured to some degree before being released or outsourced in order to avoid possible privacy leakage. This process is called de-identification. When discussing a de-identification policy, the two most important aspects are the re-identification risk and the information loss. In this paper, we introduce a novel policy-searching method that efficiently finds proper de-identification policies for an acceptable re-identification risk while retaining the information residing in the data. Using the UCI Machine Learning Repository as our real-world dataset, the computed re-identification risk therefore reflects the true risk of the de-identified data under the de-identification policies. Moreover, using the proposed algorithm, one can efficiently acquire policies with higher information entropy.
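The two quantities traded off above can be made concrete with a small sketch: an expected re-identification risk (average of one over the size of each quasi-identifier equivalence class) and the entropy retained after generalization. The columns, the example records and the generalization rule below are assumptions for illustration; this is not the paper's policy-search algorithm.

```python
# Illustrative metrics for comparing de-identification policies.
from collections import Counter
from math import log2

records = [("1987", "13053", "F"), ("1987", "13068", "F"),
           ("1965", "14850", "M"), ("1966", "14850", "M")]

def generalize(rec):
    """Example policy: keep only birth decade, 3-digit ZIP prefix, and sex."""
    year, zipcode, sex = rec
    return (year[:3] + "x", zipcode[:3] + "**", sex)

def reidentification_risk(recs):
    classes = Counter(recs)                    # equivalence classes of quasi-identifiers
    return sum(1.0 / classes[r] for r in recs) / len(recs)

def entropy(recs):
    counts, n = Counter(recs), len(recs)
    return -sum((c / n) * log2(c / n) for c in counts.values())

generalized = [generalize(r) for r in records]
print("risk before:", reidentification_risk(records), "entropy:", entropy(records))
print("risk after: ", reidentification_risk(generalized), "entropy:", entropy(generalized))
```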
With the rapid growth of Internet content, the main usage pattern of the Internet is shifting from the traditional host-to-host model to a content dissemination model. To support content distribution, content delivery networks (CDNs) provide an ad-hoc solution, and some future Internet projects suggest a clean-slate design. Web applications have become one of the fundamental Internet services, and effectively supporting the popular browser-based web application is one of the keys to success for future Internet projects. This paper proposes IDNet-based web applications. IDNet consists of an id/locator separation scheme and the domain-insulated autonomous network architecture (DIANA), which redesigns the future Internet on a clean-slate basis. We design and develop an IDNet browser based on the open-source Qt framework. The IDNet browser enables ID fetching and rendering through both `idp:/'-scheme URIDs (Universal Resource Identifiers) and `http:/'-scheme URIs in HTML. The experiment shows that it is well applicable to the IDNet test topology.
The main usage pattern of the Internet is shifting from the traditional host-to-host model to a content dissemination model, which has led to rapid growth in Internet content. CDN and P2P are the two mainstream technologies for providing streaming content services in the current Internet, and in recent years researchers have begun to focus on CDN-P2P hybrid architectures and ISP-friendly P2P content delivery. Web applications have become one of the fundamental Internet services, and effectively supporting the popular browser-based web application is one of the keys to success for future Internet projects. This paper proposes an ID-based browser with caching in IDNet. IDNet consists of an id/locator separation scheme and the domain-insulated autonomous network architecture (DIANA), which redesigns the future Internet on a clean-slate basis. Experiments show that the ID web browser with caching can disseminate content and find the closest network in IDNet that holds identical content.
Secret key establishment is one of the main challenges in cryptography. Many security algorithms implemented in practice exchange secret keys using complicated mathematical methods, but such methods are not desirable in power-limited terminals such as cellular and sensor networks. In this paper, we propose a physical-layer method for exchanging secret key bits in precoding-based multi-input multi-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. The proposed method uniquely relates the key bits to the indices of the precoding matrix used for MIMO channel precoding; the basic idea is to exploit a MIMO-OFDM precoding codebook. Comparative analysis with respect to the average number of mismatched bits, termed the key error rate (KER), shows a clear advantage for the new method relative to existing work. In addition, the proposed technique is shown to require less computation per secret-key byte.
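A conceptual sketch of the "key bits relate to precoder indices" idea is shown below: key bits are grouped and mapped to codebook indices, and the bits are recovered from the observed indices. The codebook size (16 precoders, hence 4 bits per precoding decision) is an assumed parameter; channel estimation, quantization and the actual precoding matrices are not modeled.

```python
# Conceptual mapping between secret-key bits and MIMO-OFDM precoding codebook indices.

CODEBOOK_SIZE = 16                                 # assumed number of precoding matrices
BITS_PER_INDEX = CODEBOOK_SIZE.bit_length() - 1    # 4 key bits select one precoder

def key_bits_to_indices(bits: str) -> list[int]:
    """Group the key bit string into codebook indices (one per precoding decision)."""
    assert len(bits) % BITS_PER_INDEX == 0
    return [int(bits[i:i + BITS_PER_INDEX], 2)
            for i in range(0, len(bits), BITS_PER_INDEX)]

def indices_to_key_bits(indices: list[int]) -> str:
    """Recover the key bits from the observed precoder indices."""
    return "".join(format(i, f"0{BITS_PER_INDEX}b") for i in indices)

key = "1011000111001010"
indices = key_bits_to_indices(key)       # precoders the transmitter would apply
assert indices_to_key_bits(indices) == key
print(indices)
```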
The data processing capabilities of MapReduce systems, combined with the on-demand scalability of cloud computing, have enabled the Big Data revolution. However, data controllers/owners are worried about the privacy and accountability impact of storing their data in cloud infrastructures, as existing cloud computing solutions provide very limited control over the underlying systems. The intuitive approach, encrypting data before uploading it to the cloud, is not applicable to MapReduce computation, because data analytics tasks are defined ad hoc in the MapReduce environment using general programming languages (e.g., Java) and homomorphic encryption methods that can scale to big data do not exist. In this paper, we address the challenges of determining and detecting unauthorized access to data stored in MapReduce-based cloud environments. To this end, we introduce alarm-raising honeypots distributed over the data that are never accessed by authorized MapReduce jobs but only by attackers and/or unauthorized users. Our analysis shows that unauthorized data accesses can be detected with reasonable performance in MapReduce-based cloud environments.
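The alarm-raising idea can be illustrated with a map-side check: records carrying a honeypot marker should never be read by authorized jobs, so touching one triggers an alert. The marker convention, the alerting hook and the record format below are assumptions for illustration, not the paper's implementation.

```python
# Illustrative map-side honeypot check for a MapReduce-style job.
import logging

HONEYPOT_MARKER = "#HONEYPOT#"          # assumed marker embedded in decoy records

def raise_alarm(record: str) -> None:
    logging.warning("Possible unauthorized access: honeypot record touched: %r", record)

def mapper(record: str):
    """Word-count style mapper that alarms on honeypot records and skips them."""
    if HONEYPOT_MARKER in record:
        raise_alarm(record)
        return                           # yield nothing for decoy data
    for word in record.split():
        yield word, 1

if __name__ == "__main__":
    logging.basicConfig(level=logging.WARNING)
    data = ["real log line one", "#HONEYPOT# decoy account 42", "real log line two"]
    pairs = [kv for line in data for kv in mapper(line)]
    print(pairs)
```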
This paper introduces the design and implementation of a security scheme for the Internet of Things (IoT) based on ECQV implicit certificates and the Datagram Transport Layer Security (DTLS) protocol. In the proposed scheme, the elliptic-curve-cryptography-based ECQV implicit certificate plays a key role, allowing mutual authentication and key establishment between two resource-constrained IoT devices. We present how IoT devices obtain ECQV implicit certificates and use them for authenticated key exchange in DTLS. An evaluation of the implementation's execution time is also conducted to assess the efficiency of the solution.
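For context on how implicit certificates support mutual authentication, the standard ECQV key-reconstruction relations are sketched below (this follows the general ECQV construction; the paper's exact message flow and parameter choices are not reproduced here).

```latex
% Standard ECQV implicit-certificate key reconstruction (sketch, not the paper's exact flow).
% G: base point of order n; (d_{CA}, Q_{CA}): CA key pair; H: hash function;
% P_U: public-key reconstruction value embedded in Cert_U; k_U: device's ephemeral secret;
% r: private-key contribution returned by the CA.
\begin{align*}
  e   &= H(\mathrm{Cert}_U)            && \text{hash of the implicit certificate}\\
  d_U &= e\, k_U + r \pmod{n}          && \text{private key computed by the device}\\
  Q_U &= e\, P_U + Q_{CA}              && \text{public key reconstructed by any peer}
\end{align*}
```

Each peer can thus reconstruct the other's public key from the compact certificate and the CA public key, which is what enables the authenticated key exchange in DTLS described above.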
Explainability and accuracy of machine learning algorithms usually lie in a trade-off relationship: several algorithms, such as deep-learning artificial neural networks, have high accuracy but low explainability. Because there are only limited ways to access the learning and prediction processes of these algorithms, researchers and users have not been able to understand how the results are produced. However, a recent project on explainable artificial intelligence (XAI) by DARPA showed that AI systems can be highly explainable and also accurate. Several XAI technical reports suggested ways of extracting explainable features and described their positive effects on users; the results showed that the explainability of AI helps users understand and trust the system. However, only a few studies have addressed why explainability brings positive effects to users. We suggest theoretical reasons drawn from attribution theory and anthropomorphism studies. Through a review, we develop three hypotheses: (1) causal attribution is part of human nature, so a system that provides causal explanations of its process will lead users to attribute the system's results; (2) based on these attributions, users will perceive the system as human-like, which motivates anthropomorphism; (3) through this anthropomorphism, users will come to perceive and understand the system. We provide a research framework for designing causal explainability of an AI system and discuss the expected results of the research.
Personal data are compiled and used by a great number of organizations to carry out their legal activities. These organizations are legally bound to maintain the confidentiality and security of personal data, so access logs for personal information are required. Depending on needs and capacity, personal data can be exposed to users via platforms such as file systems, databases and web services. The web service platform is a popular alternative since it is autonomous and isolates the data source from the user. This paper discusses how to log access to personal data exposed through web services. As an alternative to the classical method, in which logs are recorded and saved by client applications, a mechanism that forms a central audit log with an API manager is investigated. A model policy exemplifying the central logging method is used to explore its advantages and disadvantages. It is concluded that this model can be employed for centrally recording audit logs.
It is a fundamental problem to decide how many copies of an unknown mixed quantum state are necessary and sufficient to determine the state. This is the quantum analogue of the problem of estimating a probability distribution given some number of samples. Previously, it was known only that estimating states to error ε in trace distance required O(dr²/ε²) copies for a d-dimensional density matrix of rank r. Here, we give a measurement scheme (POVM) that uses O((dr/δ) ln(d/δ)) copies to estimate ρ to error δ in infidelity. This implies that O((dr/ε²) ln(d/ε)) copies suffice to achieve error ε in trace distance. For fixed d, our measurement can be implemented on a quantum computer in time polynomial in n. We also use the Holevo bound from quantum information theory to prove a lower bound of Ω((dr/ε²)/log(d/rε)) copies needed to achieve error ε in trace distance, which implies a lower bound of Ω((dr/δ)/log(d/rδ)) for estimation error δ in infidelity. These match our upper bounds up to log factors. Our techniques also show an Ω(r²d/δ) lower bound for measurement strategies in which each copy is measured individually and the outcomes are classically post-processed to produce an estimate. This matches the known achievability results and proves for the first time that such "product" measurements have asymptotically suboptimal scaling with d and r.
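The step from the infidelity guarantee to the trace-distance guarantee follows from the Fuchs-van de Graaf inequality (with the square-root fidelity convention); a worked sketch of that conversion, not taken verbatim from the paper, is:

```latex
% Converting the infidelity guarantee into the trace-distance guarantee (sketch).
1 - F(\rho,\hat\rho) \le \delta
\;\Longrightarrow\;
D(\rho,\hat\rho) \le \sqrt{1 - F(\rho,\hat\rho)^2} \le \sqrt{2\delta},
\qquad
\text{so } \delta = \varepsilon^2/2 \text{ gives } D(\rho,\hat\rho) \le \varepsilon
\text{ and } O\!\big(\tfrac{dr}{\delta}\ln\tfrac{d}{\delta}\big)
= O\!\big(\tfrac{dr}{\varepsilon^2}\ln\tfrac{d}{\varepsilon}\big)\text{ copies.}
```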
Tactical wireless sensor networks (WSNs) are deployed over a region of interest for mission centric operations. The sink node in a tactical WSN is the aggregation point of data processing. Due to its essential role in the network, the sink node is a high priority target for an attacker who wishes to disable a tactical WSN. This paper focuses on the mitigation of sink-node vulnerability in a tactical WSN. Specifically, we study the issue of protecting the sink node through a technique known as k-anonymity. To achieve k-anonymity, we use a specific routing protocol designed to work within the constraints of WSN communication protocols, specifically IEEE 802.15.4. We use and modify the Lightweight Ad hoc On-Demand Next Generation (LOADng) reactive-routing protocol to achieve anonymity. This modified LOADng protocol prevents an attacker from identifying the sink node without adding significant complexity to the regular sensor nodes. We simulate the modified LOADng protocol using a custom-designed simulator in MATLAB. We demonstrate the effectiveness of our protocol and also show some of the performance tradeoffs that come with this method.
A Wireless Sensor Network (WSN) consists of numerous small devices, called sensors, with limited resources such as energy, memory, and computation. Sensors are deployed in harsh environments and are vulnerable to various security threats; because of these resource restrictions, key management and the provision of robust security in such networks is a challenge. Cryptographic keys may be used in two ways, symmetric or asymmetric; asymmetric cryptography requires more communication, memory, and computation than symmetric cryptography, so it is not appropriate for WSNs. In this paper, a key management scheme based on symmetric keys is proposed, in which each node uses a pseudo-random number generator (PRNG) to generate a key shared with the base station, derived from a pre-distributed initial key, and uses RC5 in CBC mode to achieve confidentiality, integrity and authentication.
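The key-derivation idea can be sketched as follows: node and base station run the same keyed PRNG over the pre-distributed initial key and a counter, so both sides arrive at the same session key without ever transmitting it. Because RC5 in CBC mode is not available in the Python standard library, HMAC-SHA256 is used below as a stand-in keyed PRNG; the function names, node identifier and counter scheme are illustrative assumptions.

```python
# Sketch of symmetric key derivation from a pre-distributed initial key.
import hmac
import hashlib

def derive_key(initial_key: bytes, node_id: bytes, counter: int) -> bytes:
    """Keyed PRNG stand-in: both node and base station compute the same output."""
    msg = node_id + counter.to_bytes(4, "big")
    return hmac.new(initial_key, msg, hashlib.sha256).digest()[:16]   # 128-bit key

initial_key = b"pre-distributed-initial-key"     # loaded into node and base station
node_id = b"node-07"

node_key = derive_key(initial_key, node_id, counter=1)
base_station_key = derive_key(initial_key, node_id, counter=1)
assert node_key == base_station_key               # both ends hold the same session key
```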
Smartphones have become ubiquitous in our everyday lives, providing diverse functionalities via millions of readily available applications (apps). To deliver these functionalities, apps need to access and use potentially sensitive data stored on the user's device. This can pose a serious threat to users' security and privacy when developers are malicious or underskilled. While application marketplaces, such as the Google Play store and the Apple App store, provide factors like ratings, user reviews, and number of downloads to distinguish benign from risky apps, studies have shown that these metrics are not adequately effective. The security and privacy health of an application should also be considered to generate a more reliable and transparent trustworthiness score. To automate the trustworthiness assessment of mobile applications, we introduce the Trust4App framework, which not only considers the publicly available factors mentioned above but also takes into account the Security and Privacy (S&P) health of an application. Additionally, it considers the S&P posture of the user and provides a holistic, personalized trustworthiness score. While existing automatic trustworthiness frameworks consider trustworthiness indicators (e.g., permission usage, privacy leaks) only individually, Trust4App is, to the best of our knowledge, the first framework to combine these indicators. We also implement a proof-of-concept realization of our framework and demonstrate that Trust4App provides a more comprehensive, intuitive and actionable trustworthiness assessment than existing approaches.
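One way to picture "combining public factors with S&P health into a personalized score" is a weighted aggregation like the toy sketch below. The indicator names, weights, normalization and the user "privacy concern" knob are assumptions for illustration only; they are not the Trust4App scoring model.

```python
# Toy illustration of combining marketplace factors with S&P indicators.

def trust_score(app: dict, privacy_concern: float = 0.5) -> float:
    """Return a score in [0, 1]; privacy_concern shifts weight toward S&P health."""
    public = 0.6 * (app["rating"] / 5.0) + 0.4 * min(app["downloads"] / 1e6, 1.0)
    snp = 1.0 - 0.5 * app["overprivileged_permissions"] - 0.5 * app["privacy_leaks"]
    w = 0.3 + 0.4 * privacy_concern           # privacy-conscious users weight S&P more
    return round((1 - w) * public + w * max(snp, 0.0), 3)

app = {"rating": 4.2, "downloads": 2.5e6,
       "overprivileged_permissions": 0.4,     # fraction of requested-but-unused permissions
       "privacy_leaks": 0.2}                  # normalized leak indicator in [0, 1]

print(trust_score(app, privacy_concern=0.2))  # casual user
print(trust_score(app, privacy_concern=0.9))  # privacy-conscious user
```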