Bibliography
E-health systems, and specifically Telecare Medical Information Systems (TMIS), are deployed to provide patients who have specific diseases with healthcare services that are usually based on remote monitoring. Establishing an efficient, convenient, and secure connection between users and medical servers over insecure channels is therefore a major issue for medical services. In this context, owing to the characteristics of biometrics, many biometrics-based three-factor user authentication schemes have been proposed in the literature to secure user/server communication in medical services. In this paper, we briefly survey the most interesting proposals. We then propose a new three-factor authentication and key agreement scheme for TMIS. Our scheme not only fixes the security drawbacks of some of the studied related work, but also offers additional significant features while minimizing resource consumption. In addition, we perform a formal verification using the widely accepted formal security verification tool AVISPA to demonstrate that our proposed scheme is secure. Our comparative performance analysis also reveals that our proposed scheme consumes fewer resources than the proposals of related work.
As the volume of data in various industries and government sectors grows exponentially, the `7V' concept of big data aims to create new value by broadly collecting and analyzing information from diverse fields. At the same time, as the ecosystem of the ICT industry matures, big data utilization is threatened by privacy attacks, such as infringement, due to the large amount of data involved. To manage and sustain a controllable privacy level, recommended de-identification techniques are needed. This paper examines these de-identification processes and three commonly used types of privacy models. Furthermore, this paper presents use cases to which these technologies can be applied and discusses future development directions.
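The abstract above does not name its three privacy models, but de-identification results are commonly validated against formal models such as k-anonymity. As a minimal sketch (the record layout and the choice of quasi-identifiers here are illustrative assumptions, not taken from the paper), a k-anonymity check over de-identified records can look like this:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k records (the k-anonymity property)."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Hypothetical de-identified records: age generalized to a range, ZIP truncated.
records = [
    {"age": "30-39", "zip": "481**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "481**", "diagnosis": "asthma"},
    {"age": "40-49", "zip": "482**", "diagnosis": "flu"},
    {"age": "40-49", "zip": "482**", "diagnosis": "diabetes"},
]
print(is_k_anonymous(records, ["age", "zip"], k=2))  # → True
```

Raising k to 3 would fail here, since each quasi-identifier group contains only two records.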
Recent security studies of the manipulation of individual droplets of samples and reagents by digital microfluidic biochips have shown that the biochip design flow is vulnerable to piracy, hardware Trojan attacks, overproduction, denial-of-service attacks, and counterfeiting. Attackers can mount bioprotocol manipulation attacks against biochips used for medical diagnosis, biochemical analysis, and disease detection in the healthcare industry. Among these attacks, hardware Trojans pose a major security threat, offering multiple ways to leak sensitive data or alter the original functionality by performing malicious operations on the biochip. In this paper, we present a systematic algorithm for assigning the checkpoints required for error recovery of bioprotocols in the presence of hardware Trojan attacks on biochip operations. Moreover, the algorithm guides the placement and timing of checkpoints so that the impact of an attack is reduced, thereby improving the security of digital microfluidic biochips. A comparative study with traditional checkpoint schemes demonstrates the superiority of the proposed algorithm, which achieves higher error-detection accuracy without increasing bioprotocol completion time.
Differential privacy is an approach that preserves patient privacy while permitting researchers access to medical data. This paper presents mechanisms proposed to satisfy differential privacy while answering a given workload of range queries. Representing the input data as a vector of counts, these methods partition the vector according to relationships between the data and the ranges of the given queries. After partitioning the vector into buckets, the count of each bucket is estimated privately and split among the bucket's positions to answer the given query set. The performance of the proposed method was evaluated using different workloads over several attributes. The results show that partitioning the vector based on the data can produce more accurate answers, while partitioning the vector based on the given workload improves privacy. This paper's two main contributions are: (1) improving earlier work on partitioning mechanisms by building a greedy algorithm that partitions the counts vector efficiently, and (2) introducing an adaptive algorithm that considers the sensitivity of the given queries before providing results.
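The bucket-then-split step can be illustrated with the standard Laplace mechanism. This is a generic sketch, not the paper's greedy or adaptive partitioning algorithm: the partition is supplied as a given, each bucket total is perturbed with Laplace noise (sensitivity 1 for disjoint count buckets), and the noisy total is spread uniformly over the bucket's positions.

```python
import numpy as np

def answer_buckets(counts, partition, epsilon, rng=None):
    """Estimate each bucket's total with Laplace noise, then split the
    noisy total uniformly across the bucket's positions."""
    if rng is None:
        rng = np.random.default_rng()
    estimate = np.zeros(len(counts))
    for start, end in partition:              # disjoint [start, end) buckets
        total = counts[start:end].sum()
        noisy = total + rng.laplace(scale=1.0 / epsilon)
        estimate[start:end] = noisy / (end - start)
    return estimate

counts = np.array([5.0, 7.0, 6.0, 20.0, 22.0, 21.0])
est = answer_buckets(counts, [(0, 3), (3, 6)], epsilon=1.0)
# A range query is then answered by summing estimated positions, e.g. est[1:5].sum()
```

Because only one noisy value is drawn per bucket, positions inside a bucket share the same estimate; accuracy therefore depends on how well the partition groups positions with similar counts, which is exactly what the paper's data-aware partitioning optimizes.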
Personalized medicine performs diagnoses and treatments according to the DNA information of patients. This new paradigm will change the healthcare model in the future: a doctor will perform DNA sequence matching instead of regular clinical laboratory tests to diagnose and medicate diseases. Additionally, with the help of affordable personal genomics services such as 23andMe, personalized medicine will reach a large population. Cloud computing is the ideal computing model, as the volume of DNA data and the computation over it are often immense. However, due to its sensitivity, DNA data should be encrypted before being outsourced to the cloud. In this paper, we start from a practical system model of personalized medicine and present a solution to the secure DNA sequence matching problem in cloud computing. Compared with existing solutions, our scheme protects the DNA data privacy as well as the search pattern, providing a better privacy guarantee. We have proved that our scheme is secure under a well-defined cryptographic assumption, namely the sub-group decision assumption over a bilinear group. Unlike existing interactive schemes, our scheme requires only one round of communication, which is critical in practical application scenarios. We also carry out a simulation study using real-world DNA data to evaluate the performance of our scheme. The simulation results show that the computation overhead for real-world problems is practical and the communication cost is small. Furthermore, our scheme is not limited to the genome matching problem; it applies to general privacy-preserving pattern matching problems, which are widely used in the real world.
Genetic data are an important dataset used in genetic epidemiology to investigate biologically coded information within the human genome. Enormous research effort has been devoted in recent years to fully sequencing and understanding the genome. Personalized medicine, predicting patient response to treatments, and uncovering relationships between specific genes and characteristics such as phenotypes and diseases are, to mention a few, positive impacts of studying the genome. The sensitivity, longevity, and non-modifiable nature of genetic data make it even more interesting; consequently, the security and privacy of the storage and processing of genomic data demand attention. A common activity carried out by geneticists is association analysis between alleles, or between a genetic locus and a disease. Using cryptographic techniques such as homomorphic encryption schemes and multiparty computation, we demonstrate how such analysis can be carried out in a privacy-friendly manner. We compute a 3 × 3 contingency table and then genome analysis algorithms, such as linkage disequilibrium (LD) measures, entirely in the encrypted domain. Our computation guarantees privacy of the genome data under our security settings and provides up to a 98.4% improvement compared to an existing solution.
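The LD measures evaluated in that paper have a compact plaintext form; what follows is a sketch of the standard formulas for D, D′, and r² (the homomorphic evaluation itself is omitted, and the frequencies used in the example are illustrative, not from the paper):

```python
def linkage_disequilibrium(p_ab, p_a, p_b):
    """Standard LD measures from haplotype frequency p_AB and
    allele frequencies p_A, p_B at two loci."""
    d = p_ab - p_a * p_b                              # raw disequilibrium D
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = d / d_max if d_max else 0.0             # normalized D'
    r2 = d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))  # squared correlation r^2
    return d, d_prime, r2

d, d_prime, r2 = linkage_disequilibrium(p_ab=0.4, p_a=0.5, p_b=0.6)
print(d, d_prime)  # → 0.1 0.5
```

Since D, D′, and r² reduce to additions, multiplications, and a final division, the additions and multiplications map naturally onto homomorphic operations, with only the divisions requiring interaction or decryption, which is the structure such privacy-friendly protocols typically exploit.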
Summary form only given. Strong light-matter coupling has recently been successfully explored in the GHz and THz range [1] with on-chip platforms. New and intriguing quantum optical phenomena have been predicted in the ultrastrong coupling regime [2], when the coupling strength Ω becomes comparable to the unperturbed frequency of the system ω. We recently proposed a new experimental platform in which we couple the inter-Landau-level transition of a high-mobility 2DEG to the highly subwavelength photonic mode of an LC meta-atom [3], showing a very large Ω/ωc = 0.87. Our system benefits from the collective enhancement of the light-matter coupling, which comes from the scaling of the coupling Ω ∝ √n, where n is the number of optically active electrons. In our previous experiments [3] and in the literature [4], this number varies from 10⁴ to 10³ electrons per meta-atom. We now engineer a new cavity, resonant at 290 GHz, with an extremely reduced effective mode surface S_eff = 4 × 10⁻¹⁴ m² (FE simulations, CST), yielding large field enhancements above 1500 and allowing us to enter the few-electron (<100) regime. It consists of a complementary metasurface with two very sharp metallic tips separated by a 60 nm gap (Fig. 1(a, b)) on top of a single triangular quantum well. THz-TDS transmission experiments as a function of the applied magnetic field reveal a strong anticrossing of the cavity mode with the linear cyclotron dispersion. Measurements for arrays of only 12 cavities are reported in Fig. 1(c). On the top horizontal axis we report the number of electrons occupying the topmost Landau level as a function of the magnetic field. At the anticrossing field of B = 0.73 T we measure approximately 60 ultrastrongly coupled electrons (Ω/ω …).
Wireless Capsule Endoscopy (WCE) is a noninvasive device for the detection of gastrointestinal problems, especially small bowel diseases such as polyps, which cause gastrointestinal bleeding. The quality of WCE images is very important for diagnosis. In this paper, a new method is proposed to improve the quality of WCE images using a Removing Noise and Contrast Enhancement (RNCE) algorithm. The algorithm has been implemented and tested on real images. The quality metrics used for performance evaluation of the proposed method are the Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Edge Strength Similarity for Images (ESSIM). The results obtained from SSIM, PSNR, and ESSIM indicate that the implemented RNCE method improves the quality of WCE images significantly.
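Of the three metrics named above, PSNR has the simplest closed form. A minimal sketch, assuming 8-bit images (the toy 4×4 image is illustrative, not WCE data):

```python
import numpy as np

def psnr(reference, processed, max_value=255.0):
    """Peak Signal-to-Noise Ratio (dB) between a reference image and a
    processed (e.g. denoised/enhanced) image: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(float) - processed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                 # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

ref = np.full((4, 4), 100, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 110                           # single corrupted pixel
print(round(psnr(ref, noisy), 2))           # → 40.17
```

Higher PSNR indicates a processed image closer to the reference, which is how an enhancement method such as RNCE is scored against the original frames.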
Optical Coherence Tomography (OCT) has shown great potential as a complementary imaging tool in the diagnosis of skin diseases. Speckle noise is the most prominent artifact present in OCT images and can limit interpretation and detection capabilities. In this work we evaluate various denoising filters with high edge-preserving potential for the reduction of speckle noise in 256 dermatological OCT B-scans. Our results show that the Enhanced Sigma Filter and Block Matching 3-D (BM3D), as 2D denoising filters, and the Wavelet Multiframe algorithm, which considers adjacent B-scans, achieved the best results in terms of the enhancement quality metrics used. Our results suggest that a combination of 2D filtering followed by a wavelet-based compounding algorithm may significantly reduce speckle, increasing signal-to-noise and contrast-to-noise ratios, without the need for extra acquisitions of the same frame.
Genes, proteins, and other metabolites present in the cellular environment form a virtual network that represents the regulatory relationships among its constituents. This network is called a Gene Regulatory Network (GRN). Computational reconstruction of GRNs reveals normal metabolic pathways as well as disease motifs. The availability of microarray gene expression data from normal and diseased tissues makes the job easier for computational biologists. Our reconstruction of GRNs is based on neural modeling. Here we have used discrete and continuous versions of a meta-heuristic named the Firefly algorithm for structure and parameter learning of GRNs, respectively. We propose the discrete version for this problem and apply it to explore the discrete search space of GRN structures. To evaluate the performance of the algorithm, we have used a widely used synthetic GRN data set. The algorithm shows an accuracy rate above 50% in recovering the GRN. The performance of the Firefly algorithm in structure and parameter optimization of GRNs is promising.
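The continuous Firefly algorithm used for parameter learning follows a standard form: each firefly moves toward every brighter (better) firefly with an attractiveness β₀·exp(−γr²) that decays with distance, plus a small random walk. The sketch below uses generic settings on a simple test function, not the paper's GRN-specific encoding or fitness:

```python
import numpy as np

def firefly_minimize(f, dim, n=15, iters=100, alpha=0.2, beta0=1.0,
                     gamma=1.0, seed=0):
    """Minimize f over [-1, 1]^dim with the standard Firefly algorithm."""
    rng = rng_init = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n, dim))
    fitness = np.array([f(xi) for xi in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if fitness[j] < fitness[i]:          # firefly j is brighter
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    # attraction toward j plus a random perturbation
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    fitness[i] = f(x[i])
        alpha *= 0.97                                # cool the random walk
    best = int(np.argmin(fitness))
    return x[best], fitness[best]

# Toy fitness: sphere function; a GRN application would instead score how well
# candidate model parameters reproduce the expression time series.
best_x, best_f = firefly_minimize(lambda v: float(np.sum(v ** 2)), dim=3)
```

The discrete variant proposed in the paper replaces the real-valued move with a move over binary adjacency structures; the attraction-by-brightness logic is the same.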
This study presents a spatial analysis of a Dengue Fever (DF) outbreak using a Geographic Information System (GIS) in the state of Selangor, Malaysia. DF is an Aedes mosquito-borne disease. The aim of the study is to map the spread of the DF outbreak in Selangor, and the objective is to identify high-risk areas of DF by producing a risk map using GIS tools. The data used were DF cases from 2012 obtained from the Ministry of Health, Malaysia. The analysis was carried out using Moran's I, Average Nearest Neighbor (ANN), Kernel Density Estimation (KDE), and buffer analysis in GIS. The Moran's I analysis shows that the distribution pattern of DF in Selangor is clustered. The ANN analysis shows a dispersed pattern, with a ratio greater than 1. The third analysis used KDE to locate hot spots; the results show that some districts are classified as high-risk areas, namely Ampang, Damansara, Kapar, Kajang, Klang, Semenyih, Sungai Buloh, and Petaling. The buffer analysis, over areas ranging from 200 m to 500 m above sea level, shows a clustered pattern in which the most frequent cases of the year occur at the same locations. This demonstrates that analysis based on spatial statistics, spatial interpolation, and buffer analysis can be used as a method for locating and controlling DF infection with the aid of GIS.
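The global Moran's I statistic used for the clustering test has a compact closed form, I = (n/S₀)·Σᵢⱼ wᵢⱼ(xᵢ−x̄)(xⱼ−x̄) / Σᵢ(xᵢ−x̄)². A minimal sketch over an assumed binary adjacency weight matrix (the four-area example is hypothetical, not the Selangor data):

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I. Values near +1 indicate clustering, near -1
    dispersion; the expectation under spatial randomness is -1/(n-1)."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = len(x)
    z = x - x.mean()            # deviations from the mean
    s0 = w.sum()                # total spatial weight
    return (n / s0) * (z @ w @ z) / (z @ z)

# Four areas on a line; neighboring areas get weight 1. High case counts
# cluster on the left, so a positive I is expected.
cases = [30, 28, 2, 1]
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(round(morans_i(cases, w), 4))  # → 0.3654
```

A GIS package computes the same quantity from polygon contiguity; the significance of the observed I against the randomization expectation is what justifies calling a pattern "clustered".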