Biblio

Filters: Keyword is Compression
2022-06-30
Ahuja, Bharti, Doriya, Rajesh.  2021.  An Unsupervised Learning Approach for Visual Data Compression with Chaotic Encryption. 2021 Fourth International Conference on Electrical, Computer and Communication Technologies (ICECCT). :1–4.
The increasing demand for multimedia leads to a shortage of network bandwidth and memory capacity. As a result, image compression is significant for decreasing data redundancy and saving storage space and bandwidth. Along with compression, the next major challenge in this field is to safeguard the compressed data from adversaries, commonly known as hackers. Major advances in fields such as communication, wireless sensor networks, data science, cloud computing and machine learning not only ease the operations of those fields but also raise new challenges. This paper proposes a combined scheme for image compression and encryption based on unsupervised learning, i.e., k-means clustering for compression, with a logistic chaotic map for encryption. The main advantage of this combination is that it addresses both the storage and the security of visual data. The algorithm reduces the size of the input image and also provides a large key space for encryption. The validity of the algorithm is verified with the PSNR, MSE, SSIM and correlation coefficient.
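
As a rough sketch of the two building blocks described above (the abstract does not fix the pipeline's parameters, so the cluster count, the map constant r = 3.99 and the seed x0 are our assumptions, and scikit-learn's KMeans stands in for the paper's clustering step):

```python
# Sketch: k-means palette compression followed by logistic-map XOR encryption.
import numpy as np
from sklearn.cluster import KMeans

def compress_kmeans(img: np.ndarray, n_colors: int = 16):
    """Quantize an HxWx3 uint8 image to a palette of n_colors centroids."""
    pixels = img.reshape(-1, 3).astype(np.float64)
    km = KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit(pixels)
    labels = km.labels_.astype(np.uint8)           # 1 byte per pixel instead of 3
    palette = km.cluster_centers_.astype(np.uint8)
    return labels, palette, img.shape

def logistic_keystream(n: int, x0: float = 0.7, r: float = 3.99) -> np.ndarray:
    """Generate n chaotic bytes from the logistic map x <- r*x*(1-x)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt(labels: np.ndarray, x0: float) -> np.ndarray:
    return labels ^ logistic_keystream(labels.size, x0)   # XOR diffusion

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in image
labels, palette, shape = compress_kmeans(img)
cipher = encrypt(labels, x0=0.7)
assert np.array_equal(encrypt(cipher, x0=0.7), labels)        # XOR is symmetric
```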
2022-04-01
Pokharana, Anchal, Sharma, Samiksha.  2021.  Encryption, File Splitting and File Compression Techniques for Data Security in Virtualized Environment. 2021 Third International Conference on Inventive Research in Computing Applications (ICIRCA). :480–485.
Nowadays cloud computing has become a crucial part of IT, and the most important concern is information security in the cloud environment. A range of users can access the facilities and use the cloud as suits their needs. Cloud computing is used as safe storage of information, but data security is still the biggest concern: confidentiality, data availability and data integrity are considerable factors for cloud storage. Cloud service providers offer clients the facility to store data on the cloud remotely and access it whenever required. Due to this facility, it becomes necessary to shield information from unauthorized access, hackers, or any sort of alteration and malicious conduct. It is an inexpensive approach to storing valuable information that does not require the client to maintain any hardware or software, and it gives an excellent user experience, but the main concern remains security. In this work, security strategies are proposed for cloud data protection, capable of overcoming the shortcomings of conventional data protection algorithms and enhancing security using a steganography algorithm, encryption and decryption techniques, compression, and a file-splitting technique. These techniques are utilized for effective results in data protection. Clients can easily access the developed desktop application and share information in an effective and secure way.
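
The abstract names the stages but not the algorithms, so the following stdlib-only sketch of a compress-encrypt-split pipeline uses stand-ins throughout: zlib for compression, a toy SHA-256 counter keystream for encryption, and a fixed 1 KiB chunk size for splitting.

```python
# Sketch of a compress -> encrypt -> split pipeline (algorithms are stand-ins).
import hashlib, zlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy counter-mode keystream from SHA-256 (illustrative only, not vetted)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def protect(data: bytes, key: bytes, chunk: int = 1024) -> list[bytes]:
    compressed = zlib.compress(data)                       # compression step
    ks = keystream(key, len(compressed))
    cipher = bytes(a ^ b for a, b in zip(compressed, ks))  # encryption step
    return [cipher[i:i + chunk] for i in range(0, len(cipher), chunk)]  # splitting

def recover(parts: list[bytes], key: bytes) -> bytes:
    cipher = b"".join(parts)
    ks = keystream(key, len(cipher))
    return zlib.decompress(bytes(a ^ b for a, b in zip(cipher, ks)))

parts = protect(b"cloud data" * 500, key=b"secret")
assert recover(parts, b"secret") == b"cloud data" * 500
```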
2022-03-14
Salunke, Sharad, Venkatadri, M., Hashmi, Md. Farukh, Ahuja, Bharti.  2021.  An Implicit Approach for Visual Data: Compression Encryption via Singular Value Decomposition, Multiple Chaos and Beta Function. 2021 9th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO). :1–5.
This paper proposes a digital image compression-encryption scheme based on singular value decomposition (SVD), multiple chaos and the Beta function, which uses SVD to compress the digital image and employs three layers of protection for encryption, viz. the logistic and Arnold maps along with the beta function. The algorithm has four advantages. First, the compression scheme gives the user the freedom to select the desired compression level for the application at hand via the number of singular values retained. Second, it includes a confusion mechanism wherein the pixel positions of the image are scrambled using the Arnold cat map; the shuffled pixel locations yield a ciphertext image that is safe for communication. Third, the key is generated with the help of the logistic map, which is nonlinear and chaotic in nature and therefore highly secure. Fourth, the beta function used for encryption is symmetric, meaning the order of its parameters does not change the outcome of the operation, which allows faithful reconstruction of the image. Thus, the algorithm is highly secure and also saves storage space. The experimental results show that the algorithm achieves faithful reconstruction with reasonable PSNR across different numbers of retained singular values.
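
The first advantage, user-selectable compression via the number of retained singular values, is easy to make concrete; a minimal sketch of rank-k SVD compression (the scrambling, key-generation and beta-function stages are omitted):

```python
# Sketch: rank-k SVD compression of a grayscale image; k is the quality knob.
import numpy as np

def svd_compress(img: np.ndarray, k: int):
    """Keep only the k largest singular values/vectors of the image matrix."""
    U, s, Vt = np.linalg.svd(img.astype(np.float64), full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

def svd_reconstruct(U, s, Vt) -> np.ndarray:
    return (U * s) @ Vt   # equivalent to U @ diag(s) @ Vt

img = np.random.rand(128, 128) * 255
U, s, Vt = svd_compress(img, k=20)
approx = svd_reconstruct(U, s, Vt)
# Stored floats: k*(m + n + 1) versus m*n for the original matrix.
print("kept values:", U.size + s.size + Vt.size, "of", img.size)
```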
2022-03-01
Roy, Debaleena, Guha, Tanaya, Sanchez, Victor.  2021.  Graph Based Transforms based on Graph Neural Networks for Predictive Transform Coding. 2021 Data Compression Conference (DCC). :367–367.
This paper introduces the GBT-NN, a novel class of graph-based transform within the context of block-based predictive transform coding using intra-prediction. The GBT-NN is constructed by learning a mapping function to map a graph Laplacian representing the covariance matrix of the current block. Our objective in learning such a mapping function is to design a GBT that performs as well as the KLT without requiring the covariance matrix to be explicitly computed for each residual block to be transformed. To avoid signalling any additional information required to compute the inverse GBT-NN, we also introduce a coding framework that uses a template-based prediction to predict residuals at the decoder. Evaluation results on several video frames and medical images, in terms of the percentage of preserved energy and mean square error, show that the GBT-NN can outperform the DST and DCT.
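
For context, a graph-based transform is the eigenbasis of a graph Laplacian. The sketch below builds one for a toy 4-pixel line graph; the paper's actual contribution, a learned mapping from the block to its Laplacian, is replaced here by a fixed adjacency matrix.

```python
# Sketch: a graph-based transform (GBT) as the eigenbasis of a graph Laplacian.
import numpy as np

# Adjacency of a 4-node line graph standing in for a 1x4 pixel block.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W          # combinatorial Laplacian L = D - W

_, gbt = np.linalg.eigh(L)              # columns of `gbt` are the basis vectors
residual = np.array([3.0, 3.2, 2.9, 3.1])
coeffs = gbt.T @ residual               # forward transform
assert np.allclose(gbt @ coeffs, residual)  # inverse transform

# For a line graph the Laplacian eigenbasis coincides with the DCT-II basis
# (up to sign), which is why GBTs generalize the DCT to arbitrary graphs.
```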
2021-01-11
Zhang, H., Zhang, D., Chen, H., Xu, J..  2020.  Improving Efficiency of Pseudonym Revocation in VANET Using Cuckoo Filter. 2020 IEEE 20th International Conference on Communication Technology (ICCT). :763–769.
In VANETs, pseudonyms are often used to replace the identities of vehicles in communication. When vehicles drive out of the network or misbehave, their pseudonym certificates need to be revoked by the certificate authority (CA). Certificate revocation lists (CRLs) are usually used to store the revoked certificates before their expiration. However, using CRLs incurs additional storage, communication and computation overhead. Some existing schemes have proposed using a Bloom filter to compress the original CRLs, but these are unable to delete expired certificates and introduce false positives. In this paper, we propose an improved pseudonym certificate revocation scheme that uses a Cuckoo filter for compression to reduce the impact of these problems. To optimize deletion efficiency, we propose the concept of a Certificate Expiration List (CEL), which can be implemented with a priority queue. The experimental results show that our scheme can effectively reduce the storage and communication overhead of pseudonym certificate revocation while retaining moderately low false positive rates. In addition, our scheme can greatly improve lookup performance on CRLs, and reduces revocation operation costs by allowing deletion.
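
The Certificate Expiration List is essentially a min-heap keyed on expiry time, so expired certificates can be popped and deleted from the filter. A hedged sketch with Python's heapq, where a plain set stands in for the cuckoo filter (both support deletion, which a Bloom filter does not):

```python
# Sketch: Certificate Expiration List (CEL) driving deletions from a
# revocation filter. A set stands in for the cuckoo filter here.
import heapq

revoked = set()                   # stand-in for the cuckoo filter
cel: list[tuple[int, str]] = []   # min-heap of (expiry_time, cert_id)

def revoke(cert_id: str, expiry_time: int) -> None:
    revoked.add(cert_id)
    heapq.heappush(cel, (expiry_time, cert_id))

def purge_expired(now: int) -> None:
    """Pop certificates whose validity ended; remove them from the filter."""
    while cel and cel[0][0] <= now:
        _, cert_id = heapq.heappop(cel)
        revoked.discard(cert_id)  # cuckoo filters support this; Bloom doesn't

revoke("pseudonym-42", expiry_time=100)
revoke("pseudonym-7", expiry_time=50)
purge_expired(now=60)
print("pseudonym-7" in revoked, "pseudonym-42" in revoked)  # False True
```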
2020-03-16
Zebari, Dilovan Asaad, Haron, Habibollah, Zeebaree, Diyar Qader, Zain, Azlan Mohd.  2019.  A Simultaneous Approach for Compression and Encryption Techniques Using Deoxyribonucleic Acid. 2019 13th International Conference on Software, Knowledge, Information Management and Applications (SKIMA). :1–6.
Data compression is the science of representing content in a compact form, and it has become a necessity in the field of communication as well as in many scientific studies. Data transmission must also be sufficiently secure that a channel can be used without loss or alteration of information. Encryption is the process of scrambling information so that only the intended receiver can read it, and thus provides a means of securing data. The two techniques are therefore both crucial steps required for the secure transmission of large amounts of data. Typically, the compressed data is encrypted and then transmitted; however, this sequential approach is time-consuming and computationally costly. In the present paper, a simultaneous compression and encryption technique based on DNA is proposed for various sorts of secret data. In the simultaneous technique, both operations are performed in a single step, which reduces the time for the whole task. The present work consists of two phases. The first phase encodes the plaintext with 6 bits instead of 8, meaning each character is represented by three DNA nucleotides, whereas any pixel of an image is encoded by four DNA nucleotides. This phase compresses the plaintext by 25% relative to the original text. In the second phase, compression and encryption are performed at the same time: both types of data are compressed to half their size and encrypted with the generated symmetric key. This makes the technique more secure against intruders. Experimental results show better performance of the proposed scheme compared with standard compression techniques.
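
The first phase's arithmetic can be illustrated directly: with 2 bits per nucleotide, three nucleotides carry 6 bits, enough for a 64-symbol alphabet, so 8-bit text shrinks by 25%. The particular bit-to-nucleotide mapping and alphabet below are illustrative choices, not taken from the paper.

```python
# Sketch: 6-bit DNA encoding (3 nucleotides per character).
BITS_TO_NT = {"00": "A", "01": "C", "10": "G", "11": "T"}
NT_TO_BITS = {v: k for k, v in BITS_TO_NT.items()}

# A 64-symbol alphabet fits in 6 bits (indices 0..63).
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789 ."

def encode(text: str) -> str:
    nts = []
    for ch in text:
        bits = format(ALPHABET.index(ch), "06b")             # 6 bits per char
        nts += [BITS_TO_NT[bits[i:i+2]] for i in (0, 2, 4)]  # 3 nucleotides
    return "".join(nts)

def decode(dna: str) -> str:
    out = []
    for i in range(0, len(dna), 3):
        bits = "".join(NT_TO_BITS[nt] for nt in dna[i:i+3])
        out.append(ALPHABET[int(bits, 2)])
    return "".join(out)

dna = encode("Hello DNA")
assert decode(dna) == "Hello DNA"
print(dna, f"({len(dna)*2} bits vs {len('Hello DNA')*8} bits)")  # 25% smaller
```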
2019-12-10
Ponuma, R, Amutha, R, Haritha, B.  2018.  Compressive Sensing and Hyper-Chaos Based Image Compression-Encryption. 2018 Fourth International Conference on Advances in Electrical, Electronics, Information, Communication and Bio-Informatics (AEEICB). :1–5.

A 2D compressive sensing and hyper-chaos based image compression-encryption algorithm is proposed. The 2D image is compressively sampled and encrypted using two measurement matrices. A chaos-based measurement matrix construction is employed: the construction is controlled by the initial and control parameters of the chaotic system, which serve as the secret key for encryption. The linear measurements of the sparse coefficients of the image are then subjected to a hyper-chaos based diffusion, which produces the cipher image. Numerical simulation and security analysis are performed to verify the validity and reliability of the proposed algorithm.
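
A minimal sketch of the sampling step: a measurement matrix is filled from a logistic-map orbit seeded by the secret key, so y = Φx simultaneously compresses and scrambles the signal. The map constants and sizes are our assumptions, and sparse recovery (e.g., by OMP or basis pursuit) is left out.

```python
# Sketch: chaos-based measurement matrix for compressive sensing, y = Phi @ x.
# The key (x0, r) seeds the logistic map; the receiver rebuilds Phi from it.
import numpy as np

def chaotic_matrix(m: int, n: int, x0: float = 0.37, r: float = 3.99):
    x, vals = x0, np.empty(m * n)
    for i in range(m * n):
        x = r * x * (1.0 - x)
        vals[i] = x
    phi = (vals.reshape(m, n) - 0.5) * 2.0   # roughly zero-mean entries
    return phi / np.sqrt(m)                  # energy normalization

n, m = 256, 64                # 4:1 compression of a length-256 signal
x = np.zeros(n)
x[[10, 97, 200]] = [1.5, -2.0, 0.8]   # sparse test signal
phi = chaotic_matrix(m, n)    # secret: only key holders can rebuild phi
y = phi @ x                   # m linear measurements = compressed ciphertext
print(y.shape)                # (64,)
# Recovery would use a sparse solver (e.g. OMP/basis pursuit) with phi.
```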

2019-09-23
Ramijak, Dusan, Pal, Amitangshu, Kant, Krishna.  2018.  Pattern Mining Based Compression of IoT Data. Proceedings of the Workshop Program of the 19th International Conference on Distributed Computing and Networking. :12:1–12:6.
The increasing proliferation of Internet of Things (IoT) devices and systems results in large amounts of highly heterogeneous data being collected. Although at least some of the collected sensor data is consumed by the real-time decision making and control of the IoT system, that is not its only use. Invariably, the collected data is stored, perhaps in some filtered or downselected fashion, so that it can be used for a variety of lower-frequency operations. It is expected that in a smart city environment with numerous IoT deployments, the volume of such data can become enormous. Therefore, mechanisms for lossy data compression that provide a trade-off between compression ratio and data usefulness for offline statistical analysis become necessary. In this paper, we discuss several simple pattern mining based compression strategies for multi-attribute IoT data streams. For each method, we evaluate its compressibility versus the level of similarity between the original and compressed time series in the context of a home energy management system.
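
The paper evaluates several strategies rather than one algorithm, but the trade-off it studies can be shown with a deliberately simple stand-in: quantize each multi-attribute reading, then run-length encode repeats. Coarser quantization raises the compression ratio at the cost of fidelity for later analysis.

```python
# Toy lossy compressor for a multi-attribute stream: quantize each reading,
# then run-length encode consecutive identical tuples.
def compress(stream, step=5.0):
    runs, prev, count = [], None, 0
    for reading in stream:
        q = tuple(round(v / step) * step for v in reading)  # lossy quantize
        if q == prev:
            count += 1
        else:
            if prev is not None:
                runs.append((prev, count))
            prev, count = q, 1
    if prev is not None:
        runs.append((prev, count))
    return runs

def decompress(runs):
    return [list(pattern) for pattern, n in runs for _ in range(n)]

stream = [(20.1, 51.0), (20.3, 50.8), (24.9, 50.1), (25.2, 49.9)]
runs = compress(stream)          # [((20.0, 50.0), 2), ((25.0, 50.0), 2)]
print(runs, len(decompress(runs)))
```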
2017-09-15
Dhulipala, Laxman, Kabiljo, Igor, Karrer, Brian, Ottaviano, Giuseppe, Pupyrev, Sergey, Shalita, Alon.  2016.  Compressing Graphs and Indexes with Recursive Graph Bisection. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. :1535–1544.

Graph reordering is a powerful technique to increase the locality of graph representations, which is helpful in several applications. We study how the technique can be used to improve compression of graphs and inverted indexes. We extend the recent theoretical model of Chierichetti et al. (KDD 2009) for graph compression, and show how it can be employed for compression-friendly reordering of social networks and web graphs and for assigning document identifiers in inverted indexes. We design and implement a novel, theoretically sound reordering algorithm based on recursive graph bisection. Our experiments show a significant improvement in the compression rate of graphs and indexes over existing heuristics. The new method is relatively simple and allows efficient parallel and distributed implementations, which we demonstrate on graphs with billions of vertices and hundreds of billions of edges.
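
The reason reordering improves compression is that neighbor IDs end up numerically close, so gap encoding emits small numbers. A minimal illustration with varint-coded gaps (the recursive bisection algorithm itself is beyond a sketch):

```python
# Why reordering helps: gap-encode a sorted adjacency list, then varint-code
# the gaps. Good orderings make neighbor IDs close, so gaps (and bytes) shrink.
def varint(n: int) -> bytes:
    out = bytearray()
    while True:
        out.append((n & 0x7F) | (0x80 if n > 0x7F else 0))
        n >>= 7
        if n == 0:
            return bytes(out)

def encode_adjacency(neighbors: list[int]) -> bytes:
    neighbors = sorted(neighbors)
    gaps = [neighbors[0]] + [b - a for a, b in zip(neighbors, neighbors[1:])]
    return b"".join(varint(g) for g in gaps)

scattered = [5, 90210, 31337, 4000000]     # a poor vertex ordering
clustered = [1000, 1002, 1003, 1007]       # after locality-aware reordering
print(len(encode_adjacency(scattered)), "bytes vs",
      len(encode_adjacency(clustered)), "bytes")
```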

Silva, Rodrigo M., Gomes, Guilherme C.M., Alvim, Mário S., Gonçalves, Marcos A..  2016.  Compression-Based Selective Sampling for Learning to Rank. Proceedings of the 25th ACM International Conference on Information and Knowledge Management. :247–256.

Learning to rank (L2R) algorithms use a labeled training set to generate a ranking model that can later be used to rank new query results. These training sets are very costly and laborious to produce, requiring human annotators to assess the relevance or order of the documents in relation to a query. Active learning (AL) algorithms are able to reduce the labeling effort by actively sampling an unlabeled set and choosing data instances that maximize the effectiveness of a learning function. But AL methods require constant supervision, as documents have to be labeled at each round of the process. In this paper, we propose that certain characteristics of unlabeled L2R datasets allow for an unsupervised, compression-based selection process to be used to create small yet highly informative and effective initial sets that can later be labeled and used to bootstrap an L2R system. We implement our ideas through a novel unsupervised selective sampling method, which we call Cover, that has several advantages over AL methods tailored to L2R. First, it does not need an initial labeled seed set and can select documents from scratch. Second, selected documents do not need to be labeled as the iterations of the method progress, since it is unsupervised (i.e., no learning model needs to be updated). Thus, an arbitrarily sized training set can be selected without human intervention, depending on the available budget. Third, the method is efficient and can be run on unlabeled collections containing millions of query-document instances. We run various experiments with two important L2R benchmarking collections to show that the proposed method allows for the creation of small yet very effective training sets. It achieves full-training-like performance with less than 10% of the original sets selected, outperforming the baselines in both effectiveness and scalability.
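
One way to make "compression-based selection" concrete is the normalized compression distance; the greedy selector below keeps whichever candidate compresses worst against the already-chosen set. This is our illustration of the general idea, not the paper's Cover algorithm.

```python
# Sketch: greedy selection by normalized compression distance (NCD) with zlib.
# Picks documents that compress poorly against the already-selected set,
# i.e. documents that add new information.
import zlib

def clen(b: bytes) -> int:
    return len(zlib.compress(b))

def ncd(a: bytes, b: bytes) -> float:
    return (clen(a + b) - min(clen(a), clen(b))) / max(clen(a), clen(b))

def select(pool: list[bytes], budget: int) -> list[bytes]:
    chosen = [pool[0]]
    while len(chosen) < budget:
        covered = b"".join(chosen)
        best = max((d for d in pool if d not in chosen),
                   key=lambda d: ncd(covered, d))   # farthest from coverage
        chosen.append(best)
    return chosen

pool = [b"cats " * 50, b"cats " * 49 + b"dog ", b"quantum chromodynamics " * 10]
print([pool.index(d) for d in select(pool, budget=2)])  # [0, 2]: skips the dupe
```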

2017-04-20
Sankalpa, I., Dhanushka, T., Amarasinghe, N., Alawathugoda, J., Ragel, R..  2016.  On implementing a client-server setting to prevent the Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext (BREACH) attacks. 2016 Manufacturing Industrial Engineering Symposium (MIES). :1–5.

Compression is desirable for network applications as it saves bandwidth. However, when data is compressed before being encrypted, the amount of compression leaks information about the amount of redundancy in the plaintext. This side channel has led to the "Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext (BREACH)" attack on web traffic protected by the TLS protocol. The general guidance to prevent this attack is to disable HTTP compression, preserving confidentiality but sacrificing bandwidth. As a more sophisticated countermeasure, fixed-dictionary compression was introduced in 2015, enabling compression while protecting high-value secrets, such as cookies, from attack. The fixed-dictionary compression method is a cryptographically sound countermeasure against the BREACH attack, since it is proven secure in a suitable security model. In this project, we integrate the fixed-dictionary compression method as a countermeasure to the BREACH attack in a real-world client-server setting. Further, we measure the performance of the fixed-dictionary compression algorithm against the DEFLATE compression algorithm. The results show that it is possible to save some amount of bandwidth, with reasonable compression/decompression times compared to DEFLATE operations. The countermeasure is easy to implement and deploy; hence, this is a promising direction for mitigating the BREACH attack efficiently, rather than stripping off HTTP compression entirely.
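
zlib's preset-dictionary support gives a feel for the mechanism. Note this demo is not the countermeasure itself: stock DEFLATE still finds matches inside the body, whereas the fixed-dictionary scheme restricts matches to the dictionary only.

```python
# Demo of DEFLATE with a preset dictionary via zlib's zdict parameter.
# Bytes matching the fixed dictionary compress well even in a short message.
# NOTE: stock zlib still matches within the body itself, so this alone does
# not stop BREACH; the countermeasure allows dictionary-only matches.
import zlib

DICTIONARY = b"Content-Type: text/html; charset=utf-8\r\nSet-Cookie: session="

def compress_fixed_dict(data: bytes) -> bytes:
    c = zlib.compressobj(zdict=DICTIONARY)
    return c.compress(data) + c.flush()

def decompress_fixed_dict(blob: bytes) -> bytes:
    d = zlib.decompressobj(zdict=DICTIONARY)
    return d.decompress(blob) + d.flush()

body = b"Set-Cookie: session=SECRETTOKEN123\r\n<html>hello</html>"
blob = compress_fixed_dict(body)
assert decompress_fixed_dict(blob) == body
print(len(body), "->", len(blob), "bytes")
```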

2017-02-14
F. Hassan, J. L. Magalini, V. de Campos Pentea, R. A. Santos.  2015.  "A project-based multi-disciplinary elective on digital data processing techniques". 2015 IEEE Frontiers in Education Conference (FIE). :1–7.

Today's era of the internet-of-things, cloud computing and big data centers calls for more fresh graduates with expertise in digital data processing techniques such as compression, encryption and error correcting codes. This paper describes a project-based elective that covers these three main digital data processing techniques and can be offered to three different undergraduate majors: electrical engineering, computer engineering and computer science. The course has been offered successfully for three years. Registration statistics show equal interest from the three majors. Assessment data show that students have successfully completed the different course outcomes. Students' feedback shows that they appreciate the knowledge they attain from this elective and suggests that the workload for this course, in relation to other courses of equal credit, is as expected.

P. Dahake, S. Nimbhorkar.  2015.  "Hybrid cryptosystem for maintaining image integrity using biometric fingerprint". 2015 International Conference on Pervasive Computing (ICPC). :1–5.

The integrity of image data plays an important role in data communication. Image data contain confidential information, so it is very important to protect the data from intruders. When data is transmitted through the network, there is a possibility that it may be lost or damaged. Existing systems do not provide all the functionality needed for securing images during transmission, i.e., image compression, encryption and user authentication. In this paper a hybrid cryptosystem is proposed in which a biometric fingerprint is used for key generation, which is in turn used for encryption. A secret-fragment-visible mosaic image method is used for the secure transmission of the image. To reduce the image size, a lossless compression technique is used, which leads to fast transmission of the image data through the channel. The biometric fingerprint is also useful for authentication. Biometrics are a more secure method of authentication because they require the physical presence of a human being and are untraceable.
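
Key generation from a fingerprint can be sketched as hashing stable template bytes into a symmetric key. Real systems need fuzzy extractors, since scans vary between readings, and a vetted cipher; the XOR stream below is a deliberately toy stand-in.

```python
# Toy sketch: derive a symmetric key from fingerprint feature bytes and use
# it in a throwaway XOR stream cipher. Illustration only, not a real design.
import hashlib

def key_from_fingerprint(template: bytes, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", template, salt, 100_000)  # 32 bytes

def xor_cipher(data: bytes, key: bytes) -> bytes:
    stream, ctr = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, stream))

template = b"minutiae:(12,34,ridge-end),(56,78,bifurcation)"  # stand-in data
key = key_from_fingerprint(template, salt=b"enrollment-salt")
ct = xor_cipher(b"image bytes ...", key)
assert xor_cipher(ct, key) == b"image bytes ..."
```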

2015-05-05
Coatsworth, M., Tran, J., Ferworn, A..  2014.  A hybrid lossless and lossy compression scheme for streaming RGB-D data in real time. Safety, Security, and Rescue Robotics (SSRR), 2014 IEEE International Symposium on. :1–6.

Mobile and aerial robots used in urban search and rescue (USAR) operations have shown the potential to let us explore, survey and assess collapsed structures effectively at a safe distance. RGB-D cameras, such as the Microsoft Kinect, allow us to capture 3D depth data in addition to RGB images, providing a significantly richer user experience than flat video, which may improve situational awareness for first responders. However, the richer data comes at a higher cost in terms of data throughput and computing power requirements. In this paper we consider the problem of live-streaming RGB-D data over wired and wireless communication channels using low-power, embedded computing equipment. When assessing a disaster environment, a range camera is typically mounted on a ground or aerial robot along with the onboard computer system. Ground robots can use both wireless radio and tethers for communications, whereas aerial robots can only use wireless communication. We propose a hybrid lossless and lossy streaming compression format designed specifically for RGB-D data and investigate the feasibility and usefulness of live-streaming this data in disaster situations.
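
The hybrid split, lossless for depth where metric accuracy matters and lossy for color, can be mocked up with numpy and zlib; bit-depth truncation stands in here for a real lossy video codec.

```python
# Sketch of a hybrid RGB-D frame codec: depth is compressed losslessly with
# zlib; RGB is quantized (a lossy stand-in for a real codec such as JPEG).
import zlib
import numpy as np

def encode_frame(rgb: np.ndarray, depth: np.ndarray):
    rgb_lossy = (rgb >> 3) << 3                 # keep 5 of 8 bits per channel
    rgb_blob = zlib.compress(rgb_lossy.tobytes())
    depth_blob = zlib.compress(depth.tobytes()) # bit-exact depth
    return rgb_blob, depth_blob

def decode_frame(rgb_blob, depth_blob, shape):
    rgb = np.frombuffer(zlib.decompress(rgb_blob), np.uint8).reshape(shape)
    depth = np.frombuffer(zlib.decompress(depth_blob), np.uint16).reshape(shape[:2])
    return rgb, depth

rgb = np.random.randint(0, 256, (48, 64, 3), dtype=np.uint8)
depth = np.random.randint(0, 4096, (48, 64), dtype=np.uint16)  # Kinect-like
rgb_blob, depth_blob = encode_frame(rgb, depth)
rgb2, depth2 = decode_frame(rgb_blob, depth_blob, rgb.shape)
assert np.array_equal(depth2, depth)            # depth survives exactly
```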

2015-05-04
Putra, M.S.A., Budiman, G., Novamizanti, L..  2014.  Implementation of steganography using LSB with encrypted and compressed text using TEA-LZW on Android. Computer, Control, Informatics and Its Applications (IC3INA), 2014 International Conference on. :93–98.

The development of data communications enables the exchange of information via mobile devices ever more easily, and security in that exchange is very important. One of the weaknesses of steganography is the limited capacity of data that can be inserted; with compression, the size of the data is reduced. In this paper, a system is designed on the Android platform that implements LSB steganography and TEA cryptography to secure a text message. The size of the text message is reduced by lossless compression using the LZW method. The advantage of this approach is that it provides double security and allows more message data to be inserted, so it is expected to be a good way to exchange information. The system is able to perform the compression process with an average ratio of 67.42%. The modified TEA algorithm yields an average avalanche effect of 53.8%. The average PSNR of the stego image is 70.44 dB, and the average MOS value is 4.8.
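
The LSB embedding step itself is simple: overwrite the least significant bit of each cover byte with one message bit. The sketch below shows only that core step on a raw array; in the paper's pipeline the message would first be TEA-encrypted and LZW-compressed.

```python
# Sketch: LSB steganography on a raw cover array. Each cover byte's least
# significant bit is replaced by one message bit.
import numpy as np

def embed(cover: np.ndarray, message: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    if bits.size > cover.size:
        raise ValueError("message too large for cover")
    stego = cover.copy().ravel()
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits   # set the LSBs
    return stego.reshape(cover.shape)

def extract(stego: np.ndarray, n_bytes: int) -> bytes:
    bits = stego.ravel()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
stego = embed(cover, b"hidden")
assert extract(stego, len(b"hidden")) == b"hidden"
```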