Biblio

Filters: Keyword is hash collision
2023-08-11
Zhu, Haiting, Wan, Junmei, Li, Nan, Deng, Yingying, He, Gaofeng, Guo, Jing, Zhang, Lu.  2022.  Odd-Even Hash Algorithm: A Improvement of Cuckoo Hash Algorithm. 2021 Ninth International Conference on Advanced Cloud and Big Data (CBD). :1–6.
Hash-based data structures and algorithms are currently flourishing on the Internet. They are an effective way to store large amounts of information, especially for applications related to measurement, monitoring, and security. Many hash table algorithms exist, such as Cuckoo Hash, Peacock Hash, Double Hash, Link Hash, and D-left Hash. However, these algorithms still have problems, such as excessive memory use, long insertion and query operations, and insertion failures caused by infinite loops that require rehashing. This paper improves the kick-out mechanism of the Cuckoo Hash algorithm and proposes a new hash table structure, the Odd-Even Hash (OE Hash) algorithm. The experimental results show that OE Hash is more efficient than the existing Link Hash, Linear Hash, and Cuckoo Hash algorithms: it balances query time and insertion time while occupying the least space, and there are no insertion failures that lead to rehashing, which makes it suitable for massive data storage.
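
For context, the sketch below shows the standard Cuckoo Hash insert with its kick-out loop, which is the mechanism the OE Hash algorithm modifies. It is a minimal Python illustration of the baseline only, not of the OE Hash variant itself; the table size, kick bound, and hash choices are assumptions for illustration.

```python
import random

class CuckooHash:
    """Minimal two-table Cuckoo Hash showing the kick-out mechanism."""

    def __init__(self, size=11, max_kicks=50):
        self.size = size
        self.max_kicks = max_kicks            # bound on the kick-out chain length
        self.tables = [[None] * size, [None] * size]

    def _slots(self, key):
        # One candidate slot per table, from two independent hash values.
        return [hash(("t0", key)) % self.size, hash(("t1", key)) % self.size]

    def lookup(self, key):
        # At most two probes: one slot in each table.
        return any(self.tables[i][s] == key
                   for i, s in enumerate(self._slots(key)))

    def insert(self, key):
        for _ in range(self.max_kicks):
            for i, s in enumerate(self._slots(key)):
                if self.tables[i][s] is None:
                    self.tables[i][s] = key
                    return True
            # Both candidate slots are occupied: evict a random occupant
            # ("kick-out") and try to re-insert it, possibly continuing the chain.
            i = random.randrange(2)
            s = self._slots(key)[i]
            key, self.tables[i][s] = self.tables[i][s], key
        return False  # chain exceeded the bound: the table would need a rehash
```
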
2020-06-12
De Guzman, Froilan E., Gerardo, Bobby D., Medina, Ruji P..  2018.  Enhanced Secure Hash Algorithm-512 based on Quadratic Function. 2018 IEEE 10th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM). :1–6.

This paper introduces an enhanced SHA-1 algorithm that features a simple quadratic function controlling the selection of the primitive function and constant used in each round of SHA-1. The message digest of this enhancement is extended to a 512-bit hash value to address the possible occurrence of hash collisions. Moreover, the design features an architecture of eight registers, A, B, C, D, E, F, G, and H, each consisting of 64 bits of the total 512 bits. Frequency testing of Q15 and Q0 shows that the selection of the primitive function and the constant used is not equally distributed. Implementing the extended bits of the hash message will require additional resources for dictionary attacks, and the extension of the hash output will require extended time to produce a permutation of the 512 hash bits.
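As a rough illustration of the round-selection idea described above, the following minimal Python sketch uses a quadratic function of the round index to pick a primitive function and round constant. The coefficients, the function set, and the constants are placeholders (SHA-1-style stand-ins), not the paper's actual quadratic function or tables.

```python
# Hypothetical quadratic selector for per-round primitive functions and constants.
A, B, C = 3, 7, 11          # placeholder coefficients, not the paper's values
MASK = (1 << 64) - 1        # the enhanced design widens registers to 64 bits

def f_ch(x, y, z):     return ((x & y) | (~x & z)) & MASK
def f_parity(x, y, z): return (x ^ y ^ z) & MASK
def f_maj(x, y, z):    return ((x & y) | (x & z) | (y & z)) & MASK

PRIMITIVES = [f_ch, f_parity, f_maj]               # SHA-1-style primitive functions
CONSTANTS  = [0x5A827999, 0x6ED9EBA1, 0x8F1BBCDC]  # SHA-1 constants as stand-ins

def quadratic_select(t):
    """Choose the primitive function and round constant for round t from a
    quadratic function of the round index (illustrative only)."""
    q = (A * t * t + B * t + C) % len(PRIMITIVES)
    return PRIMITIVES[q], CONSTANTS[q]

# Example: selections for the first four rounds.
for t in range(4):
    f, k = quadratic_select(t)
    print(t, f.__name__, hex(k))
```
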

2019-12-16
Zhu, Yan, Yang, Shuai, Chu, William Cheng-Chung, Feng, Rongquan.  2019.  FlashGhost: Data Sanitization with Privacy Protection Based on Frequent Colliding Hash Table. 2019 IEEE International Conference on Services Computing (SCC). :90–99.

Today's extensive use of the Internet creates huge volumes of user data on both the client and server sides. Normally, users do not want to store all of this data locally, nor keep it all archived on the server. Some unwanted data, such as trash, caches, and private data, needs to be deleted periodically. Explicit deletion can be applied to local data, though it is a troublesome job, and there is no transparency for users about personal data stored on the server: we have no knowledge of whether it is cached, copied, or archived by third parties, or sold by the service provider. Our research seeks to provide an automatic data sanitization system that makes data self-destructing. Specifically, we give data a life cycle: it is erased automatically at the end of its life, and the destroyed data cannot be recovered by any effort. In this paper, we present FlashGhost, a system that meets this challenge through a novel integration of cryptographic techniques with a frequent colliding hash table. In this system, data becomes unreadable and is rendered unrecoverable by being overwritten multiple times after its validity period has expired. In addition, system reliability is enhanced by threshold cryptography. We also present a mathematical model and verify it through a number of experiments, which demonstrate theoretically and experimentally that our system is practical to use and meets the data auto-sanitization goal described above.
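
The following minimal Python sketch illustrates only the general idea of giving data a life cycle and overwriting it multiple times after expiry. It is an assumption-laden illustration, not the FlashGhost system: the class and field names and the overwrite count are hypothetical, and the frequent colliding hash table and threshold cryptography are not modeled here.

```python
import os
import time

class SelfDestructingRecord:
    """Data with a fixed life cycle that is overwritten once it expires.
    Illustrative sketch only; not the FlashGhost implementation."""

    OVERWRITE_PASSES = 3   # overwrite several times so the bytes are unrecoverable

    def __init__(self, payload: bytes, lifetime_seconds: float):
        self._buffer = bytearray(payload)   # mutable, so it can be wiped in place
        self._expires_at = time.time() + lifetime_seconds

    def read(self):
        if time.time() >= self._expires_at:
            self._sanitize()
            return None                     # end of life: nothing readable remains
        return bytes(self._buffer)

    def _sanitize(self):
        # Overwrite the buffer with random bytes multiple times, then zero it.
        for _ in range(self.OVERWRITE_PASSES):
            self._buffer[:] = os.urandom(len(self._buffer))
        self._buffer[:] = bytes(len(self._buffer))
```
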