Biblio
Filters: Author is Ma, Xiaoyu
k-Nearest Neighbor algorithm based on feature subspace. 2021 International Conference on Big Data Analysis and Computer Science (BDACS), 2021, pp. 225–228.
The traditional KNN algorithm gives insufficient consideration to the spatial distribution of the training samples, which leads to low accuracy on high-dimensional data sets. Moreover, generating the k nearest neighbors requires all known samples to participate in the distance calculation, resulting in high time overhead. To address these problems, a feature-subspace-based KNN algorithm (Feature Subspace KNN, FSS-KNN) is proposed in this paper. First, FSS-KNN derives all feature subspaces from the distribution of the training samples in the feature space, ensuring that samples in the same subspace have higher similarity. Second, each test sample is matched to its corresponding feature subspace. On this basis, the search for the k nearest neighbors is carried out first within the matched subspace, improving both the accuracy and the efficiency of the algorithm. Experimental results show that, compared with the traditional KNN algorithm, FSS-KNN improves accuracy and efficiency on Kaggle and UCI data sets. Compared with four other classical machine learning algorithms, FSS-KNN significantly improves accuracy.
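The abstract describes the FSS-KNN procedure only at a high level (partition the training set into feature subspaces, match each test sample to a subspace, then run the k-nearest-neighbor search inside that subspace). The Python sketch below illustrates this general idea under loose assumptions: the subspaces are approximated here by k-means clusters, and the class FSSKNNSketch and its parameters are hypothetical names, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans


class FSSKNNSketch:
    """Illustrative sketch of a feature-subspace KNN style classifier.

    Subspaces are approximated by k-means clusters over the training
    samples; the paper's actual subspace construction may differ.
    """

    def __init__(self, n_subspaces=4, k=5):
        self.n_subspaces = n_subspaces
        self.k = k

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        # Partition the training set so samples in the same subspace are similar.
        self.clusterer_ = KMeans(n_clusters=self.n_subspaces, n_init=10).fit(X)
        labels = self.clusterer_.labels_
        self.subspaces_ = [(X[labels == c], y[labels == c])
                           for c in range(self.n_subspaces)]
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        preds = []
        for x in X:
            # Match the test sample to a subspace, then search the k nearest
            # neighbors only among that subspace's training samples.
            c = int(self.clusterer_.predict(x.reshape(1, -1))[0])
            Xc, yc = self.subspaces_[c]
            k = min(self.k, len(Xc))
            nearest = np.argsort(np.linalg.norm(Xc - x, axis=1))[:k]
            votes, counts = np.unique(yc[nearest], return_counts=True)
            preds.append(votes[np.argmax(counts)])
        return np.array(preds)
```

Because each query only computes distances to the samples in one subspace rather than the whole training set, the per-query cost drops roughly in proportion to the number of subspaces, which is the efficiency gain the abstract points to.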