Biblio

Fujiwara, Yasuhiro, Naoki Marumo, Mathieu Blondel, Koh Takeuchi, Hideaki Kim, Tomoharu Iwata, and Naonori Ueda. 2017. Scaling Locally Linear Embedding. Proceedings of the 2017 ACM International Conference on Management of Data, 1479–1492.
Locally Linear Embedding (LLE) is a popular approach to dimensionality reduction because it effectively represents the nonlinear structure of high-dimensional data. To reduce dimensionality, it computes a nearest neighbor graph from a given dataset, where the edge weights are obtained with the Lagrange multiplier method, and then computes eigenvectors of the LLE kernel constructed from those edge weights. Although LLE is used in many applications, its computation cost is high: obtaining the edge weights is cubic in the number of edges incident to each data point, and obtaining the eigenvectors of the LLE kernel is cubic in the number of data points. Our approach, Ripple, addresses these costs with two ideas: (1) it incrementally updates the edge weights by exploiting the Woodbury formula, and (2) it efficiently computes eigenvectors of the LLE kernel with an LU decomposition-based inverse power method. Experiments show that Ripple is significantly faster than the original LLE algorithm while guaranteeing the same dimensionality reduction results.
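
To give a feel for idea (2), below is a minimal sketch, not the authors' code, of an LU decomposition-based inverse power iteration that extracts the eigenvectors of a symmetric kernel associated with its smallest eigenvalues, which is the computation LLE needs on its kernel (in standard LLE, M = (I - W)^T (I - W) built from the learned edge weights W). The function name, shift value, and iteration count are illustrative assumptions, and the factor-once-then-reuse structure is the point being shown.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def smallest_eigenvectors(M, k, shift=1e-9, n_iter=100, seed=0):
    # Approximate the k eigenvectors of a symmetric matrix M belonging to
    # its smallest eigenvalues, via block inverse power iteration.
    # The LU factorization of (M + shift*I) is computed once and reused,
    # so each iteration is a pair of triangular solves plus a QR step
    # rather than a fresh matrix inversion or eigendecomposition.
    n = M.shape[0]
    rng = np.random.default_rng(seed)
    lu, piv = lu_factor(M + shift * np.eye(n))   # factor once; shift keeps M nonsingular
    V = rng.standard_normal((n, k))
    for _ in range(n_iter):
        V = lu_solve((lu, piv), V)   # apply (M + shift*I)^{-1} to the block
        V, _ = np.linalg.qr(V)       # re-orthonormalize the block
    return V

For an LLE-style use, one would call smallest_eigenvectors(M, d + 1) and discard the constant eigenvector to obtain a d-dimensional embedding. Ripple combines this kind of reused-factorization eigensolver with Woodbury-based incremental updates of the edge weights, so neither stage of LLE is recomputed from scratch.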