BiocNeighbors 1.8.2
The BiocNeighbors package implements a few algorithms for exact nearest neighbor searching, including the k-means for k-nearest neighbors (KMKNN) algorithm (Wang 2012) and vantage point (VP) trees (Yianilos 1993).
Both KMKNN and VP-trees involve a component of randomness during index construction, though the k-nearest neighbors result itself is fully deterministic (except in the presence of ties; see ?"BiocNeighbors-ties" for details).
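This determinism is easy to check empirically with the findKNN() function introduced below. The following is a minimal sketch on simulated data (where exact ties are essentially impossible), not part of the main workflow:
set.seed(42)                        # for reproducible simulated data only
toy <- matrix(runif(500*10), ncol=10)
res1 <- findKNN(toy, k=5)
res2 <- findKNN(toy, k=5)           # the index is rebuilt from scratch here
identical(res1$index, res2$index)   # expected to be TRUE, barring ties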
The most obvious application is to perform a k-nearest neighbors search. We’ll mock up an example here with a hypercube of points, for which we want to identify the 10 nearest neighbors for each point.
nobs <- 10000
ndim <- 20
data <- matrix(runif(nobs*ndim), ncol=ndim)
The findKNN() method expects a numeric matrix as input, with data points as the rows and variables/dimensions as the columns.
We indicate that we want to use the KMKNN algorithm by setting BNPARAM=KmknnParam() (which is also the default, so this is not strictly necessary here).
We could use a VP tree instead by setting BNPARAM=VptreeParam().
fout <- findKNN(data, k=10, BNPARAM=KmknnParam())
head(fout$index)
## [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
## [1,] 2553 1595 7649 7976 3011 208 5421 2164 6415 4751
## [2,] 9266 2491 8134 3873 4358 2104 7419 6363 8804 3450
## [3,] 4943 2727 1286 1498 1904 4625 3181 5658 5446 213
## [4,] 8329 8880 6269 542 9744 2524 8901 4679 2365 4727
## [5,] 5549 2164 9034 763 4166 7968 2768 5313 3533 6259
## [6,] 2160 5494 7119 475 1967 3362 9976 9135 9529 9199
head(fout$distance)
## [,1] [,2] [,3] [,4] [,5] [,6] [,7]
## [1,] 0.9189922 1.0601061 1.0737019 1.0857166 1.0896317 1.1101270 1.1126746
## [2,] 0.9799377 0.9906567 1.0141855 1.0451879 1.0695980 1.0830421 1.0837300
## [3,] 0.7988453 0.9237226 0.9265293 0.9527528 0.9580240 0.9653371 0.9752211
## [4,] 0.9884900 0.9901128 1.0072836 1.0138503 1.0615682 1.0885705 1.0912672
## [5,] 0.7884572 0.7967854 0.8589068 0.9083483 0.9224415 0.9348405 0.9391195
## [6,] 0.8734475 0.9003913 0.9156969 0.9409719 0.9527093 0.9636832 0.9690086
## [,8] [,9] [,10]
## [1,] 1.1160088 1.1379348 1.1383849
## [2,] 1.0910011 1.1053965 1.1278329
## [3,] 0.9763910 1.0005029 1.0014857
## [4,] 1.1018559 1.1047590 1.1142807
## [5,] 0.9463403 0.9513726 0.9689858
## [6,] 0.9869310 1.0061344 1.0190784
Each row of the index matrix corresponds to a point in data and contains the row indices in data of its nearest neighbors.
For example, the 3rd point in data has the following nearest neighbors:
fout$index[3,]
## [1] 4943 2727 1286 1498 1904 4625 3181 5658 5446 213
… with the following distances to those neighbors:
fout$distance[3,]
## [1] 0.7988453 0.9237226 0.9265293 0.9527528 0.9580240 0.9653371 0.9752211
## [8] 0.9763910 1.0005029 1.0014857
Note that the reported neighbors are sorted by distance.
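As mentioned above, the same search could be performed with the VP tree algorithm simply by swapping the BNPARAM= argument. A minimal sketch (output omitted; the result has the same format as shown above):
vp.out <- findKNN(data, k=10, BNPARAM=VptreeParam())
head(vp.out$index)    # neighbors are again sorted by distance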
Another application is to identify the k-nearest neighbors in one dataset based on query points in another dataset. Again, we mock up a small data set:
nquery <- 1000
ndim <- 20
query <- matrix(runif(nquery*ndim), ncol=ndim)
We then use the queryKNN() function to identify the 5 nearest neighbors in data for each point in query.
qout <- queryKNN(data, query, k=5, BNPARAM=KmknnParam())
head(qout$index)
## [,1] [,2] [,3] [,4] [,5]
## [1,] 6081 5301 3620 2901 3250
## [2,] 3333 2388 8055 8475 1883
## [3,] 9992 95 3060 2787 3789
## [4,] 4233 1932 6050 3163 3767
## [5,] 2919 1187 7241 4050 7617
## [6,] 1498 9127 4821 5839 7531
head(qout$distance)
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.9271668 0.9461094 0.9846260 1.0022616 1.0171393
## [2,] 0.8145769 0.8294167 0.8408904 0.8477818 0.8587270
## [3,] 0.9499102 0.9542164 0.9674011 0.9992832 0.9997102
## [4,] 0.8599454 0.8610505 0.8954363 0.9150887 0.9297481
## [5,] 0.8646357 0.9550284 1.0187175 1.0492174 1.0523602
## [6,] 0.8413112 0.9011919 0.9622625 0.9758293 0.9779908
Each row of the index matrix contains the row indices in data that are the nearest neighbors of a point in query.
For example, the 3rd point in query has the following nearest neighbors in data:
qout$index[3,]
## [1] 9992 95 3060 2787 3789
… with the following distances to those neighbors:
qout$distance[3,]
## [1] 0.9499102 0.9542164 0.9674011 0.9992832 0.9997102
Again, the reported neighbors are sorted by distance.
Users can perform the search for a subset of query points using the subset= argument.
This yields the same result as, but is more efficient than, performing the search for all points and then subsetting the output.
findKNN(data, k=5, subset=3:5)
## $index
## [,1] [,2] [,3] [,4] [,5]
## [1,] 4943 2727 1286 1498 1904
## [2,] 8329 8880 6269 542 9744
## [3,] 5549 2164 9034 763 4166
##
## $distance
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.7988453 0.9237226 0.9265293 0.9527528 0.9580240
## [2,] 0.9884900 0.9901128 1.0072836 1.0138503 1.0615682
## [3,] 0.7884572 0.7967854 0.8589068 0.9083483 0.9224415
If only the indices are of interest, users can set get.distance=FALSE to avoid returning the matrix of distances.
This will save some time and memory.
names(findKNN(data, k=2, get.distance=FALSE))
## [1] "index"
It is also simple to speed up functions by parallelizing the calculations with the BiocParallel framework.
library(BiocParallel)
out <- findKNN(data, k=10, BPPARAM=MulticoreParam(3))
For multiple queries to a constant data, the pre-clustering can be performed in a separate step with buildIndex().
The resulting index can then be passed to multiple calls, avoiding the overhead of repeated clustering.
(The algorithm type is automatically determined when BNINDEX is specified, so there is no need to also specify BNPARAM in the later functions.)
pre <- buildIndex(data, BNPARAM=KmknnParam())
out1 <- findKNN(BNINDEX=pre, k=5)
out2 <- queryKNN(BNINDEX=pre, query=query, k=2)
The default setting is to search on the Euclidean distance.
Alternatively, we can use the Manhattan distance by setting distance="Manhattan" in the BiocNeighborParam object.
out.m <- findKNN(data, k=5, BNPARAM=KmknnParam(distance="Manhattan"))
Advanced users may also be interested in the raw.index= argument, which returns indices directly to the precomputed object rather than to data.
This may be useful inside package functions where it is more convenient to work on a common precomputed object.
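A minimal sketch of how this might look, assuming the precomputed KMKNN index pre from the earlier buildIndex() call and that raw.index= is accepted alongside BNINDEX= in findKNN():
raw.out <- findKNN(BNINDEX=pre, k=5, raw.index=TRUE)
head(raw.out$index)   # row indices into the precomputed object, not into 'data'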
sessionInfo()
## R version 4.0.3 (2020-10-10)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 18.04.5 LTS
##
## Matrix products: default
## BLAS: /home/biocbuild/bbs-3.12-bioc/R/lib/libRblas.so
## LAPACK: /home/biocbuild/bbs-3.12-bioc/R/lib/libRlapack.so
##
## locale:
## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_US.UTF-8 LC_COLLATE=C
## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] BiocParallel_1.24.1 BiocNeighbors_1.8.2 knitr_1.30
## [4] BiocStyle_2.18.1
##
## loaded via a namespace (and not attached):
## [1] Rcpp_1.0.5 bookdown_0.21 lattice_0.20-41
## [4] digest_0.6.27 grid_4.0.3 stats4_4.0.3
## [7] magrittr_2.0.1 evaluate_0.14 rlang_0.4.9
## [10] stringi_1.5.3 S4Vectors_0.28.0 Matrix_1.2-18
## [13] rmarkdown_2.5 tools_4.0.3 stringr_1.4.0
## [16] parallel_4.0.3 xfun_0.19 yaml_2.2.1
## [19] compiler_4.0.3 BiocGenerics_0.36.0 BiocManager_1.30.10
## [22] htmltools_0.5.0
Wang, X. 2012. “A Fast Exact k-Nearest Neighbors Algorithm for High Dimensional Search Using k-Means Clustering and Triangle Inequality.” Proc Int Jt Conf Neural Netw 43 (6): 2351–8.
Yianilos, P. N. 1993. “Data Structures and Algorithms for Nearest Neighbor Search in General Metric Spaces.” In SODA, 93:311–21. 194.