Tags: r, r-bigmemory, bigstatsr

How can I subset rows or columns of a bigstatsr::FBM in Rcpp and store them in a vector?


I have a function that computes basic summary statistics from the rows (or columns) of a given matrix, and I am now trying to also use this function with a bigstatsr::FBM (I am aware that using columns should be more efficient). The reason I want to store the rows/columns in a vector is that I would like to compute quantiles with std::nth_element. If there is a different way to do that without the vector, I would be equally happy.
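For context, std::nth_element partially sorts a range so that the element at the given position ends up exactly where a full sort would put it, which makes it a cheap way to extract a single quantile. A minimal standalone illustration (plain C++, my own toy example):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
  std::vector<double> v = {9, 1, 8, 2, 7, 3, 6, 4, 5, 0};
  // Place the element of rank 5 (0-based) where a full sort would put it;
  // everything before it is <= v[5], everything after is >= v[5].
  std::nth_element(v.begin(), v.begin() + 5, v.end());
  std::cout << "element of rank 5: " << v[5] << "\n";  // prints 5
  return 0;
}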

This is the code I use for a regular matrix.

// [[Rcpp::plugins(cpp11)]]
// [[Rcpp::depends(RcppEigen)]]
#include <RcppEigen.h>

using namespace Rcpp;

// [[Rcpp::export]]
Eigen::MatrixXd summaryC(Eigen::MatrixXd x, int nrow) {
  Eigen::MatrixXd result(nrow, 5);
  // 0-based ranks of the five order statistics (min, 25%, 50%, 75%, max)
  // for 1000 columns; the leading -1 is a sentinel so the first search
  // range starts at the beginning of the row.
  int indices[6] = {-1, 0, 249, 500, 750, 999};

  for (int i = 0; i < nrow; i++) {
    Eigen::VectorXd v = x.row(i);  // copy row i into a contiguous buffer
    for (int q = 0; q < 5; ++q) {
      // Each nth_element call can skip everything up to the previously
      // placed order statistic, since that prefix is already partitioned.
      std::nth_element(v.data() + indices[q] + 1,
                       v.data() + indices[q + 1],
                       v.data() + v.size());
      result(i, q) = v[indices[q + 1]];
    }
  }
  return result;
}

/*** R
x <- matrix(as.numeric(1:1000000), ncol = 1000)
summaryC(x = x, nrow = 1000)
*/

However, I struggle to do this with an FBM, as I am not fully grasping the intricacies of how the FBM pointer works.
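For reference, the basic access pattern I have seen in the bigstatsr examples looks like this (a minimal sketch; big_fbm_colsums is my own name, and I am assuming BMAcc<double> exposes operator()(i, j), nrow() and ncol() as declared in bigstatsr/BMAcc.h):

// [[Rcpp::depends(BH, bigstatsr)]]
// [[Rcpp::plugins(cpp11)]]
#include <bigstatsr/BMAcc.h>

using namespace Rcpp;

// [[Rcpp::export]]
NumericVector big_fbm_colsums(Environment fbm) {
  XPtr<FBM> xpMat = fbm["address"];  // the R object holds an external pointer
  BMAcc<double> macc(xpMat);         // element-wise accessor over the file mapping
  size_t n = macc.nrow();
  size_t m = macc.ncol();
  NumericVector res(m);
  for (size_t j = 0; j < m; j++)
    for (size_t i = 0; i < n; i++)
      res[j] += macc(i, j);
  return res;
}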

I tried the following without success:

// [[Rcpp::depends(BH, bigstatsr, RcppEigen)]]
// [[Rcpp::plugins(cpp11)]]
#include <bigstatsr/BMAcc.h>
#include <RcppEigen.h>

using namespace Rcpp;

// [[Rcpp::export]]
Eigen::MatrixXd summaryCbig(Environment fbm, int nrow, Eigen::VectorXi ind_col) {

  Eigen::MatrixXd result(nrow, 5);

  XPtr<FBM> xpMat = fbm["address"];
  BMAcc<double> macc(xpMat);

  int indices[6] = {-1, 0, 249, 500, 750, 999};

  for (int i = 0; i < nrow; i++) {

    // None of these attempts to extract row i as a vector work:
    Eigen::VectorXd v = macc.row(i);               // this does not work
    Eigen::VectorXd v = macc(i, _);                // this does not work
    SubBMAcc<double> maccr(xpMat, i, ind_col - 1); // did not work with Eigen::VectorXi,
                                                   // but works with const NumericVector&
    Eigen::VectorXd v = maccr;                     // does not work even for appropriate ind_col

    for (int q = 0; q < 5; ++q) {
      std::nth_element(v.data() + indices[q] + 1,
                       v.data() + indices[q + 1],
                       v.data() + v.size());
      result(i, q) = v[indices[q + 1]];
    }
  }
  return result;
}
/*** R
X <- bigstatsr::FBM(1000, 1000, init = 1:1e6)
summaryCbig(fbm = X, nrow = 1000, ind_col = 1:1000)
*/
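The one fallback that does seem to work on my end is materializing each row element by element into a contiguous Eigen vector before calling std::nth_element (a sketch, assuming the accessor behaves as above; summaryCbig2 is my own name):

// [[Rcpp::depends(BH, bigstatsr, RcppEigen)]]
// [[Rcpp::plugins(cpp11)]]
#include <bigstatsr/BMAcc.h>
#include <RcppEigen.h>

using namespace Rcpp;

// [[Rcpp::export]]
Eigen::MatrixXd summaryCbig2(Environment fbm, int nrow) {
  XPtr<FBM> xpMat = fbm["address"];
  BMAcc<double> macc(xpMat);

  Eigen::MatrixXd result(nrow, 5);
  int indices[6] = {-1, 0, 249, 500, 750, 999};
  const int ncol = macc.ncol();
  Eigen::VectorXd v(ncol);

  for (int i = 0; i < nrow; i++) {
    // Copy row i into a contiguous buffer that std::nth_element can use.
    for (int j = 0; j < ncol; j++) v[j] = macc(i, j);
    for (int q = 0; q < 5; ++q) {
      std::nth_element(v.data() + indices[q] + 1,
                       v.data() + indices[q + 1],
                       v.data() + v.size());
      result(i, q) = v[indices[q + 1]];
    }
  }
  return result;
}

This copies every element, though, which is why I am wondering whether there is a more direct way.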

Any help would be greatly appreciated, thank you!

Update: the big_apply approach

I implemented the approach with two differently sized matrices, X1 and X2. Benchmark code for X1 (the X2 version is analogous):

library(bigstatsr)
X1 <- FBM(1000, 1000, init = 1e6)
X2 <- FBM(10000, 10000, init = 9999)

microbenchmark::microbenchmark(
  PAR = big_apply(X1, a.FUN = function(X, ind) {
    matrixStats::rowQuantiles(X[ind, ])
  }, a.combine = "rbind", ind = rows_along(X1), ncores = nb_cores(), block.size = 500),

  SEQ = big_apply(X1, a.FUN = function(X, ind) {
    matrixStats::rowQuantiles(X[ind, ])
  }, a.combine = "rbind", ind = rows_along(X1), ncores = 1, block.size = 500),

  times = 5
)

When using X1 and block.size = 500, running on 4 cores instead of 1 makes the task 5-10 times slower on my PC (4 CPUs, running Windows, unfortunately). Using the bigger matrix X2 and leaving block.size at its default, the 4-core run takes about 10 times longer than the non-parallelized version.

Result for X2:

 expr       min       lq      mean    median        uq       max neval
  PAR 16.149055 19.13568 19.369975 20.139363 20.474103 20.951676     5
  SEQ  1.297259  2.67385  2.584647  2.858035  2.867537  3.226552     5

Solution

  • Assuming you have

    library(bigstatsr)
    X <- FBM(1000, 1000, init = 1:1e6)
    

    I would not reinvent the wheel and use:

    big_apply(X, a.FUN = function(X, ind) {
      matrixStats::rowQuantiles(X[ind, ])
    }, a.combine = "rbind", ind = rows_along(X), ncores = nb_cores(), block.size = 500)
    

    Choose the block.size (number of rows) wisely. Function big_apply() is very useful if you want to apply an R(cpp) function to blocks of the FBM; see the sketch after the benchmarks below for one way to plug the summaryC() from the top of the post into it.

    Edit: Of course, parallelism will be slower for small matrices, because of the overhead of parallelism (usually 1-3 seconds). See the results for X1 and X2:

    library(bigstatsr)
    X1 <- FBM(1000, 1000, init = 1e6)
    microbenchmark::microbenchmark(
      PAR = big_apply(X1, a.FUN = function(X, ind) {
        matrixStats::rowQuantiles(X[ind, ])
      }, a.combine = "rbind", ind = rows_along(X1), ncores = nb_cores(), block.size = 500),
    
      SEQ = big_apply(X1, a.FUN = function(X, ind) {
        matrixStats::rowQuantiles(X[ind, ])
      }, a.combine = "rbind", ind = rows_along(X1), ncores = 1, block.size = 500),
    
      times = 5
    )
    
    Unit: milliseconds
     expr        min        lq       mean    median         uq        max neval cld
      PAR 1564.20591 1602.0465 1637.77552 1629.9803 1651.04509 1741.59974     5   b
      SEQ   68.92936   69.1002   76.70196   72.9173   85.31751   87.24543     5  a 
    
    X2 <- FBM(10000, 10000, init = 9999)
    microbenchmark::microbenchmark(
      PAR = big_apply(X2, a.FUN = function(X, ind) {
        matrixStats::rowQuantiles(X[ind, ])
      }, a.combine = "rbind", ind = rows_along(X2), ncores = nb_cores(), block.size = 500),
    
      SEQ = big_apply(X2, a.FUN = function(X, ind) {
        matrixStats::rowQuantiles(X[ind, ])
      }, a.combine = "rbind", ind = rows_along(X2), ncores = 1, block.size = 500),
    
      times = 5
    )
    
    Unit: seconds
     expr       min        lq      mean    median        uq       max neval cld
      PAR  4.757409  4.958869  5.071982  5.083381  5.218098  5.342153     5  a 
      SEQ 10.842828 10.846281 11.177460 11.360162 11.416967 11.421065     5   b
    

    The bigger your matrix is, the more you will gain from parallelism.
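
    As mentioned above, the same mechanism can drive the Rcpp function from the top of the post (a sketch, assuming summaryC() has been compiled with Rcpp::sourceCpp() and that X has 1000 columns, since summaryC() hardcodes the quantile positions for that width):

    library(bigstatsr)
    X <- FBM(1000, 1000, init = 1:1e6)

    # Each block of rows arrives in a.FUN as a regular matrix, so the
    # compiled summaryC() can be applied to it unchanged; the per-block
    # results are stacked with rbind.
    big_apply(X, a.FUN = function(X, ind) {
      summaryC(X[ind, ], nrow = length(ind))
    }, a.combine = "rbind", ind = rows_along(X), block.size = 500)

    Note that this sketch keeps the default ncores = 1: with more cores, the compiled function would also have to be made available on each worker.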