r, loops, neural-network, cross-validation, nnet

Cross-validating multiple Neural Networks with varying size of the single hidden layer in R


I have to use cross-validation to find out how many neurons the single hidden layer of my model should contain (using the nnet package). I must write a function in R that takes the data, the model, and a parameter n as inputs, and computes the model's performance, in terms of accuracy, on a randomly split training and test set, using a neural network whose hidden layer has n neurons. Using this function in a loop, I compute the performance of neural networks with hidden layer sizes n = 1, 2, 3, ..., 20. My main goal is to understand which size of the hidden layer performs best, because in the end I have to plot accuracy versus model complexity. For this reason, I would ideally like to have all accuracy measurements for both the test and training sets.

I get Error: object 'accNN' not found, where accNN is meant to be the empty vector that stores the results. Since I want to compare the 20 models, I also have to create 20 empty vectors inside the loop to store the 20 different sets of results (accNN1, accNN2, accNN3, etc.). It would be great to get some help with coding the loops correctly.

Thanks a lot!

set.seed(1)
df <- data.frame(
    X = sample(1:100),
    Y = sample(1:100),
    Z = sample(1:100),
    target = sample(c("yes", "no"), 10, replace = TRUE))

# Create K folds with equal size for cross validation.
nFolds  <- 5
myFolds <- cut(seq(1, nrow(df)), 
                breaks = nFolds, 
                labels=FALSE)
table(myFolds)

# Create object for number of neurons
sizehiddenlayer <- 3

# Define the model
mdl <- target ~ X + Y + Z


for (j in 1:sizehiddenlayer) {
   # Initialize empty vectors to collect results
   accNN[j] <- rep(NA, nFolds)

   for (i in 1:nFolds) {
      cat("Analysis of fold", i, "\n")

      # 1: Define training and test sets
      testObs  <- which(myFolds == i, arr.ind = TRUE)
      dfTest   <- df[ testObs, ]
      dfTrain  <- df[-testObs, ]

      # 2: Train the models on the training sets
      rsltNN[j] <- nnet(mdlB, data = df, size = j)

      # 3: Predict values for the test sets
      predNN[j] <- predict(rsltNN[j], type = "class")

      # 4: Measure accuracy and store the results
      accNN[j] <- mean(df$target == predNN[j])
   }
}

Solution

  • You need to create the object that stores the results before the loop starts; assigning with the arrow into accNN[j] does not create or append to a vector or list that does not exist yet. Something like this will work (note that you train on dfTrain and predict on dfTest):

    library(nnet)

    results <- vector("list", sizehiddenlayer)

    for (j in 1:sizehiddenlayer) {

       # Pre-allocate storage for this hidden-layer size
       results[[j]]$accNN  <- rep(NA, nFolds)
       results[[j]]$rsltNN <- vector("list", nFolds)
       results[[j]]$predNN <- vector("list", nFolds)

       for (i in 1:nFolds) {

          # Split into training and test set for fold i
          testObs <- which(myFolds == i, arr.ind = TRUE)
          dfTest  <- df[ testObs, ]
          dfTrain <- df[-testObs, ]

          # Fit on the training set, predict on the test set, store the test accuracy
          results[[j]]$rsltNN[[i]] <- nnet(mdl, data = dfTrain, size = j)
          results[[j]]$predNN[[i]] <- predict(results[[j]]$rsltNN[[i]], dfTest, type = "class")
          results[[j]]$accNN[i]    <- mean(dfTest$target == results[[j]]$predNN[[i]])
       }
    }
    

    The results are organized in a list:

    head(results[[1]],2)
    $accNN
    [1] 0.6 0.6 0.6 0.6 0.6
    
    $rsltNN
    $rsltNN[[1]]
    a 3-1-1 network with 6 weights
    inputs: X Y Z 
    output(s): target 
    options were - entropy fitting 
    
    $rsltNN[[2]]
    a 3-1-1 network with 6 weights
    inputs: X Y Z 
    output(s): target 
    options were - entropy fitting 
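
    Since your end goal is a plot of accuracy versus model complexity, you can average the per-fold test accuracies stored in this list and plot them against the hidden-layer size; a minimal sketch using the results list from above:

    # Mean cross-validated test accuracy for each hidden-layer size
    meanAcc <- sapply(results, function(res) mean(res$accNN))

    plot(seq_along(meanAcc), meanAcc, type = "b",
         xlab = "Neurons in the hidden layer",
         ylab = "Mean test accuracy")

    If you also want the training-set accuracies, compute mean(dfTrain$target == predict(results[[j]]$rsltNN[[i]], dfTrain, type = "class")) inside the inner loop and store it in a second vector alongside accNN.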
    

    Another way is to use caret to handle the cross-validation and the tuning of the hidden-layer size for you, or you can try something like purrr; both are sketched below.
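
    Roughly, the caret version could look like this (a sketch only: train() needs a factor outcome, method = "nnet" tunes size and decay, and decay is fixed at 0, the nnet default):

    library(caret)
    library(nnet)

    # caret expects a factor outcome for classification
    dfFac <- df
    dfFac$target <- factor(dfFac$target)

    # 5-fold CV over hidden-layer sizes 1..20
    cvFit <- train(mdl, data = dfFac,
                   method    = "nnet",
                   trControl = trainControl(method = "cv", number = nFolds),
                   tuneGrid  = expand.grid(size = 1:20, decay = 0),
                   trace     = FALSE)

    cvFit$results   # accuracy for each hidden-layer size
    plot(cvFit)     # accuracy versus number of neurons

    And the purrr / dplyr version: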

    library(purrr)
    library(dplyr)

    # Helpers: fit on every fold except i, predict on fold i, compute accuracy on fold i
    fit  = function(dat, Folds, i, j) { nnet(mdl, data = dat[Folds != i, ], size = j) }
    pred = function(dat, Folds, mdl, i) { predict(mdl, dat[Folds == i, ], type = "class") }
    accr = function(dat, Folds, prediction, i) { mean(dat$target[Folds == i] == prediction) }

    # One row per combination of hidden-layer size and fold
    results = expand.grid(hiddenlayer = 1:sizehiddenlayer, fold = 1:nFolds) %>%
      as_tibble() %>%
      mutate(
        mdl      = map2(.x = fold, .y = hiddenlayer, ~ fit(dat = df, Folds = myFolds, i = .x, j = .y)),
        pred     = map2(.x = fold, .y = mdl,         ~ pred(dat = df, Folds = myFolds, mdl = .y, i = .x)),
        accuracy = map2(.x = fold, .y = pred,        ~ accr(dat = df, Folds = myFolds, prediction = .y, i = .x))
      )
    
    results
    # A tibble: 15 x 5
       hiddenlayer  fold mdl        pred       accuracy 
             <int> <int> <list>     <list>     <list>   
     1           1     1 <nnt.frml> <chr [20]> <dbl [1]>
     2           2     1 <nnt.frml> <chr [20]> <dbl [1]>
     3           3     1 <nnt.frml> <chr [20]> <dbl [1]>
     4           1     2 <nnt.frml> <chr [20]> <dbl [1]>
     5           2     2 <nnt.frml> <chr [20]> <dbl [1]>
     6           3     2 <nnt.frml> <chr [20]> <dbl [1]>
     7           1     3 <nnt.frml> <chr [20]> <dbl [1]>
    

    And you can access them like this:

    results$mdl[[1]]
    a 3-1-1 network with 6 weights
    inputs: X Y Z 
    output(s): target 
    options were - entropy fitting 
    > results$pred[[1]]
     [1] "no" "no" "no" "no" "no" "no" "no" "no" "no" "no" "no" "no" "no" "no" "no"
    [16] "no" "no" "no" "no" "no"
    > results$accuracy[[1]]
    [1] 0.6
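
    To get the accuracy-versus-complexity plot from this tibble as well, you can unlist the accuracy column and average it per hidden-layer size; a minimal sketch:

    # Mean CV accuracy per hidden-layer size, from the `results` tibble above
    accSummary <- results %>%
      mutate(accuracy = unlist(accuracy)) %>%
      group_by(hiddenlayer) %>%
      summarise(meanAccuracy = mean(accuracy))

    plot(accSummary$hiddenlayer, accSummary$meanAccuracy, type = "b",
         xlab = "Neurons in the hidden layer", ylab = "Mean test accuracy")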