Tags: haskell, benchmarking, haskell-criterion, autobench

Invalid Criterion report when testing on large inputs with AutoBench


I have been working with AutoBench for a few days, testing the performance of Euler's sieve on different input sizes.

My test simply asks for the nth prime in the list generated by Euler's sieve.

While Criterion works fine for small values of n, it doesn't seem to produce a valid report when n is greater than 7000.

Here is the Input.hs file I am testing:

import Data.List.Ordered (minus) -- 'minus' comes from the data-ordlist package

-- nth prime, taken from the lazily generated Euler sieve
eS :: Int -> Int
eS x = (eulerSieve [2..]) !! x where
  eulerSieve cs@(p:tcs) = p : eulerSieve (minus tcs (map (p*) cs))

tDat :: UnaryTestData Int
tDat  = 
  [ (1000, return 1000)
  , (2000, return 2000)
  , (3000, return 3000)
  , (4000, return 4000)
  , (5000, return 5000)
  , (6000, return 6000)
  , (7000, return 7000)
  , (8000, return 8000)
  , (9000, return 9000)
  , (10000, return 10000)
  , (11000, return 11000)
  , (12000, return 12000)
  , (13000, return 13000)
  , (14000, return 14000)
  , (15000, return 15000)
  , (16000, return 16000)
  , (17000, return 17000)
  , (18000, return 18000)
  , (19000, return 19000)
  , (20000, return 20000)
  ]
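
(For reference, the same data can be written more compactly with a list comprehension. This is just a sketch: tDat' is a name introduced here, and it assumes UnaryTestData Int is simply a list of (input size, IO input) pairs, as the explicit list above suggests.)

-- Same twenty (size, input) pairs, built with a list comprehension
tDat' :: UnaryTestData Int
tDat' = [ (n, return n) | n <- [1000, 2000 .. 20000] ]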

ts :: TestSuite 
ts  = def {
  _progs = ["eS"],
  _dataOpts = Manual "tDat"
}

And this is the error I am getting:

benchmarking Input Size 8000/Input.eS
 • Executed benchmarking file ✔
 • Generating test report
File error: Invalid Criterion report: Error in $: not enough input
Testing cancelled. Press any key to exit... 
Leaving AutoBench.

I think it is related to the execution time the procedure needs to return the nth prime, but I haven't found anything about it online apart from the official documentation, which doesn't mention this error.


Solution

  • After some profiling I found that for n greater than 7000 the Euler procedure quickly saturates the RAM, which causes Criterion to crash.

    The only ways to overcome this problem are adding more RAM or switching to a different algorithm/implementation (see the sketch below).
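
    As one illustration of the second option, here is a minimal sketch of a less memory-hungry replacement based on trial division against previously found primes. The names primes and eS' are mine and not part of the original Input.hs; eS' keeps the same indexing as eS (index 0 gives 2).

    -- Trial division against earlier primes: only the primes themselves are
    -- retained, so memory use grows far more slowly than with the nested
    -- 'minus' chain built by the naive Euler sieve.
    primes :: [Int]
    primes = 2 : filter isPrime [3, 5 ..]
      where
        isPrime n = all (\p -> n `mod` p /= 0)
                        (takeWhile (\p -> p * p <= n) primes)

    eS' :: Int -> Int
    eS' x = primes !! x

    This works because deciding whether n is prime only needs the primes up to sqrt n, which have already been produced by the time n is tested, so the self-referential definition of primes does not loop.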