There are probably libraries to do this (although I haven't found any), but I'm looking to measure the time a function takes to run in Idris. The way I've found is to use clockTime from System and take the difference between the readings before and after the function runs. Here is an example:
module Main

import Data.String
import System

factorial : Integer -> Integer
factorial 0 = 1
factorial 1 = 1
factorial n = n * factorial (n - 1)

main : IO ()
main = do
  args <- getArgs
  case args of
    [self] => putStrLn "Please enter a value"
    [_, ar] => do
      case parseInteger ar of
        Just a => do
          t1 <- clockTime
          let r = factorial a
          t2 <- clockTime
          let elapsed = (nanoseconds t2) - (nanoseconds t1)
          putStrLn $ "fact(" ++ show a ++ ") = "
                  ++ show r ++ " in "
                  ++ (show elapsed) ++ " ns"
        Nothing => putStrLn "Not a valid number"
To keep Idris from optimising the program by evaluating the factorial at compile time, I made the program take its input as a command-line argument.
This code doesn't work though: no matter what number I enter, such as 10000, Idris always reports 0 nanoseconds, which makes me quite sceptical; even just allocating a bigint takes time. I compile with idris main.idr -o main.

What am I doing wrong in my code? Is clockTime not a good way to benchmark?
Idris 1 is no longer being maintained. In Idris 2, clockTime from System.Clock can be used:

clockTime : (typ : ClockType) -> IO (clockTimeReturnType typ)

An example of its use for benchmarking can be found within the Idris 2 compiler, here.
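
For instance, a minimal benchmark in Idris 2 might look like the sketch below. It only relies on clockTime, seconds and nanoseconds from System.Clock; the choice of the Monotonic clock, the hardcoded input and the output formatting are illustrative, not the compiler's own benchmarking code:

module Main

import System.Clock

factorial : Integer -> Integer
factorial 0 = 1
factorial n = n * factorial (n - 1)

main : IO ()
main = do
  -- Monotonic suits interval measurements: it is not affected
  -- by adjustments to the system clock, unlike UTC.
  t1 <- clockTime Monotonic
  let r = factorial 10000
  t2 <- clockTime Monotonic
  -- Convert both readings to nanoseconds before subtracting.
  let elapsed = (seconds t2 - seconds t1) * 1000000000
                + nanoseconds t2 - nanoseconds t1
  -- Print the digit count rather than all of fact(10000);
  -- using r also keeps the computation from being dropped as dead code.
  putStrLn $ "fact(10000) has " ++ show (length (show r))
          ++ " digits, computed in " ++ show elapsed ++ " ns"

Compiled with idris2 Main.idr -o main (the executable ends up under build/exec), this should report a non-zero elapsed time, since Idris 2 evaluates the let binding strictly, between the two clock readings.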