Tags: java, java-2d, game-development, game-loop

Why do game loops render more often than they update?


Many game loops use tick and render, where tick runs every X ms and render runs as often as possible. Why not tick and then render, once each? The way I see it, between ticks the render will draw the same thing, so why call it more than once? (If that's wrong, please explain why.)

The typical game loop is:

public void run() {
    long lastTime = System.nanoTime();
    double amountOfTicks = 60.0;
    double nsBetweenTicks = 1_000_000_000.0 / amountOfTicks;
    double delta = 0;
    while (running) {
        long now = System.nanoTime();
        // Accumulate elapsed time, measured in units of ticks.
        delta += (now - lastTime) / nsBetweenTicks;
        lastTime = now;
        // Catch up on however many ticks are due, then render once.
        while (delta >= 1) {
            tick();
            delta--;
        }
        render();
    }
}

My question is: why not do this instead?

public void run() {
    long lastTime = System.nanoTime();
    double amountOfTicks = 60.0;
    double nsBetweenTicks = 1_000_000_000.0 / amountOfTicks;
    double delta = 0;
    while (running) {
        long now = System.nanoTime();
        delta += (now - lastTime) / nsBetweenTicks;
        lastTime = now;
        // Render exactly once per tick, never in between.
        while (delta >= 1) {
            tick();
            render();
            delta--;
        }
    }
}

Solution

  • Game logic should not be tied to framerate.

    Not all games use 60 ticks per second; some (like Minecraft) run at 20 ticks per second. Tying rendering to ticks would effectively lock such a game at 20 fps.

    Even if every game ran at 60 ticks per second, what about someone with a 144 Hz monitor? They'd be stuck playing at 60 fps.

    If you render more often than you tick, you can still run animations and interpolate movement in the render logic while the game logic waits for its next tick, keeping the game fluid even when its tick frequency is lower than your monitor's refresh rate (see the sketch below).
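
To make that last point concrete, here is a minimal sketch of a loop that renders with interpolation between ticks. It is not from the question's code: the InterpolatedLoop class, the prevX/x fields, and the 5-units-per-tick movement are invented for illustration. The key detail is that after the inner tick loop, delta holds the fraction (between 0 and 1) of the way to the next tick, which render can use to draw an entity between its previous and current positions.

public class InterpolatedLoop implements Runnable {

    private volatile boolean running = true;

    // Hypothetical state for one entity moving along the x axis.
    private double prevX = 0;  // position at the previous tick
    private double x = 0;      // position at the current tick

    public void run() {
        long lastTime = System.nanoTime();
        double amountOfTicks = 20.0;  // e.g. Minecraft-style 20 TPS
        double nsBetweenTicks = 1_000_000_000.0 / amountOfTicks;
        double delta = 0;
        while (running) {
            long now = System.nanoTime();
            delta += (now - lastTime) / nsBetweenTicks;
            lastTime = now;
            while (delta >= 1) {
                tick();
                delta--;
            }
            // Here 0 <= delta < 1: how far we are toward the next tick.
            render(delta);
        }
    }

    private void tick() {
        prevX = x;  // remember where we were...
        x += 5;     // ...then advance the simulation by one fixed step
    }

    private void render(double alpha) {
        // Draw between the last two known states instead of snapping.
        double drawX = prevX + (x - prevX) * alpha;
        // drawSprite(drawX, ...) would go here.
    }
}

With this structure a 20 TPS simulation still animates smoothly on a 144 Hz monitor, because every frame draws a freshly interpolated position even though the game state only changes 20 times per second.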