modeling, pde, fipy, fvm

More sweeps vs shorter timesteps


Question

Is it better to use larger timesteps that necessitate more sweeps, or shorter timesteps that need fewer sweeps? By 'better' I mean a better accuracy-to-computation ratio.

Background

I recently wrote an adaptive timestepper for FiPy, loosely following the example here. It is for a pair of coupled transient PDEs. A copy of my main loop is below, although the code itself is not important; what matters is the concept.

If I increase max_sweeps, then my adaptive stepper will naturally take larger simulation timesteps, and the converse is also true. This seems intuitive enough: systems of PDEs become less stable as the timestep increases, and therefore need more sweeps to converge. However, I wonder which approach is better. If doubling the timestep means I have to do twice as many sweeps, am I really gaining anything? And if not, what is the point of having an adaptive stepper?

while total_elapsed < duration:

    # snapshot the fields so a failed step can be rolled back
    p_backup = p.value.copy()
    T_backup = T.value.copy()

    p.updateOld()
    T.updateOld()

    dt = float(dt_var.value)  # timestep to attempt on this pass

    sweep = 0
    res = 1.0

    # sweep the coupled equations until converged or the sweep budget is spent
    while res > tol and sweep < max_sweeps:
        res_p = eq_p.sweep(var=p, dt=dt)
        res_T = eq_T.sweep(var=T, dt=dt)
        sweep += 1
        res = max(res_p, res_T)

    if res > tol:
        # did not converge within max_sweeps: roll back and retry with a smaller step
        p.value[:] = p_backup
        T.value[:] = T_backup
        dt_var.setValue(dt * 0.8) # decrease dt_var
    else:
        # converged: accept the step and try a larger one next time
        total_elapsed += dt
        dt_var.setValue(dt * 1.2) # increase dt_var

Solution

  • There is not, to my knowledge, any absolute answer to this question. For simple-enough sets of PDEs, with simple geometries and ideal boundary conditions, it is sometimes possible to prove things analytically about time step stability or rate of convergence with sweeps. In practice, I find it easier to just benchmark different scenarios and pick the combination of time step size and number of sweeps (and solver and preconditioner and mesh resolution and so on...) that seems to "optimize" compute time; a rough benchmarking sketch along those lines is given after this answer.

    Implicit methods can be unconditionally stable, but they are not unconditionally accurate. It is also possible to show that, even when seemingly stable, the effective size of the time step is not necessarily what it appears to be (see https://dx.doi.org/10.1103/physreve.75.017702).

    Adaptive time stepping can be of huge benefit when there are different time scales at work in a problem, e.g., early stages happen very fast and late stages happen very slowly. If the rate of evolution is not changing, then there is no benefit to an adaptive stepper. Similarly, if you have to take twice as many sweeps when you take twice the time step, then there's no advantage to larger time steps and the smaller time steps are probably more accurate.
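A back-of-the-envelope way to see that last point: assume each sweep costs roughly the same (one matrix assembly and linear solve per equation), so the total work is roughly (number of steps) × (sweeps per step). If the sweeps needed per step grow linearly with the step size, the two factors cancel and the larger step buys nothing. A minimal sketch of that arithmetic, with made-up numbers:

# Back-of-the-envelope cost model: total work ~ steps * sweeps per step,
# assuming each sweep costs about the same.  The numbers below are made up
# purely to illustrate the arithmetic.

duration = 100.0

def total_sweeps(dt, sweeps_per_step):
    """Total sweeps needed to cover `duration` at a fixed timestep."""
    return (duration / dt) * sweeps_per_step

# If sweeps per step double when dt doubles, nothing is gained:
print(total_sweeps(dt=0.5, sweeps_per_step=2))   # 400.0
print(total_sweeps(dt=1.0, sweeps_per_step=4))   # 400.0
# If sweeps per step stay flat, the larger step halves the work:
print(total_sweeps(dt=1.0, sweeps_per_step=2))   # 200.0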
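As for the benchmarking mentioned at the start of this answer, below is one rough way to set it up. It is only a sketch: the problem is a stand-in 1D transient diffusion equation, and the mesh, parameters, and (dt, max_sweeps) combinations are placeholders chosen for illustration; substitute your own coupled equations and tolerances. Each combination is timed and compared against a fine-timestep reference, so wall time and accuracy can be weighed against each other.

# Rough benchmarking sketch: time several (dt, max_sweeps) combinations on a
# simple stand-in problem and compare each against a fine-timestep reference.
import time
import numpy as np
from fipy import CellVariable, Grid1D, TransientTerm, DiffusionTerm

mesh = Grid1D(nx=50, dx=1.0)
duration, D, tol = 100.0, 1.0, 1e-5

def run(dt, max_sweeps):
    """Solve 0..duration with a fixed dt, sweeping each step to tol or max_sweeps."""
    phi = CellVariable(mesh=mesh, value=0.0, hasOld=True)
    phi.constrain(1.0, mesh.facesLeft)
    phi.constrain(0.0, mesh.facesRight)
    eq = TransientTerm() == DiffusionTerm(coeff=D)
    t, sweeps_total = 0.0, 0
    while t < duration:
        phi.updateOld()
        res, sweep = 1.0, 0
        while res > tol and sweep < max_sweeps:
            res = eq.sweep(var=phi, dt=dt)
            sweep += 1
        sweeps_total += sweep
        t += dt
    return phi.value.copy(), sweeps_total

reference, _ = run(dt=0.1, max_sweeps=5)    # fine-dt run treated as "exact"

for dt, max_sweeps in [(1.0, 2), (5.0, 5), (10.0, 10)]:
    start = time.time()
    phi, sweeps_total = run(dt, max_sweeps)
    error = np.max(np.abs(phi - reference))
    print(f"dt={dt:5.1f}  sweeps={sweeps_total:5d}  "
          f"error={error:.2e}  wall time={time.time() - start:.2f}s")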