20. Fixed Frequency Loops

Suppose we want to stress-test a server by hitting it with a fixed number of requests per second. Or maybe we want to write a game loop that runs at a fixed number of frames per second. In both cases, we want to run some code at a fixed frequency \(\nu\). More precisely, we want a loop that calls some function, sleeps for a bit, then restarts, and overall, the function is called \(\nu\) times per second.

First, we need a way to time code. The function gettimeofday(2) gives us a timeval, which represents the current time in seconds and microseconds. By bracketing our code in a couple of these, we can find exactly how many microseconds it took to run.

Given that we know the frequency \(\nu\), we also know that our code should take \(1/\nu\) seconds to run. We time our code, and if it runs too quickly, we insert an artificial pause. In other words, if our code runs in \(t\) seconds, and \(t < 1/\nu\), we should pause for \(1/\nu - t\) seconds.

We can use usleep(3) to pause for a given number of microseconds. In our case, we want to usleep for \((1/\nu - t) * 1\_000\_000\) microseconds. Note that usleep is unavailable on some older Unix systems; pselect(2) can be used instead.

Putting it all together, we get the following code:

#include <stdio.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>

/* How many times per second do we want the code to run? */
const int frequency = 2;

/* Pretend to do something useful. */
void do_work() {
    volatile int i;
    for (i = 0; i < 10000000; ++i)
        ;
}

int main(int argc, char *argv[]) {
    /* How long should each work unit take? */
    long slice = (long)(1.0 / frequency * 1000000);

    struct timeval beginning;
    gettimeofday(&beginning, NULL);
    struct timeval last_tick = beginning;

    long total = 0;
    int tick;
    for (tick = 1; 1; ++tick) {
        do_work();
        struct timeval now;
        gettimeofday(&now, NULL);

        /* How much time has passed since the last tick? */
        long usec_elapsed = (now.tv_sec - last_tick.tv_sec) * 1000000
                          + (now.tv_usec - last_tick.tv_usec);
        last_tick = now;

        /* How much time did we spend working this tick? */
        long usec_work = (now.tv_sec - beginning.tv_sec) * 1000000
                          + (now.tv_usec - beginning.tv_usec);

        total += usec_elapsed;
        printf("Worked for %ldus. Average time per tick: %ldus (%ldus since last).\n",
               usec_work, total / tick, usec_elapsed);

        /* Pause if appropriate. */
        long usec_tosleep = slice - usec_work;
        if (usec_tosleep > 0) {
            printf("Sleeping for %ldus.\n", usec_tosleep);
            usleep(usec_tosleep);
        }

        /* Prepare for the next tick. */
        gettimeofday(&beginning, NULL);
    }

    return 0;
}

The code itself is unsurprising, but it’s easy to get wrong: if, when calculating the length of the pause, you use the time for the entire loop, usec_elapsed, instead of the time for the work part of it, usec_work, you end up waiting alternately for \(1/\nu\) and \(0\) seconds.
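To see the oscillation concretely, take \(\nu = 2\) (a slice of \(0.5\) s) and suppose the work always takes \(0.1\) s. The correct loop sleeps \(0.5 - 0.1 = 0.4\) s every tick. The buggy loop, sleeping for \(0.5 - \mathit{usec\_elapsed}\), behaves differently: on the first tick the elapsed time is just \(0.1\) s, so it sleeps \(0.4\) s; on the second tick the elapsed time is \(0.4 + 0.1 = 0.5\) s, so it sleeps \(0\); on the third tick the elapsed time is back to \(0.1\) s, so it sleeps \(0.4\) s again. The ticks alternate between \(0.5\) s and \(0.1\) s long instead of settling at a steady \(0.5\) s.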