I added a feedback form under every post on this blog. It’s a bit of client-side Javascript, a couple of serverless functions, and a proof-of-work scheme to deter spammers. Let’s see what it looks like.

The full code for this post is here.

Feedback form screenshot (see bottom of the page for the real thing)


The goal of this form is to give readers a low-friction way to provide feedback: they can rate a post good or bad, and they can send me a private message.

The constraint is that it must work on a static site, so we can’t just use a server-side script. We also want to be somewhat resilient to spammers. It would be bad if a shell one-liner like while true; do curl -X POST https://example.com/send-feedback?rating=poor; done was enough to be a nuisance.

Since we don’t have our own server, we could rent a VPS or a container, but that seems wasteful. We need almost no computing power and memory, and we don’t need to be running all the time. This seems like a good fit for a function-as-a-service offering like AWS Lambda.


Next, we need a scheme to make it hard for spammers to abuse the form. The traditional solution is to use a captcha, but we don’t have a server. We’d have to use a third-party service, and I dislike all the big ones on privacy and usability grounds. So, let’s write our own.

I think the fundamental requirement is that the user must spend time and do work before they can submit a feedback message. An image-identification captcha forces a human to do work, but that’s not strictly necessary. It’s just as well if a computer does the work instead. We can take a page out of the cryptocurrency book, and require users to solve a simple hash puzzle. This is the so-called proof-of-work.

Given a feedback rating and a comment, we require the user to find a string X such that sha256(feedback + comment + X) < D. The value D is the difficulty factor, and it can be anything we choose. Because a hash like SHA256 doesn’t have predictable output, the only way to solve a puzzle like this is by brute-forcing it:

x = 0
while True:
    x = random()
    if int(sha256(feedback + comment + str(x))[:4]) < d:
        break
server.send_feedback(feedback, comment, x)
Client-side brute-forcing the puzzle
def send_feedback(feedback, comment, x):
    if int(sha256(feedback + comment + str(x))[:4]) >= d:
        raise Exception("invalid answer")
Server-side code to verify the answer

Hashes are uniformly distributed, so by trying random values like this, we’ll eventually stumble upon one that passes the test. In the code above, we only test the first 4 bytes for simplicity. If we choose D = 2³², then any hash passes the test, so the first value of X will work. If we choose D = 2³¹, then only half the hashes pass the test, so we’ll need to try about two values of X before we find one that works. If we choose D = 2²⁴, we’ll need to try about 2⁸ values of X. The smaller we make D, the more iterations the loop needs to make.
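
This relationship is easy to check empirically. Below is a small sketch (the payload string and nonce generation are made up for the demo, not the form’s real encoding) that counts brute-force attempts at a given difficulty; at D = 2²⁴ the average over many runs lands near 2⁸ = 256 tries.

```python
import hashlib
import os

def tries_to_solve(d, payload=b"goodNo comment"):
    """Brute-force until the first 4 bytes of the SHA-256 digest,
    read as a big-endian integer, fall below the difficulty d."""
    tries = 0
    while True:
        tries += 1
        x = os.urandom(8).hex()
        digest = hashlib.sha256(payload + x.encode()).digest()
        if int.from_bytes(digest[:4], "big") < d:
            return tries

# Each attempt passes with probability d / 2**32, so the expected
# number of attempts is 2**32 / d: about 256 for d = 2**24.
samples = [tries_to_solve(2**24) for _ in range(50)]
avg = sum(samples) / len(samples)
```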

This is almost good enough. The problem is that the X value for any given feedback and comment is always the same. So, forcing a user to compute X prevents them from submitting a million different feedbacks, but it doesn’t prevent them from submitting the same feedback a million times.

A nonce and an HMAC

We want to make the SHA256 computation unique for every feedback submission, so we change our puzzle to be sha256(feedback + comment + X + Y) < D, where Y is something unique to every request (a nonce). The usual setup is for the client to first query the server for a unique and fresh Y, solve the puzzle, and then send it back to the server along with X. The server only accepts the answer if Y was a value it previously generated, and which it hasn’t seen before.

The hard part is tracking which Y values were generated by the server-side functions, and which have already been used. The simple solution is for the functions to store all the Y values they generate, check any given value against the set of generated ones, and remove it once used. Unfortunately, this requires storing state on the server-side, but we’re using AWS Lambda, so we can’t do that.

💭 Well, we could do it by having the functions store the Y values into a database of some sort, but that would make an already complicated solution even more so, and it opens us up to attacks where a spammer requests a million Y values, and runs the database out of space.

Let’s solve the two problems one by one. First, how can a server-side function know if a Y value was previously generated by it? We can use some more cryptography: when the server-side generates the Y value, it also generates a hash-based message authentication code (HMAC) for it.

x = 0
(y, signature) = server.get_challenge()
while True:
    x = random()
    if int(sha256(feedback + comment + str(x) + y)[:4]) < d:
        break
server.send_feedback(feedback, comment, x, y, signature)
Client-side brute-forcing the puzzle with a nonce

def get_challenge():
    y = random()
    signature = hmac(msg = y, key = SECRET_KEY)
    return (y, signature)

def send_feedback(feedback, comment, x, y, signature):
    expected_signature = hmac(msg = y, key = SECRET_KEY)
    if signature != expected_signature:
        raise Exception("invalid Y value")
    if int(sha256(feedback + comment + str(x) + y)[:4]) >= d:
        raise Exception("invalid answer")
Server-side code to generate challenge and verify the answer

Conceptually, hmac(msg, secret_key) is just sha256(msg + secret_key). Since sha256 and other cryptographic hashes aren’t reversible, an attacker can’t extract the secret_key from a signature. And since they don’t have secret_key, they can’t generate different messages that all have the same signature. We want this tamper-proofing because we send the signature to the client, and then have them send it back to us. We need to ensure the client can’t change the signature, or find different Y values with the same signature.

💭 Cracking the signature is essentially the same puzzle we’re using as proof-of-work in our scheme, except that the difficulty factor is 1, so stumbling upon the answer with brute-force is very hard.

Mind you, a production HMAC implementation is more complicated, and protects against a wider range of attacks. So, instead of rolling our own crypto, we’ll use BLAKE2 in keyed mode from the Python standard library as our MAC.
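
For reference, here is how keyed BLAKE2 can play that role in this scheme. The key value below is a placeholder; the real one comes from an environment variable.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-long-random-secret"  # placeholder key

def sign(msg: str) -> str:
    # Keyed BLAKE2b is a MAC by design, so no HMAC construction is needed.
    return hashlib.blake2b(msg.encode(), digest_size=16, key=SECRET_KEY).hexdigest()

def verify(msg: str, signature: str) -> bool:
    # compare_digest avoids leaking the match position via timing.
    return hmac.compare_digest(sign(msg), signature)

sig = sign("1700000000|a-fresh-nonce")
verify("1700000000|a-fresh-nonce", sig)   # True
verify("1700000001|a-fresh-nonce", sig)   # False: the message was tampered with
```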

A timestamp to limit replay attacks

The second problem is trickier. If a server-side function can’t remember anything between runs, how can it know if a Y value is fresh, and if it was previously used? If we don’t solve this, the scheme is vulnerable to replay attacks, where a spammer solves the puzzle once, and then calls send_feedback() with the same parameters in a loop.

I think knowing whether a Y value was previously used is impossible without persistent storage, but checking if Y is fresh is easy by including a timestamp inside of it. That is, instead of Y = random(), we use Y = now() + random(). Then, send_feedback() can extract the timestamp from Y, and check that it is recent enough. Even though it came from the client, an attacker wouldn’t be able to modify the timestamp because of the HMAC. This doesn’t fully protect against replay attacks, but it does reduce the time window during which one is possible, and since we control the difficulty of the hash puzzle, we can ensure that most of the time window is spent looking for a solution.


def get_challenge():
    y = now_epoch_sec() + "|" + random()
    signature = hmac(msg = y, key = SECRET_KEY)
    return (y, signature)

def send_feedback(feedback, comment, x, y, signature):
    expected_signature = hmac(msg = y, key = SECRET_KEY)
    if signature != expected_signature:
        raise Exception("invalid Y value")
    timestamp = y.split("|")[0]
    if now_epoch_sec() > int(timestamp) + 10:
        raise Exception("too late")
    if int(sha256(feedback + comment + str(x) + y)[:4]) >= d:
        raise Exception("invalid answer")
Server-side code to generate challenge, verify the answer, and ensure that the answer is recent enough

The full scheme works like this:

  • the client begins by calling get_challenge() on AWS Lambda, which spins up an instance of the function,

  • the function generates a random value (the prompt), annotates it with the current timestamp, signs it with an HMAC using the secret key, and returns both values to the client,

  • the client solves the proof-of-work (POW) hash puzzle for the feedback, comment, and prompt,

  • the client calls send_feedback() with all the parameters on AWS Lambda, which spins up an instance of the function (which shares no state with the previous instance),

  • the function checks the prompt and signature from the client against the same secret key as above, checks that the timestamp is recent enough, checks the answer to the puzzle, and, if everything looks good, proceeds with actually sending the feedback.
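
Putting the steps together, the whole exchange can be simulated in a single process. This is only a sketch of the scheme, not the deployed code: both roles run locally, the key is hardcoded, the field ordering is simplified, and the difficulty is set generously (2²⁸) so the loop finishes in a handful of iterations.

```python
import hashlib
import hmac
import os
import time
import uuid

SECRET_KEY = b"demo-secret"   # placeholder; the real key lives in an env var
D = 2**28                     # easy difficulty so the demo runs fast

def get_challenge():
    # Nonce = timestamp + random UUID, signed with keyed BLAKE2.
    y = str(int(time.time())) + "|" + str(uuid.uuid4())
    sig = hashlib.blake2b(y.encode(), digest_size=16, key=SECRET_KEY).hexdigest()
    return y, sig

def passes(feedback, comment, x, y):
    digest = hashlib.sha256((y + feedback + comment + x).encode()).digest()
    return int.from_bytes(digest[:4], "big") < D

def send_feedback(feedback, comment, x, y, sig):
    expected = hashlib.blake2b(y.encode(), digest_size=16, key=SECRET_KEY).hexdigest()
    if not hmac.compare_digest(expected, sig):
        raise Exception("invalid Y value")
    if int(time.time()) > int(y.split("|")[0]) + 10:
        raise Exception("too late")
    if not passes(feedback, comment, x, y):
        raise Exception("invalid answer")
    return "ok"   # the real function would send the email here

# Client side: fetch a challenge, brute-force an answer, submit.
y, sig = get_challenge()
while True:
    x = os.urandom(8).hex()
    if passes("good", "No comment", x, y):
        break
print(send_feedback("good", "No comment", x, y, sig))   # prints "ok"
```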

Sequence diagram of the whole exchange

Server-side Python

The full code for the two server-side Python functions is available here. We’ll review just the interesting bits.

In get_challenge.py, we need to generate a nonce. Longer is better, so we use a UUID (version 4). We also need to share the secret key between this function and the next, so we store it in an environment variable.

def lambda_handler(event, context):
    secret_key = os.environ["SECRET_KEY"]

    prompt = str(math.floor(datetime.now().timestamp())) + "|" + str(uuid.uuid4())
    h = hashlib.blake2b(digest_size=16, key=secret_key.encode())
    h.update(prompt.encode())
    signature = h.hexdigest().encode("utf-8")

    return ...

In send_feedback.py, we check that the prompt is valid, that the timestamp is recent, that the answer to the hash puzzle is correct, and then send an email. This function needs the same SECRET_KEY as get_challenge.py, and it needs the same difficulty factor (16777216 = 2²⁴) as the client-side Javascript we’ll see in the next section. There are better ways to ensure that these values match in all the places, but, for simplicity, we’re just copying them around manually here.

When we compute the challenge_and_answer, we’re careful to intersperse some hardcoded strings between the client-provided ones. If we didn’t have "Comment" between feedback and comment, the client could just move the edge characters between the two strings, and still have the same hash. For example, sha256("aa" + "bb") == sha256("a" + "abb"), but sha256("aa" + "Comment" + "bb") != sha256("a" + "Comment" + "abb").
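
The ambiguity is easy to demonstrate:

```python
import hashlib

def h(s):
    return hashlib.sha256(s.encode()).hexdigest()

# Without a separator, two different (feedback, comment) pairs
# concatenate to the same string, and so to the same hash:
assert h("aa" + "bb") == h("a" + "abb")

# A fixed label between the fields removes the ambiguity:
assert h("aa" + "Comment" + "bb") != h("a" + "Comment" + "abb")
```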

def lambda_handler(event, context):
    # Get the secrets from the environment
    smtp_key = os.environ["SMTP_KEY"]
    secret_key = os.environ["SECRET_KEY"]

    ... extract parameters from event ...

    # Check that the prompt is valid
    h = hashlib.blake2b(digest_size=16, key=secret_key.encode())
    h.update(prompt.encode())
    expected_signature = h.hexdigest().encode("utf-8")
    if not hmac.compare_digest(expected_signature, signature.encode()):
        return err("Invalid prompt")

    # Ensure that the timestamp is recent enough
    now = math.floor(datetime.now().timestamp())
    timestamp = prompt.split("|")[0]
    if now > int(timestamp) + 10:
        return err("Too late")

    # Compute the challenge, and check that the digest of the
    # challenge and answer pass our test.
    challenge_and_answer = prompt + "Path" + path + "Feedback" + feedback + "Comment" + comment + "Answer" + answer
    digest = hashlib.sha256(challenge_and_answer.encode()).digest()
    if int.from_bytes(digest[:4], "big") >= 16777216:
        return err("Challenge failed; my challenge was " + challenge_and_answer)

    # All's good, so send out the feedback email.
    msg = MIMEText(
        "Feedback: " + feedback
        + "\nComment: " + comment
        + "\nURL: https://scvalex.net" + path)
    msg['Subject'] = "Feedback: " + path
    msg['From']    = "FROM-ADDRESS-FILL-ME-IN"
    msg['To']      = "TO-ADDRESS-FILL-ME-IN"
    s = smtplib.SMTP("SMTP-SERVER-FILL-ME-IN", 587)
    s.login("SMTP-USERNAME-FILL-ME-IN", smtp_key)
    s.sendmail(msg["From"], msg["To"], msg.as_string())
    return ...

There are two magic numbers above: the 10 seconds for which a timestamp is fresh, and the difficulty factor of 16777216 (2²⁴). By decreasing them, we make the puzzle harder and shorten the time window for a replay attack. By increasing them, we do the opposite, but support slower and more distant clients. Finding the right values will take some experimentation.

Client-side Javascript

The full code for the client-side Javascript is here. It uses crypto.subtle to compute hashes, and a DataView to convert the first 4 bytes of that into a number.

We also duplicate the challenge_and_answer construction from send_feedback(). This isn’t good, but it’s good enough for the small amount of code we have here.

async function digestMessage(crypto, message) {
    const encoder = new TextEncoder();
    const data = encoder.encode(message);
    const hash = await crypto.digest('SHA-256', data);
    return hash;
}

async function solveChallenge(crypto, prompt, path, feedback, comment) {
    let answer = '';
    for (;;) {
        answer = '' + Math.random();
        let challenge_and_answer =
            prompt + "Path" + path + "Feedback" + feedback
            + "Comment" + (comment || "No comment") + "Answer" + answer;
        let digest = new DataView(await digestMessage(crypto, challenge_and_answer));
        if (digest.getUint32(0, false) < 16777216) {
            break;
        }
    }
    return answer;
}

Wrapping up

We’ve written a feedback form backed by a couple of AWS Lambda functions. We included some measures to deter spammers, and this involved several different crypto components: proof-of-work, SHA256, and HMAC. On the server-side, we found everything we needed in Python’s standard library, and on the client-side, we used the newish crypto.subtle API.

Is this scheme a good idea? Yes and no. Using proof-of-work is a simple and effective way to deter spammers, but it’s undermined here by the statelessness of serverless functions. If we had a server, we could replace the timestamp-based freshness check with a much simpler and actually replay-resistant set membership test. The serverless bits are only really useful when we don’t have control over the server behind the site. I guess it’s also capable of scaling horizontally to infinity, because we can deploy as many instances of the static site and serverless functions as we need. So, this is the web-scale feedback form, and it will really come into its own when every Internet connected device develops an opinion on one of my blog posts at the same time.

Box: Relating the difficulty factor and the timeout window

In order to stymie spammers, we need to set the difficulty factor such that solving the hash puzzle takes up most of the timeout window. What sort of numbers make sense? Specifically, about how long does it take to solve the puzzle on commodity hardware?

Difficulty factor    No. hashes to try    Time (ms)
2³²                  1                    0
2²⁴                  256                  10
2²⁰                  4,096                162
2¹⁶                  65,536               2,647
2¹²                  1,048,576            42,424

The above tests were run on Firefox on an Intel Core i7-8550U CPU running at 1.80 GHz. The code was just a loop running digestMessage(), so it ran on a single core. An attacker could parallelize this easily, but so could we with Web Workers. It would be an arms race.
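
For calibration outside the browser, a rough single-threaded Python equivalent of that benchmark looks like this (absolute numbers will differ from the Firefox figures above):

```python
import hashlib
import time

def hashes_per_second(duration=0.25):
    """Count how many SHA-256 digests of a short message this
    machine computes per second on one core."""
    count = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        hashlib.sha256(b"promptPathFeedbackAnswer" + str(count).encode()).digest()
        count += 1
    return count / duration

rate = hashes_per_second()
# Expected number of tries at difficulty D is 2**32 / D, so the
# average time to solve is (2**32 / D) / rate seconds.
for exp in (24, 20, 16):
    print(f"D = 2**{exp}: ~{(2 ** (32 - exp)) / rate:.4f} s")
```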