Last year I wrote a blog post on concurrency control and ensuring data consistency while caching things in Redis from multiple processes.

Recently I revisited that blog post and couldn't help wondering how over-complicated my final solution was. A custom Lua script and stuff!?

Here is my attempt to fix what I started. Going back to the initial requirements:

  1. Congestion Control - We need to try our best to call fetchFreshValues() as few times as possible across all server processes, since it is expensive (how expensive? e.g. it was a super slow endpoint that internally fired hundreds of MySQL queries, which, if run too many times, would slow the database to a halt).
  2. Data Consistency - We need to make sure that all processes return a consistent value for a key, i.e. if our system calls fetchFreshValues() independently and in parallel from two processes, only one should successfully write to Redis whereas the other must fail. (The sketch below shows the naive flow that violates both requirements.)
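
To make the failure mode concrete, here is a minimal sketch of the naive read-through cache that both requirements guard against. This is TypeScript with the node-redis client; the key name, the 60-second TTL, and the fetchFreshValues() stub are placeholder assumptions, not code from the original post.

```typescript
import { createClient } from "redis";

const client = createClient();
await client.connect();

// Stand-in for the expensive call: think of the slow endpoint that
// fires hundreds of MySQL queries.
async function fetchFreshValues(): Promise<string> {
  return "myValue";
}

async function getValueNaive(key: string): Promise<string> {
  const cached = await client.get(key);
  if (cached !== null) return cached;

  // On a cache miss, every process that reaches this point calls
  // fetchFreshValues() at once (no congestion control), and each one
  // blindly overwrites the key (no consistency guarantee).
  const fresh = await fetchFreshValues();
  await client.set(key, fresh, { EX: 60 });
  return fresh;
}
```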

In the last blog post I established that redlock (a distributed lock algorithm from the folks at Redis) is great at congestion control, even though there are edge cases where two processes can manage to get the lock at the same time.
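
For context, acquiring the lock looks roughly like this. It's a sketch assuming the node-redlock library with an ioredis client; the resource name "lock:myKey" and the 5-second TTL are illustrative choices.

```typescript
import Redis from "ioredis";
import Redlock from "redlock";

const redlock = new Redlock([new Redis()], { retryCount: 3 });

// Hold the lock only for the duration of the fetch-and-write.
const lock = await redlock.acquire(["lock:myKey"], 5000);
try {
  // ...call fetchFreshValues() and write the result to Redis here...
} finally {
  await lock.release();
}
```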

However, for ensuring data consistency in the case where two processes simultaneously get hold of a lock, I did not have to go with a "versioned" solution. All I needed was for Redis to not overwrite a key that already has a value. A simple SET NX command would have done the trick:

> SET myKey "myValue" EX 60 NX
"OK"
> SET myKey "myValue" EX 60 NX
(nil)

When a key expires and two processes both manage to get a lock, only the first write succeeds, which is what we wanted and is effectively how the old solution worked. And with the return value, we can detect whether our write succeeded.
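
In client code the same check looks like this (a sketch reusing the client and fetchFreshValues() stub from the earlier snippet; with node-redis, set() resolves to "OK" on success and null when NX blocks the write):

```typescript
async function refreshKey(key: string): Promise<string | null> {
  const fresh = await fetchFreshValues();

  // NX: write only if the key does not already exist; EX: 60s TTL.
  const result = await client.set(key, fresh, { EX: 60, NX: true });
  if (result === "OK") return fresh;

  // We lost the race: another process wrote first. Return its value
  // so every process stays consistent.
  return client.get(key);
}
```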

Love the simplicity of this solution over the previous one. That’s all for this post. Thanks for reading.