
Notes: Distributed Locks


In a distributed system, sharing common resources becomes tricky. We need to ensure that the data we're reading is not stale and that updates happen sequentially during concurrent requests. To avoid such issues, one method to maintain consistency is to use distributed locking.

A distributed lock, as the name suggests, is used to lock a resource in a concurrent environment until the assigned operation is completed and the lock is released. One such resource could be the inventory count of a product on an e-commerce website. We'll use this in our examples.

Must-Have Properties of a Distributed Lock

  • Mutual exclusion: At any given time, only one client should be able to acquire the lock, preventing conflicts.

  • Deadlock-free: The system must avoid situations where processes are stuck waiting for locks indefinitely. The lock should also be fair, giving each process a similar opportunity to acquire it. Additionally, the lock should be renewable if the job is not finished and the lock is about to expire.

  • Fault tolerance: The system must handle node failures gracefully, ensuring the lock isn’t lost or left in an inconsistent state. Ideally, the lock should be released in case of a failure.

Different Mechanisms to Handle Distributed Locks

  • Lock using Single Node Redis

  • Lock using Ephemeral Node in Zookeeper

  • Database Locks

  • Simple Programmatic Lock using MongoDB/DynamoDB

Implementation Examples

Lock using Single Node Redis

Redis provides the SET command with the NX and PX options to implement locks.

SET item:1:lock "client-xyz" NX PX 10000

In this example, the SET command with NX ensures that the lock is set only if the key does not already exist, and PX sets an expiration time (in milliseconds) to avoid deadlocks. Note that this is valid only for a single Redis node.

This is also not fully fault tolerant, and it can fail in multiple scenarios. If the lock expires before a slow client finishes, the lock is released prematurely; and if the Redis node crashes, all locks are lost at once, causing serious operational inconsistencies.
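The acquire/release cycle above can be sketched in Python. To keep the example self-contained and runnable, a tiny in-memory class stands in for the two Redis behaviors the lock relies on (SET ... NX PX, and an atomic check-then-delete release); with a real server, the same logic maps to redis-py's set(name, value, nx=True, px=...) and a small Lua script for the release. All class and function names here are illustrative, not part of any library.

```python
import time
import uuid

class MiniRedis:
    """In-memory stand-in for the two Redis operations the lock needs."""
    def __init__(self):
        self._data = {}  # key -> (value, expiry timestamp)

    def set_nx_px(self, key, value, px_ms):
        """Mimics SET key value NX PX: succeed only if key is absent or expired."""
        entry = self._data.get(key)
        if entry is not None and entry[1] > time.monotonic():
            return False                          # a live lock is already held
        self._data[key] = (value, time.monotonic() + px_ms / 1000)
        return True

    def delete_if_value(self, key, value):
        """Atomic check-then-delete (done with a Lua script on real Redis)."""
        entry = self._data.get(key)
        if entry is not None and entry[0] == value:
            del self._data[key]
            return True
        return False

def acquire_lock(store, key, ttl_ms=10_000):
    token = str(uuid.uuid4())                     # unique per-client token
    return token if store.set_nx_px(key, token, ttl_ms) else None

def release_lock(store, key, token):
    # Only the owner's token may release the lock, so a slow client whose
    # lease already expired cannot delete a lock now held by someone else.
    return store.delete_if_value(key, token)
```

The token check on release is the important detail: deleting the key unconditionally would let a client whose lease expired remove another client's lock.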

For higher fault tolerance, Redis offers a quorum-based algorithm called Redlock. However, its safety is debated, as discussed here: https://martin.kleppmann.com/2016/02/08/how-to-do-distributed-locking.html. I will not cover Redlock here, but you can explore it on your own.

Lock using Ephemeral Node in Zookeeper

ZooKeeper ephemeral znodes can act as locks. A client acquires the lock by creating the node; the node is deleted, i.e. the lock is released, as soon as the client disconnects or crashes.

Try to create an ephemeral node for the lock:

create -e /locks/lock-key client-xyz
# Success → You have the lock
# Failure → Someone else holds it, retry

Delete the lock after the job:

delete /locks/lock-key

Downsides of this approach are:

  • There is no lease time, so locks must be released explicitly; the lock-release process therefore has to be highly reliable.

  • Retries to acquire the lock must be handled by the client.

  • If a client crashes, all of its locks are released immediately, even mid-job, which can cause other potential issues.
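The ephemeral-node semantics can be sketched with a small in-memory model; with a real ensemble you would use a client library such as kazoo, whose Lock recipe wraps this pattern. The class and method names below are illustrative.

```python
class MiniZooKeeper:
    """In-memory model of node create/delete plus ephemeral-node cleanup."""
    def __init__(self):
        self._nodes = {}  # path -> session id that owns the ephemeral node

    def create_ephemeral(self, path, session_id):
        """Mimics `create -e`: fails if the node already exists."""
        if path in self._nodes:
            return False          # node exists: someone else holds the lock
        self._nodes[path] = session_id
        return True

    def delete(self, path):
        """Explicit release after the job, like `delete /locks/lock-key`."""
        self._nodes.pop(path, None)

    def close_session(self, session_id):
        # On disconnect or crash, ZooKeeper removes that session's
        # ephemeral nodes, implicitly releasing its locks.
        for path in [p for p, sid in self._nodes.items() if sid == session_id]:
            del self._nodes[path]
```

This shows both sides of the trade-off: a crashed holder never wedges the system, but its lock disappears even if the job was half done.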

Database Locks

Database locks can be implemented using transactions and row-level locking mechanisms provided by relational databases. This can be implemented in multiple ways (refer: https://btree.dev/postgresql-concurrency-handling-for-developers). Example:

BEGIN;

-- Lock the row containing the inventory information
SELECT * FROM inventory WHERE item_id = 1 FOR UPDATE;

-- Check inventory level
SELECT stock FROM inventory WHERE item_id = 1;

-- Update inventory if the stock is sufficient
UPDATE inventory SET stock = stock - 1 WHERE item_id = 1 AND stock > 0;

COMMIT;

Here, the FOR UPDATE clause locks the row until the transaction completes, ensuring that other transactions cannot modify the same row concurrently.
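The same check-and-decrement pattern can be exercised from Python. To keep the sketch runnable without a database server it uses SQLite, which has no FOR UPDATE; BEGIN IMMEDIATE, which takes a write lock for the duration of the transaction, stands in for it. With PostgreSQL or MySQL you would keep the SELECT ... FOR UPDATE shown above. Table and column names follow the SQL example.

```python
import sqlite3

def reserve_one(conn, item_id):
    """Decrement stock by one inside a transaction; True if a unit was reserved."""
    cur = conn.cursor()
    cur.execute("BEGIN IMMEDIATE")       # take a write lock for this transaction
    try:
        cur.execute(
            "UPDATE inventory SET stock = stock - 1 "
            "WHERE item_id = ? AND stock > 0", (item_id,))
        success = cur.rowcount == 1      # 0 rows updated -> out of stock
        conn.commit()
        return success
    except Exception:
        conn.rollback()
        raise

# isolation_level=None gives manual transaction control (no implicit BEGIN).
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE inventory (item_id INTEGER PRIMARY KEY, stock INTEGER)")
conn.execute("INSERT INTO inventory VALUES (1, 1)")  # one unit in stock
```

Folding the stock check into the UPDATE's WHERE clause makes the decrement atomic, so two concurrent buyers cannot both take the last unit.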

Simple Programmatic Lock using MongoDB/DynamoDB

This solution depends a lot on your client and code, but it can work in scenarios with moderate concurrency where minor inconsistencies are tolerable. The best part is that you don’t need any heavy setup to implement it.

  1. Create Locks Collection

     {
       "_id": "key",
       "owner": "unique_client_id",
       "expiresAt": ISODate("2025-06-12T06:35:00Z") // lease expiry time
     }
    
  2. Create TTL Index

     db.locks.createIndex({ "expiresAt": 1 }, { expireAfterSeconds: 0 })
    

    This ensures expired locks are automatically removed by MongoDB. Note that the TTL monitor runs periodically (roughly once a minute), so an expired lock document may linger briefly; a robust acquire should also treat a lock whose expiresAt is in the past as free.

  3. Acquire Lock

     db.locks.insertOne({
       _id: "item:1:lock",
       owner: "client-xyz",
       expiresAt: new Date(Date.now() + 10000) // 10 sec lease
     })
    

    If the insert succeeds → you have the lock.
    If it fails with a duplicate _id error → someone else holds it; retry to acquire the lock.

  4. Release Lock

     db.locks.deleteOne({ _id: "item:1:lock", owner: "client-xyz" })
    

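Steps 3 and 4 can be sketched in Python. To keep the example runnable without a database, a dict with a uniqueness check stands in for the locks collection; with a real deployment the same calls map to pymongo's insert_one (which raises DuplicateKeyError on an existing _id) and delete_one. The expiry check mirrors what the TTL index does server-side. All names below are illustrative.

```python
import time

class MiniLocksCollection:
    """Dict-backed stand-in for a locks collection with a unique _id."""
    def __init__(self):
        self._docs = {}

    def insert_lock(self, key, owner, lease_ms):
        doc = self._docs.get(key)
        # Treat an expired-but-not-yet-reaped document as free, since the
        # TTL monitor only deletes documents periodically.
        if doc is not None and doc["expiresAt"] > time.monotonic():
            return False                  # duplicate _id: lock is held
        self._docs[key] = {
            "owner": owner,
            "expiresAt": time.monotonic() + lease_ms / 1000,
        }
        return True

    def delete_lock(self, key, owner):
        doc = self._docs.get(key)
        if doc is not None and doc["owner"] == owner:
            del self._docs[key]
            return True
        return False

def with_lock(locks, key, owner, job, lease_ms=10_000):
    """Run job() only if the lock is acquired; always release afterwards."""
    if not locks.insert_lock(key, owner, lease_ms):
        return None
    try:
        return job()
    finally:
        locks.delete_lock(key, owner)
```

As with the Redis sketch, release is filtered by owner, so one client can never delete a lock that another client holds.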
Keep in Mind

  • Always set a lock expiration time, usually a small value that still covers the expected task duration.

  • Make sure the task completes before the lock is released; otherwise, renew the lock.

  • Use locks only when necessary, as they may impact overall performance.

  • Always test end-to-end failure cases to make sure the lock works as expected.
