ACID transactions in everyday applications

ACID is a term coined by Andreas Reuter and Theo Härder in 1983 to describe the properties a database must possess for the reliable processing of transactions. They built on earlier work by Jim Gray, who had enumerated atomicity, consistency, and durability but left out isolation.

To most web developers, databases and their ACID properties can seem mundane, but they are extremely beneficial. In recent years there have been plenty of trending non-ACID databases, but even when they don't support ACI, they tend to get durability right, which essentially guarantees that if the system blows up, your data is safe!

A lot has been written on the topic, so we won't be covering the usual textbook definitions of ACID. Instead, we will discuss how ACID (or the lack thereof) shows up in the tools we use day in, day out.

Atomicity

current transaction is aborted, commands ignored until end of transaction block

If you have been using PostgreSQL long enough, chances are good you have seen this error. It stands out because PostgreSQL handles failures inside transactions differently than MySQL and other common databases.

I'm a Ruby on Rails developer, so let's take an example straight from the Rails documentation.

Number.transaction do
  Number.create(i: 0)
  begin
    # This will raise a unique constraint error...
    Number.create(i: 0)
  rescue ActiveRecord::StatementInvalid
    # ...which we ignore.
  end

  # On PostgreSQL, the transaction is now unusable. The following
  # statement will cause a PostgreSQL error, even though the unique
  # constraint is no longer violated:
  Number.create(i: 1)
  # => "PG::Error: ERROR:  current transaction is aborted, commands
  #     ignored until end of transaction block"
end

As per the comments, the transaction becomes unusable after the uniqueness violation. But what does it mean for a transaction to be unusable, and why does it prevent PostgreSQL from executing further SQL commands?

The reason is the ‘A’ in ACID. If PostgreSQL let you commit this transaction, part of it would have silently failed while the rest succeeded, which would break atomicity. What is different about PostgreSQL compared to other databases is its strict enforcement: even a syntax error or a reference to a non-existent relation is enough to abort the transaction.

# BEGIN;
# select txid_current();
>> 2833912

-- relation 't' doesn't exist
# DROP TABLE t;
>> ERROR:  table "t" does not exist

-- transaction is now in aborted status
# select txid_status(2833912);
>> ERROR:  current transaction is aborted, commands ignored until end of transaction block

Once the failed transaction has been rolled back, we can check its status from a different session.

# select txid_status(2833912);
>> aborted

In interactive mode, it can sometimes be useful to continue even after a failure, which we can achieve in psql by turning on the ON_ERROR_ROLLBACK setting. Internally, psql issues an implicit savepoint before each command and rolls back to it in the event of a failure.

# \echo :AUTOCOMMIT
>> on

# \echo :ON_ERROR_ROLLBACK
>> off

# \set ON_ERROR_ROLLBACK on

-- can continue even after a failure
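
For example, with the flag turned on we can repeat the failing DROP TABLE from earlier and the surrounding transaction survives the error, because psql quietly rolls back to its implicit savepoint (a sketch; the exact output on your machine will differ):

# BEGIN;

-- relation 't' still doesn't exist, so this fails...
# DROP TABLE t;
>> ERROR:  table "t" does not exist

-- ...but psql rolled back to its savepoint, so the
-- transaction is still usable
# SELECT 1;
>> 1

# COMMIT;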

Consistency

Consistency is concerned with business logic. By default, PostgreSQL checks a constraint for each row as it is modified, but the check can also be deferred to the end of the statement or to the end of the transaction.

The database won't allow a transaction to commit with constraint violations, but it will allow the check itself to be deferred. How does deferring the consistency check help us implement business logic?

Let's take the example of a list of todos with a user-defined order, implemented with a position column carrying a uniqueness constraint, since no two todos can be in the same position.

When we go about reordering the todos, the uniqueness constraint is violated mid-update because the check runs row by row; that is when deferring the constraint to the end of the transaction comes to the rescue.
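
Here is a rough SQL sketch of the idea; the todos table and the constraint name are made up for illustration:

-- position must be unique, but the check is allowed to be postponed
# CREATE TABLE todos (
    id       serial PRIMARY KEY,
    title    text NOT NULL,
    position integer NOT NULL,
    CONSTRAINT todos_position_unique UNIQUE (position)
      DEFERRABLE INITIALLY IMMEDIATE
  );

-- swap the first two todos; without deferring, the first UPDATE
-- would already collide with the todo sitting at position 2
# BEGIN;
# SET CONSTRAINTS todos_position_unique DEFERRED;
# UPDATE todos SET position = 2 WHERE id = 1;
# UPDATE todos SET position = 1 WHERE id = 2;
# COMMIT; -- the constraint is checked once here, for the whole transaction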

For implementation details in Rails, refer to the article by Hashrocket.

NOTE: While researching for this post I came across an HN thread with an interesting discussion on how non-ACID or distributed systems can work without strict consistency (and without sacrificing throughput) by relying on settlement and reconciliation, for example when using commutative operations like deposits to and withdrawals from a bank account.

Isolation

pg_dump is a tool many of us rely on daily, for example to dump the staging database for use in local development. But have you ever asked yourself how it works in parallel mode?

The documentation says:

pg_dump -j uses multiple database connections; it connects to the database once with the master process and once again for each worker job

If each worker establishes its own connection to an in-use database, how does pg_dump produce a consistent dump? By the time the second worker connects, some tuples could already have been added to or removed from the database!

The trick is isolation, using a snapshot that is shared across the connections. Each worker starts a transaction at the REPEATABLE READ isolation level and then imports the very snapshot exported by the master process.

You can try it yourself in a psql session:

-- 1st psql session: the snapshot stays importable only while
-- the exporting transaction is open
# BEGIN ISOLATION LEVEL REPEATABLE READ;
# SELECT pg_export_snapshot();
>> 00000003-000628A2-1

-- 2nd psql session
# BEGIN ISOLATION LEVEL REPEATABLE READ;
# SET TRANSACTION SNAPSHOT '00000003-000628A2-1';

-- the 2nd session would now see the same snapshot
-- as the 1st one!

Durability

If you asked me for one article every software engineer should read on storage systems, I would vote for The Log by Jay Kreps.

Durability is so important that every storage system needs to get it right. In PostgreSQL, Write-Ahead Logging (WAL) is used to implement durability, but over time its usage grew to include functionality like creating replica databases.
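
If you want to peek at the WAL on a running instance, a couple of settings and functions expose it; the snippet below assumes PostgreSQL 10 or newer, where the function is called pg_current_wal_lsn (older releases name it pg_current_xlog_location):

-- 'replica' or higher is what makes streaming replication possible
# SHOW wal_level;

-- the current end of the WAL stream, which replicas chase
# SELECT pg_current_wal_lsn();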

So far we have only discussed how the ACID principles are used, but could dropping one of them provide some sort of benefit? Maintaining data integrity has its overhead, and we don't need it all the time. For example, on continuous integration systems we can drop the ‘D’ in ACID to make tests involving the database run faster; after all, there is no intention to persist the data.

CircleCI, for example, runs PostgreSQL in non-durable mode to speed up test runs.

Let’s use pgbench to test the throughput.

-- initialize pgbench on test database
# pgbench -i test

-- with default durability settings
# pgbench -T 30 test

tps = 782.894466 (including connections establishing)
tps = 783.030251 (excluding connections establishing)

-- using the settings from CircleCI
# ALTER SYSTEM SET fsync = off;
# ALTER SYSTEM SET synchronous_commit = off;
# ALTER SYSTEM SET full_page_writes = off;

# SELECT pg_reload_conf();

-- with durability turned off
# pgbench -T 30 test

tps = 897.580891 (including connections establishing)
tps = 897.700150 (excluding connections establishing)

With durability turned off, we got about 897 transactions per second (tps), compared with about 782 tps under the default durability settings.

NOTE: CircleCI also has a PostgreSQL image where PGDATA is mounted on a RAM disk.
