Wednesday, November 18, 2015

LISTENing connections aren't cheap

Recently I used perf to look into the cause of our increased CPU usage on the PostgreSQL server (encouraged by Andres' great talk).  I was somewhat surprised to find that thirty percent of the CPU time used by postgres was spent spinning on spinlocks, i.e. doing no actual useful work.  Digging into the profile a bit more, most of these came from a function called asyncQueueReadAllNotifications.  To understand why this function might have such a large impact on the system, we need to understand the basics of how LISTEN / NOTIFY works in postgres.  The specifics can be found in the comment near the top of async.c, but the high-level view is as follows.

When a session starts LISTENing on a channel for the first time, it advertises its interest in notifications in a special shared memory segment.  Crucially, though, this is an all-or-nothing flag: if you're interested in any notification channel, shared memory says you're interested in all of them.  This is also why you can't query which channels other backends are listening on; that data simply isn't available anywhere except in the local memory of each backend.  The only data that could be exposed at the SQL level is the list of backends which are interested in notifications.  That list is, at least currently, not exposed, though if you're brave enough you can use GDB to extract it.
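The split between shared and local state can be sketched in a few lines of Python.  This is a simplified model, not actual postgres code, and all the names below are made up for illustration:

```python
# Model of LISTEN registration: shared memory only records *that* a
# backend is listening, not on which channels.  The channel list lives
# in each backend's local memory.

class Backend:
    def __init__(self, pid):
        self.pid = pid
        self.channels = set()  # local memory: only this backend knows these

SHARED_LISTENERS = set()  # "shared memory": just a set of interested backends

def listen(backend, channel):
    backend.channels.add(channel)      # remembered locally
    SHARED_LISTENERS.add(backend.pid)  # advertised globally: "I listen to *something*"

b = Backend(pid=101)
listen(b, "orders")

# From the shared state alone you can tell *that* pid 101 is listening,
# but not *what* it is listening on:
print(101 in SHARED_LISTENERS)  # True
```

Note that the shared set contains only backend identifiers; the channel name never leaves the backend's local state, which is exactly why no SQL-level view can show it.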

For the NOTIFY part, things are a bit more complex.  While the transaction is in progress, any issued notifications are queued into a local buffer.  If the transaction rolls back, this list is simply discarded.  But if the transaction commits, the notifications in the list are copied to a special queue in shared memory.  Once the queue has been populated, the backend goes through the list of all other backends listening on any channel and wakes them up.  And herein lies the problem: every single listening backend is woken up, even if it's not interested in any of the notifications sent by this particular transaction.  Once you have a lot of workers listening on different channels and lots of notifications being delivered per time unit of your choice, the wakeups, the transactions created for accessing the shared memory queue and the locks around the data structures start eating up a lot of CPU.
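The wakeup behaviour can be modelled in the same sketchy Python style (again not postgres internals; the method below stands in for asyncQueueReadAllNotifications, which is invoked on wakeup regardless of interest and filters locally):

```python
# Model of commit-time notification fan-out: every listening backend is
# signalled, and each one scans the shared queue and keeps only matches.

class Backend:
    def __init__(self, pid, channels):
        self.pid = pid
        self.channels = set(channels)  # local only
        self.wakeups = 0
        self.delivered = []

    def read_all_notifications(self, queue):
        # woken unconditionally, then filters by local channel set
        self.wakeups += 1
        self.delivered += [n for n in queue if n[0] in self.channels]

def commit_with_notifies(backends, notifications):
    shared_queue = list(notifications)  # copied to shared memory at commit
    for b in backends:                  # ...and every listener is woken
        b.read_all_notifications(shared_queue)

backends = [Backend(1, {"a"}), Backend(2, {"b"}), Backend(3, {"c"})]
commit_with_notifies(backends, [("a", "payload")])

# All three backends were woken, but only one had anything to deliver:
print(sum(b.wakeups for b in backends))  # 3
```

Two of those three wakeups are pure overhead, and in the real system each one also involves queue access and locking; that overhead scales with the number of listening backends, not with the number of interested ones.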

So, what can we do?

Reduce the number of connections using LISTEN.  Really, it can save your day.  We saw a clear improvement when we reduced the number of listening backends by around 50%.

We've since gone the extra mile and migrated all listening connections to allas, which brought the spinlocks in our CPU profiles down from 30% to 0.16%.  asyncQueueReadAllNotifications doesn't even appear in the profiles anymore, which makes me happy.

Of course, if all backends are actually interested in all (or most) notifications, perhaps this is not that big of an issue.  But if you have a lot of backends and most of them are only interested in a subset of the notifications delivered through the system, you might want to think about ways to reduce the number of connections listening for notifications directly in postgres.  I've presented one way of achieving that here, but obviously it's not the only one.
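The general shape of that mitigation can be sketched as follows: keep a single connection LISTENing in postgres and fan notifications out to in-process consumers by channel.  The names below (Dispatcher, subscribe, dispatch) are hypothetical; a real version would feed dispatch() from the one listening connection's notification stream rather than calling it by hand:

```python
# Sketch of local fan-out: postgres sees ONE listening backend; channel
# filtering happens here, so uninterested consumers are never woken.

from collections import defaultdict
from queue import SimpleQueue

class Dispatcher:
    def __init__(self):
        self.subscribers = defaultdict(list)  # channel -> local queues

    def subscribe(self, channel):
        q = SimpleQueue()
        self.subscribers[channel].append(q)
        return q

    def dispatch(self, channel, payload):
        # only interested consumers receive the notification
        for q in self.subscribers[channel]:
            q.put(payload)

d = Dispatcher()
orders = d.subscribe("orders")
d.dispatch("orders", "id=42")
d.dispatch("invoices", "id=7")  # no subscriber: dropped locally, no wakeup
print(orders.get())  # id=42
```

The design trade-off is that the dispatcher must LISTEN on the union of all channels its consumers care about, but from postgres's point of view the number of listening backends (and hence the number of wakeups per NOTIFY) drops to one per dispatcher process.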


  1. Very helpful article; pleased to know that there is allas, it might save us from installing RabbitMQ or other similar software.

  2. How many connections are you talking about?