SQL Support and Workarounds

As Citus provides distributed functionality by extending PostgreSQL, it is compatible with PostgreSQL constructs. This means that users can use the tools and features that come with the rich and extensible PostgreSQL ecosystem for distributed tables created with Citus.

Citus supports all SQL queries on distributed tables, with only a small number of exceptions (listed in Are there any PostgreSQL features not supported by Citus?).

Furthermore, in Multi-tenant Applications, when queries are filtered by the table's Distribution Column to a single tenant, all SQL features work, including those exceptions.
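
For example, using the github_events table that appears later in this section (distributed by user_id), filtering to a single tenant routes the query to one worker, so features such as window functions partitioned by a non-distribution column work as usual. The tenant value 42 here is purely illustrative:

-- filtered by the distribution column to one tenant,
-- so the full SQL feature set is available
SELECT repo_id, org->'id' AS org_id,
       count(*) OVER (PARTITION BY repo_id)
  FROM github_events
 WHERE user_id = 42;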

To learn more about PostgreSQL and its features, you can visit the PostgreSQL documentation.

For a detailed reference of the PostgreSQL SQL command dialect (which can be used as is by Citus users), you can see the SQL Command Reference.

Workarounds

Before attempting workarounds, consider whether Citus is appropriate for your situation. Citus’ current version works well for real-time analytics and multi-tenant use cases.

Citus supports all SQL statements in the multi-tenant use case. Even in real-time analytics use cases, with queries that span nodes, Citus supports the majority of statements. The few types of unsupported queries are listed in Are there any PostgreSQL features not supported by Citus? Many of the unsupported features have workarounds; below are a number of the most useful.

JOIN a local and a distributed table

Attempting to execute a JOIN between a local table “local” and a distributed table “dist” causes an error:

SELECT * FROM local JOIN dist USING (id);

/*
ERROR:  relation local is not distributed
STATEMENT:  SELECT * FROM local JOIN dist USING (id);
ERROR:  XX000: relation local is not distributed
LOCATION:  DistributedTableCacheEntry, metadata_cache.c:711
*/

Although you can’t join such tables directly, wrapping the local table in a subquery or CTE makes Citus’ recursive query planner copy the local table’s data to the worker nodes. Colocating the data in this way allows the query to proceed.

-- either

SELECT *
  FROM (SELECT * FROM local) AS x
  JOIN dist USING (id);

-- or

WITH x AS (SELECT * FROM local)
SELECT * FROM x
JOIN dist USING (id);

Remember that the coordinator will send the results of the subquery or CTE to all workers that require them for processing. Thus it’s best either to add the most specific filters and limits possible to the inner query, or else to aggregate the table. That reduces the network overhead such a query can cause. More about this in Subquery/CTE Network Overhead.
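
For instance, here is a sketch of the earlier join with the filtering pushed into the subquery so that only a small result set is copied to the workers; the created_at column and the one-day window are assumptions for illustration:

-- push filters into the inner query to shrink the copied result set
SELECT *
  FROM (SELECT id
          FROM local
         WHERE created_at >= now() - interval '1 day') AS x
  JOIN dist USING (id);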

Temp Tables: the Last Resort

There are still a few queries that are unsupported even with push-pull execution via subqueries. One of them is running window functions that partition by a non-distribution column.

Suppose we have a table called github_events, distributed by the column user_id. Then the following window function will not work:

-- this won't work

SELECT repo_id, org->'id' as org_id, count(*)
  OVER (PARTITION BY repo_id) -- repo_id is not the distribution column
  FROM github_events
 WHERE repo_id IN (8514, 15435, 19438, 21692);

There is another trick, though. We can pull the relevant information to the coordinator as a temporary table:

-- grab the data, minus the aggregate, into a local table

CREATE TEMP TABLE results AS (
  SELECT repo_id, org->'id' as org_id
    FROM github_events
   WHERE repo_id IN (8514, 15435, 19438, 21692)
);

-- now run the aggregate locally

SELECT repo_id, org_id, count(*)
  OVER (PARTITION BY repo_id)
  FROM results;

Creating a temporary table on the coordinator is a last resort. It is limited by the disk size and CPU of the coordinator node.
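
When the analysis is finished, it’s good practice to remove the temporary table rather than leave it occupying the coordinator’s disk for the rest of the session:

-- clean up the temporary table when done
DROP TABLE results;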