Replication sets v5
A replication set is a group of tables that a PGD node can subscribe to. You can use replication sets to create more complex replication topologies than regular symmetric multi-master where each node is an exact copy of the other nodes.
Every PGD group creates a replication set with the same name as the group. This replication set is the default replication set, which is used for all user tables and DDL replication. All nodes are subscribed to it. In other words, by default all user tables are replicated between all nodes.
Using replication sets
You can create replication sets using bdr.create_replication_set, specifying whether to include insert, update, delete, or truncate actions. One option lets you add existing tables to the set, and a second option defines whether to add tables when they're created.
You can also manually define the tables to add or remove from a replication set.
Tables included in the replication set are maintained when the node joins the cluster and afterwards.
Once the node is joined, you can still remove tables from the replication set, but you must add new tables using a resync operation.
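For example, a minimal sketch of creating a replication set that replicates inserts and updates but not deletes or truncates might look like this. The set name and option values are illustrative; see the bdr.create_replication_set reference for the full parameter list.

```sql
-- Create a replication set that replicates INSERT and UPDATE actions
-- but not DELETE or TRUNCATE. The set name and values are illustrative.
SELECT bdr.create_replication_set(
    set_name := 'myrepset',
    replicate_insert := true,
    replicate_update := true,
    replicate_delete := false,
    replicate_truncate := false
);
```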
By default, a newly defined replication set doesn't replicate DDL or PGD administration function calls. Use bdr.replication_set_add_ddl_filter to define the commands to replicate.
PGD creates replication set definitions on all nodes. Each node can then be defined to publish or subscribe to each replication set using bdr.alter_node_replication_sets. You can use functions to alter these definitions later or to drop the replication set.
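For example, a sketch of setting the replication sets a node subscribes to (the node and set names here are illustrative):

```sql
-- Set the replication sets that the node 'node1' uses.
-- Node and set names are illustrative.
SELECT bdr.alter_node_replication_sets('node1', ARRAY['mygroup', 'myrepset']);
```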
Note
Don't use the default replication set for selective replication. Don't drop or modify the default replication set on any of the PGD nodes in the cluster, as it's also used by default for DDL replication and administration function calls.
Behavior of partitioned tables
PGD supports partitioned tables transparently, meaning that you can add a partitioned table to a replication set.
Changes that involve any of the partitions are replicated downstream.
Note
When partitions are replicated through a partitioned table, the statements executed directly on a partition are replicated as if they were executed on the parent table. The exception is the TRUNCATE command, which always replicates with the list of affected tables or partitions.
You can add individual partitions to the replication set, in which case they're replicated like regular tables, that is, to the table of the same name as the partition on the downstream. This behavior has some performance advantages if the partitioning definition is the same on both provider and subscriber, as the partitioning logic doesn't have to be executed.
Note
If a root partitioned table is part of any replication set, memberships of individual partitions are ignored. Only the membership of that root table is taken into account.
Behavior with foreign keys
A foreign key constraint ensures that each row in the referencing table matches a row in the referenced table. Therefore, if the referencing table is a member of a replication set, the referenced table must also be a member of the same replication set.
The current version of PGD doesn't automatically check for or enforce this condition. When adding a table to a replication set, the database administrator must make sure that all the tables referenced by foreign keys are also added.
You can use the following query to list all the foreign keys and replication sets that don't satisfy this requirement, that is, where the referencing table is a member of the replication set but the referenced table isn't.
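A sketch of such a query, joining the bdr.tables catalog view with pg_catalog.pg_constraint, might look like this:

```sql
-- List foreign keys where the referencing table is in a replication set
-- but the referenced table isn't a member of that same set.
SELECT t1.nspname,
       t1.relname,
       fk.conname,
       t1.set_name
  FROM bdr.tables AS t1
  JOIN pg_catalog.pg_constraint AS fk
    ON fk.conrelid = t1.relid
   AND fk.contype = 'f'
 WHERE NOT EXISTS (
   SELECT 1
     FROM bdr.tables AS t2
    WHERE t2.relid = fk.confrelid
      AND t2.set_name = t1.set_name
 );
```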
The output of this query looks like the following (the exact values depend on your schema; these are illustrative):
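```
 nspname | relname |  conname  | set_name
---------+---------+-----------+----------
 public  | t2      | t2_x_fkey | s2
(1 row)
```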
This output means that table t2 is a member of replication set s2, but the table referenced by the foreign key t2_x_fkey isn't.
The TRUNCATE CASCADE command takes into account the replication set membership before replicating the command. For example, consider a command like this, where table1 is an illustrative table referenced by foreign keys from other tables:
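```sql
TRUNCATE table1 CASCADE;
```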
This becomes a TRUNCATE without cascade on all the tables that are part of the replication set only:
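```sql
-- Only the referencing tables that are members of the same
-- replication set are included (table names illustrative).
TRUNCATE table1, referencing_table1, referencing_table2;
```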
Replication set membership
You can add tables to or remove them from one or more replication sets. Doing so affects only the replication of data changes (DML) in those tables. Schema changes (DDL) are handled by DDL replication set filters (see DDL replication filtering).
Replication combines the table membership in replication sets with the node replication sets configuration to determine which actions to replicate to which node. The decision is made using the union of all the memberships and replication set options. Suppose that a table is a member of replication set A, which replicates only INSERT actions, and replication set B, which replicates only UPDATE actions. Both INSERT and UPDATE actions are replicated if the target node is also subscribed to both replication sets A and B.
You can control membership using bdr.replication_set_add_table and bdr.replication_set_remove_table.
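For example, a sketch of those calls (the table and set names are illustrative):

```sql
-- Add a table to one replication set and remove it from another.
-- Table and set names are illustrative.
SELECT bdr.replication_set_add_table('mytable', 'myrepset');
SELECT bdr.replication_set_remove_table('mytable', 'other_repset');
```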
Listing replication sets
You can list existing replication sets with a query like the following:
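```sql
-- List existing replication sets, via the bdr.replication_sets
-- catalog view.
SELECT set_name
  FROM bdr.replication_sets;
```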
You can use a query like this to list all the tables in a given replication set:
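```sql
-- List the tables that are members of a given replication set
-- (the set name is illustrative).
SELECT nspname, relname
  FROM bdr.tables
 WHERE set_name = 'myrepset';
```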
Behavior with foreign keys shows a query that lists all the foreign keys whose referenced table isn't included in the same replication set as the referencing table.
Use SQL like the following to show the replication sets that the current node publishes to and subscribes from:
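```sql
-- Show the replication sets the local node publishes and subscribes,
-- using the bdr.local_node_summary view.
SELECT node_id,
       node_name,
       pub_repsets,
       sub_repsets
  FROM bdr.local_node_summary;
```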
This code produces output like the following (the values here are illustrative):
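```
  node_id   | node_name |    pub_repsets     |    sub_repsets
------------+-----------+--------------------+--------------------
 2035607279 | node1     | {mygroup,myrepset} | {mygroup,myrepset}
(1 row)
```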
To execute the same query against all nodes in the cluster, you can wrap it in bdr.run_on_all_nodes, as in the following sketch. This approach gets the replication sets associated with all nodes at the same time.
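```sql
-- Run the same query on every node via bdr.run_on_all_nodes,
-- which returns the per-node results as JSON.
SELECT jsonb_pretty(bdr.run_on_all_nodes($$
    SELECT node_id, node_name, pub_repsets, sub_repsets
      FROM bdr.local_node_summary;
$$)::jsonb);
```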
This shows the publication and subscription replication sets for every node in the cluster.
DDL replication filtering
By default, the replication of all supported DDL happens by way of the default PGD group replication set. This replication is achieved using a DDL filter with the same name as the PGD group. This filter is added to the default PGD group replication set when the PGD group is created.
You can adjust this by changing the DDL replication filters for all existing replication sets. These filters are independent of table membership in the replication sets. Just like data changes, each DDL statement is replicated only once, even if it's matched by multiple filters on multiple replication sets.
You can list existing DDL filters with a query like the following, which shows for each filter the regular expression applied to the command tag and to the role name:
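```sql
-- List DDL replication filters, via the bdr.ddl_replication
-- catalog view.
SELECT * FROM bdr.ddl_replication;
```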
You can use bdr.replication_set_add_ddl_filter and bdr.replication_set_remove_ddl_filter to manipulate DDL filters. These functions are considered to be DDL and are therefore subject to DDL replication and global locking.
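For example, a sketch of adding a filter that replicates only CREATE TABLE and ALTER TABLE commands through a set (the set and filter names are illustrative):

```sql
-- Add a DDL filter that matches CREATE TABLE and ALTER TABLE
-- command tags. Set and filter names are illustrative.
SELECT bdr.replication_set_add_ddl_filter(
    set_name := 'myrepset',
    ddl_filter_name := 'table_ddl_only',
    command_tag := '^(CREATE|ALTER) TABLE$'
);
```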
Selective Replication Example
In this example, we configure EDB Postgres Distributed to selectively replicate tables to particular groups of nodes.
Cluster Configuration
This example assumes we have a cluster of six data nodes, data-a1 to data-a3 and data-b1 to data-b3, in two locations, represented by their membership of the region_a and region_b groups. There's also, as we recommend, a witness node named witness in region-c, but it doesn't play a part in this example. The cluster itself is called sere.
This configuration is the standard Always-On multiregion configuration discussed in the Choosing your architecture section.
Application Requirements
For this example, we're going to work with an application that records the opinions of people who attended performances of musical works. There's a table for attendees, a table for the works, and an opinion table that records which attendee saw which work, where and when they saw it, and how they scored it. Because of data regulation, the example assumes that opinion data must stay only in the region where the opinion was recorded.
Creating tables
The first step is to create appropriate tables. A minimal sketch might look like this (the exact column definitions are illustrative):
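```sql
-- Illustrative schema: attendees, musical works, and opinions
-- that reference both.
CREATE TABLE attendee (
    id    bigint PRIMARY KEY,
    email text   NOT NULL
);

CREATE TABLE work (
    id     int  PRIMARY KEY,
    title  text NOT NULL,
    author text NOT NULL
);

CREATE TABLE opinion (
    id          bigint PRIMARY KEY,
    work_id     int    NOT NULL REFERENCES work (id),
    attendee_id bigint NOT NULL REFERENCES attendee (id),
    country     text   NOT NULL,
    day         date   NOT NULL,
    score       int    NOT NULL
);
```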
Viewing groups and replication sets
By default, EDB Postgres Distributed is configured to replicate each table in its entirety to each and every node. This is managed through replication sets.
To view the initial configuration's default replication sets, run a query like this:
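```sql
-- Show each group and its default replication set, via the
-- bdr.node_group_summary catalog view.
SELECT node_group_name, default_repset, parent_group_name
  FROM bdr.node_group_summary;
```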
In the output, you can see there's the top-level group, sere, with a default replication set named sere. Each of the three subgroups has a replication set with the same name as the subgroup; the region_a group has a region_a default replication set.
By default, all existing tables and new tables become members of the replication set of the top-level group.
Adding tables to replication sets
The next step in this process is to add tables to the replication sets belonging to the groups that represent our regions. As previously mentioned, all new tables are automatically added to the sere replication set. We can confirm that by running a query like:
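```sql
-- Confirm the current replication set membership of opinion.
SELECT nspname, relname, set_name
  FROM bdr.tables
 WHERE relname = 'opinion';
```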
We want the opinion table to be replicated only within region_a, and separately only within region_b. To do that, we add the table to the replication sets of each region, using calls like these:
But we're not done, because opinion is still a member of the sere replication set. When a table is a member of multiple replication sets, it's replicated within each. This doesn't impact performance, though, as each row is replicated only once on each target node. We don't want opinion replicated across all nodes, so we need to remove it from the top-level group's replication set:
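```sql
-- Remove opinion from the top-level group's replication set.
SELECT bdr.replication_set_remove_table('opinion', 'sere');
```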
We can now review these changes with the same membership query as before:
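```sql
-- Review the new replication set membership of opinion.
SELECT nspname, relname, set_name
  FROM bdr.tables
 WHERE relname = 'opinion';
```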
This should provide the selective replication we desired. The next step is to test it.
Testing Selective Replication
Let's create some test data: two works and an attendee. We'll connect directly to data-a1 to run this next code (the values are illustrative):
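```sql
-- Illustrative test data: two works and one attendee.
INSERT INTO work VALUES (1, 'Aida', 'Verdi');
INSERT INTO work VALUES (2, 'Lohengrin', 'Wagner');
INSERT INTO attendee VALUES (1000, 'gv@example.com');
```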
Now that there's some data in these tables, we can insert into the opinion table without violating foreign key constraints:
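```sql
-- Record an opinion in region_a (values illustrative).
INSERT INTO opinion VALUES (1, 1, 1000, 'Italy', '2024-06-01', 8);
```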
Once inserted, we can validate the contents of the database on the same node with a query like:
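```sql
-- Check the recorded opinions, joined with the attendee and work.
SELECT a.email, w.title, o.country, o.day, o.score
  FROM opinion o
  JOIN attendee a ON a.id = o.attendee_id
  JOIN work w     ON w.id = o.work_id;
```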
If we now connect to nodes data-a2 and data-a3 and run the same query, we get the same result. The data is being replicated in region_a. If we connect to data-b1, data-b2, or data-b3, the query returns no rows. That's because, although the attendee and work tables are populated, there's no opinion row that could be selected. That, in turn, is because the replication of opinion on region_a happens only in that region.
If we now connect to data-b1, we can insert an opinion there like so:
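```sql
-- An opinion recorded in region_b (values illustrative).
INSERT INTO opinion VALUES (2, 2, 1000, 'Germany', '2024-06-02', 9);
```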
This opinion is replicated only on region_b. On data-b1, data-b2, and data-b3, you can run the same validation query as before:
You see the same result on each of the region_b data nodes. Run the query on region_a nodes, and you don't see this particular entry.
Finally, we should note that the attendee table is shared identically across all nodes. On any node, running the following query returns the same rows:
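```sql
-- attendee is still in the sere replication set, so every node
-- returns the same rows.
SELECT * FROM attendee;
```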