ClickHouse multi-master replication
Files for a reproducible environment:
Nothing is stored on the VM instances: all state lives in the volumes declared in the docker-compose file.
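For orientation, here is a hypothetical sketch of such a compose file. Service and file names follow the article; the exact contents of the original repository may differ (for instance, the ZooKeeper address must also be declared in the server config, not shown here):

version: "3"
services:
  zookeeper:
    image: zookeeper

  ch1:
    image: yandex/clickhouse-server:latest
    volumes:
      # Config lives in the repo, data in a named volume: nothing on the VM itself.
      - ./ch_configs/config.d/remote_server.xml:/etc/clickhouse-server/config.d/remote_server.xml
      - ./ch_configs/macros_ch1.xml:/etc/clickhouse-server/config.d/macros.xml
      - ch1_data:/var/lib/clickhouse
    depends_on:
      - zookeeper

  # ch2, ch3 and ch4 are defined the same way, each with its own macros_chX.xml

  client:
    image: yandex/clickhouse-client:latest
    entrypoint: clickhouse-client

volumes:
  ch1_data: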
Sharding and replication are straightforward to set up; they are defined in these files (a sketch of the macros file follows this list):
- ch_configs/config.d/remote_server.xml
- ch_configs/macros_chX.xml (replace X with the server’s instance number)
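A macros file for this layout typically looks like the following. This is a minimal sketch for ch1, assuming it sits in shard 1; the actual values in the repository may differ:

<yandex>
    <!-- Substituted into {shard} and {replica} in the table definitions below -->
    <macros>
        <shard>01</shard>
        <replica>ch1</replica>
    </macros>
</yandex>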
Starting the ClickHouse servers:
$ docker-compose up -d ch1 ch2 ch3 ch4
$ docker ps -a
CONTAINER ID   IMAGE                             COMMAND   CREATED   STATUS   PORTS   NAMES
6c63f386a951   yandex/clickhouse-server:latest   ...
3d8fccb51750   yandex/clickhouse-server:latest   ...
916b395d2ba9   yandex/clickhouse-server:latest   ...
8fb0bc77d7c8   yandex/clickhouse-server:latest   ...
b4c3f11eba60   zookeeper                         ...
To use it, either pick any client you prefer (Tabix, …) or connect with the client already defined in the docker-compose file:
$ docker-compose run --rm client --host ch1 --user quid --password quid
ClickHouse client version 20.5.3.27 (official build).
Connecting to ch1:9000 as user quid.
Connected to ClickHouse server version 20.5.3 revision 54435.

3d8fccb51750 :) show databases

SHOW DATABASES

┌─name───────────────────────────┐
│ _temporary_and_external_tables │
│ default                        │
│ system                         │
└────────────────────────────────┘

3 rows in set. Elapsed: 0.002 sec.
3d8fccb51750 :) SELECT * FROM system.clusters
SELECT * FROM system.clusters

┌─cluster─┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─┬─host_address─┬─port─┬─is_local─┬─...
│ todos   │         1 │            1 │           1 │ ch1       │ 172.19.0.5   │ 9000 │        1 │ ...
│ todos   │         1 │            1 │           2 │ ch2       │ 172.19.0.2   │ 9000 │        0 │ ...
│ todos   │         2 │            1 │           1 │ ch3       │ 172.19.0.4   │ 9000 │        0 │ ...
│ todos   │         2 │            1 │           2 │ ch4       │ 172.19.0.3   │ 9000 │        0 │ ...
└─────────┴───────────┴──────────────┴─────────────┴───────────┴──────────────┴──────┴──────────┴─...

4 rows in set. Elapsed: 0.002 sec.
Database and table creation
CREATE DATABASE todos_cluster ON CLUSTER todos;
SHOW DATABASES;
SHOW TABLES FROM todos_cluster;
The database has now been created on every shard/replica; no tables exist in it yet.
The name of the cluster is defined in ch_configs/config.d/remote_server.xml, under the <todos> tag.
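Based on the system.clusters output above (two shards with two replicas each), that file plausibly looks like the following minimal sketch; the actual file in the repository may carry more settings:

<yandex>
    <remote_servers>
        <!-- The tag name is the cluster name used in ON CLUSTER and Distributed() -->
        <todos>
            <shard>
                <internal_replication>true</internal_replication>
                <replica><host>ch1</host><port>9000</port></replica>
                <replica><host>ch2</host><port>9000</port></replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica><host>ch3</host><port>9000</port></replica>
                <replica><host>ch4</host><port>9000</port></replica>
            </shard>
        </todos>
    </remote_servers>
</yandex>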
Local table
To create tables whose content is replicated only between the replicas of the same shard:
CREATE TABLE todos_cluster.events_local ON CLUSTER todos
(
    id UInt64,
    time DateTime,
    type UInt16,
    task String,
    deadline DateTime
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/event_local', '{replica}')
PARTITION BY toYYYYMM(time)
ORDER BY (toYYYYMM(time), id);

SHOW TABLES FROM todos_cluster;

┌─name─────────┐
│ events_local │
└──────────────┘

1 rows in set. Elapsed: 0.002 sec.
You can check on each server (ch1, ch2, ch3, ch4) that the table has been created.
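You can also inspect how the {shard} and {replica} macros were expanded on each node by querying the standard system.replicas table (a sketch; the path shown reflects whatever the {shard} macro expands to on that node):

SELECT database, table, zookeeper_path, replica_name
FROM system.replicas;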
Now let’s insert data
INSERT INTO todos_cluster.events_local VALUES (1, '2020-03-28 00:00:00', 3, 'Hit the road Jack', '2020-04-15 00:00:00');
SELECT * FROM todos_cluster.events_local;
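Because events_local is a ReplicatedMergeTree, this row should now exist on ch1 and on its replica ch2, but not on ch3 or ch4. Assuming the same client service as above, a quick check (a sketch) is:

$ docker-compose run --rm client --host ch2 --user quid --password quid \
    --query "SELECT count() FROM todos_cluster.events_local"

Running the same command against ch3 or ch4 should return 0.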
Distributed tables
To create a table whose content is spread across all the shards (each insert is routed to one shard by the sharding key, here rand(); reads fan out to events_local on every shard):
CREATE TABLE todos_cluster.events_distributed ON CLUSTER todos
(
    id UInt64,
    time DateTime,
    type UInt16,
    task String,
    deadline DateTime
)
ENGINE = Distributed(todos, todos_cluster, events_local, rand());

SHOW TABLES FROM todos_cluster;
SELECT * FROM todos_cluster.events_distributed;
Use SHOW TABLES on all instances to see that it has been created.
Now we insert data into this table:
INSERT INTO todos_cluster.events_distributed VALUES (2, '2020-03-28 00:00:00', 4, 'Sweig – Le joueur d\'échec', '2020-05-05 00:00:00');
INSERT INTO todos_cluster.events_distributed VALUES (3, '2020-03-28 00:00:00', 1, 'Kitchen', '2020-03-30 00:00:00');
INSERT INTO todos_cluster.events_distributed VALUES (4, '2020-03-28 00:00:00', 3, 'Muse Hysteria', '2020-04-25 00:00:00');
SELECT * FROM todos_cluster.events_distributed;
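From any host, this distributed SELECT returns all the rows, while each shard only stores the subset that rand() routed to it (plus the row inserted locally earlier). A sketch of how to see the split, reusing the tables above:

-- Run on each host: only that shard's share of the data
SELECT count() FROM todos_cluster.events_local;

-- Run from any host: the full count, gathered from all shards
SELECT count() FROM todos_cluster.events_distributed;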