PostgreSQL data synchronization

1. Overall requirements

1.1. Current situation

As software systems grow more complex, distributed deployment has become a popular way to deploy them. Structurally, a system consists of two major elements: programs and data. Distributing the programs brings many benefits; here I discuss the distributed deployment of the data, which in practice means distributed deployment of the database.
1.2. System environment

Below I describe the deployment process in the following environment.

Master database server (MASTER)
OS: SUSE Linux 9.0 for x86
IP: 10.73.132.201
Mask: 255.255.254.0
Slave database server (SLAVE)
OS: SUSE Linux 9.0 for x86
IP: 10.73.133.222
Mask: 255.255.254.0

Make sure the two machines can reach each other over the network.
Download the specified packages to the target directory:
Database: postgresql-8.1.2.tar.gz (http://www.postgresql.org/download/)
Slony1: slony1-1.1.5.tar (http://www.postgresql.org/download/)
The URL above is an entry page; select the appropriate source package from it.
As the root user, create a working directory on both the master and slave machines: /home/hzh/share, and copy postgresql-8.1.2.tar.gz and slony1-1.1.5.tar into it. Where no user is explicitly specified below, run the commands as root.
1.3. System installation

1.3.1 Master database server

1.3.1.1 Install the database

Unpack, command: tar -xvzf postgresql-8.1.2.tar.gz
Enter the resulting postgresql-8.1.2 directory, command: cd postgresql-8.1.2
Check, command: ./configure (the check may fail, usually because a package is missing; install it and retry)
Build, command: gmake (note, it is gmake)
Install, command: gmake install
If the operating system does not yet have a postgres user, create it as follows.
Create the group, command: groupadd -g 26 postgres
Create the postgres user, command: useradd -c "PostgreSQL admin user" -d /usr/local/pgsql -g 26 -G root -u 26 -s /bin/bash postgres
Configure the environment variables. Modify the /etc/profile file (vi /etc/profile), changing
INFODIR=/usr/local/info:/usr/share/info:/usr/info
to
INFODIR=/usr/local/info:/usr/share/info:/usr/info:/usr/local/pgsql/man
As the postgres (or root) user, modify the /usr/local/pgsql/.bashrc file, adding the following environment settings for the postgres user:
PGLIB=/usr/local/pgsql/lib
PGDATA=/test/spescso/data
PATH=$PATH:/usr/local/pgsql/bin
MANPATH=$MANPATH:/usr/local/pgsql/man
export PGLIB PGDATA PATH MANPATH
As the root user, create the database cluster directories with the following commands (for details, see the Linux and PostgreSQL online help):
mkdir /test
mkdir /test/spescso
mkdir /test/spescso/data
chown postgres /test/spescso/data
chmod 700 /test/spescso/data
As the postgres user, create the database cluster:
/usr/local/pgsql/bin/initdb -E UTF-8 -D /test/spescso/data
As the root user, configure the database cluster:
su -c "/usr/local/pgsql/bin/createuser -a -d ssuser" -l postgres
su -c "/usr/local/pgsql/bin/createlang plpgsql template1" -l postgres
As the postgres user, create the log directory:
mkdir /test/spescso/data/log
Modify the /test/spescso/data/postgresql.conf file; the main settings configure the log files:
log_destination = 'stderr'
redirect_stderr = true
log_directory = '/test/spescso/data/log/'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
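The % escapes in log_filename are strftime patterns, so each log file is stamped with the time it was created. As a quick illustration (not part of the setup), you can preview the resulting file name with date, which uses the same escapes:

```shell
# Preview the file name produced by log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'.
# The escapes are strftime(3) patterns: %Y year, %m month, %d day,
# %H hour, %M minute, %S second.
date "+postgresql-%Y-%m-%d_%H%M%S.log"
```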
Modify /test/spescso/data/pg_hba.conf. This mainly addresses mutual authentication between the two machines; without it they cannot access each other.
# TYPE  DATABASE  USER  CIDR-ADDRESS    METHOD
# "local" is for Unix domain socket connections only
local   all       all                   trust
# IPv4 local connections:
host    all       all   127.0.0.1/32    trust
host    all       all   10.73.132.0/24  trust
host    all       all   10.73.133.0/24  trust
# IPv6 local connections:
host    all       all   ::1/128         trust
If the meaning of the lines above is unclear, read the PostgreSQL client authentication documentation carefully.
As the postgres user, launch the postmaster database master process in the background:
/usr/local/pgsql/bin/postmaster -i -D /test/spescso/data &
1.3.1.2 Install the Slony1 data synchronization tool (required on both master and slave)

Unpack, command: tar -xvjf slony1-1.1.5.tar (note the -j parameter; the site's compression formats are a bit inconsistent)
Enter the resulting slony1-1.1.5 directory, command: cd slony1-1.1.5
Check, command: ./configure (the check may fail, usually because a package is missing; install it and retry. You may also need a --with option to point configure at the PostgreSQL 8.1.2 source directory; the default is /usr/local/pgsql/postgresql-8.1.2)
Build, command: gmake (note, it is gmake)
Install, command: gmake install

1.3.2 Slave database server

The installation procedure is the same as for the master database server.
1.3.3 Create the databases and tables

First create the primary database test and its tables on the master database server:
su -c "/usr/local/pgsql/bin/createdb -U ssuser -E UTF-8 test -p 5432" -l postgres
su -c "/usr/local/pgsql/bin/psql -f /home/hzh/share/sql.txt -p 5432 -d test -U ssuser" -l postgres
(Note: sql.txt contains the commands that create the tables; write it yourself. The sql.txt file is best saved in UTF-8 format, especially when it contains Chinese characters.)
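The document leaves the contents of sql.txt up to you. As a purely hypothetical example (the table names match the ones replicated in section 1.4, but every column here is invented), it could look like the following, written to /tmp/sql.txt for illustration:

```shell
# Hypothetical contents for sql.txt; adapt the columns to your own schema.
# Each replicated table should have a primary key (or at least a unique
# column), because Slony-I needs a replication key for every table.
cat > /tmp/sql.txt <<'EOF'
CREATE TABLE tb_depart (
    id   integer PRIMARY KEY,
    name varchar(100)
);
CREATE TABLE tb_user (
    id        integer PRIMARY KEY,
    depart_id integer REFERENCES tb_depart (id),
    name      varchar(100)
);
CREATE TABLE tb_manager (
    id      integer PRIMARY KEY,
    user_id integer REFERENCES tb_user (id)
);
EOF
```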
In the same way, create testslave1 and testslave2 on the master database machine, and create testslave3 on the slave database machine's port-5431 cluster.
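The slave databases can be created the same way as test above. The sketch below only prints the createdb invocations rather than running them (so nothing is executed on the wrong machine by accident); it assumes the same ssuser/postgres users and createdb options used earlier. Copy each printed line to the right machine and run it there.

```shell
CREATEDB=/usr/local/pgsql/bin/createdb

# Print the createdb invocation for one slave database.
# usage: slave_createdb <dbname> <port>
slave_createdb() {
    echo "su -c \"$CREATEDB -U ssuser -E UTF-8 -p $2 $1\" -l postgres"
}

slave_createdb testslave1 5432   # run on the master machine
slave_createdb testslave2 5432   # run on the master machine
slave_createdb testslave3 5431   # run on the slave machine
```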
1.4 Configure synchronization

1.4.1. Master configuration

Write the configmaster shell script file and make it executable, command: chmod a+rwx configmaster. The contents of the file are as follows:

#!/bin/bash
BASEBIN=/usr/local/pgsql/bin

# cluster name
CLUSTER=slony_test1

# databases participating in synchronization; the master DB is test, the other three are slaves
DBSERVER=test
DBSLAVE1=testslave1
DBSLAVE2=testslave2
DBSLAVE3=testslave3

# machine addresses participating in synchronization
HOSTSERVER=10.73.132.201
HOSTSLAVE1=10.73.132.201
HOSTSLAVE2=10.73.132.201
HOSTSLAVE3=10.73.133.222

# database users participating in synchronization
DBSERVER_USER=ssuser
DBSLAVE1_USER=ssuser
DBSLAVE2_USER=ssuser
DBSLAVE3_USER=ssuser

# synchronization setup; what follows are the parameters of the slonik command
$BASEBIN/slonik <<_EOF_
cluster name = $CLUSTER;

# define the replication nodes
node 1 admin conninfo = 'dbname=$DBSERVER host=$HOSTSERVER user=$DBSERVER_USER port=5432';
node 2 admin conninfo = 'dbname=$DBSLAVE1 host=$HOSTSLAVE1 user=$DBSLAVE1_USER port=5432';
node 3 admin conninfo = 'dbname=$DBSLAVE2 host=$HOSTSLAVE2 user=$DBSLAVE2_USER port=5432';
node 4 admin conninfo = 'dbname=$DBSLAVE3 host=$HOSTSLAVE3 user=$DBSLAVE3_USER port=5431';

# initialize the cluster; IDs start from 1
init cluster (id = 1, comment = 'Node 1');

# declare the tables that participate in synchronization.
# First create a replication set (its ID also starts from 1), then add every
# table that must be replicated to it with a set add table command.
# Table IDs start from 1 and increase by 1 each time.
# "fully qualified name" is the full name of the table: schema name.table name.
# The set ID here must match the ID of the replication set created above.
create set (id = 1, origin = 1, comment = 'All test tables');
set add table (set id = 1, origin = 1, id = 1, fully qualified name = 'public.tb_depart', comment = 'Table tb_depart');
set add table (set id = 1, origin = 1, id = 2, fully qualified name = 'public.tb_user', comment = 'Table tb_user');
set add table (set id = 1, origin = 1, id = 3, fully qualified name = 'public.tb_manager', comment = 'Table tb_manager');

# If a table has no primary key but does have a unique key, you can designate
# it as the replication key with the key parameter, for example:
#set add table (set id = 1, origin = 1, id = 4, fully qualified name = 'public.history', key = 'color', comment = 'Table history');
# For a table with no unique column at all, do the following; this statement
# goes before create set:
#table add key (node id = 1, fully qualified name = 'public.history');
# and then add the table with the generated serial key:
#set add table (set id = 1, origin = 1, id = 4, fully qualified name = 'public.history', comment = 'Table history', key = serial);

# set up the storage nodes
store node (id = 2, comment = 'Node 2');
store node (id = 3, comment = 'Node 3');
store node (id = 4, comment = 'Node 4');

# set up the storage paths
store path (server = 1, client = 2, conninfo = 'dbname=$DBSERVER host=$HOSTSERVER user=$DBSERVER_USER port=5432');
store path (server = 2, client = 1, conninfo = 'dbname=$DBSLAVE1 host=$HOSTSLAVE1 user=$DBSLAVE1_USER port=5432');
store path (server = 1, client = 3, conninfo = 'dbname=$DBSERVER host=$HOSTSERVER user=$DBSERVER_USER port=5432');
store path (server = 3, client = 1, conninfo = 'dbname=$DBSLAVE2 host=$HOSTSLAVE2 user=$DBSLAVE2_USER port=5432');
store path (server = 1, client = 4, conninfo = 'dbname=$DBSERVER host=$HOSTSERVER user=$DBSERVER_USER port=5432');
store path (server = 4, client = 1, conninfo = 'dbname=$DBSLAVE3 host=$HOSTSLAVE3 user=$DBSLAVE3_USER port=5431');

# set up the listen events and the subscription direction; in the replication
# roles, the master node is the original provider and the slave nodes are receivers
store listen (origin = 1, provider = 1, receiver = 2);
store listen (origin = 2, provider = 2, receiver = 1);
store listen (origin = 1, provider = 1, receiver = 3);
store listen (origin = 3, provider = 3, receiver = 1);
store listen (origin = 1, provider = 1, receiver = 4);
store listen (origin = 4, provider = 4, receiver = 1);
_EOF_
1.4.2. Subscribe the data set

Write the commitdata shell script file and give it executable permissions; its contents are as follows:

#!/bin/bash
BASEBIN=/usr/local/pgsql/bin

CLUSTER=slony_test1

DBSERVER=test
DBSLAVE1=testslave1
DBSLAVE2=testslave2
DBSLAVE3=testslave3

HOSTSERVER=10.73.132.201
HOSTSLAVE1=10.73.132.201
HOSTSLAVE2=10.73.132.201
HOSTSLAVE3=10.73.133.222

DBSERVER_USER=ssuser
DBSLAVE1_USER=ssuser
DBSLAVE2_USER=ssuser
DBSLAVE3_USER=ssuser

$BASEBIN/slonik <<_EOF_
cluster name = $CLUSTER;

# supply the connection parameters
node 1 admin conninfo = 'dbname=$DBSERVER host=$HOSTSERVER user=$DBSERVER_USER port=5432';
node 2 admin conninfo = 'dbname=$DBSLAVE1 host=$HOSTSLAVE1 user=$DBSLAVE1_USER port=5432';
node 3 admin conninfo = 'dbname=$DBSLAVE2 host=$HOSTSLAVE2 user=$DBSLAVE2_USER port=5432';
node 4 admin conninfo = 'dbname=$DBSLAVE3 host=$HOSTSLAVE3 user=$DBSLAVE3_USER port=5431';

# subscribe the replication set
subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
subscribe set (id = 1, provider = 1, receiver = 3, forward = no);
subscribe set (id = 1, provider = 1, receiver = 4, forward = no);
_EOF_
1.4.3. The synchronization process

Execute the configuration command on the master: ./configmaster

Start a slon background process on the master for the primary database, command:
/usr/local/pgsql/bin/slon slony_test1 "dbname=test host=10.73.132.201 user=ssuser port=5432" &

Start a slon background process on the master for the first slave database, command:
/usr/local/pgsql/bin/slon slony_test1 "dbname=testslave1 host=10.73.132.201 user=ssuser port=5432" &

Start a slon background process on the master for the second slave database, command:
/usr/local/pgsql/bin/slon slony_test1 "dbname=testslave2 host=10.73.132.201 user=ssuser port=5432" &

Start a slon background process on the slave machine for the third slave database, command:
/usr/local/pgsql/bin/slon slony_test1 "dbname=testslave3 host=10.73.133.222 user=ssuser port=5431" &
Execute the subscription command on the master: ./commitdata
At this point, the whole configuration is complete. Modify the data in the replicated tables of the primary database and check whether the changes are synchronized to the three slave databases. Thank you for reading, and good luck!
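One simple way to check is to count the rows of one replicated table on every node and compare the results. The helper below only prints the psql commands to run (host and port values come from section 1.2; tb_depart is one of the replicated tables):

```shell
PSQL=/usr/local/pgsql/bin/psql

# Print a psql command that counts the rows of tb_depart on one node.
# usage: count_cmd <dbname> <host> <port>
count_cmd() {
    echo "$PSQL -h $2 -p $3 -U ssuser -d $1 -c 'SELECT count(*) FROM tb_depart;'"
}

count_cmd test       10.73.132.201 5432   # master
count_cmd testslave1 10.73.132.201 5432
count_cmd testslave2 10.73.132.201 5432
count_cmd testslave3 10.73.133.222 5431
```

If replication is working, all four commands should report the same count shortly after a change on the master.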