
MariaDB Galera Cluster on Kubernetes

양들의침묵1 2021. 8. 18. 10:43

Background: I was looking for a way to install MariaDB on a limited set of k8s cluster nodes while still providing high availability.

Setup: the K8s cluster is configured as follows.
OS : Ubuntu 20.04
master , worker  (192.168.5.191)
master , worker  (192.168.5.192)
master , worker  (192.168.5.193)
worker  (192.168.5.194)

191, 192, 193: MariaDB is installed directly on these nodes and configured as a Galera Cluster.
192, 193: other applications (pods on other nodes, or pods on the same node) will connect through ProxySQL.

Galera Cluster configuration

Server   Hostname   IP Address
DB 1     node1      192.168.5.191
DB 2     node2      192.168.5.192
DB 3     node3      192.168.5.193

 

Step 1: Update the servers

# sudo apt-get update && sudo apt -y upgrade
sudo reboot

Step 2: Set up hostnames (already configured if the cluster was installed with kubespray)

# vim /etc/hosts

192.168.5.191 node1.cluster.local node1
192.168.5.192 node2.cluster.local node2
192.168.5.193 node3.cluster.local node3
192.168.5.194 node4.cluster.local node4
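
Optionally, you can quickly confirm that the hostnames resolve correctly on every node (a simple check, not part of the original steps):

# getent hosts node1 node2 node3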

Step 3: Install MariaDB on all nodes

# apt-get update
# sudo apt -y install mariadb-server mariadb-client

Configure MariaDB 

$ sudo mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): 
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

Test the connection on each DB node using the password set above.

# mysql -u root -p

Even after installing as above, root can still connect without entering a password.
For security, we change this so that the password is always required at login (on all nodes).

MariaDB [mysql]> ALTER USER 'root'@'localhost' IDENTIFIED BY 'passw0rd';
Query OK, 0 rows affected (0.000 sec)

## After this change, trying to log in without a password fails with the following error
# mysql -uroot -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)

## Logging in with the password set above works normally
# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 60
Server version: 10.3.31-MariaDB-0ubuntu0.20.04.1 Ubuntu 20.04

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

Step 4: Configure the Galera Cluster

Next, configure the Galera Cluster on the three nodes.
Comment out the bind-address line in the config file on each of the three nodes.

# sudo vim /etc/mysql/mariadb.conf.d/50-server.cnf
#bind-address            = 127.0.0.1

Node1 configuration

Add the following to the MariaDB config file. Keep in mind that wsrep_node_address must be set to Node1's own hostname or IP address.

# vim /etc/mysql/mariadb.conf.d/50-server.cnf

[galera]
wsrep_on                 = ON
wsrep_cluster_name       = "MariaDB Galera Cluster"
wsrep_provider           = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_address    = "gcomm://"
binlog_format            = row
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
bind-address = 0.0.0.0
wsrep_node_address="node1"

Initialize the Galera cluster and restart MariaDB on node1:

# sudo galera_new_cluster
# sudo systemctl restart mariadb
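
At this point only node1 is in the cluster, so a quick sanity check (optional) should report a cluster size of 1:

# mysql -uroot -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"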

Node2 and Node3 Galera configuration

Node2

# vim /etc/mysql/mariadb.conf.d/50-server.cnf

[galera]
wsrep_on                 = ON
wsrep_cluster_name       = "MariaDB Galera Cluster"
wsrep_provider           = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_address    = "gcomm://node1,node2,node3"
binlog_format            = row
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
bind-address = 0.0.0.0
wsrep_node_address="node2"

Node3

# vim /etc/mysql/mariadb.conf.d/50-server.cnf

[galera]
wsrep_on                 = ON
wsrep_cluster_name       = "MariaDB Galera Cluster"
wsrep_provider           = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_address    = "gcomm://node1,node2,node3"
binlog_format            = row
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
bind-address = 0.0.0.0
wsrep_node_address="node3"

Restart MariaDB on node2 and node3:

# systemctl restart mariadb

Step 5: Verify the Galera settings

Log into MariaDB as root on each of the three nodes and check that the cluster settings took effect.

# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 14
Server version: 10.3.31-MariaDB-0ubuntu0.20.04.1 Ubuntu 20.04

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show status like 'wsrep_%';
+-------------------------------+--------------------------------------+
| Variable_name                 | Value                                |
+-------------------------------+--------------------------------------+
| wsrep_applier_thread_count    | 1                                    |
| wsrep_apply_oooe              | 0.000000                             |
| wsrep_apply_oool              | 0.000000                             |
| wsrep_apply_window            | 0.000000                             |
| wsrep_causal_reads            | 0                                    |
| wsrep_cert_deps_distance      | 0.000000                             |
| wsrep_cert_index_size         | 0                                    |
| wsrep_cert_interval           | 0.000000                             |
| wsrep_cluster_conf_id         | 3                                    |
| wsrep_cluster_size            | 3                                    |
| wsrep_cluster_state_uuid      | bd0c701c-ffc3-11eb-b6aa-3fc6c9d8727d |
| wsrep_cluster_status          | Primary                              |
| wsrep_cluster_weight          | 3                                    |
| wsrep_commit_oooe             | 0.000000                             |
| wsrep_commit_oool             | 0.000000                             |
| wsrep_commit_window           | 0.000000                             |
| wsrep_connected               | ON                                   |
| wsrep_desync_count            | 0                                    |
| wsrep_evs_delayed             |                                      |
| wsrep_evs_evict_list          |                                      |
| wsrep_evs_repl_latency        | 0/0/0/0/0                            |
| wsrep_evs_state               | OPERATIONAL                          |
| wsrep_flow_control_paused     | 0.000000                             |
| wsrep_flow_control_paused_ns  | 0                                    |
| wsrep_flow_control_recv       | 0                                    |
| wsrep_flow_control_sent       | 0                                    |
| wsrep_gcomm_uuid              | 00cb5713-ffc4-11eb-9209-a6b005ea014e |
| wsrep_incoming_addresses      | node1:3306,node2:3306,node3:3306     |
| wsrep_last_committed          | 0                                    |
| wsrep_local_bf_aborts         | 0                                    |
| wsrep_local_cached_downto     | 18446744073709551615                 |
| wsrep_local_cert_failures     | 0                                    |
| wsrep_local_commits           | 0                                    |
| wsrep_local_index             | 0                                    |
| wsrep_local_recv_queue        | 0                                    |
| wsrep_local_recv_queue_avg    | 0.100000                             |
| wsrep_local_recv_queue_max    | 2                                    |
| wsrep_local_recv_queue_min    | 0                                    |
| wsrep_local_replays           | 0                                    |
| wsrep_local_send_queue        | 0                                    |
| wsrep_local_send_queue_avg    | 0.500000                             |
| wsrep_local_send_queue_max    | 2                                    |
| wsrep_local_send_queue_min    | 0                                    |
| wsrep_local_state             | 4                                    |
| wsrep_local_state_comment     | Synced                               |
| wsrep_local_state_uuid        | bd0c701c-ffc3-11eb-b6aa-3fc6c9d8727d |
| wsrep_open_connections        | 0                                    |
| wsrep_open_transactions       | 0                                    |
| wsrep_protocol_version        | 9                                    |
| wsrep_provider_name           | Galera                               |
| wsrep_provider_vendor         | Codership Oy <info@codership.com>    |
| wsrep_provider_version        | 3.29(ra60e019)                       |
| wsrep_ready                   | ON                                   |
| wsrep_received                | 10                                   |
| wsrep_received_bytes          | 686                                  |
| wsrep_repl_data_bytes         | 0                                    |
| wsrep_repl_keys               | 0                                    |
| wsrep_repl_keys_bytes         | 0                                    |
| wsrep_repl_other_bytes        | 0                                    |
| wsrep_replicated              | 0                                    |
| wsrep_replicated_bytes        | 0                                    |
| wsrep_rollbacker_thread_count | 1                                    |
| wsrep_thread_count            | 2                                    |
+-------------------------------+--------------------------------------+
63 rows in set (0.000 sec)

MariaDB [(none)]>

Verify that the cluster size is 3:

wsrep_cluster_size    3
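
The same can be checked from the shell on each node, together with the node state and member addresses, for example:

# mysql -uroot -p -e "SHOW STATUS WHERE Variable_name IN ('wsrep_cluster_size','wsrep_local_state_comment','wsrep_incoming_addresses');"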

Next, create a database called test1 on node1 and confirm that it appears on the other nodes.

root@node1:~# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 15
Server version: 10.3.31-MariaDB-0ubuntu0.20.04.1 Ubuntu 20.04

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create database test1;
Query OK, 1 row affected (0.004 sec)


## Verify on node2 and node3

root@node2:# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 14
Server version: 10.3.31-MariaDB-0ubuntu0.20.04.1 Ubuntu 20.04

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test1              |
+--------------------+
4 rows in set (0.001 sec)


root@node3:# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 14
Server version: 10.3.31-MariaDB-0ubuntu0.20.04.1 Ubuntu 20.04

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test1              |
+--------------------+
4 rows in set (0.001 sec)

This completes the Galera Cluster setup.

Next, we deploy ProxySQL as pods on Kubernetes.

Kubernetes Cluster status:

# kubectl get nodes -o wide
NAME    STATUS   ROLES                  AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
node1   Ready    control-plane,master   13d   v1.20.7   192.168.5.191   <none>        Ubuntu 20.04.2 LTS   5.4.0-80-generic   docker://19.3.15
node2   Ready    control-plane,master   13d   v1.20.7   192.168.5.192   <none>        Ubuntu 20.04.2 LTS   5.4.0-65-generic   docker://19.3.15
node3   Ready    control-plane,master   13d   v1.20.7   192.168.5.193   <none>        Ubuntu 20.04.2 LTS   5.4.0-80-generic   docker://19.3.15
node4   Ready    <none>                 13d   v1.20.7   192.168.5.194   <none>        Ubuntu 20.04.2 LTS   5.4.0-65-generic   docker://19.3.15

 

ProxySQL Configuration Via ConfigMap

First, ProxySQL is configured via a ConfigMap. Create a file named proxysql.cnf.
ProxySQL admin account: proxysql-admin / passw0rd

root@node1:~# mkdir -p /var/lib/proxysql

root@node1:~# vim proxysql.cnf

datadir="/var/lib/proxysql"

admin_variables=
{
    admin_credentials="proxysql-admin:passw0rd;cluster1:secret1pass"
    mysql_ifaces="0.0.0.0:6032"
    refresh_interval=2000
    cluster_username="cluster1"
    cluster_password="secret1pass"
    cluster_check_interval_ms=200
    cluster_check_status_frequency=100
    cluster_mysql_query_rules_save_to_disk=true
    cluster_mysql_servers_save_to_disk=true
    cluster_mysql_users_save_to_disk=true
    cluster_proxysql_servers_save_to_disk=true
    cluster_mysql_query_rules_diffs_before_sync=3
    cluster_mysql_servers_diffs_before_sync=3
    cluster_mysql_users_diffs_before_sync=3
    cluster_proxysql_servers_diffs_before_sync=3
}

mysql_variables=
{
    threads=4
    max_connections=2048
    default_query_delay=0
    default_query_timeout=36000000
    have_compress=true
    poll_timeout=2000
    interfaces="0.0.0.0:6033;/tmp/proxysql.sock"
    default_schema="information_schema"
    stacksize=1048576
    server_version="5.1.30"
    connect_timeout_server=10000
    monitor_history=60000
    monitor_connect_interval=200000
    monitor_ping_interval=200000
    ping_interval_server_msec=10000
    ping_timeout_server=200
    commands_stats=true
    sessions_sort=true
    monitor_username="proxysql"
    monitor_password="proxysqlpassw0rd"
    monitor_galera_healthcheck_interval=2000
    monitor_galera_healthcheck_timeout=800
}

mysql_galera_hostgroups =
(
    {
        writer_hostgroup=10
        backup_writer_hostgroup=20
        reader_hostgroup=30
        offline_hostgroup=9999
        max_writers=1
        writer_is_also_reader=1
        max_transactions_behind=30
        active=1
    }
)

mysql_servers =
(
    { address="192.168.5.191" , port=3306 , hostgroup=10, max_connections=100 },
    { address="192.168.5.192" , port=3306 , hostgroup=10, max_connections=100 },
    { address="192.168.5.193" , port=3306 , hostgroup=10, max_connections=100 }
)

mysql_query_rules =
(
    {
        rule_id=100
        active=1
        match_pattern="^SELECT .* FOR UPDATE"
        destination_hostgroup=10
        apply=1
    },
    {
        rule_id=200
        active=1
        match_pattern="^SELECT .*"
        destination_hostgroup=20
        apply=1
    },
    {
        rule_id=300
        active=1
        match_pattern=".*"
        destination_hostgroup=10
        apply=1
    }
)

mysql_users =
(
    { username = "wordpress", password = "passw0rd", default_hostgroup = 10, transaction_persistent = 0, active = 1 },
    { username = "sbtest", password = "passw0rd", default_hostgroup = 10, transaction_persistent = 0, active = 1 }
)

proxysql_servers =
(
    { hostname = "proxysql-0.proxysqlcluster", port = 6032, weight = 1 },
    { hostname = "proxysql-1.proxysqlcluster", port = 6032, weight = 1 }
)

 

The ConfigMap file is ready; now create the ConfigMap that will be used when deploying ProxySQL as pods.

# kubectl create configmap proxysql-configmap --from-file=proxysql.cnf
configmap/proxysql-configmap created

Verify the ConfigMap:

#  kubectl get configmap
NAME                 DATA   AGE
proxysql-configmap   1      44s
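
To double-check that proxysql.cnf was loaded into the ConfigMap correctly:

# kubectl describe configmap proxysql-configmap
## or dump the full content
# kubectl get configmap proxysql-configmap -o yaml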

Next, create the ProxySQL monitoring user.

Since the Galera Cluster is already running, execute the following on node1 only; the user is replicated to the remaining nodes automatically.

root@node1:~# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 16
Server version: 10.3.31-MariaDB-0ubuntu0.20.04.1 Ubuntu 20.04

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE USER 'proxysql'@'%' IDENTIFIED BY 'proxysqlpassw0rd';
Query OK, 0 rows affected (0.005 sec)

MariaDB [(none)]> GRANT USAGE ON *.* TO 'proxysql'@'%';
Query OK, 0 rows affected (0.004 sec)

Next, add the application accounts (used by web/WAS applications that will connect through ProxySQL) to the Galera Cluster nodes:

MariaDB [mysql]> CREATE USER 'wordpress'@'%' IDENTIFIED BY 'passw0rd';
Query OK, 0 rows affected (0.004 sec)

MariaDB [mysql]> GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'%';
Query OK, 0 rows affected (0.004 sec)

MariaDB [mysql]> CREATE USER 'sbtest'@'%' IDENTIFIED BY 'passw0rd';
Query OK, 0 rows affected (0.005 sec)

MariaDB [mysql]> GRANT ALL PRIVILEGES ON sbtest.* TO 'sbtest'@'%';
Query OK, 0 rows affected (0.005 sec)

MariaDB [mysql]> flush privileges;
Query OK, 0 rows affected (0.004 sec)
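
Optionally, verify that the monitoring account can reach the other Galera nodes over the network (a simple check using the password set above):

# mysql -uproxysql -p'proxysqlpassw0rd' -hnode2 -e "SELECT 1;"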

 

Now everything is ready for the deployment.

Deploying a StatefulSet

We deploy ProxySQL as a StatefulSet with two instances (replicas).
The manifest file is proxysql-ss-svc.yml.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: proxysql
  labels:
    app: proxysql
spec:
  replicas: 2
  serviceName: proxysqlcluster
  selector:
    matchLabels:
      app: proxysql
      tier: frontend
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: proxysql
        tier: frontend
    spec:
      restartPolicy: Always
      containers:
      - image: severalnines/proxysql:2.0.4
        name: proxysql
        volumeMounts:
        - name: proxysql-config
          mountPath: /etc/proxysql.cnf
          subPath: proxysql.cnf
        ports:
        - containerPort: 6033
          name: proxysql-mysql
        - containerPort: 6032
          name: proxysql-admin
      volumes:
      - name: proxysql-config
        configMap:
          name: proxysql-configmap
---
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    app: proxysql
    tier: frontend
  name: proxysql
spec:
  ports:
  - name: proxysql-mysql
    nodePort: 30033
    port: 6033
    protocol: TCP
    targetPort: 6033
  - name: proxysql-admin
    nodePort: 30032
    port: 6032
    protocol: TCP
    targetPort: 6032
  selector:
    app: proxysql
    tier: frontend
  type: NodePort

 

Create the ProxySQL StatefulSet and Service:

#  kubectl create -f proxysql-ss-svc.yml
statefulset.apps/proxysql created
service/proxysql created

Check the pods and the service:

root@node1:~# kubectl get pods,svc
NAME                                 READY   STATUS      RESTARTS   AGE
pod/proxysql-0                       1/1     Running     0          34s
pod/proxysql-1                       1/1     Running     0          26s

NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
service/proxysql                   NodePort    10.233.31.9     <none>        6033:30033/TCP,6032:30032/TCP   34s
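
Optionally, wait until both replicas are ready before moving on:

# kubectl rollout status statefulset/proxysql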

Looking at the logs of the newly created pods, you will see many warnings:

root@node1:~# kubectl logs -f proxysql-0

2021-08-18 02:24:28 ProxySQL_Cluster.cpp:215:ProxySQL_Cluster_Monitor_thread(): [WARNING] Cluster: unable to connect to peer proxysql-0.proxysqlcluster:6032 . Error: Unknown MySQL server host 'proxysql-0.proxysqlcluster' (0)
2021-08-18 02:24:28 ProxySQL_Cluster.cpp:215:ProxySQL_Cluster_Monitor_thread(): [WARNING] Cluster: unable to connect to peer proxysql-1.proxysqlcluster:6032 . Error: Unknown MySQL server host 'proxysql-1.proxysqlcluster' (0)

This means that proxysql-0 cannot resolve the hostnames "proxysql-0.proxysqlcluster" and "proxysql-1.proxysqlcluster";
DNS records are needed for the ProxySQL instances to communicate with each other.

Kubernetes Headless Service

A headless service does not load-balance; it only provides automatic DNS records.
A DNS query against the headless service returns the pod IP addresses directly.
Create proxysql-headless-svc.yml.

apiVersion: v1
kind: Service
metadata:
  name: proxysqlcluster
  labels:
    app: proxysql
spec:
  clusterIP: None
  ports:
  - port: 6032
    name: proxysql-admin
  selector:
    app: proxysql

Create the headless service:

root@node1:~# kubectl create -f proxysql-headless-svc.yml
service/proxysqlcluster created

Check the services:

root@node1:~# kubectl get svc
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
kubernetes                 ClusterIP   10.233.0.1      <none>        443/TCP                         13d
proxysql                   NodePort    10.233.31.9     <none>        6033:30033/TCP,6032:30032/TCP   14m
proxysqlcluster            ClusterIP   None            <none>        6032/TCP                        41s
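
To confirm that the headless service records now resolve from inside a pod (this assumes getent is available in the severalnines/proxysql image):

# kubectl exec -it proxysql-0 -- getent hosts proxysql-1.proxysqlcluster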

Check the proxysql pod logs again:

root@node1:~# kubectl logs -f proxysql-0

2021-08-18 02:35:25 ProxySQL_Cluster.cpp:215:ProxySQL_Cluster_Monitor_thread(): [WARNING] Cluster: unable to connect to peer proxysql-1.proxysqlcluster:6032 . Error: Unknown MySQL server host 'proxysql-1.proxysqlcluster' (0)
2021-08-18 02:35:25 [INFO] Cluster: detected a new checksum for mysql_query_rules from peer proxysql-0.proxysqlcluster:6032, version 1, epoch 1629253306, checksum 0x3FEC69A5C9D96848 . Not syncing yet ...
2021-08-18 02:35:25 [INFO] Cluster: checksum for mysql_query_rules from peer proxysql-0.proxysqlcluster:6032 matches with local checksum 0x3FEC69A5C9D96848 , we won't sync.
2021-08-18 02:35:25 [INFO] Cluster: detected a new checksum for mysql_servers from peer proxysql-0.proxysqlcluster:6032, version 5, epoch 1629253311, checksum 0x9B4C1B8B729A26F5 . Not syncing yet ...

So far the deployment is proceeding well.

Next, connect to ProxySQL.

There are a few ways to connect to the ProxySQL service. From inside the Kubernetes network, clients connect to MySQL through port 6033; if the client is on an external network, it can connect through NodePort 30033.

To reach the ProxySQL admin interface from an external network, connect to NodePort 30032 on one of the node IPs (here 192.168.5.193):

root@node1:~# mysql -uproxysql-admin -p'passw0rd' -h192.168.5.193 -P30032
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.5.30 (ProxySQL Admin Module)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]>

From inside the Kubernetes network, you can connect to the service's cluster IP (10.233.31.9) on port 6032 instead:

root@node1:~# mysql -uproxysql-admin -p'passw0rd' -h10.233.31.9 -P6032
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.5.30 (ProxySQL Admin Module)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]>
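
From the admin interface you can also inspect the configured Galera backends and their health through the standard ProxySQL admin and stats tables, for example:

# mysql -uproxysql-admin -p'passw0rd' -h10.233.31.9 -P6032 -e "SELECT hostgroup_id, hostname, port, status FROM runtime_mysql_servers;"
# mysql -uproxysql-admin -p'passw0rd' -h10.233.31.9 -P6032 -e "SELECT hostgroup, srv_host, status, Queries FROM stats.stats_mysql_connection_pool;"

Next, load the MySQL users into runtime from the admin interface and save them to disk; the change is then synchronized to the other ProxySQL instance automatically, as the proxysql-1 log below shows.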

 

root@node1:~# mysql -uproxysql-admin -p'passw0rd' -h10.233.31.9 -P6032
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.5.30 (ProxySQL Admin Module)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]>  LOAD MYSQL USERS TO RUNTIME;
Query OK, 0 rows affected (0.004 sec)

MySQL [(none)]> SAVE MYSQL USERS TO DISK;
Query OK, 0 rows affected (0.005 sec)

MySQL [(none)]>
root@node1:~# kubectl logs -f proxysql-1

2021-08-18 04:02:19 [INFO] Cluster: detected a peer proxysql-0.proxysqlcluster:6032 with mysql_users version 3, epoch 1629259338, diff_check 4. Own version: 1, epoch: 1629253314. Proceeding with remote sync
2021-08-18 04:02:19 [INFO] Cluster: detected peer proxysql-0.proxysqlcluster:6032 with mysql_users version 3, epoch 1629259338
2021-08-18 04:02:19 [INFO] Cluster: Fetching MySQL Users from peer proxysql-0.proxysqlcluster:6032 started
2021-08-18 04:02:19 [INFO] Cluster: Fetching MySQL Users from peer proxysql-0.proxysqlcluster:6032 completed

 

From outside (a node or another external server), connect on port 30033.
With the setup in place, let's run a connection test (pods inside the cluster connect on port 6033).
Example: deploy Tomcat as a pod and have this WAS connect through ProxySQL to verify that MySQL is reachable.
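
Before the Tomcat test, a quick manual check from outside through the NodePort confirms that ProxySQL accepts the sbtest account created earlier and forwards queries to the Galera backends:

# mysql -usbtest -p'passw0rd' -h192.168.5.193 -P30033 -e "SHOW DATABASES;"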

Tomcat configuration files

server.xml

 <Resource name="jdbc/mytest"
              auth="Container"
              type="javax.sql.DataSource"
              username="sbtest"
              password="passw0rd"
              driverClassName="com.mysql.cj.jdbc.Driver"
              url="jdbc:mysql://proxysql:6033/test1?serverTimezone=UTC"
              maxTotal="100"
              maxIdle="100"
              minIdle="50"
              initialSize="50"
              maxWaitMillis="10000"
              validationQuery="SELECT 1"
              testOnBorrow="true"
              testOnReturn="false"
              testWhileIdle="true" />

context.xml

<ResourceLink global="jdbc/mytest" name="jdbc/mytest" type="javax.sql.DataSource"/>

The MySQL connector JAR goes in the following location:
tomcat/lib/mysql-connector-java-8.0.18.jar

test.jsp - an index page that checks whether the DB connection works

<%@page import="javax.sql.DataSource"%>
<%@page import="javax.naming.InitialContext"%>
<%@page import="javax.naming.Context"%>
<%@page import="java.sql.ResultSet"%>
<%@page import="java.sql.Statement"%>
<%@page import="java.sql.Connection"%>

<html>
<body>

        <%

                Context context = new InitialContext();
                DataSource ds = (DataSource)context.lookup("java:comp/env/jdbc/mytest");


                Connection conn = ds.getConnection();


                if(conn != null){

                        out.println(" CI/CD TEST - MysqlDB Connection Success!!!");

                        conn.close();

                }

        %>
</body>
</html>


As seen in server.xml above, the sbtest account accesses the test1 DB through the proxysql service hostname.
Connect to the Galera Cluster DB and grant the required privileges as follows.

MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO sbtest@'%' IDENTIFIED BY 'passw0rd' WITH GRANT OPTION;

## Dockerfile

FROM tomcat:9-jre8-alpine

COPY server.xml /usr/local/tomcat/conf
COPY context.xml /usr/local/tomcat/conf
COPY test.jsp /usr/local/tomcat/webapps/ROOT


COPY mysql-connector-java-8.0.18.jar /usr/local/tomcat/lib


ENV JAVA_OPTS="-DsvrNo=4"


ENV TZ=Asia/Seoul
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone



EXPOSE 8080
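
The image can then be built and pushed to the registry, for example (the tag below simply mirrors the image reference used in k8s-deployment.yaml; adjust it to your own registry):

# docker build -t github.com/sotl/project/tcmy-nexus .
# docker push github.com/sotl/project/tcmy-nexus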

 

# vim k8s-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcmy-nexus-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tcmy-nexus
  template:
    metadata:
      labels:
        app: tcmy-nexus
    spec:
      containers:
        - name: tcmy-nexus
          image: github.com/sotl/project/tcmy-nexus
          ports:
            - containerPort: 8080
          imagePullPolicy: Always
          env:
            - name: DATE
              value: 'DATE_STRING'
      imagePullSecrets:
        - name: nexus-secret

# vim k8s-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: tcmy-nexus-service
spec:
  ports:
    - name: "8080"
      port: 8082
      targetPort: 8080
  selector:
    app: tcmy-nexus
  type: NodePort

root@node1:~# kubectl get pods
NAME                                     READY   STATUS      RESTARTS   AGE
tcmy-nexus-deployment-6845f75cb4-bf474   1/1     Running     0          60m

root@node1:# kubectl get svc
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
proxysql                   NodePort    10.233.31.9     <none>        6033:30033/TCP,6032:30032/TCP   3h2m
proxysqlcluster            ClusterIP   None            <none>        6032/TCP                        169m
tcmy-nexus-service         NodePort    10.233.46.118   <none>        8082:32171/TCP                  59m
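
The test page can now be called from any node through the NodePort assigned to tcmy-nexus-service (32171 here):

# curl http://192.168.5.191:32171/test.jsp
## expected output: CI/CD TEST - MysqlDB Connection Success!!!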

 

This completes the connection test.

 

 

References

1. Galera Cluster on Ubuntu 20.04

https://computingforgeeks.com/install-mariadb-galera-cluster-on-ubuntu-with-proxysql/

2. ProxySQL on Kubernetes

https://severalnines.com/database-blog/proxysql-native-clustering-kubernetes

 


2.1 ProxySQL on Kubernetes2

https://severalnines.com/database-blog/how-run-and-configure-proxysql-20-mysql-galera-cluster-docker

 


3. Changing the MariaDB root password

https://blog.logger.one/entry/MariaDB-%EC%84%A4%EC%A0%95-%ED%9B%84-%EC%9B%90%EA%B2%A9-%EC%A0%91%EC%86%8D-%EC%84%A4%EC%A0%95%ED%95%98%EA%B8%B0

 
