The environment is, of course, simulated with Oracle VirtualBox.
The shared disk setup is shown below.
For testing purposes, give each VM guest at least two cores,
otherwise your Tibero instance will constantly run into sem wait timeouts.
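Bumping the CPU count is one VBoxManage call per VM. A minimal sketch, assuming the guests are the TAC1/TAC2 VMs used in the storageattach commands below (the memory figure is just my own choice given the 1.5G MEMORY_TARGET per instance, adjust as needed):
VBoxManage modifyvm TAC1 --cpus 2 --memory 4096
VBoxManage modifyvm TAC2 --cpus 2 --memory 4096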
1. shared disk setup on Host
C:\Program Files\Oracle\VirtualBox>VBoxManage createhd --filename "C:\Users\p10303550\VirtualBox VMs\ShareDisks\tas1.vdi" --size 10240 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 8bb399bb-ad0f-4271-bc7e-fce79fde652b
C:\Program Files\Oracle\VirtualBox>VBoxManage createhd --filename "C:\Users\p10303550\VirtualBox VMs\ShareDisks\tas2.vdi" --size 10240 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 31233658-ad37-45d0-a09a-2dfd5451c670
VBoxManage modifyhd "C:\Users\p10303550\VirtualBox VMs\ShareDisks\tas1.vdi" --type shareable |
VBoxManage modifyhd "C:\Users\p10303550\VirtualBox VMs\ShareDisks\tas2.vdi" --type shareable |
VBoxManage storageattach TAC1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium "C:\Users\p10303550\VirtualBox VMs\ShareDisks\tas1.vdi" --mtype shareable |
VBoxManage storageattach TAC1 --storagectl "SATA" --port 2 --device 0 --type hdd --medium "C:\Users\p10303550\VirtualBox VMs\ShareDisks\tas2.vdi" --mtype shareable |
VBoxManage storageattach TAC2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium "C:\Users\p10303550\VirtualBox VMs\ShareDisks\tas1.vdi" --mtype shareable |
VBoxManage storageattach TAC2 --storagectl "SATA" --port 2 --device 0 --type hdd --medium "C:\Users\p10303550\VirtualBox VMs\ShareDisks\tas2.vdi" --mtype shareable |
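Before booting the guests, it is worth a quick check that the disks really ended up shareable and attached to both VMs. VBoxManage list hdds prints the type of every registered medium, and showvminfo shows what each VM has attached (the findstr filter is just the file names used above):
VBoxManage list hdds
VBoxManage showvminfo TAC1 | findstr /i "tas1 tas2"
VBoxManage showvminfo TAC2 | findstr /i "tas1 tas2"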
2. udev setup on Guests |
root@oraogg:~# /lib/udev/scsi_id -g -u -d /dev/sdb |
1ATA_VBOX_HARDDISK_VB8bb399bb-2b65de9f |
root@oraogg:~# /lib/udev/scsi_id -g -u -d /dev/sdc |
1ATA_VBOX_HARDDISK_VB31233658-70c65154 |
The Rules: |
vi /etc/udev/rules.d/99-tiberotasdevices.rules |
KERNEL=="sd?1", SUBSYSTEM=="block",PROGRAM=="/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB8bb399bb-2b65de9f", SYMLINK+="tas-disk1", OWNER="tibero", GROUP="tibero", MODE="0660" |
KERNEL=="sd?1", SUBSYSTEM=="block",PROGRAM=="/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB31233658-70c65154", SYMLINK+="tas-disk2", OWNER="tibero", GROUP="tibero", MODE="0660" |
Testing the udev rules:
udevadm test /block/sdb/sdb1 |
udevadm test /block/sdc/sdc1 |
root@oraogg2:/etc/udev/rules.d# ls -l /dev/tas-disk1 |
lrwxrwxrwx 1 root root 4 Jun 14 16:15 /dev/tas-disk1 -> sdb1 |
root@oraogg2:/etc/udev/rules.d# ls -l /dev/tas-disk2 |
lrwxrwxrwx 1 root root 4 Jun 14 16:16 /dev/tas-disk2 -> sdc1 |
The TAC-on-TAS build steps follow this article:
Como criar um ambiente de Alta Disponibilidade usando TAC
It's in Portuguese...
No problem, open it in Chrome and have it translated into English.
Once you read the English you'll realize the article was translated from English into Portuguese in the first place.
Its figures and the VIP-related part of tbdsn.tbr are slightly wrong:
the VIP should be the IP starting with 192, not the interconnect address starting with 100.
Below is my configuration.
Let's start with node 1.
Its hostname is oraogg;
can't help it, that's the hostname I used
when I applied for the demo license.
The second node is oraogg2.
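For reference, both nodes resolve each other through /etc/hosts. Mine looks roughly like this; the 192.168.56.x addresses are the host-only NICs that show up in the ip addr output near the end, 10.0.0.x is the interconnect, and the *-priv aliases are just labels I added for readability:
192.168.56.110   oraogg
192.168.56.111   oraogg2
10.0.0.1         oraogg-priv
10.0.0.2         oraogg2-priv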
Set up the profile:
tibero@oraogg:~$ cat .bash_profile |
export TB_HOME=/home/tibero/tibero6 |
#export TB_SID=tibero |
export TB_SID=tas1 |
export CM_SID=cm1 |
export CM_HOME=$TB_HOME |
export PATH=$TB_HOME/bin:$TB_HOME/config:$TB_HOME/client/bin:$PATH |
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH |
TAS1 TIP (Storage Server parameters):
DB_NAME=tas |
LISTENER_PORT=9629 |
MAX_SESSION_COUNT = 20 |
MEMORY_TARGET = 1500M |
TOTAL_SHM_SIZE = 1G |
INSTANCE_TYPE = AS
AS_DISKSTRING = "/dev/tas-disk*" |
CLUSTER_DATABASE = Y |
LOCAL_CLUSTER_ADDR = 10.0.0.1 |
LOCAL_CLUSTER_PORT = 20000 |
CM_PORT = 48629 |
THREAD = 0 -- this must start at 0, otherwise creating the diskspace fails
CM1 TIP (Cluster Manager parameters):
CM_NAME = cm1 |
CM_UI_PORT = 48629 |
CM_RESOURCE_FILE = "/home/tibero/tibero6/config/cm1.rsc" |
CM_HEARTBEAT_EXPIRE = 60 |
CM_WATCHDOG_EXPIRE = 55 |
client config: |
tibero@oraogg:~/tibero6/client/config$ cat tbdsn.tbr |
#------------------------------------------------- |
# Appended by gen_tip.sh at Sun 14 Jun 2020 08:16:59 PM CST |
tas1=( |
(INSTANCE=(HOST=10.0.0.1) |
(PORT=9629) |
(DB_NAME=tas) |
) |
) |
Now we're going to create the Storage Server (the equivalent of Oracle ASM):
tbboot nomount |
tibero@oraogg:~/tibero6/config$ tbsql sys/tibero@tas1 |
tbSQL 6 |
TmaxData Corporation Copyright (c) 2008-. All rights reserved. |
Connected to Tibero using tas1. |
SQL> CREATE DISKSPACE ds0 NORMAL REDUNDANCY |
FAILGROUP fg1 DISK '/dev/tas-disk1' NAME disk1 |
FAILGROUP fg2 DISK '/dev/tas-disk2' NAME disk2 |
ATTRIBUTE 'AU_SIZE' = '4M'; 2 3 4 |
Diskspace 'DS0' created. |
When the Cluster Manager brings the Storage Server instance up,
it needs its own dedicated environment file:
tibero@oraogg:~$ cp .bash_profile $TB_HOME/config/tas1.profile |
tibero@oraogg:~/tibero6/config$ vi tas1.profile |
tibero@oraogg:~/tibero6/config$ cat $TB_HOME/config/tas1.profile |
export TB_HOME=/home/tibero/tibero6 |
export TB_SID=tas1 |
export PATH=$TB_HOME/bin:$TB_HOME/config:$TB_HOME/client/bin:$PATH |
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH |
Start the Cluster Manager:
tibero@oraogg:~/tibero6/config$ tbcm -b |
/home/tibero/tibero6/bin/tbcm: 19: [: ==: unexpected operator |
CM Guard daemon started up. |
TBCM 6.1.1 (Build 174424) |
TmaxData Corporation Copyright (c) 2008-. All rights reserved. |
Tibero cluster manager started up. |
Local node name is (cm1:48629). |
Check the configuration:
tibero@oraogg:~/tibero6/config$ tbcm -s |
/home/tibero/tibero6/bin/tbcm: 19: [: ==: unexpected operator |
CM information |
=========================================================== |
CM NAME : cm1 |
CM UI PORT : 48629 |
RESOURCE FILE PATH : /home/tibero/tibero6/config/cm1.rsc |
CM MODE : GUARD ON, FENCE OFF, ROOT OFF |
LOG LEVEL : 2 |
CM BLOCK SIZE : 512 |
=========================================================== |
Add the private interconnect network on this host
and give it a name; the name can't contain special characters such as '-'.
I'm posting the error message along with it:
tibero@oraogg:~/tibero6/config$ cmrctl add network --nettype private --ipaddr 10.0.0.1 --portno 18629 --name priv-net |
[ERROR] invalid character in 'priv-net'. (only use a-z, A-Z, 0-9 and _) |
[ERROR] Invalid argument value for key name: priv-net. `cmrctl help' for more information |
tibero@oraogg:~/tibero6/config$ cmrctl add network --nettype private --ipaddr 10.0.0.1 --portno 18629 --name privnet |
Resource add success! (network, privnet) |
Add the public network on this host:
tibero@oraogg:~/tibero6/config$ cmrctl add network --nettype public --ifname enp0s8 --name pubnet |
Resource add success! (network, pubnet) |
Configure the cluster and start it:
tibero@oraogg:~/tibero6/config$ cmrctl add cluster --incnet privnet --pubnet pubnet --cfile "+/dev/tas-disk*" --name cluster |
Resource add success! (cluster, cluster) |
tibero@oraogg:~/tibero6/config$ cmrctl start cluster --name cluster |
MSG SENDING SUCCESS |
Configure the storage cluster service and start the instance on this host:
tibero@oraogg:~/tibero6/config$ cmrctl add service --name tas --type as --cname cluster |
Resource add success! (service, tas) |
tibero@oraogg:~/tibero6/config$ cmrctl add as --name tas1 --svcname tas --dbhome $CM_HOME --envfile $TB_HOME/config/tas1.profile |
Resource add success! (as, tas1) |
tibero@oraogg:~/tibero6/config$ cmrctl start as --name tas1 |
Listener port = 9629 |
Tibero 6 |
TmaxData Corporation Copyright (c) 2008-. All rights reserved. |
Tibero instance started up (NORMAL mode). |
BOOT SUCCESS! (MODE : NORMAL) |
- Connect to the running instance and add the THREAD to start the TAS on node 2: |
tibero@oraogg:~/tibero6/config$ tbsql sys/tibero@tas1 |
tbSQL 6 |
TmaxData Corporation Copyright (c) 2008-. All rights reserved. |
Connected to Tibero using tas1. |
SQL> ALTER DISKSPACE ds0 ADD THREAD 1; |
Diskspace altered. |
- Add the TAC service feature: |
tibero@oraogg:~/tibero6/config$ cmrctl add service --name tac --cname cluster |
Resource add success! (service, tac) |
Next up is the DB cluster.
Just copy the storage server's profile and tweak it:
cp tas1.profile tac1.profile |
tibero@oraogg:~/tibero6/config$ cat tac1.profile |
export TB_HOME=/home/tibero/tibero6 |
export TB_SID=tac1 |
export PATH=$TB_HOME/bin:$TB_HOME/config:$TB_HOME/client/bin:$PATH |
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH |
tibero@oraogg:~/tibero6/config$ cat tac1.tip |
DB_NAME = tac |
LISTENER_PORT = 8629 |
CONTROL_FILES = "+DS0/tac/c1.ctl" |
DB_CREATE_FILE_DEST = "+DS0/tac" |
LOG_ARCHIVE_DEST = "+DS0/tac/archive"
MAX_SESSION_COUNT = 20 |
TOTAL_SHM_SIZE = 1G |
MEMORY_TARGET = 1500M |
USE_ACTIVE_STORAGE = Y |
AS_PORT = 9629 |
LOCAL_CLUSTER_ADDR = 10.0.0.1 |
CM_PORT = 48629 |
LOCAL_CLUSTER_PORT = 21000 |
CLUSTER_DATABASE = Y |
THREAD = 0 |
UNDO_TABLESPACE = UNDO0 |
- Add the Database feature: |
cmrctl add db --name tac1 --svcname tac --dbhome $CM_HOME --envfile $TB_HOME/config/tac1.profile |
tibero@oraogg:~/tibero6/config$ cmrctl add db --name tac1 --svcname tac --dbhome $CM_HOME --envfile $TB_HOME/config/tac1.profile |
Resource add success! (db, tac1) |
- Start the Database instance in NOMOUNT mode: |
cmrctl start db --name tac1 --option "-t nomount" |
tibero@oraogg:~/tibero6/config$ cmrctl start db --name tac1 --option "-t nomount" |
Listener port = 8629 |
Tibero 6 |
TmaxData Corporation Copyright (c) 2008-. All rights reserved. |
Tibero instance started up (NOMOUNT mode). |
BOOT SUCCESS! (MODE : NOMOUNT) |
Copy the tas1 tbdsn entry and add one for tac1:
tibero@oraogg:~/tibero6/client/config$ cat tbdsn.tbr |
#------------------------------------------------- |
# Appended by gen_tip.sh at Sun 14 Jun 2020 08:16:59 PM CST |
tas1=( |
(INSTANCE=(HOST=10.0.0.1) |
(PORT=9629) |
(DB_NAME=tas) |
) |
) |
tac1=( |
(INSTANCE=(HOST=10.0.0.1) |
(PORT=8629) |
(DB_NAME=tac) |
) |
) |
Alright..... let's create the database.
tibero@oraogg:~/tibero6/client/config$ tbsql sys/tibero@tac1 |
tbSQL 6 |
TmaxData Corporation Copyright (c) 2008-. All rights reserved. |
Connected to Tibero using tac1. |
SQL> CREATE DATABASE "tac" |
USER sys IDENTIFIED BY tibero |
MAXINSTANCES 8 |
MAXDATAFILES 256 |
CHARACTER SET ZHT16MSWIN950 |
national character set UTF16 |
LOGFILE |
GROUP 0 '+DS0/tac/log001.log' SIZE 100M, |
GROUP 1 '+DS0/tac/log011.log' SIZE 100M, |
GROUP 2 '+DS0/tac/log021.log' SIZE 100M |
MAXLOGFILES 255 |
MAXLOGMEMBERS 8 |
NOARCHIVELOG |
DATAFILE '+DS0/tac/system001.dtf' SIZE 100M AUTOEXTEND ON NEXT 5M MAXSIZE 500M |
DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE '+DS0/tac/temp001.dtf' SIZE 100M AUTOEXTEND ON NEXT 5M MAXSIZE 500M EXTENT MANAGEMENT LOCAL AUTOALLOCATE |
UNDO TABLESPACE UNDO0 DATAFILE '+DS0/tac/undo001.dtf' SIZE 100M AUTOEXTEND ON NEXT 5M MAXSIZE 500M EXTENT MANAGEMENT LOCAL AUTOALLOCATE; 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 |
Database created. |
- Start the Database instance in NORMAL mode:
tibero@oraogg:~/tibero6/client/config$ cmrctl start db --name tac1
Listener port = 8629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)
- Create UNDO TABLESPACE for the second TAC node:
tibero@oraogg:~/tibero6/client/config$ tbsql sys/tibero@tac1
tbSQL 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Connected to Tibero using tac1.
SQL> CREATE UNDO TABLESPACE UNDO1 DATAFILE '+DS0/tac/undo002.dtf' SIZE 100M AUTOEXTEND ON NEXT 5M MAXSIZE 500M EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
Tablespace 'UNDO1' created.
- Add REDO LOGS to the second TAC node:
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3 '+DS0/tac/log031.log' size 100M;
Database altered.
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 '+DS0/tac/log041.log' size 100M;
Database altered.
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 '+DS0/tac/log051.log' size 100M;
Database altered.
- Add the public THREAD to the second TAC node:
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 1;
Database altered.
- Create the system data dictionary: |
tibero@oraogg:~/tibero6/config$ . ./tac1.profile |
tibero@oraogg:~/tibero6/config$ sh $TB_HOME/scripts/system.sh -p1 tibero -p2 syscat -a1 Y -a2 Y -a3 Y -a4 Y |
Dropping agent table... |
Creating text packages table ... |
Creating the role DBA... |
Creating system users & roles... |
Creating example users... |
Creating virtual tables(1)... |
Creating virtual tables(2)... |
Granting public access to _VT_DUAL... |
Creating the system generated sequences... |
Creating internal dynamic performance views... |
Creating outline table... |
Creating system tables related to dbms_job... |
Creating system tables related to dbms_lock... |
Creating system tables related to scheduler... |
Creating system tables related to server_alert... |
Creating system tables related to tpm... |
Creating system tables related to tsn and timestamp... |
Creating system tables related to rsrc... |
Creating system tables related to workspacemanager... |
Creating system tables related to statistics... |
. |
. |
Create tudi interface |
Running /home/tibero/tibero6/scripts/odci.sql... |
Creating spatial meta tables and views ... |
Creating internal system jobs... |
Creating Japanese Lexer epa source ... |
Creating internal system notice queue ... |
Creating sql translator profiles ... |
Creating agent table... |
Done. |
For details, check /home/tibero/tibero6/instance/tac1/log/system_init.log. |
Next we move to the second machine, oraogg2, and set it up.
The parameters and config files are copied over from the first node and tweaked slightly:
tibero@oraogg2:~$ cat .bash_profile |
export TB_HOME=/home/tibero/tibero6 |
#export TB_SID=tibero |
export TB_SID=tas2 |
export CM_SID=cm2 |
export CM_HOME=$TB_HOME |
export PATH=$TB_HOME/bin:$TB_HOME/config:$TB_HOME/client/bin:$PATH |
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH |
tibero@oraogg2:~/tibero6/config$ scp oraogg:$TB_HOME/config/tas1.tip tas2.tip |
Warning: the ECDSA host key for 'oraogg' differs from the key for the IP address '192.168.56.110' |
Offending key for IP in /home/tibero/.ssh/known_hosts:1 |
Matching host key in /home/tibero/.ssh/known_hosts:2 |
Are you sure you want to continue connecting (yes/no)? yes |
tibero@oraogg's password: |
tas1.tip 100% 554 789.0KB/s 00:00 |
tibero@oraogg2:~/tibero6/config$ cat tas2.tip |
# tip file generated from /home/tibero/tibero6/config/tip.template (Sun 14 Jun 2020 08:16:59 PM CST) |
#------------------------------------------------------------------------------- |
# |
# RDBMS initialization parameter |
# |
#------------------------------------------------------------------------------- |
DB_NAME=tas |
LISTENER_PORT=9629 |
MAX_SESSION_COUNT = 20 |
MEMORY_TARGET = 1500M |
TOTAL_SHM_SIZE = 1G |
INSTANCE_TYPE = AS |
AS_DISKSTRING = "/dev/tas*" |
CLUSTER_DATABASE = Y |
LOCAL_CLUSTER_ADDR = 10.0.0.2 |
LOCAL_CLUSTER_PORT = 20000 |
CM_PORT = 48629 |
THREAD = 1 |
tibero@oraogg2:~/tibero6/config$ scp oraogg:$TB_HOME/config/cm1.tip cm2.tip |
Warning: the ECDSA host key for 'oraogg' differs from the key for the IP address '192.168.56.110' |
Offending key for IP in /home/tibero/.ssh/known_hosts:1 |
Matching host key in /home/tibero/.ssh/known_hosts:2 |
Are you sure you want to continue connecting (yes/no)? yes |
tibero@oraogg's password: |
cm1.tip 100% 139 155.6KB/s 00:00 |
tibero@oraogg2:~/tibero6/config$ scp oraogg:$TB_HOME/config/tas1.profile tas2.profile |
Warning: the ECDSA host key for 'oraogg' differs from the key for the IP address '192.168.56.110' |
Offending key for IP in /home/tibero/.ssh/known_hosts:1 |
Matching host key in /home/tibero/.ssh/known_hosts:2 |
Are you sure you want to continue connecting (yes/no)? yes |
tibero@oraogg's password: |
tas1.profile 100% 195 298.3KB/s 00:00 |
tibero@oraogg2:~/tibero6/config$ scp oraogg:$TB_HOME/config/tac1.profile tac2.profile |
Warning: the ECDSA host key for 'oraogg' differs from the key for the IP address '192.168.56.110' |
Offending key for IP in /home/tibero/.ssh/known_hosts:1 |
Matching host key in /home/tibero/.ssh/known_hosts:2 |
Are you sure you want to continue connecting (yes/no)? yes |
tibero@oraogg's password: |
Permission denied, please try again. |
tibero@oraogg's password: |
tac1.profile 100% 195 259.8KB/s 00:00 |
tibero@oraogg2:~/tibero6/config$ scp oraogg:$TB_HOME/config/tac1.tip tac2.tip |
Warning: the ECDSA host key for 'oraogg' differs from the key for the IP address '192.168.56.110' |
Offending key for IP in /home/tibero/.ssh/known_hosts:1 |
Matching host key in /home/tibero/.ssh/known_hosts:2 |
Are you sure you want to continue connecting (yes/no)? yes |
tibero@oraogg's password: |
Permission denied, please try again. |
tibero@oraogg's password: |
tac1.tip |
tibero@oraogg2:~/tibero6/config$ cat cm2.tip |
CM_NAME = cm2 |
CM_UI_PORT = 48629 |
CM_RESOURCE_FILE = "/home/tibero/tibero6/config/cm2.rsc" |
CM_HEARTBEAT_EXPIRE = 60 |
CM_WATCHDOG_EXPIRE = 55 |
tibero@oraogg2:~/tibero6/client/config$ scp oraogg:$TB_HOME/client/config/tbdsn.tbr tbdsn.tbr |
Warning: the ECDSA host key for 'oraogg' differs from the key for the IP address '192.168.56.110' |
Offending key for IP in /home/tibero/.ssh/known_hosts:1 |
Matching host key in /home/tibero/.ssh/known_hosts:2 |
Are you sure you want to continue connecting (yes/no)? yes |
tibero@oraogg's password: |
tbdsn.tbr 100% 310 32.5KB/s 00:00 |
tibero@oraogg2:~/tibero6/client/config$ cat tbdsn.tbr |
#------------------------------------------------- |
# Appended by gen_tip.sh at Sun 14 Jun 2020 08:16:59 PM CST |
tas2=( |
(INSTANCE=(HOST=10.0.0.2) |
(PORT=9629) |
(DB_NAME=tas) |
) |
) |
tac2=( |
(INSTANCE=(HOST=10.0.0.2) |
(PORT=8629) |
(DB_NAME=tac) |
) |
) |
tibero@oraogg2:~/tibero6/config$ cat tas2.profile |
export TB_HOME=/home/tibero/tibero6 |
export TB_SID=tas2 |
export PATH=$TB_HOME/bin:$TB_HOME/config:$TB_HOME/client/bin:$PATH |
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH |
I won't walk through the rest in detail; it's pretty much the same as the setup on the first node.
tibero@oraogg2:~/tibero6/config$ tbcm -b |
/home/tibero/tibero6/bin/tbcm: 19: [: ==: unexpected operator |
CM Guard daemon started up. |
TBCM 6.1.1 (Build 174424) |
TmaxData Corporation Copyright (c) 2008-. All rights reserved. |
Tibero cluster manager started up. |
Local node name is (cm2:48629). |
tibero@oraogg2:~/tibero6/config$ tbcm -s |
/home/tibero/tibero6/bin/tbcm: 19: [: ==: unexpected operator |
CM information |
=========================================================== |
CM NAME : cm2 |
CM UI PORT : 48629 |
RESOURCE FILE PATH : /home/tibero/tibero6/config/cm2.rsc |
CM MODE : GUARD ON, FENCE OFF, ROOT OFF |
LOG LEVEL : 2 |
CM BLOCK SIZE : 512 |
=========================================================== |
tibero@oraogg2:~/tibero6/config$ cmrctl add network --nettype private --ipaddr 10.0.0.2 --portno 18629 --name privnet |
Resource add success! (network, privnet) |
tibero@oraogg2:~/tibero6/config$ cmrctl add network --nettype public --ifname enp0s8 --name pubnet |
Resource add success! (network, pubnet) |
tibero@oraogg2:~/tibero6/config$ cmrctl add cluster --incnet privnet --pubnet pubnet --cfile "+/dev/tas-disk*" --name cluster |
Resource add success! (cluster, cluster) |
tibero@oraogg2:~/tibero6/config$ cmrctl start cluster --name cluster |
MSG SENDING SUCCESS! |
tibero@oraogg2:~/tibero6/config$ cmrctl add as --name tas2 --svcname tas --dbhome $CM_HOME --envfile $TB_HOME/config/tas2.profile |
Resource add success! (as, tas2) |
tibero@oraogg2:~/tibero6/config$ cmrctl start as --name tas2 |
Listener port = 9629 |
Tibero 6 |
TmaxData Corporation Copyright (c) 2008-. All rights reserved. |
Tibero instance started up (NORMAL mode). |
BOOT SUCCESS! (MODE : NORMAL) |
tibero@oraogg2:~/tibero6/config$ cat tac2.profile |
export TB_HOME=/home/tibero/tibero6 |
export TB_SID=tac2 |
export PATH=$TB_HOME/bin:$TB_HOME/config:$TB_HOME/client/bin:$PATH |
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH |
tibero@oraogg2:~/tibero6/config$ cat tac2.tip |
DB_NAME = tac |
LISTENER_PORT = 8629 |
CONTROL_FILES = "+DS0/tac/c1.ctl" |
DB_CREATE_FILE_DEST = "+DS0/tac" |
LOG_ARCHIVE_DEST = "+DS0/tac/archive" |
MAX_SESSION_COUNT = 20 |
TOTAL_SHM_SIZE = 1G |
MEMORY_TARGET = 1500M |
USE_ACTIVE_STORAGE = Y |
AS_PORT = 9629 |
LOCAL_CLUSTER_ADDR = 10.0.0.2 |
CM_PORT = 48629 |
LOCAL_CLUSTER_PORT = 21000 |
CLUSTER_DATABASE = Y |
THREAD = 1 |
UNDO_TABLESPACE = UNDO1 |
tibero@oraogg2:~/tibero6/config$ cmrctl add db --name tac2 --svcname tac --dbhome $CM_HOME --envfile $TB_HOME/config/tac2.profile |
Resource add success! (db, tac2) |
- Start the Database instance: |
tibero@oraogg2:~/tibero6/config$ cmrctl start db --name tac2 |
Listener port = 8629 |
Tibero 6 |
TmaxData Corporation Copyright (c) 2008-. All rights reserved. |
Tibero instance started up (NORMAL mode). |
BOOT SUCCESS! (MODE : NORMAL) |
tibero@oraogg:~/tibero6/config$ cmrctl show all |
Resource List of Node cm1 |
===================================================================== |
CLUSTER TYPE NAME STATUS DETAIL |
----------- -------- -------------- -------- ------------------------ |
COMMON network privnet UP (private) 10.0.0.1/18629 |
COMMON network pubnet UP (public) enp0s8 |
COMMON cluster cluster UP inc: privnet, pub: pubnet |
cluster file cluster:0 UP +0 |
cluster file cluster:1 UP +1 |
cluster file cluster:2 UP +2 |
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF) |
cluster service tac UP Database, Active Cluster (auto-restart: OFF) |
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0 |
cluster db tac1 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0 |
===================================================================== |
Resource List of Node cm2 |
===================================================================== |
CLUSTER TYPE NAME STATUS DETAIL |
----------- -------- -------------- -------- ------------------------ |
COMMON network privnet UP (private) 10.0.0.2/18629 |
COMMON network pubnet UP (public) enp0s8 |
COMMON cluster cluster UP inc: privnet, pub: pubnet |
cluster file cluster:0 UP +0 |
cluster file cluster:1 UP +1 |
cluster file cluster:2 UP +2 |
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF) |
cluster service tac UP Database, Active Cluster (auto-restart: OFF) |
cluster as tas2 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0 |
cluster db tac2 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0 |
===================================================================== |
The last thing I want to cover is the VIP.
It really does fail over,
but the session relies on a reconnect mechanism
to make the connection look uninterrupted.
Honestly, for the vast majority of systems this kind of HA is all you need.
Oracle RAC is certainly powerful,
but the actual requirement usually calls for maybe one part in ten of that functionality,
and the development cost of the other nine gets passed straight on to the customers.
No wonder everyone who uses Oracle complains it's too expensive. A sketch of the client-side DSN entry I use to ride the VIPs is below.
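For the client to follow the VIPs, I point a tbdsn alias at both VIP addresses instead of at the node addresses. This is only a sketch in the same format as the entries shown earlier; the USE_FAILOVER keyword is how I remember the Tibero client docs describing it, so double-check it against your version:
tac=(
    (INSTANCE=(HOST=192.168.56.12)
        (PORT=8629)
        (DB_NAME=tac)
    )
    (INSTANCE=(HOST=192.168.56.13)
        (PORT=8629)
        (DB_NAME=tac)
    )
    (USE_FAILOVER=Y)
)
This is the tac alias the failover test further down connects to.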
Setting the VIP up requires root privileges.
root@oraogg:~# . /home/tibero/.bash_profile |
root@oraogg:~# tbcm -b |
/home/tibero/tibero6/bin/tbcm: 19: [: ==: unexpected operator |
CM Guard daemon started up. |
import resources from '/home/tibero/tibero6/config/cm1.rsc'... |
TBCM 6.1.1 (Build 174424) |
TmaxData Corporation Copyright (c) 2008-. All rights reserved. |
Tibero cluster manager started up. |
Local node name is (cm1:48629). |
root@oraogg:~# tbcm -s |
/home/tibero/tibero6/bin/tbcm: 19: [: ==: unexpected operator |
CM information |
=========================================================== |
CM NAME : cm1 |
CM UI PORT : 48629 |
RESOURCE FILE PATH : /home/tibero/tibero6/config/cm1.rsc |
CM MODE : GUARD ON, FENCE OFF, ROOT ON |
LOG LEVEL : 2 |
CM BLOCK SIZE : 512 |
=========================================================== |
root@oraogg:~# cmrctl show cluster --name cluster |
Cluster Resource Info |
=============================================================== |
Cluster name : cluster |
Status : UP (ROOT) |
Master node : (1) cm1 |
Last NID : 1 |
Local node : (1) cm1 |
Storage type : Active Storage |
AS Diskstring : /dev/tas-disk* |
No. of cls files : 3 |
(1) +0 |
(2) +1 |
(3) +2 |
=============================================================== |
| NODE LIST | |
|-------------------------------------------------------------| |
| NID Name IP/PORT Status Schd Mst FHB NHB | |
| --- -------- -------------------- ------ ---- --- ---- ---- | |
| 1 cm1 10.0.0.1/18629 UP Y R M [ LOCAL ] | |
=============================================================== |
| CLUSTER RESOURCE STATUS | |
|-------------------------------------------------------------| |
| NAME TYPE STATUS NODE MISC. | |
| ---------------- -------- -------- -------- --------------- | |
| SERVICE: tas | |
| tas1 AS DOWN cm1 | |
| SERVICE: tac | |
| tac1 DB DOWN cm1 | |
=============================================================== |
root@oraogg:~# cmrctl start service --name tas |
=================================== SUCCESS! =================================== |
Succeeded to request at each node to boot resources under the service(tas). |
Please use "cmrctl show service --name tas" to verify the result. |
================================================================================ |
root@oraogg:~# Listener port = 9629 |
Tibero 6 |
TmaxData Corporation Copyright (c) 2008-. All rights reserved. |
Tibero instance started up (NORMAL mode). |
root@oraogg:~# cmrctl show service --name tas |
Service Resource Info |
================================================= |
Service name : tas |
Service type : Active Storage |
Service mode : Active Cluster |
Cluster : cluster |
Inst. Auto Start: OFF |
Interrupt Status: COMMITTED |
Incarnation No. : 1 / 1 (CUR / COMMIT) |
================================================= |
| INSTANCE LIST | |
|-----------------------------------------------| |
| NID NAME Status Intr Stat ACK No. Sched | |
| --- -------- -------- --------- ------- ----- | |
| 1 cm1 UP(NRML) COMMITTED 1 Y | |
================================================= |
root@oraogg:~# cmrctl show all |
Resource List of Node cm1 |
===================================================================== |
CLUSTER TYPE NAME STATUS DETAIL |
----------- -------- -------------- -------- ------------------------ |
COMMON network privnet UP (private) 10.0.0.1/18629 |
COMMON network pubnet UP (public) enp0s8 |
COMMON cluster cluster UP inc: privnet, pub: pubnet |
cluster file cluster:0 UP +0 |
cluster file cluster:1 UP +1 |
cluster file cluster:2 UP +2 |
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF) |
cluster service tac DOWN Database, Active Cluster (auto-restart: OFF) |
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0 |
cluster db tac1 DOWN tac, /home/tibero/tibero6, failed retry cnt: 0 |
===================================================================== |
root@oraogg:~# cmrctl start service --name tac |
=================================== SUCCESS! =================================== |
Succeeded to request at each node to boot resources under the service(tac). |
Please use "cmrctl show service --name tac" to verify the result. |
================================================================================ |
root@oraogg:~# Listener port = 8629 |
Tibero 6 |
TmaxData Corporation Copyright (c) 2008-. All rights reserved. |
Tibero instance started up (NORMAL mode). |
root@oraogg:~# cmrctl show service --name tac |
Service Resource Info |
================================================= |
Service name : tac |
Service type : Database |
Service mode : Active Cluster |
Cluster : cluster |
Inst. Auto Start: OFF |
Interrupt Status: COMMITTED |
Incarnation No. : 1 / 1 (CUR / COMMIT) |
================================================= |
| INSTANCE LIST | |
|-----------------------------------------------| |
| NID NAME Status Intr Stat ACK No. Sched | |
| --- -------- -------- --------- ------- ----- | |
| 1 cm1 UP(NRML) COMMITTED 1 Y | |
================================================= |
root@oraogg:~# cmrctl show all |
Resource List of Node cm1 |
===================================================================== |
CLUSTER TYPE NAME STATUS DETAIL |
----------- -------- -------------- -------- ------------------------ |
COMMON network privnet UP (private) 10.0.0.1/18629 |
COMMON network pubnet UP (public) enp0s8 |
COMMON cluster cluster UP inc: privnet, pub: pubnet |
cluster file cluster:0 UP +0 |
cluster file cluster:1 UP +1 |
cluster file cluster:2 UP +2 |
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF) |
cluster service tac UP Database, Active Cluster (auto-restart: OFF) |
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0 |
cluster db tac1 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0 |
===================================================================== |
root@oraogg:~# cmrctl add vip --name vip1 --node cm0 --svcname tac --ipaddr 192.168.56.12/255.255.255.0 |
[CAUTION] No node with name (cm0) |
Resource add success! (vip, vip1) |
root@oraogg:~# cmrctl del vip --name vip1 --node cm0 --svcname tac --ipaddr 192.168.56.12/255.255.255.0 |
MSG SENDING SUCCESS! |
root@oraogg:~# cmrctl add vip --name vip1 --node cm1 --svcname tac --ipaddr 192.168.56.12/255.255.255.0 |
Resource add success! (vip, vip1) |
root@oraogg:~# cmrctl show all |
Resource List of Node cm1 |
===================================================================== |
CLUSTER TYPE NAME STATUS DETAIL |
----------- -------- -------------- -------- ------------------------ |
COMMON network privnet UP (private) 10.0.0.1/18629 |
COMMON network pubnet UP (public) enp0s8 |
COMMON cluster cluster UP inc: privnet, pub: pubnet |
cluster file cluster:0 UP +0 |
cluster file cluster:1 UP +1 |
cluster file cluster:2 UP +2 |
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF) |
cluster service tac UP Database, Active Cluster (auto-restart: OFF) |
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0 |
cluster db tac1 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0 |
cluster vip vip1 UP tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1) |
failed retry cnt: 0 |
===================================================================== |
VM2 : |
root@oraogg2:~# . /home/tibero/.bash_profile |
root@oraogg2:~# tbcm -b |
/home/tibero/tibero6/bin/tbcm: 19: [: ==: unexpected operator |
CM Guard daemon started up. |
import resources from '/home/tibero/tibero6/config/cm2.rsc'... |
TBCM 6.1.1 (Build 174424) |
TmaxData Corporation Copyright (c) 2008-. All rights reserved. |
Tibero cluster manager started up. |
Local node name is (cm2:48629). |
root@oraogg2:~# tbcm -s |
/home/tibero/tibero6/bin/tbcm: 19: [: ==: unexpected operator |
CM information |
=========================================================== |
CM NAME : cm2 |
CM UI PORT : 48629 |
RESOURCE FILE PATH : /home/tibero/tibero6/config/cm2.rsc |
CM MODE : GUARD ON, FENCE OFF, ROOT ON |
LOG LEVEL : 2 |
CM BLOCK SIZE : 512 |
=========================================================== |
root@oraogg2:~# cmrctl show all |
Resource List of Node cm1 |
===================================================================== |
CLUSTER TYPE NAME STATUS DETAIL |
----------- -------- -------------- -------- ------------------------ |
COMMON network privnet UP (private) 10.0.0.1/18629 |
COMMON network pubnet UP (public) enp0s8 |
COMMON cluster cluster UP inc: privnet, pub: pubnet |
cluster file cluster:0 UP +0 |
cluster file cluster:1 UP +1 |
cluster file cluster:2 UP +2 |
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF) |
cluster service tac UP Database, Active Cluster (auto-restart: OFF) |
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0 |
cluster db tac1 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0 |
cluster vip vip1 UP tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1) |
failed retry cnt: 0 |
===================================================================== |
Resource List of Node cm2 |
===================================================================== |
CLUSTER TYPE NAME STATUS DETAIL |
----------- -------- -------------- -------- ------------------------ |
COMMON network privnet UP (private) 10.0.0.2/18629 |
COMMON network pubnet UP (public) enp0s8 |
COMMON cluster cluster UP inc: privnet, pub: pubnet |
cluster file cluster:0 UP +0 |
cluster file cluster:1 UP +1 |
cluster file cluster:2 UP +2 |
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF) |
cluster service tac UP Database, Active Cluster (auto-restart: OFF) |
cluster as tas2 DOWN tas, /home/tibero/tibero6, failed retry cnt: 0 |
cluster db tac2 DOWN tac, /home/tibero/tibero6, failed retry cnt: 0 |
cluster vip vip1 UP(R) tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1) |
failed retry cnt: 0 |
===================================================================== |
From this point on, the article says to exit root and run the commands as the tibero user. Odd!!
tibero@oraogg2:~/tibero6/client/config$ cmrctl start as --name tas2 |
Listener port = 9629 |
Tibero 6 |
TmaxData Corporation Copyright (c) 2008-. All rights reserved. |
Tibero instance started up (NORMAL mode). |
BOOT SUCCESS! (MODE : NORMAL) |
tibero@oraogg2:~/tibero6/client/config$ cmrctl show all |
Resource List of Node cm1 |
===================================================================== |
CLUSTER TYPE NAME STATUS DETAIL |
----------- -------- -------------- -------- ------------------------ |
COMMON network privnet UP (private) 10.0.0.1/18629 |
COMMON network pubnet UP (public) enp0s8 |
COMMON cluster cluster UP inc: privnet, pub: pubnet |
cluster file cluster:0 UP +0 |
cluster file cluster:1 UP +1 |
cluster file cluster:2 UP +2 |
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF) |
cluster service tac UP Database, Active Cluster (auto-restart: OFF) |
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0 |
cluster db tac1 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0 |
cluster vip vip1 UP tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1) |
failed retry cnt: 0 |
===================================================================== |
Resource List of Node cm2 |
===================================================================== |
CLUSTER TYPE NAME STATUS DETAIL |
----------- -------- -------------- -------- ------------------------ |
COMMON network privnet UP (private) 10.0.0.2/18629 |
COMMON network pubnet UP (public) enp0s8 |
COMMON cluster cluster UP inc: privnet, pub: pubnet |
cluster file cluster:0 UP +0 |
cluster file cluster:1 UP +1 |
cluster file cluster:2 UP +2 |
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF) |
cluster service tac UP Database, Active Cluster (auto-restart: OFF) |
cluster as tas2 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0 |
cluster db tac2 DOWN tac, /home/tibero/tibero6, failed retry cnt: 0 |
cluster vip vip1 UP(R) tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1) |
failed retry cnt: 0 |
===================================================================== |
tibero@oraogg2:~/tibero6/client/config$ cmrctl start db --name tac2 |
Listener port = 8629 |
Tibero 6 |
TmaxData Corporation Copyright (c) 2008-. All rights reserved. |
Tibero instance started up (NORMAL mode). |
BOOT SUCCESS! (MODE : NORMAL) |
tibero@oraogg2:~/tibero6/client/config$ cmrctl show all |
Resource List of Node cm1 |
===================================================================== |
CLUSTER TYPE NAME STATUS DETAIL |
----------- -------- -------------- -------- ------------------------ |
COMMON network privnet UP (private) 10.0.0.1/18629 |
COMMON network pubnet UP (public) enp0s8 |
COMMON cluster cluster UP inc: privnet, pub: pubnet |
cluster file cluster:0 UP +0 |
cluster file cluster:1 UP +1 |
cluster file cluster:2 UP +2 |
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF) |
cluster service tac UP Database, Active Cluster (auto-restart: OFF) |
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0 |
cluster db tac1 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0 |
cluster vip vip1 UP tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1) |
failed retry cnt: 0 |
===================================================================== |
Resource List of Node cm2 |
===================================================================== |
CLUSTER TYPE NAME STATUS DETAIL |
----------- -------- -------------- -------- ------------------------ |
COMMON network privnet UP (private) 10.0.0.2/18629 |
COMMON network pubnet UP (public) enp0s8 |
COMMON cluster cluster UP inc: privnet, pub: pubnet |
cluster file cluster:0 UP +0 |
cluster file cluster:1 UP +1 |
cluster file cluster:2 UP +2 |
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF) |
cluster service tac UP Database, Active Cluster (auto-restart: OFF) |
cluster as tas2 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0 |
cluster db tac2 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0 |
cluster vip vip1 UP(R) tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1) |
failed retry cnt: 0 |
===================================================================== |
cmrctl add vip --name vip2 --node cm2 --svcname tac --ipaddr 192.168.56.13/255.255.255.0 |
Resource add success! (vip, vip2) |
tibero@oraogg2:~/tibero6/client/config$ cmrctl show all |
Resource List of Node cm1 |
===================================================================== |
CLUSTER TYPE NAME STATUS DETAIL |
----------- -------- -------------- -------- ------------------------ |
COMMON network privnet UP (private) 10.0.0.1/18629 |
COMMON network pubnet UP (public) enp0s8 |
COMMON cluster cluster UP inc: privnet, pub: pubnet |
cluster file cluster:0 UP +0 |
cluster file cluster:1 UP +1 |
cluster file cluster:2 UP +2 |
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF) |
cluster service tac UP Database, Active Cluster (auto-restart: OFF) |
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0 |
cluster db tac1 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0 |
cluster vip vip1 UP tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1) |
failed retry cnt: 0 |
cluster vip vip2 UP(R) tac, 192.168.56.13/255.255.255.0/192.168.56.255 (2) |
failed retry cnt: 0 |
===================================================================== |
Resource List of Node cm2 |
===================================================================== |
CLUSTER TYPE NAME STATUS DETAIL |
----------- -------- -------------- -------- ------------------------ |
COMMON network privnet UP (private) 10.0.0.2/18629 |
COMMON network pubnet UP (public) enp0s8 |
COMMON cluster cluster UP inc: privnet, pub: pubnet |
cluster file cluster:0 UP +0 |
cluster file cluster:1 UP +1 |
cluster file cluster:2 UP +2 |
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF) |
cluster service tac UP Database, Active Cluster (auto-restart: OFF) |
cluster as tas2 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0 |
cluster db tac2 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0 |
cluster vip vip1 UP(R) tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1) |
failed retry cnt: 0 |
cluster vip vip2 UP tac, 192.168.56.13/255.255.255.0/192.168.56.255 (2) |
failed retry cnt: 0 |
===================================================================== |
tibero@oraogg:~/tibero6/client/config$ cmrctl show cluster --name cluster |
Cluster Resource Info |
=============================================================== |
Cluster name : cluster |
Status : UP (ROOT) |
Master node : (1) cm1 |
Last NID : 2 |
Local node : (1) cm1 |
Storage type : Active Storage |
AS Diskstring : /dev/tas-disk* |
No. of cls files : 3 |
(1) +0 |
(2) +1 |
(3) +2 |
=============================================================== |
| NODE LIST | |
|-------------------------------------------------------------| |
| NID Name IP/PORT Status Schd Mst FHB NHB | |
| --- -------- -------------------- ------ ---- --- ---- ---- | |
| 1 cm1 10.0.0.1/18629 UP Y R M [ LOCAL ] | |
| 2 cm2 10.0.0.2/18629 UP Y R 59 64 | |
=============================================================== |
| CLUSTER RESOURCE STATUS | |
|-------------------------------------------------------------| |
| NAME TYPE STATUS NODE MISC. | |
| ---------------- -------- -------- -------- --------------- | |
| SERVICE: tas | |
| tas1 AS UP(NRML) cm1 | |
| tas2 AS UP(NRML) cm2 | |
| SERVICE: tac | |
| tac1 DB UP(NRML) cm1 | |
| tac2 DB UP(NRML) cm2 | |
| vip1 VIP UP cm1 cm1 | |
| vip2 VIP UP cm2 cm2 | |
=============================================================== |
Now let's test failover and keep an eye on the test user's connection.
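The test account was created ahead of time, roughly like this, assuming the usual Oracle-style CONNECT/RESOURCE roles are in place after system.sh (adjust the grants to whatever you actually need):
SQL> CREATE USER test IDENTIFIED BY tibero;
SQL> GRANT CONNECT, RESOURCE TO test;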
SQL> set linesize 170 |
SQL> col username format a20 |
SQL> select sid,serial#,username, inst_id , status from gv$session; |
SID SERIAL# USERNAME INST_ID STATUS |
---------- ---------- -------------------- ---------- -------------------------------- |
117 2227 SYS 2 RUNNING |
117 3023 TEST 1 READY |
118 3162 SYS 1 READY |
127 3617 SYS 1 RUNNING |
Let's shut down the first DB instance:
tibero@oraogg:~$ cmrctl stop db --name tac1 --option immediate |
MSG SENDING SUCCESS! |
tibero@oraogg:~$ cmrctl show all |
Resource List of Node cm1 |
===================================================================== |
CLUSTER TYPE NAME STATUS DETAIL |
----------- -------- -------------- -------- ------------------------ |
COMMON network privnet UP (private) 10.0.0.1/18629 |
COMMON network pubnet UP (public) enp0s8 |
COMMON cluster cluster UP inc: privnet, pub: pubnet |
cluster file cluster:0 UP +0 |
cluster file cluster:1 UP +1 |
cluster file cluster:2 UP +2 |
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF) |
cluster service tac UP Database, Active Cluster (auto-restart: OFF) |
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0 |
cluster db tac1 DOWN tac, /home/tibero/tibero6, failed retry cnt: 0 |
cluster vip vip1 UP(R) tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1) |
failed retry cnt: 0 |
cluster vip vip2 UP(R) tac, 192.168.56.13/255.255.255.0/192.168.56.255 (2) |
failed retry cnt: 0 |
===================================================================== |
Resource List of Node cm2 |
===================================================================== |
CLUSTER TYPE NAME STATUS DETAIL |
----------- -------- -------------- -------- ------------------------ |
COMMON network privnet UP (private) 10.0.0.2/18629 |
COMMON network pubnet UP (public) enp0s8 |
COMMON cluster cluster UP inc: privnet, pub: pubnet |
cluster file cluster:0 UP +0 |
cluster file cluster:1 UP +1 |
cluster file cluster:2 UP +2 |
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF) |
cluster service tac UP Database, Active Cluster (auto-restart: OFF) |
cluster as tas2 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0 |
cluster db tac2 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0 |
cluster vip vip1 UP tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1) |
failed retry cnt: 0 |
cluster vip vip2 UP tac, 192.168.56.13/255.255.255.0/192.168.56.255 (2) |
failed retry cnt: 0 |
===================================================================== |
TmaxData Corporation Copyright (c) 2008-. All rights reserved. |
The lines that changed after the shutdown were highlighted in red in the original post.
SQL> connect test/tibero@tac |
Connected to Tibero using tac. |
SQL> create table T1 (id number); |
Table 'T1' created. |
SQL> insert into t1 values (12345); |
1 row inserted. |
SQL> commit; |
Commit completed. |
SQL> select * from t1; |
ID |
---------- |
12345 |
1 row selected. |
SQL> / |
TBR-2139: Connection to server was interrupted but the fail-over successfully reconnected. |
SQL> / |
ID |
---------- |
12345 |
Now let's look at how the session changed:
SQL> set linesize 170 |
SQL> col username format a20 |
SQL> select sid,serial#,username, inst_id , status from gv$session; |
SID SERIAL# USERNAME INST_ID STATUS |
---------- ---------- -------------------- ---------- --------------------------------
117 2227 SYS 2 RUNNING |
117 3023 TEST 1 READY |
118 3162 SYS 1 READY |
127 3617 SYS 1 RUNNING |
4 rows selected. |
SQL> / |
SID SERIAL# USERNAME INST_ID STATUS |
---------- ---------- -------------------- ---------- -------------------------------- |
117 2227 SYS 2 RUNNING |
118 2918 TEST 2 READY |
The session ID went from 117 to 118,
and the instance ID went from 1 to 2.
Now let's look at the IPs; the original configuration was as follows:
tibero@oraogg:~$ ip addr |
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 |
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 |
inet 127.0.0.1/8 scope host lo |
valid_lft forever preferred_lft forever |
inet6 ::1/128 scope host |
valid_lft forever preferred_lft forever |
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 |
link/ether 08:00:27:68:7d:2d brd ff:ff:ff:ff:ff:ff |
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3 |
valid_lft 76589sec preferred_lft 76589sec |
inet6 fe80::a00:27ff:fe68:7d2d/64 scope link |
valid_lft forever preferred_lft forever |
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 |
link/ether 08:00:27:87:6b:ac brd ff:ff:ff:ff:ff:ff |
inet 192.168.56.110/24 brd 192.168.56.255 scope global enp0s8 |
valid_lft forever preferred_lft forever |
inet 192.168.56.12/24 brd 192.168.56.255 scope global secondary enp0s8:1 |
valid_lft forever preferred_lft forever |
inet6 fe80::a00:27ff:fe87:6bac/64 scope link |
valid_lft forever preferred_lft forever |
tibero@oraogg2:~$ ip addr |
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 |
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 |
inet 127.0.0.1/8 scope host lo |
valid_lft forever preferred_lft forever |
inet6 ::1/128 scope host |
valid_lft forever preferred_lft forever |
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 |
link/ether 08:00:27:40:25:45 brd ff:ff:ff:ff:ff:ff |
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3 |
valid_lft 76494sec preferred_lft 76494sec |
inet6 fe80::a00:27ff:fe40:2545/64 scope link |
valid_lft forever preferred_lft forever |
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 |
link/ether 08:00:27:9e:ac:fe brd ff:ff:ff:ff:ff:ff |
inet 192.168.56.111/24 brd 192.168.56.255 scope global enp0s8 |
valid_lft forever preferred_lft forever |
inet 192.168.56.13/24 brd 192.168.56.255 scope global secondary enp0s8:1 |
valid_lft forever preferred_lft forever |
inet6 fe80::a00:27ff:fe9e:acfe/64 scope link |
valid_lft forever preferred_lft forever |
4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 |
link/ether 08:00:27:1c:de:0e brd ff:ff:ff:ff:ff:ff |
inet 10.0.0.2/24 brd 10.0.0.255 scope global enp0s9 |
valid_lft forever preferred_lft forever |
inet6 fe80::a00:27ff:fe1c:de0e/64 scope link |
valid_lft forever preferred_lft forever |
After tac1 was shut down, the IP failed over:
tibero@oraogg2:~$ ip addr |
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 |
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 |
inet 127.0.0.1/8 scope host lo |
valid_lft forever preferred_lft forever |
inet6 ::1/128 scope host |
valid_lft forever preferred_lft forever |
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 |
link/ether 08:00:27:40:25:45 brd ff:ff:ff:ff:ff:ff |
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3 |
valid_lft 76700sec preferred_lft 76700sec |
inet6 fe80::a00:27ff:fe40:2545/64 scope link |
valid_lft forever preferred_lft forever |
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 |
link/ether 08:00:27:9e:ac:fe brd ff:ff:ff:ff:ff:ff |
inet 192.168.56.111/24 brd 192.168.56.255 scope global enp0s8 |
valid_lft forever preferred_lft forever |
inet 192.168.56.13/24 brd 192.168.56.255 scope global secondary enp0s8:1 |
valid_lft forever preferred_lft forever |
inet 192.168.56.12/24 brd 192.168.56.255 scope global secondary enp0s8:2 |
valid_lft forever preferred_lft forever |
inet6 fe80::a00:27ff:fe9e:acfe/64 scope link |
valid_lft forever preferred_lft forever |
4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 |
link/ether 08:00:27:1c:de:0e brd ff:ff:ff:ff:ff:ff |
inet 10.0.0.2/24 brd 10.0.0.255 scope global enp0s9 |
valid_lft forever preferred_lft forever |
inet6 fe80::a00:27ff:fe1c:de0e/64 scope link |
valid_lft forever preferred_lft forever |