June 2020 ~ 迪貝之家

Microsoft SQL Server Log Shipping

A technology that has existed since SQL Server 7. Simply put, it keeps a replica database in sync with the primary; the replica can be opened for reads to offload the production database, or serve purely as a standby, either locally or at a remote site. How is it done? SQL Agent schedules automate the backup, file-transfer, and restore jobs.
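The scheduled jobs boil down to two T-SQL steps plus a file copy in between; a minimal sketch, where the database name and paths are purely illustrative:

```sql
-- On the primary (SQL Agent job, e.g. every 15 minutes):
-- back up the transaction log to a share the standby can reach
BACKUP LOG SalesDB
    TO DISK = N'\\standby\logship\SalesDB_tlog.trn';

-- On the standby (after the copy job moves the file over):
-- restore it, keeping the replica readable between restores
RESTORE LOG SalesDB
    FROM DISK = N'\\standby\logship\SalesDB_tlog.trn'
    WITH STANDBY = N'C:\logship\SalesDB_undo.dat';
```

Use `WITH NORECOVERY` instead of `STANDBY` when the copy is a pure warm standby and read access is not needed.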

Oracle GoldenGate

A great tool for real-time cross-site database synchronization during data-center migrations.

Nagios: Automating Database Operations

When I first designed this architecture, I ruled out remote agents from the start: in an organization with strict separation of duties, asking to install new software on existing systems is rarely feasible. Being the DBA, I can only treat the database instance itself as one big agent, and find ways to do everything I need from inside the database.


Tuning Example II: A SQL Server CPU-Saturation..... Case

This case... I have to say,
was a bit of a false alarm.

So... at 11 p.m. I suddenly got a call:
the CPU had spiked, and they wanted lock data from me.
So I handed over the lock data.
It was late and I couldn't be bothered to think; whatever you ask for, you get!!
From the data,
session 92 was blocking 1450,
and 1450 was blocking 1911.
Those three sessions together affected
5 + 20 + 19 = 44
sessions.
Naturally the talk then turned to killing sessions,
and all I could say was... fine....
I'll kill them for you......
By the time it was handled, it was 1 a.m.
Crisis averted, for the moment.........
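Walking a chain like that can be done mechanically; a sketch with a recursive CTE over sys.dm_exec_requests (one of the DMVs whoisactive reads from), with session 92 hard-coded just to mirror that night's head blocker:

```sql
-- Count everything blocked, directly or transitively, under session 92
WITH chain AS (
    SELECT session_id
    FROM sys.dm_exec_requests
    WHERE blocking_session_id = 92        -- the head blocker
    UNION ALL
    SELECT r.session_id
    FROM sys.dm_exec_requests AS r
    JOIN chain AS c ON r.blocking_session_id = c.session_id
)
SELECT COUNT(*) AS blocked_sessions FROM chain;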
But the next day at work, just before 11 a.m., the same thing happened again:
the CPU spiked once more,
and users were even complaining on Facebook...
The remedy was still the same old trick... kill sessions.
Eventually I got thoroughly sick of it
and said... I've already killed a few hundred sessions
and connections keep pouring into the database,
can we just restart the AP server?
And they really did restart it.....
Which, of course, solved nothing.....
This time I read the whoisactive output carefully.
Looking at the login_name, I asked:
I thought this account was the kind that runs a batch job every five minutes,
so why is it opening this many sessions?
The AP team answered... both online traffic and the batch jobs run under this account.
It suddenly dawned on me... wow~~ a mere naming choice had misled me.
The execution times I circled in red are all close to 30 seconds.
Who, tapping a screen in a mobile app, has the patience to wait 30 seconds?
I imagine everyone taps, gets no response,
and just keeps tapping... tap tap tap tap... and sessions keep flooding in.
Since the previous session hasn't finished its request,
the follow-up sessions stack up behind it,
and that's how the system's CPU got eaten alive.
So it turned out to be a SQL statement efficiency problem.
At that point I clicked into the plan column to look at the execution plan,
and the plan suggested creating an INCLUDE index.
Over the lunch break
a colleague shut down the AP server,
we built the index,
and after the restart the problem was gone.
See... a false alarm, wasn't it?
I had figured... a batch program taking 30 seconds was probably normal!!
So I didn't look any further...
Who knew the account was also the one used by online operations.
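The fix really was just the index the plan suggested; this is the general DDL shape (the table and column names here are made up, the real ones came from the missing-index hint in the plan):

```sql
CREATE NONCLUSTERED INDEX IX_invoice_query
    ON dbo.invoice (account_no, trans_date)
    INCLUDE (invoice_no, amount, status);
```

The INCLUDE columns make the index covering, so the query can be answered from the index alone without widening the key columns.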

The next day at work I collected the previous day's numbers for the AP team.
This one should be the batch job that runs every five minutes.

Tuning Example I: A "Program Runs Slowly on Oracle"..... Case

One morning a performance-investigation request suddenly landed:
why did the job run slower on the 17th and the 18th?
Was some other job getting in its way?
I did start out chasing exactly the theory they suggested,
but in the end the finding was:
who is interfering with your program...
the problem is your program itself.
Let's look at the AWR diff reports comparing the 16th against the 17th and the 18th:
16 vs 17
16 vs 18
Clicking the sql_id link
brings up the SQL statement,
and the moment I saw the program's name
I understood......
Damn...
it's the very feature customers had been complaining about all along.
I then went through the ADDM reports for the 16th, 17th, and 18th,
compiled the numbers below for the AP team,
and gave my response.

Tibero Active Cluster: Startup and Shutdown Steps

Let's look at how to start the services.
1. Log in to node 1 and node 2 as root
and bring up the Cluster Manager.
Check that the cluster is running under root,
since I have a VIP configured.



2. On one of the nodes, switch to the tibero user
and start the Storage Service.

3. Start the Active Cluster.
Here is the state with every service up.
Now the steps to shut the cluster down,
which are simply the startup order reversed:
1. Stop the DB cluster.
2. Stop the Storage Service.
At this point only the Cluster Manager service should be left.
3. Stop the Cluster Manager.
Strange~~ on the second node I could also stop cm as the tibero user??
Whatever~~ it's shut down either way.
The problem of tbcm erroring out on Debian......
One glance at the script explains it: Debian's /bin/sh is dash, which chokes on the script's bash-isms.
Change the first line from
#! /bin/sh
to
#! /usr/bin/bash
and it's solved................

I have to complain:
the vendor's documentation is awful...
The CM and TAC sections being vague would be bad enough...
but some parts are simply wrong.........
Hmm........
###### Addendum 2020/06/23 14:14 #####################
Last night I finally read the TAS document through from start to finish.
It covers CM and the TAS cluster more completely,
but it still says nothing about adding the public network.
Anyone without Oracle RAC experience
genuinely cannot make sense of it from the document alone.

Using the pg_prewarm memory-preloading module on EDB 10

This morning's procedure.
The server was being downsized
from 32 cores / 128 GB to 8 cores / 32 GB,
so shared_buffers naturally had to be adjusted too;
I had previously set it to 40 GB.

The shared_buffers value after the restart:
Let's look at the numbers before preloading.
First the pg_buffercache extension has to be created:
create extension pg_buffercache;
select c.relname,count(*) as buffers
from pg_class c
inner join pg_buffercache b on b.relfilenode=c.relfilenode
inner join pg_database d on (b.reldatabase=d.oid and d.datname=current_database())
group by c.relname order by buffers desc ;
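As a side note, pg_buffercache can also answer the simpler question of how full shared_buffers is overall, since it exposes one row per buffer and unused buffers have a NULL relfilenode; a quick sketch:

```sql
-- Buffers in use vs. total buffers allocated in shared_buffers
SELECT count(*) FILTER (WHERE relfilenode IS NOT NULL) AS used,
       count(*)                                        AS total
FROM pg_buffercache;
```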

Time to prewarm~
SELECT pg_prewarm('pesc.ac_daily_su_invoice_202006');
SELECT pg_prewarm('pesc.ac_daily_su_invoice_202006_pk');
SELECT pg_prewarm('pesc.msg_gui_trans_record_202006');
SELECT pg_prewarm('pesc.msg_gui_online_detail_202006');
SELECT pg_prewarm('pesc.msg_gui_online_detail_202006_pk');
SELECT pg_prewarm('pesc.msg_gui_online_detail_202006_idx1');
SELECT pg_prewarm('pesc.msg_gui_query_detail_202006');
SELECT pg_prewarm('pesc.msg_gui_confirm_status_202006');
SELECT pg_prewarm('pesc.msg_gui_confirm_status_202006_pk');
SELECT pg_prewarm('pesc.msg_gui_confirm_status_202006_idx1');
SELECT pg_prewarm('pesc.msg_gui_cancel_status_202006');
SELECT pg_prewarm('pesc.ivm_invoice_prize_detail_202006');
SELECT pg_prewarm('pesc.ivm_invoice_prize_detail_202006_pk');
SELECT pg_prewarm('pesc.ivm_invoice_prize_detail_202006_idx1');
SELECT pg_prewarm('pesc.ivm_sc_invoice_prize_detail_202006');
SELECT pg_prewarm('pesc.ivm_sc_invoice_prize_detail_202006_pk');
SELECT pg_prewarm('pesc.ivm_sc_invoice_prize_detail_202006_idx1');
SELECT pg_prewarm('pesc.ivm_fisc_invoice_prize_detail_202006');
SELECT pg_prewarm('pesc.ivm_fisc_invoice_prize_detail_202006_pk');
SELECT pg_prewarm('pesc.ivm_fisc_invoice_prize_detail_202006_idx1');
SELECT pg_prewarm('pesc.ivm_fisc_settlement_detail_202006');
SELECT pg_prewarm('pesc.ivm_fisc_settlement_detail_202006_pk');
SELECT pg_prewarm('pesc.ivm_fisc_settlement_detail_202006_idx1');

Now the buffer cache numbers again.
This run took noticeably longer,
presumably because it has to scan every cache block in use:
relname | buffers
-------------------------------------------+---------
msg_gui_trans_record_202006 | 939128
msg_gui_online_detail_202006 | 45580
msg_gui_confirm_status_202006 | 42538
ivm_invoice_prize_detail_202006 | 38470
ivm_fisc_invoice_prize_detail_202006 | 35801
ivm_sc_invoice_prize_detail_202006 | 30228
ivm_fisc_settlement_detail_202006 | 24278
ac_daily_su_invoice_202006_pk | 23651
ivm_invoice_prize_detail_202006_pk | 23039
ac_daily_su_invoice_202006 | 22090
ivm_sc_invoice_prize_detail_202006_pk | 20554
ivm_fisc_invoice_prize_detail_202006_pk | 17708
ivm_invoice_prize_detail_202006_idx1 | 15618
msg_gui_online_detail_202006_pk | 11830
msg_gui_confirm_status_202006_pk | 11762
ivm_fisc_settlement_detail_202006_pk | 9884
ivm_fisc_invoice_prize_detail_202006_idx1 | 9256
msg_gui_online_detail_202006_idx1 | 6943
ivm_sc_invoice_prize_detail_202006_idx1 | 5985
msg_gui_confirm_status_202006_idx1 | 5347
ivm_fisc_settlement_detail_202006_idx1 | 4541
msg_gui_cancel_status_202006 | 743
ivm_410_ch_ol_t | 32
pg_statistic | 23
pg_operator | 15
msg_gui_query_detail_202006 | 14
pg_depend | 14
pg_depend_reference_index | 14
pg_index | 11
pg_depend_depender_index | 7
pg_amop | 6
pg_index_indexrelid_index | 5
pg_description | 5
pg_extension | 5
pg_init_privs | 5
pg_operator_oprname_l_r_n_index | 5
pg_rewrite | 5
pg_amproc | 4
pg_index_indrelid_index | 4
pg_amop_opr_fam_index | 4
pg_statistic_relid_att_inh_index | 4
pg_operator_oid_index | 4
Does it actually help... or not... who knows.
Too early to tell.
Anyway, I've done what I'm supposed to do.
This article, PostgreSQL Buffer Cache,
is the most detailed treatment I've read,
and here is the background story from Robert Haas, who originally wrote the tool:
pg_prewarm

Postgres template0 db can't be vacuumed

Nagios fired this alert:
edb=# \c template0
FATAL: database "template0" is not currently accepting connections
Previous connection kept
edb=# update pg_database set datallowconn='t' where datname='template0';
UPDATE 1
edb=# \c template0
psql (9.5.19, server 9.0.4.14)
You are now connected to database "template0" as user "enterprisedb".
template0=# vacuum;
VACUUM
template0=# \c edb
psql (9.5.19, server 9.0.4.14)
You are now connected to database "edb" as user "enterprisedb".
edb=# update pg_database set datallowconn='f' where datname='template0';
UPDATE 1
edb=# \c template0
FATAL: database "template0" is not currently accepting connections
Previous connection kept

Tibero TAC - TAS Setup on Debian 10

The environment is, of course, simulated with Oracle VirtualBox,
shared-disk setup included,
as follows.
For testing purposes,
give each VM guest at least two cores,
otherwise your Tibero instance will keep hitting sem wait timeouts.

1. shared disk setup on Host
C:\Program Files\Oracle\VirtualBox>VBoxManage createhd --filename "C:\Users\p10303550\VirtualBox VMs\ShareDisks\tas1.vdi" --size 10240 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 8bb399bb-ad0f-4271-bc7e-fce79fde652b
C:\Program Files\Oracle\VirtualBox>VBoxManage createhd --filename "C:\Users\p10303550\VirtualBox VMs\ShareDisks\tas2.vdi" --size 10240 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 31233658-ad37-45d0-a09a-2dfd5451c670
VBoxManage modifyhd "C:\Users\p10303550\VirtualBox VMs\ShareDisks\tas1.vdi" --type shareable
VBoxManage modifyhd "C:\Users\p10303550\VirtualBox VMs\ShareDisks\tas2.vdi" --type shareable
VBoxManage storageattach TAC1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium "C:\Users\p10303550\VirtualBox VMs\ShareDisks\tas1.vdi" --mtype shareable
VBoxManage storageattach TAC1 --storagectl "SATA" --port 2 --device 0 --type hdd --medium "C:\Users\p10303550\VirtualBox VMs\ShareDisks\tas2.vdi" --mtype shareable
VBoxManage storageattach TAC2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium "C:\Users\p10303550\VirtualBox VMs\ShareDisks\tas1.vdi" --mtype shareable
VBoxManage storageattach TAC2 --storagectl "SATA" --port 2 --device 0 --type hdd --medium "C:\Users\p10303550\VirtualBox VMs\ShareDisks\tas2.vdi" --mtype shareable
2. udev setup on Guests
root@oraogg:~# /lib/udev/scsi_id -g -u -d /dev/sdb
1ATA_VBOX_HARDDISK_VB8bb399bb-2b65de9f
root@oraogg:~# /lib/udev/scsi_id -g -u -d /dev/sdc
1ATA_VBOX_HARDDISK_VB31233658-70c65154
The Rules:
vi /etc/udev/rules.d/99-tiberotasdevices.rules
KERNEL=="sd?1", SUBSYSTEM=="block",PROGRAM=="/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB8bb399bb-2b65de9f", SYMLINK+="tas-disk1", OWNER="tibero", GROUP="tibero", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block",PROGRAM=="/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB31233658-70c65154", SYMLINK+="tas-disk2", OWNER="tibero", GROUP="tibero", MODE="0660"
Testing UDEV Rules :
udevadm test /block/sdb/sdb1
udevadm test /block/sdc/sdc1
root@oraogg2:/etc/udev/rules.d# ls -l /dev/tas-disk1
lrwxrwxrwx 1 root root 4 Jun 14 16:15 /dev/tas-disk1 -> sdb1
root@oraogg2:/etc/udev/rules.d# ls -l /dev/tas-disk2
lrwxrwxrwx 1 root root 4 Jun 14 16:16 /dev/tas-disk2 -> sdc1
The TAC-on-TAS build steps follow the article below:
Como criar um ambiente de Alta Disponibilidade usando TAC
Portuguese, eh......
No matter: open it in Chrome and have it translated into English.
Once you read the English you can tell the article was itself translated from English into Portuguese anyway~~
Its figures and its tbdsn.tbr VIP section contain a small error:
the VIP should use the 192.x address, not the 10.x interconnect.
My configuration is below.

Let's start on node 1,
hostname oraogg.
Can't help it: that was the hostname
I used when I applied for the demo license,
so the second node is oraogg2.
Set up the profile:
tibero@oraogg:~$ cat .bash_profile
export TB_HOME=/home/tibero/tibero6
#export TB_SID=tibero
export TB_SID=tas1
export CM_SID=cm1
export CM_HOME=$TB_HOME
export PATH=$TB_HOME/bin:$TB_HOME/config:$TB_HOME/client/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH

TAS1 TIP (Storage Server parameters):
DB_NAME=tas
LISTENER_PORT=9629
MAX_SESSION_COUNT = 20
MEMORY_TARGET = 1500M
TOTAL_SHM_SIZE = 1G
INSTANCE_TYPE = AS
AS_DISKSTRING = "/dev/tas-disk*"
CLUSTER_DATABASE = Y
LOCAL_CLUSTER_ADDR = 10.0.0.1
LOCAL_CLUSTER_PORT = 20000
CM_PORT = 48629
THREAD = 0 -- it must start from 0, otherwise creating the diskspace fails
CM1 TIP (Cluster Manager parameters):
CM_NAME = cm1
CM_UI_PORT = 48629
CM_RESOURCE_FILE = "/home/tibero/tibero6/config/cm1.rsc"
CM_HEARTBEAT_EXPIRE = 60
CM_WATCHDOG_EXPIRE = 55
client config:
tibero@oraogg:~/tibero6/client/config$ cat tbdsn.tbr
#-------------------------------------------------
# Appended by gen_tip.sh at Sun 14 Jun 2020 08:16:59 PM CST
tas1=(
(INSTANCE=(HOST=10.0.0.1)
(PORT=9629)
(DB_NAME=tas)
)
)
Now let's build the Storage Server (the equivalent of Oracle ASM):

tbboot nomount
tibero@oraogg:~/tibero6/config$ tbsql sys/tibero@tas1
tbSQL 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Connected to Tibero using tas1.
SQL> CREATE DISKSPACE ds0 NORMAL REDUNDANCY
FAILGROUP fg1 DISK '/dev/tas-disk1' NAME disk1
FAILGROUP fg2 DISK '/dev/tas-disk2' NAME disk2
ATTRIBUTE 'AU_SIZE' = '4M'; 2 3 4
Diskspace 'DS0' created.

When the Cluster Manager brings up the Storage Server instance,
it needs a dedicated environment file:
tibero@oraogg:~$ cp .bash_profile $TB_HOME/config/tas1.profile
tibero@oraogg:~/tibero6/config$ vi tas1.profile
tibero@oraogg:~/tibero6/config$ cat $TB_HOME/config/tas1.profile
export TB_HOME=/home/tibero/tibero6
export TB_SID=tas1
export PATH=$TB_HOME/bin:$TB_HOME/config:$TB_HOME/client/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH

Start the Cluster Manager:
tibero@oraogg:~/tibero6/config$ tbcm -b
/home/tibero/tibero6/bin/tbcm: 19: [: ==: unexpected operator
CM Guard daemon started up.
TBCM 6.1.1 (Build 174424)
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero cluster manager started up.
Local node name is (cm1:48629).

Verify the configuration:
tibero@oraogg:~/tibero6/config$ tbcm -s
/home/tibero/tibero6/bin/tbcm: 19: [: ==: unexpected operator
CM information
===========================================================
CM NAME : cm1
CM UI PORT : 48629
RESOURCE FILE PATH : /home/tibero/tibero6/config/cm1.rsc
CM MODE : GUARD ON, FENCE OFF, ROOT OFF
LOG LEVEL : 2
CM BLOCK SIZE : 512
===========================================================

Add this host's private-interconnect network
and give it a name; note the name cannot contain special characters such as '-'.
I'm posting the error message along with it:
tibero@oraogg:~/tibero6/config$ cmrctl add network --nettype private --ipaddr 10.0.0.1 --portno 18629 --name priv-net
[ERROR] invalid character in 'priv-net'. (only use a-z, A-Z, 0-9 and _)
[ERROR] Invalid argument value for key name: priv-net. `cmrctl help' for more information
tibero@oraogg:~/tibero6/config$ cmrctl add network --nettype private --ipaddr 10.0.0.1 --portno 18629 --name privnet
Resource add success! (network, privnet)

Add this host's public network:
tibero@oraogg:~/tibero6/config$ cmrctl add network --nettype public --ifname enp0s8 --name pubnet
Resource add success! (network, pubnet)
Configure the cluster and start it:
tibero@oraogg:~/tibero6/config$ cmrctl add cluster --incnet privnet --pubnet pubnet --cfile "+/dev/tas-disk*" --name cluster
Resource add success! (cluster, cluster)
tibero@oraogg:~/tibero6/config$ cmrctl start cluster --name cluster
MSG SENDING SUCCESS
Configure the storage cluster service and start the instance on this host:
tibero@oraogg:~/tibero6/config$ cmrctl add service --name tas --type as --cname cluster
Resource add success! (service, tas)
tibero@oraogg:~/tibero6/config$ cmrctl add as --name tas1 --svcname tas --dbhome $CM_HOME --envfile $TB_HOME/config/tas1.profile
Resource add success! (as, tas1)
tibero@oraogg:~/tibero6/config$ cmrctl start as --name tas1
Listener port = 9629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)
- Connect to the running instance and add the THREAD to start the TAS on node 2:
tibero@oraogg:~/tibero6/config$ tbsql sys/tibero@tas1
tbSQL 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Connected to Tibero using tas1.
SQL> ALTER DISKSPACE ds0 ADD THREAD 1;
Diskspace altered.
- Add the TAC service feature:
tibero@oraogg:~/tibero6/config$ cmrctl add service --name tac --cname cluster
Resource add success! (service, tac)
Next up is the DB cluster.
Just copy the storage server's profile and adjust it:
cp tas1.profile tac1.profile
tibero@oraogg:~/tibero6/config$ cat tac1.profile
export TB_HOME=/home/tibero/tibero6
export TB_SID=tac1
export PATH=$TB_HOME/bin:$TB_HOME/config:$TB_HOME/client/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH
tibero@oraogg:~/tibero6/config$ cat tac1.tip
DB_NAME = tac
LISTENER_PORT = 8629
CONTROL_FILES = "+DS0/tac/c1.ctl"
DB_CREATE_FILE_DEST = "+DS0/tac"
LOG_ARCHIVE_DEST = "+DS0/tac/archive"
MAX_SESSION_COUNT = 20
TOTAL_SHM_SIZE = 1G
MEMORY_TARGET = 1500M
USE_ACTIVE_STORAGE = Y
AS_PORT = 9629
LOCAL_CLUSTER_ADDR = 10.0.0.1
CM_PORT = 48629
LOCAL_CLUSTER_PORT = 21000
CLUSTER_DATABASE = Y
THREAD = 0
UNDO_TABLESPACE = UNDO0
- Add the Database feature:
cmrctl add db --name tac1 --svcname tac --dbhome $CM_HOME --envfile $TB_HOME/config/tac1.profile
tibero@oraogg:~/tibero6/config$ cmrctl add db --name tac1 --svcname tac --dbhome $CM_HOME --envfile $TB_HOME/config/tac1.profile
Resource add success! (db, tac1)
- Start the Database instance in NOMOUNT mode:
cmrctl start db --name tac1 --option "-t nomount"
tibero@oraogg:~/tibero6/config$ cmrctl start db --name tac1 --option "-t nomount"
Listener port = 8629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NOMOUNT mode).
BOOT SUCCESS! (MODE : NOMOUNT)
Copy the tas1 tbdsn record and add one for tac1:
tibero@oraogg:~/tibero6/client/config$ cat tbdsn.tbr  
#-------------------------------------------------
# Appended by gen_tip.sh at Sun 14 Jun 2020 08:16:59 PM CST
tas1=(
(INSTANCE=(HOST=10.0.0.1)
(PORT=9629)
(DB_NAME=tas)
)
)
tac1=(
(INSTANCE=(HOST=10.0.0.1)
(PORT=8629)
(DB_NAME=tac)
)
)
Now..... let's create the database:
tibero@oraogg:~/tibero6/client/config$ tbsql sys/tibero@tac1
tbSQL 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Connected to Tibero using tac1.
SQL> CREATE DATABASE "tac"
USER sys IDENTIFIED BY tibero
MAXINSTANCES 8
MAXDATAFILES 256
CHARACTER SET ZHT16MSWIN950
national character set UTF16
LOGFILE
GROUP 0 '+DS0/tac/log001.log' SIZE 100M,
GROUP 1 '+DS0/tac/log011.log' SIZE 100M,
GROUP 2 '+DS0/tac/log021.log' SIZE 100M
MAXLOGFILES 255
MAXLOGMEMBERS 8
NOARCHIVELOG
DATAFILE '+DS0/tac/system001.dtf' SIZE 100M AUTOEXTEND ON NEXT 5M MAXSIZE 500M
DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE '+DS0/tac/temp001.dtf' SIZE 100M AUTOEXTEND ON NEXT 5M MAXSIZE 500M EXTENT MANAGEMENT LOCAL AUTOALLOCATE
UNDO TABLESPACE UNDO0 DATAFILE '+DS0/tac/undo001.dtf' SIZE 100M AUTOEXTEND ON NEXT 5M MAXSIZE 500M EXTENT MANAGEMENT LOCAL AUTOALLOCATE; 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Database created.
- Start the Database instance in NORMAL mode:
tibero@oraogg:~/tibero6/client/config$ cmrctl start db --name tac1
Listener port = 8629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)
- Create UNDO TABLESPACE for the second TAC node:
tibero@oraogg:~/tibero6/client/config$ tbsql sys/tibero@tac1
tbSQL 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Connected to Tibero using tac1.
SQL> CREATE UNDO TABLESPACE UNDO1 DATAFILE '+DS0/tac/undo002.dtf' SIZE 100M AUTOEXTEND ON NEXT 5M MAXSIZE 500M EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
Tablespace 'UNDO1' created.
- Add REDO LOGS to the second TAC node:
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3 '+DS0/tac/log031.log' size 100M;
Database altered.
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 '+DS0/tac/log041.log' size 100M;
Database altered.
SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 '+DS0/tac/log051.log' size 100M;
Database altered.
- Add the public THREAD to the second TAC node:
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 1;
Database altered.
- Create the system data dictionary:
tibero@oraogg:~/tibero6/config$ . ./tac1.profile
tibero@oraogg:~/tibero6/config$ sh $TB_HOME/scripts/system.sh -p1 tibero -p2 syscat -a1 Y -a2 Y -a3 Y -a4 Y
Dropping agent table...
Creating text packages table ...
Creating the role DBA...
Creating system users & roles...
Creating example users...
Creating virtual tables(1)...
Creating virtual tables(2)...
Granting public access to _VT_DUAL...
Creating the system generated sequences...
Creating internal dynamic performance views...
Creating outline table...
Creating system tables related to dbms_job...
Creating system tables related to dbms_lock...
Creating system tables related to scheduler...
Creating system tables related to server_alert...
Creating system tables related to tpm...
Creating system tables related to tsn and timestamp...
Creating system tables related to rsrc...
Creating system tables related to workspacemanager...
Creating system tables related to statistics...
.
.
Create tudi interface
Running /home/tibero/tibero6/scripts/odci.sql...
Creating spatial meta tables and views ...
Creating internal system jobs...
Creating Japanese Lexer epa source ...
Creating internal system notice queue ...
Creating sql translator profiles ...
Creating agent table...
Done.
For details, check /home/tibero/tibero6/instance/tac1/log/system_init.log.
Next we set up the second node, oraogg2.
Copy the parameter and profile files over from node 1 and tweak them slightly:
tibero@oraogg2:~$ cat .bash_profile
export TB_HOME=/home/tibero/tibero6
#export TB_SID=tibero
export TB_SID=tas2
export CM_SID=cm2
export CM_HOME=$TB_HOME
export PATH=$TB_HOME/bin:$TB_HOME/config:$TB_HOME/client/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH
tibero@oraogg2:~/tibero6/config$ scp oraogg:$TB_HOME/config/tas1.tip tas2.tip
Warning: the ECDSA host key for 'oraogg' differs from the key for the IP address '192.168.56.110'
Offending key for IP in /home/tibero/.ssh/known_hosts:1
Matching host key in /home/tibero/.ssh/known_hosts:2
Are you sure you want to continue connecting (yes/no)? yes
tibero@oraogg's password:
tas1.tip 100% 554 789.0KB/s 00:00
tibero@oraogg2:~/tibero6/config$ cat tas2.tip
# tip file generated from /home/tibero/tibero6/config/tip.template (Sun 14 Jun 2020 08:16:59 PM CST)
#-------------------------------------------------------------------------------
#
# RDBMS initialization parameter
#
#-------------------------------------------------------------------------------
DB_NAME=tas
LISTENER_PORT=9629
MAX_SESSION_COUNT = 20
MEMORY_TARGET = 1500M
TOTAL_SHM_SIZE = 1G
INSTANCE_TYPE = AS
AS_DISKSTRING = "/dev/tas*"
CLUSTER_DATABASE = Y
LOCAL_CLUSTER_ADDR = 10.0.0.2
LOCAL_CLUSTER_PORT = 20000
CM_PORT = 48629
THREAD = 1
tibero@oraogg2:~/tibero6/config$ scp oraogg:$TB_HOME/config/cm1.tip cm2.tip
Warning: the ECDSA host key for 'oraogg' differs from the key for the IP address '192.168.56.110'
Offending key for IP in /home/tibero/.ssh/known_hosts:1
Matching host key in /home/tibero/.ssh/known_hosts:2
Are you sure you want to continue connecting (yes/no)? yes
tibero@oraogg's password:
cm1.tip 100% 139 155.6KB/s 00:00
tibero@oraogg2:~/tibero6/config$ scp oraogg:$TB_HOME/config/tas1.profile tas2.profile
Warning: the ECDSA host key for 'oraogg' differs from the key for the IP address '192.168.56.110'
Offending key for IP in /home/tibero/.ssh/known_hosts:1
Matching host key in /home/tibero/.ssh/known_hosts:2
Are you sure you want to continue connecting (yes/no)? yes
tibero@oraogg's password:
tas1.profile 100% 195 298.3KB/s 00:00
tibero@oraogg2:~/tibero6/config$ scp oraogg:$TB_HOME/config/tac1.profile tac2.profile
Warning: the ECDSA host key for 'oraogg' differs from the key for the IP address '192.168.56.110'
Offending key for IP in /home/tibero/.ssh/known_hosts:1
Matching host key in /home/tibero/.ssh/known_hosts:2
Are you sure you want to continue connecting (yes/no)? yes
tibero@oraogg's password:
Permission denied, please try again.
tibero@oraogg's password:
tac1.profile 100% 195 259.8KB/s 00:00
tibero@oraogg2:~/tibero6/config$ scp oraogg:$TB_HOME/config/tac1.tip tac2.tip
Warning: the ECDSA host key for 'oraogg' differs from the key for the IP address '192.168.56.110'
Offending key for IP in /home/tibero/.ssh/known_hosts:1
Matching host key in /home/tibero/.ssh/known_hosts:2
Are you sure you want to continue connecting (yes/no)? yes
tibero@oraogg's password:
Permission denied, please try again.
tibero@oraogg's password:
tac1.tip
tibero@oraogg2:~/tibero6/config$ cat cm2.tip
CM_NAME = cm2
CM_UI_PORT = 48629
CM_RESOURCE_FILE = "/home/tibero/tibero6/config/cm2.rsc"
CM_HEARTBEAT_EXPIRE = 60
CM_WATCHDOG_EXPIRE = 55
tibero@oraogg2:~/tibero6/client/config$ scp oraogg:$TB_HOME/client/config/tbdsn.tbr tbdsn.tbr
Warning: the ECDSA host key for 'oraogg' differs from the key for the IP address '192.168.56.110'
Offending key for IP in /home/tibero/.ssh/known_hosts:1
Matching host key in /home/tibero/.ssh/known_hosts:2
Are you sure you want to continue connecting (yes/no)? yes
tibero@oraogg's password:
tbdsn.tbr 100% 310 32.5KB/s 00:00
tibero@oraogg2:~/tibero6/client/config$ cat tbdsn.tbr
#-------------------------------------------------
# Appended by gen_tip.sh at Sun 14 Jun 2020 08:16:59 PM CST
tas2=(
(INSTANCE=(HOST=10.0.0.2)
(PORT=9629)
(DB_NAME=tas)
)
)
tac2=(
(INSTANCE=(HOST=10.0.0.2)
(PORT=8629)
(DB_NAME=tac)
)
)
tibero@oraogg2:~/tibero6/config$ cat tas2.profile
export TB_HOME=/home/tibero/tibero6
export TB_SID=tas2
export PATH=$TB_HOME/bin:$TB_HOME/config:$TB_HOME/client/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH
I won't spell out the rest; it's practically identical to node 1's setup.
tibero@oraogg2:~/tibero6/config$ tbcm -b
/home/tibero/tibero6/bin/tbcm: 19: [: ==: unexpected operator
CM Guard daemon started up.
TBCM 6.1.1 (Build 174424)
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero cluster manager started up.
Local node name is (cm2:48629).
tibero@oraogg2:~/tibero6/config$ tbcm -s
/home/tibero/tibero6/bin/tbcm: 19: [: ==: unexpected operator
CM information
===========================================================
CM NAME : cm2
CM UI PORT : 48629
RESOURCE FILE PATH : /home/tibero/tibero6/config/cm2.rsc
CM MODE : GUARD ON, FENCE OFF, ROOT OFF
LOG LEVEL : 2
CM BLOCK SIZE : 512
===========================================================
===========================================================
tibero@oraogg2:~/tibero6/config$ cmrctl add network --nettype private --ipaddr 10.0.0.2 --portno 18629 --name privnet
Resource add success! (network, privnet)
tibero@oraogg2:~/tibero6/config$ cmrctl add network --nettype public --ifname enp0s8 --name pubnet
Resource add success! (network, pubnet)
tibero@oraogg2:~/tibero6/config$ cmrctl add cluster --incnet privnet --pubnet pubnet --cfile "+/dev/tas-disk*" --name cluster
Resource add success! (cluster, cluster)
tibero@oraogg2:~/tibero6/config$ cmrctl start cluster --name cluster
MSG SENDING SUCCESS!
tibero@oraogg2:~/tibero6/config$ cmrctl add as --name tas2 --svcname tas --dbhome $CM_HOME --envfile $TB_HOME/config/tas2.profile
Resource add success! (as, tas2)
tibero@oraogg2:~/tibero6/config$ cmrctl start as --name tas2
Listener port = 9629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)
tibero@oraogg2:~/tibero6/config$ cat tac2.profile
export TB_HOME=/home/tibero/tibero6
export TB_SID=tac2
export PATH=$TB_HOME/bin:$TB_HOME/config:$TB_HOME/client/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH
tibero@oraogg2:~/tibero6/config$ cat tac2.tip
DB_NAME = tac
LISTENER_PORT = 8629
CONTROL_FILES = "+DS0/tac/c1.ctl"
DB_CREATE_FILE_DEST = "+DS0/tac"
LOG_ARCHIVE_DEST = "+DS0/tac/archive"
MAX_SESSION_COUNT = 20
TOTAL_SHM_SIZE = 1G
MEMORY_TARGET = 1500M
USE_ACTIVE_STORAGE = Y
AS_PORT = 9629
LOCAL_CLUSTER_ADDR = 10.0.0.2
CM_PORT = 48629
LOCAL_CLUSTER_PORT = 21000
CLUSTER_DATABASE = Y
THREAD = 1
UNDO_TABLESPACE = UNDO1
tibero@oraogg2:~/tibero6/config$ cmrctl add db --name tac2 --svcname tac --dbhome $CM_HOME --envfile $TB_HOME/config/tac2.profile
Resource add success! (db, tac2)
- Start the Database instance:
tibero@oraogg2:~/tibero6/config$ cmrctl start db --name tac2
Listener port = 8629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)
tibero@oraogg:~/tibero6/config$ cmrctl show all
Resource List of Node cm1
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network privnet UP (private) 10.0.0.1/18629
COMMON network pubnet UP (public) enp0s8
COMMON cluster cluster UP inc: privnet, pub: pubnet
cluster file cluster:0 UP +0
cluster file cluster:1 UP +1
cluster file cluster:2 UP +2
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cluster service tac UP Database, Active Cluster (auto-restart: OFF)
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0
cluster db tac1 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0
=====================================================================
Resource List of Node cm2
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network privnet UP (private) 10.0.0.2/18629
COMMON network pubnet UP (public) enp0s8
COMMON cluster cluster UP inc: privnet, pub: pubnet
cluster file cluster:0 UP +0
cluster file cluster:1 UP +1
cluster file cluster:2 UP +2
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cluster service tac UP Database, Active Cluster (auto-restart: OFF)
cluster as tas2 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0
cluster db tac2 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0
=====================================================================
The last thing I want to talk about is the VIP.
It does fail over,
but a session survives through a reconnect mechanism,
which merely makes the connection look uninterrupted.
Honestly, for the vast majority of systems, that level of HA is all they need.
Oracle RAC's capabilities are very powerful,
but when the requirement calls for maybe one part in ten of them
and the full development cost gets passed on to the customer,
no wonder everyone who adopts Oracle cries that it's too expensive.
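On the client side, the reconnect is only transparent if tbdsn.tbr points at the VIP rather than a node address; a hypothetical entry in the same format used throughout this post (192.168.56.12 is the VIP registered with cmrctl below, but the entry itself is my reconstruction, not taken from my actual file):

```
tac=(
    (INSTANCE=(HOST=192.168.56.12)
        (PORT=8629)
        (DB_NAME=tac)
    )
)
```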
Setting the VIP up requires root privileges:
root@oraogg:~# . /home/tibero/.bash_profile
root@oraogg:~# tbcm -b
/home/tibero/tibero6/bin/tbcm: 19: [: ==: unexpected operator
CM Guard daemon started up.
import resources from '/home/tibero/tibero6/config/cm1.rsc'...
TBCM 6.1.1 (Build 174424)
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero cluster manager started up.
Local node name is (cm1:48629).
root@oraogg:~# tbcm -s
/home/tibero/tibero6/bin/tbcm: 19: [: ==: unexpected operator
CM information
===========================================================
CM NAME : cm1
CM UI PORT : 48629
RESOURCE FILE PATH : /home/tibero/tibero6/config/cm1.rsc
CM MODE : GUARD ON, FENCE OFF, ROOT ON
LOG LEVEL : 2
CM BLOCK SIZE : 512
===========================================================
root@oraogg:~# cmrctl show cluster --name cluster
Cluster Resource Info
===============================================================
Cluster name : cluster
Status : UP (ROOT)
Master node : (1) cm1
Last NID : 1
Local node : (1) cm1
Storage type : Active Storage
AS Diskstring : /dev/tas-disk*
No. of cls files : 3
(1) +0
(2) +1
(3) +2
===============================================================
| NODE LIST |
|-------------------------------------------------------------|
| NID Name IP/PORT Status Schd Mst FHB NHB |
| --- -------- -------------------- ------ ---- --- ---- ---- |
| 1 cm1 10.0.0.1/18629 UP Y R M [ LOCAL ] |
===============================================================
| CLUSTER RESOURCE STATUS |
|-------------------------------------------------------------|
| NAME TYPE STATUS NODE MISC. |
| ---------------- -------- -------- -------- --------------- |
| SERVICE: tas |
| tas1 AS DOWN cm1 |
| SERVICE: tac |
| tac1 DB DOWN cm1 |
===============================================================
root@oraogg:~# cmrctl start service --name tas
=================================== SUCCESS! ===================================
Succeeded to request at each node to boot resources under the service(tas).
Please use "cmrctl show service --name tas" to verify the result.
================================================================================
root@oraogg:~# Listener port = 9629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).
root@oraogg:~# cmrctl show service --name tas
Service Resource Info
=================================================
Service name : tas
Service type : Active Storage
Service mode : Active Cluster
Cluster : cluster
Inst. Auto Start: OFF
Interrupt Status: COMMITTED
Incarnation No. : 1 / 1 (CUR / COMMIT)
=================================================
| INSTANCE LIST |
|-----------------------------------------------|
| NID NAME Status Intr Stat ACK No. Sched |
| --- -------- -------- --------- ------- ----- |
| 1 cm1 UP(NRML) COMMITTED 1 Y |
=================================================
root@oraogg:~# cmrctl show all
Resource List of Node cm1
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network privnet UP (private) 10.0.0.1/18629
COMMON network pubnet UP (public) enp0s8
COMMON cluster cluster UP inc: privnet, pub: pubnet
cluster file cluster:0 UP +0
cluster file cluster:1 UP +1
cluster file cluster:2 UP +2
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cluster service tac DOWN Database, Active Cluster (auto-restart: OFF)
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0
cluster db tac1 DOWN tac, /home/tibero/tibero6, failed retry cnt: 0
=====================================================================
root@oraogg:~# cmrctl start service --name tac
=================================== SUCCESS! ===================================
Succeeded to request at each node to boot resources under the service(tac).
Please use "cmrctl show service --name tac" to verify the result.
================================================================================
root@oraogg:~# Listener port = 8629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).
root@oraogg:~# cmrctl show service --name tac
Service Resource Info
=================================================
Service name : tac
Service type : Database
Service mode : Active Cluster
Cluster : cluster
Inst. Auto Start: OFF
Interrupt Status: COMMITTED
Incarnation No. : 1 / 1 (CUR / COMMIT)
=================================================
| INSTANCE LIST |
|-----------------------------------------------|
| NID NAME Status Intr Stat ACK No. Sched |
| --- -------- -------- --------- ------- ----- |
| 1 cm1 UP(NRML) COMMITTED 1 Y |
=================================================
root@oraogg:~# cmrctl show all
Resource List of Node cm1
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network privnet UP (private) 10.0.0.1/18629
COMMON network pubnet UP (public) enp0s8
COMMON cluster cluster UP inc: privnet, pub: pubnet
cluster file cluster:0 UP +0
cluster file cluster:1 UP +1
cluster file cluster:2 UP +2
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cluster service tac UP Database, Active Cluster (auto-restart: OFF)
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0
cluster db tac1 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0
=====================================================================
(Note: the first `add vip` below mistakenly targets node cm0, which does not exist; the resource is deleted and re-added against cm1.)
root@oraogg:~# cmrctl add vip --name vip1 --node cm0 --svcname tac --ipaddr 192.168.56.12/255.255.255.0
[CAUTION] No node with name (cm0)
Resource add success! (vip, vip1)
root@oraogg:~# cmrctl del vip --name vip1 --node cm0 --svcname tac --ipaddr 192.168.56.12/255.255.255.0
MSG SENDING SUCCESS!
root@oraogg:~# cmrctl add vip --name vip1 --node cm1 --svcname tac --ipaddr 192.168.56.12/255.255.255.0
Resource add success! (vip, vip1)
root@oraogg:~# cmrctl show all
Resource List of Node cm1
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network privnet UP (private) 10.0.0.1/18629
COMMON network pubnet UP (public) enp0s8
COMMON cluster cluster UP inc: privnet, pub: pubnet
cluster file cluster:0 UP +0
cluster file cluster:1 UP +1
cluster file cluster:2 UP +2
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cluster service tac UP Database, Active Cluster (auto-restart: OFF)
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0
cluster db tac1 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0
cluster vip vip1 UP tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1)
failed retry cnt: 0
=====================================================================
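The node-1 sequence above can be condensed into a script. This is a minimal sketch, assuming the Tibero environment has already been sourced (`. /home/tibero/.bash_profile`); the resource names, node name, and VIP address are the ones used in this lab, and the `DRY_RUN` guard is my own addition so the boot order can be inspected before anything is actually started.

```shell
#!/bin/sh
# Boot order for TAC node 1, condensed from the transcript above.
# Resource names (tas, tac, vip1), node name cm1, and the VIP address
# are specific to this lab; adjust them for your own cluster.
# DRY_RUN=1 (the default here) only prints each command; set
# DRY_RUN=0 and run as root on a real node.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

run tbcm -b                            # 1. start the Cluster Manager
run cmrctl start service --name tas    # 2. bring up Active Storage
run cmrctl start service --name tac    # 3. bring up the database service
run cmrctl add vip --name vip1 --node cm1 --svcname tac --ipaddr 192.168.56.12/255.255.255.0  # 4. register the VIP
run cmrctl show all                    # 5. verify resource status
```

Each step can be verified with `cmrctl show service --name <svc>` before moving on, exactly as the transcript does.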
VM2 :
root@oraogg2:~# . /home/tibero/.bash_profile
root@oraogg2:~# tbcm -b
/home/tibero/tibero6/bin/tbcm: 19: [: ==: unexpected operator
(The "unexpected operator" message is just the tbcm wrapper script using a bash-only `==` test under plain sh; it is harmless here.)
CM Guard daemon started up.
import resources from '/home/tibero/tibero6/config/cm2.rsc'...
TBCM 6.1.1 (Build 174424)
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero cluster manager started up.
Local node name is (cm2:48629).
root@oraogg2:~# tbcm -s
/home/tibero/tibero6/bin/tbcm: 19: [: ==: unexpected operator
CM information
===========================================================
CM NAME : cm2
CM UI PORT : 48629
RESOURCE FILE PATH : /home/tibero/tibero6/config/cm2.rsc
CM MODE : GUARD ON, FENCE OFF, ROOT ON
LOG LEVEL : 2
CM BLOCK SIZE : 512
===========================================================
root@oraogg2:~# cmrctl show all
Resource List of Node cm1
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network privnet UP (private) 10.0.0.1/18629
COMMON network pubnet UP (public) enp0s8
COMMON cluster cluster UP inc: privnet, pub: pubnet
cluster file cluster:0 UP +0
cluster file cluster:1 UP +1
cluster file cluster:2 UP +2
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cluster service tac UP Database, Active Cluster (auto-restart: OFF)
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0
cluster db tac1 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0
cluster vip vip1 UP tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1)
failed retry cnt: 0
=====================================================================
Resource List of Node cm2
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network privnet UP (private) 10.0.0.2/18629
COMMON network pubnet UP (public) enp0s8
COMMON cluster cluster UP inc: privnet, pub: pubnet
cluster file cluster:0 UP +0
cluster file cluster:1 UP +1
cluster file cluster:2 UP +2
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cluster service tac UP Database, Active Cluster (auto-restart: OFF)
cluster as tas2 DOWN tas, /home/tibero/tibero6, failed retry cnt: 0
cluster db tac2 DOWN tac, /home/tibero/tibero6, failed retry cnt: 0
cluster vip vip1 UP(R) tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1)
failed retry cnt: 0
=====================================================================
From this point the documentation says to drop out of root and run the commands as the tibero user instead. Odd!!
tibero@oraogg2:~/tibero6/client/config$ cmrctl start as --name tas2
Listener port = 9629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)
tibero@oraogg2:~/tibero6/client/config$ cmrctl show all
Resource List of Node cm1
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network privnet UP (private) 10.0.0.1/18629
COMMON network pubnet UP (public) enp0s8
COMMON cluster cluster UP inc: privnet, pub: pubnet
cluster file cluster:0 UP +0
cluster file cluster:1 UP +1
cluster file cluster:2 UP +2
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cluster service tac UP Database, Active Cluster (auto-restart: OFF)
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0
cluster db tac1 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0
cluster vip vip1 UP tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1)
failed retry cnt: 0
=====================================================================
Resource List of Node cm2
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network privnet UP (private) 10.0.0.2/18629
COMMON network pubnet UP (public) enp0s8
COMMON cluster cluster UP inc: privnet, pub: pubnet
cluster file cluster:0 UP +0
cluster file cluster:1 UP +1
cluster file cluster:2 UP +2
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cluster service tac UP Database, Active Cluster (auto-restart: OFF)
cluster as tas2 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0
cluster db tac2 DOWN tac, /home/tibero/tibero6, failed retry cnt: 0
cluster vip vip1 UP(R) tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1)
failed retry cnt: 0
=====================================================================
tibero@oraogg2:~/tibero6/client/config$ cmrctl start db --name tac2
Listener port = 8629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)
tibero@oraogg2:~/tibero6/client/config$ cmrctl show all
Resource List of Node cm1
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network privnet UP (private) 10.0.0.1/18629
COMMON network pubnet UP (public) enp0s8
COMMON cluster cluster UP inc: privnet, pub: pubnet
cluster file cluster:0 UP +0
cluster file cluster:1 UP +1
cluster file cluster:2 UP +2
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cluster service tac UP Database, Active Cluster (auto-restart: OFF)
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0
cluster db tac1 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0
cluster vip vip1 UP tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1)
failed retry cnt: 0
=====================================================================
Resource List of Node cm2
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network privnet UP (private) 10.0.0.2/18629
COMMON network pubnet UP (public) enp0s8
COMMON cluster cluster UP inc: privnet, pub: pubnet
cluster file cluster:0 UP +0
cluster file cluster:1 UP +1
cluster file cluster:2 UP +2
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cluster service tac UP Database, Active Cluster (auto-restart: OFF)
cluster as tas2 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0
cluster db tac2 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0
cluster vip vip1 UP(R) tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1)
failed retry cnt: 0
=====================================================================
cmrctl add vip --name vip2 --node cm2 --svcname tac --ipaddr 192.168.56.13/255.255.255.0
Resource add success! (vip, vip2)
tibero@oraogg2:~/tibero6/client/config$ cmrctl show all
Resource List of Node cm1
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network privnet UP (private) 10.0.0.1/18629
COMMON network pubnet UP (public) enp0s8
COMMON cluster cluster UP inc: privnet, pub: pubnet
cluster file cluster:0 UP +0
cluster file cluster:1 UP +1
cluster file cluster:2 UP +2
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cluster service tac UP Database, Active Cluster (auto-restart: OFF)
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0
cluster db tac1 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0
cluster vip vip1 UP tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1)
failed retry cnt: 0
cluster vip vip2 UP(R) tac, 192.168.56.13/255.255.255.0/192.168.56.255 (2)
failed retry cnt: 0
=====================================================================
Resource List of Node cm2
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network privnet UP (private) 10.0.0.2/18629
COMMON network pubnet UP (public) enp0s8
COMMON cluster cluster UP inc: privnet, pub: pubnet
cluster file cluster:0 UP +0
cluster file cluster:1 UP +1
cluster file cluster:2 UP +2
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cluster service tac UP Database, Active Cluster (auto-restart: OFF)
cluster as tas2 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0
cluster db tac2 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0
cluster vip vip1 UP(R) tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1)
failed retry cnt: 0
cluster vip vip2 UP tac, 192.168.56.13/255.255.255.0/192.168.56.255 (2)
failed retry cnt: 0
=====================================================================
tibero@oraogg:~/tibero6/client/config$ cmrctl show cluster --name cluster
Cluster Resource Info
===============================================================
Cluster name : cluster
Status : UP (ROOT)
Master node : (1) cm1
Last NID : 2
Local node : (1) cm1
Storage type : Active Storage
AS Diskstring : /dev/tas-disk*
No. of cls files : 3
(1) +0
(2) +1
(3) +2
===============================================================
| NODE LIST |
|-------------------------------------------------------------|
| NID Name IP/PORT Status Schd Mst FHB NHB |
| --- -------- -------------------- ------ ---- --- ---- ---- |
| 1 cm1 10.0.0.1/18629 UP Y R M [ LOCAL ] |
| 2 cm2 10.0.0.2/18629 UP Y R 59 64 |
===============================================================
| CLUSTER RESOURCE STATUS |
|-------------------------------------------------------------|
| NAME TYPE STATUS NODE MISC. |
| ---------------- -------- -------- -------- --------------- |
| SERVICE: tas |
| tas1 AS UP(NRML) cm1 |
| tas2 AS UP(NRML) cm2 |
| SERVICE: tac |
| tac1 DB UP(NRML) cm1 |
| tac2 DB UP(NRML) cm2 |
| vip1 VIP UP cm1 cm1 |
| vip2 VIP UP cm2 cm2 |
===============================================================
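Before handing the cluster over to users, it is worth checking in one shot that nothing is left DOWN. Here is a small grep-based helper for that; it is my own sketch, not part of the Tibero toolset, and it simply scans the text that `cmrctl show all` prints.

```shell
#!/bin/sh
# check_cluster: read `cmrctl show all` output on stdin; exit 0 when
# no cluster resource reports DOWN, exit 1 otherwise.
# This is a text-parsing helper only; no Tibero dependency.
check_cluster() {
    if grep -Eq '^[[:space:]]*cluster[[:space:]].*[[:space:]]DOWN([[:space:]]|$)'; then
        echo "some resources are DOWN"
        return 1
    fi
    echo "all resources UP"
    return 0
}

# typical use (assumes cmrctl is on PATH):
#   cmrctl show all | check_cluster
```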
Now for the failover test; let's keep an eye on the test user's connection.
SQL> set linesize 170
SQL> col username format a20
SQL> select sid,serial#,username, inst_id , status from gv$session;
SID SERIAL# USERNAME INST_ID STATUS
---------- ---------- -------------------- ---------- --------------------------------
117 2227 SYS 2 RUNNING
117 3023 TEST 1 READY
118 3162 SYS 1 READY
127 3617 SYS 1 RUNNING
Let's shut down the first DB instance:
tibero@oraogg:~$ cmrctl stop db --name tac1 --option immediate
MSG SENDING SUCCESS!
tibero@oraogg:~$ cmrctl show all
Resource List of Node cm1
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network privnet UP (private) 10.0.0.1/18629
COMMON network pubnet UP (public) enp0s8
COMMON cluster cluster UP inc: privnet, pub: pubnet
cluster file cluster:0 UP +0
cluster file cluster:1 UP +1
cluster file cluster:2 UP +2
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cluster service tac UP Database, Active Cluster (auto-restart: OFF)
cluster as tas1 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0
cluster db tac1 DOWN tac, /home/tibero/tibero6, failed retry cnt: 0
cluster vip vip1 UP(R) tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1)
failed retry cnt: 0
cluster vip vip2 UP(R) tac, 192.168.56.13/255.255.255.0/192.168.56.255 (2)
failed retry cnt: 0
=====================================================================
Resource List of Node cm2
=====================================================================
CLUSTER TYPE NAME STATUS DETAIL
----------- -------- -------------- -------- ------------------------
COMMON network privnet UP (private) 10.0.0.2/18629
COMMON network pubnet UP (public) enp0s8
COMMON cluster cluster UP inc: privnet, pub: pubnet
cluster file cluster:0 UP +0
cluster file cluster:1 UP +1
cluster file cluster:2 UP +2
cluster service tas UP Active Storage, Active Cluster (auto-restart: OFF)
cluster service tac UP Database, Active Cluster (auto-restart: OFF)
cluster as tas2 UP(NRML) tas, /home/tibero/tibero6, failed retry cnt: 0
cluster db tac2 UP(NRML) tac, /home/tibero/tibero6, failed retry cnt: 0
cluster vip vip1 UP tac, 192.168.56.12/255.255.255.0/192.168.56.255 (1)
failed retry cnt: 0
cluster vip vip2 UP tac, 192.168.56.13/255.255.255.0/192.168.56.255 (2)
failed retry cnt: 0
=====================================================================
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
The parts in red show what changed after the shutdown.
SQL> connect test/tibero@tac
Connected to Tibero using tac.
SQL> create table T1 (id number);
Table 'T1' created.
SQL> insert into t1 values (12345);
1 row inserted.
SQL> commit;
Commit completed.
SQL> select * from t1;
ID
----------
12345
1 row selected.
SQL> /
TBR-2139: Connection to server was interrupted but the fail-over successfully reconnected.
SQL> /
ID
----------
12345
Next, look at how the session changed:
SQL> set linesize 170
SQL> col username format a20
SQL> select sid,serial#,username, inst_id , status from gv$session;
SID SERIAL# USERNAME INST_ID STATUS
---------- ---------- -------------------- ---------- --------------------------------
117 2227 SYS 2 RUNNING
117 3023 TEST 1 READY
118 3162 SYS 1 READY
127 3617 SYS 1 RUNNING
4 rows selected.
SQL> /
SID SERIAL# USERNAME INST_ID STATUS
---------- ---------- -------------------- ---------- --------------------------------
117 2227 SYS 2 RUNNING
118 2918 TEST 2 READY
The session ID changed from 117 to 118,
and the instance ID changed from 1 to 2.
Now let's look at the IP changes. The original configuration was as follows:
tibero@oraogg:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:68:7d:2d brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 76589sec preferred_lft 76589sec
inet6 fe80::a00:27ff:fe68:7d2d/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:87:6b:ac brd ff:ff:ff:ff:ff:ff
inet 192.168.56.110/24 brd 192.168.56.255 scope global enp0s8
valid_lft forever preferred_lft forever
inet 192.168.56.12/24 brd 192.168.56.255 scope global secondary enp0s8:1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe87:6bac/64 scope link
valid_lft forever preferred_lft forever
tibero@oraogg2:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:40:25:45 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 76494sec preferred_lft 76494sec
inet6 fe80::a00:27ff:fe40:2545/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:9e:ac:fe brd ff:ff:ff:ff:ff:ff
inet 192.168.56.111/24 brd 192.168.56.255 scope global enp0s8
valid_lft forever preferred_lft forever
inet 192.168.56.13/24 brd 192.168.56.255 scope global secondary enp0s8:1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe9e:acfe/64 scope link
valid_lft forever preferred_lft forever
4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:1c:de:0e brd ff:ff:ff:ff:ff:ff
inet 10.0.0.2/24 brd 10.0.0.255 scope global enp0s9
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe1c:de0e/64 scope link
valid_lft forever preferred_lft forever
After tac1 was shut down, the IP failed over:
tibero@oraogg2:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:40:25:45 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 76700sec preferred_lft 76700sec
inet6 fe80::a00:27ff:fe40:2545/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:9e:ac:fe brd ff:ff:ff:ff:ff:ff
inet 192.168.56.111/24 brd 192.168.56.255 scope global enp0s8
valid_lft forever preferred_lft forever
inet 192.168.56.13/24 brd 192.168.56.255 scope global secondary enp0s8:1
valid_lft forever preferred_lft forever
inet 192.168.56.12/24 brd 192.168.56.255 scope global secondary enp0s8:2
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe9e:acfe/64 scope link
valid_lft forever preferred_lft forever
4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:1c:de:0e brd ff:ff:ff:ff:ff:ff
inet 10.0.0.2/24 brd 10.0.0.255 scope global enp0s9
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe1c:de0e/64 scope link
valid_lft forever preferred_lft forever
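The VIP takeover can also be confirmed mechanically by checking whether the address now sits on the surviving node. A small sketch of that check; the helper name is mine and the address is this lab's, nothing here is shipped with Tibero.

```shell
#!/bin/sh
# has_vip: return 0 if the given IPv4 address appears as an `inet`
# entry in `ip addr` style output read from stdin.
# Pure text parsing; no Tibero dependency.
has_vip() {
    addr=$1
    grep -Fq "inet ${addr}/"
}

# typical use on the surviving node (oraogg2 in this lab):
#   ip addr | has_vip 192.168.56.12 && echo "VIP failed over"
```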