May 2020 ~ 迪貝之家

Microsoft SQL Server Log Shipping

A technology that has existed since SQL Server 7. In short, it keeps a synchronized copy of a database: the copy can be opened for read access to offload the production database, or serve purely as a standby, either locally or at a remote site. How is this achieved? SQL Server Agent schedules automate the backup, file-copy, and restore jobs.
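The cycle can be sketched in a few lines of Python. This is a conceptual sketch only, with made-up paths and file contents; the real mechanism is three SQL Server Agent jobs issuing BACKUP LOG, copying the backup file, and issuing RESTORE LOG:

```python
import pathlib
import shutil
import tempfile

# Three throw-away directories standing in for the primary server, the
# shared folder, and the secondary (standby) server.
primary = pathlib.Path(tempfile.mkdtemp(prefix="primary_"))
share = pathlib.Path(tempfile.mkdtemp(prefix="share_"))
secondary = pathlib.Path(tempfile.mkdtemp(prefix="secondary_"))

def backup_job():
    # On the primary: dump the transaction log to a backup file.
    (primary / "db_001.trn").write_text("log records 1..100")

def copy_job():
    # Ship every new log backup to the share the secondary can reach.
    for f in primary.glob("*.trn"):
        shutil.copy(f, share / f.name)

def restore_job():
    # On the secondary: apply the shipped log backups in order.
    for f in sorted(share.glob("*.trn")):
        shutil.copy(f, secondary / f.name)

backup_job()
copy_job()
restore_job()
print((secondary / "db_001.trn").read_text())
```

Each function corresponds to one scheduled Agent job; the standby stays in sync as long as the three jobs keep running in that order.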

Oracle GoldenGate

A powerful tool for real-time cross-site database synchronization during data-center migrations.

Automating database operations with Nagios

When I first designed this architecture, I ruled out using a remote agent: in an organization with strict separation of duties, asking to install new software on existing systems is rarely feasible. As the DBA, my only option was to treat the database instance itself as one big agent and find ways to do everything I needed from inside the database.
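As a sketch of that approach, here is a minimal, hypothetical Nagios plugin in Python (not the author's actual check): Nagios reads only the process exit code (0=OK, 1=WARNING, 2=CRITICAL) plus one line of output, so any metric you can compute with SQL inside the instance can drive it.

```python
OK, WARNING, CRITICAL = 0, 1, 2  # Nagios plugin exit codes

def check_sessions(current, warn=80, crit=95):
    # In a real plugin 'current' would come from a query run inside the
    # database instance, e.g. a count of active sessions.
    if current >= crit:
        return CRITICAL, f"CRITICAL - {current} sessions"
    if current >= warn:
        return WARNING, f"WARNING - {current} sessions"
    return OK, f"OK - {current} sessions"

status, message = check_sessions(current=42)
print(message)
# A real plugin would finish with: sys.exit(status)
```

The thresholds and the session metric are illustrative; the point is that the instance itself supplies the data, no extra agent required.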


Using Tibero 6 on debian-10.4.0

Install the required packages:
apt-get install build-essential libncurses5 libaio1
Kernel parameters in /etc/sysctl.conf:
kernel.sem = 10000 32000 10000 10000
net.ipv4.ip_local_port_range = 1024 65000
fs.file-max = 6815744
kernel.shmmni = 4096
Create the tibero user:
1. mkdir /home/tibero
2. useradd -d /home/tibero tibero
3. chown -R tibero:tibero /home/tibero
Switch to the tibero user and extract the Tibero binary archive:
tar xvf tibero6-bin-FS07_CS_1912-linux64-174424-opt.tar.gz

Edit .bash_profile:
export TB_HOME=/home/tibero/tibero6
export TB_SID=tibero
export PATH=$TB_HOME/bin:$TB_HOME/config:$TB_HOME/client/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH

Generate the database parameter files:
. .bash_profile
gen_tip.sh
cp license.xml $TB_HOME/license
Create the database:
1. Boot the instance in nomount mode
tbboot nomount
2. Log in to the instance from the command line and create the database
tbsql sys/tibero
SQL>create database "tibero"
user sys identified by tibero
maxinstances 8
maxdatafiles 100
character set ZHT16MSWIN950
national character set UTF16
logfile
group 1 'log001.log' size 100M,
group 2 'log002.log' size 100M,
group 3 'log003.log' size 100M
maxloggroups 255
maxlogmembers 8
noarchivelog
datafile 'system001.dtf' size 100M autoextend on next 100M maxsize unlimited
default temporary tablespace TEMP
tempfile 'temp001.dtf' size 100M autoextend on next 100M maxsize unlimited
extent management local autoallocate
undo tablespace UNDO
datafile 'undo001.dtf' size 100M autoextend on next 100M maxsize unlimited
extent management local autoallocate;
3. Create the internal dictionary views
The default passwords for sys and syscat are tibero and syscat;
type them in, in that order, when running:
$TB_HOME/scripts/system.sh
4. The script shuts the database down automatically when it finishes,
so we have to bring the database back up by hand:
tbboot
5. Log in to test
Interestingly, there is no tbdown command out of the box;
only after system.sh has finished running
does it create one as a soft link.

Querying the Aurora Postgres log with SQL statements

Since the automation here is driven by Nagios, the check naturally has to be written as a plugin. The basic mechanism is postgres_fdw: the server log files are exposed as foreign tables. An AWS engineer wrote a dedicated article about this:
Working with RDS and Aurora PostgreSQL logs: Part 2
What I want to share is my own code, which scans the two most recent log files:
SET client_min_messages TO NOTICE;
do $$
declare
  r1           record;
  chk_interval interval := '00:05';
  cnt          int;
  chk          text;
begin
  for r1 in SELECT file_name
            FROM list_postgres_log_files()
            WHERE file_name LIKE 'postgresql.log.%.csv'
            ORDER BY 1 DESC LIMIT 2
  loop
    SELECT create_foreign_table_for_log_file('pglog', 'log_server', r1.file_name) into chk;
    select count(*) into cnt
    from pglog
    where error_severity in ('FATAL','PANIC','WARNING')
    and   current_timestamp - log_time <= chk_interval;
    drop foreign table pglog;
    if cnt > 0 then
      raise notice '%', r1.file_name;
    end if;
  end loop;
end$$;
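To hook a block like this into Nagios, the RAISE NOTICE output has to be mapped to plugin exit codes. Here is a sketch of that glue, assuming the DO block is run via psql elsewhere and its stderr (where notices arrive as lines starting with "NOTICE:") is handed to this parser; the sample file name is made up:

```python
def parse_notices(stderr_text):
    # psql prints "NOTICE:  <message>" lines on stderr.
    return [line.split("NOTICE:", 1)[1].strip()
            for line in stderr_text.splitlines()
            if line.startswith("NOTICE:")]

def nagios_status(files):
    # Nagios convention: exit 2 = CRITICAL, exit 0 = OK.
    if files:
        return 2, "CRITICAL - severe entries in " + ", ".join(files)
    return 0, "OK - no recent FATAL/PANIC/WARNING entries"

sample = "NOTICE:  postgresql.log.2020-05-31-00.csv\nDO\n"
print(nagios_status(parse_notices(sample))[1])
```

Any file name the DO block raises a notice for thus turns into a CRITICAL alert; a silent run maps to OK.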
#2020/07/27
After reading the following two passages from
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.how-it-works.html#aurora-serverless.parameter-groups
The DB instances in an Aurora Serverless cluster only have associated DB cluster parameter groups, not DB parameter groups. Serverless clusters rely on DB cluster parameter groups because DB instances are not permanently associated with Aurora Serverless clusters. Aurora scales the associated DB instance automatically as needed. The scaling operation involves modifying parameter values to be suitable for the larger or smaller capacity.
To customize configuration settings for an Aurora Serverless cluster, you can define your own DB cluster parameter group and modify the parameters it contains. You can modify both cluster-level parameters, and parameters that apply at the instance level in other kinds of Aurora clusters. However, when you modify a DB cluster parameter group that's associated with an Aurora Serverless DB cluster, modifications apply differently than for other DB cluster parameter groups
and this blog post:
https://aws.amazon.com/tw/premiumsupport/knowledge-center/aurora-serverless-logs-enable-view/
By design, Aurora Serverless connects to a proxy fleet of DB instances that scales automatically. Because there isn't a direct DB instance to access and host the log files, you can't view the logs directly from the Amazon Relational Database Service (Amazon RDS) console. However, you can view and download logs that are sent to the CloudWatch console.
my understanding is that the DB instance behind the serverless FQDN can change over time. So apart from the data shipped to CloudWatch, the log produced by the DB instance currently running should still be retrievable through the SQL interface; but once the serverless cluster pauses because of inactivity, the earlier logs are presumably gone. In other words, the cluster has to stay online for this kind of watching to work.

Finding partition information in Postgres 10

Frankly, the partitioning in version 10 is only a hair better than what we had before, where you built child tables entirely by hand through the inheritance mechanism. If you really want to use partitioning in Postgres, I still recommend waiting for 12 or later: before 12, even listing a parent table together with all its children is somewhat difficult. For a feature comparison, see the material from the 2019 PGConf.EU; it lays things out clearly. As for how to query partition metadata, I mainly followed an expert's post, Getting more Information about Partitions, and ran it against the daily_invoice table in my real environment:
WITH RECURSIVE partition_info
  (relid, relname, relsize, relispartition, relkind) AS (
    SELECT oid AS relid,
           relname,
           pg_relation_size(oid) AS relsize,
           relispartition,
           relkind
    FROM pg_catalog.pg_class
    WHERE relname = 'daily_invoice'
      AND relkind = 'p'
  UNION ALL
    SELECT c.oid AS relid,
           c.relname AS relname,
           pg_relation_size(c.oid) AS relsize,
           c.relispartition AS relispartition,
           c.relkind AS relkind
    FROM partition_info AS p,
         pg_catalog.pg_inherits AS i,
         pg_catalog.pg_class AS c
    WHERE p.relid = i.inhparent
      AND c.oid = i.inhrelid
      AND c.relispartition
)
SELECT * FROM partition_info;
I tried rewriting it my own way. The key is really the pg_inherits system catalog, which defines the parent-child relationships:
select inhrelid::regclass, pg_relation_size(inhrelid) AS relsize
from pg_inherits
where inhparent = (select oid from pg_class where relname = 'daily_invoice');

Using array_agg to see how a sliding window works

At first I did not really understand what
ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING
expresses. Taken literally, it means one row before and one row after. Seeing the result with array_agg added makes the frame explicit. A window definition can also be factored out under an alias (a named WINDOW clause), which makes the code much leaner.
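To watch the same frame at work without a Postgres instance, here is a self-contained sketch using Python's bundled SQLite (window functions are supported since SQLite 3.25; group_concat stands in for array_agg, which SQLite lacks):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(id INTEGER, val INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, i * 10) for i in range(1, 6)])

# For each row, aggregate the values of the previous row, the current row,
# and the next row: exactly ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING.
rows = conn.execute("""
    SELECT id,
           group_concat(val) OVER (
               ORDER BY id
               ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING
           ) AS frame
    FROM t
    ORDER BY id
""").fetchall()

for r in rows:
    print(r)
```

The first and last rows aggregate only two values because the frame is clipped at the partition edges, which is what the literal reading (one before, one after) actually means.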

TAC-TAS

For the operating-system preparation, you can refer to this article:
Tibero 6 TAC (Tibero Active Cluster) with TAS (Tibero Active Storage)
For the TAC-TAS configuration itself, this one is enough:
Como criar um ambiente de Alta Disponibilidade usando TAC
It is not written in English, but be patient, follow just the commands, and you can complete the setup. The vendor's own documentation is rather scattered, a piece here and a piece there, so while configuring you can never quite tell in what order the steps should run. This is the environment I built.
Someone already wrote a good article on the installation itself, so I will simply reference it rather than repeat it. Let's test its failover: first, connect to the system and take a look.

Strange... why does a single connect show up as two sessions? Here is my tbdsn.tbr configuration:
tac=
((INSTANCE=(HOST=192.168.56.101)
(PORT=8629)
(DB_NAME=tac))
(INSTANCE=(HOST=192.168.56.102)
(PORT=8629)
(DB_NAME=tac))
(LOAD_BALANCE=Y)
(USE_FAILOVER=Y)
)
Could it be that each instance holds one connection apiece? Next, shut down tac2:
[tibero@oraogg2 config]$ cmrctl stop db --name tac2 --option immediate
MSG SENDING SUCCESS!
Then run a statement in the original session. The connection does recover, but the session id is no longer the same, which looks different from how Oracle RAC's SELECT failover behaves. This still needs more study.