Appendix 1: Install DRBD (Distributed Replicated Block Device) and configure a distributed storage system. This example shows the configuration on the following environment:
DRBD packages:
drbd83-utils.x86_64
kmod-drbd83.x86_64
(1) cloudstorage01.server.01 [192.168.0.151]
(2) cloudstorage02.server.02 [192.168.0.152]
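DRBD matches the "on <hostname>" sections of the resource file created later in [2] against the local hostname, so each server must be able to resolve the other's name. As a minimal sketch (assuming no DNS is available for these names), /etc/hosts on both servers could contain:
[root@cloudstorage01.server.01 ~]# cat /etc/hosts
127.0.0.1        localhost localhost.localdomain
192.168.0.151    cloudstorage01.server.01
192.168.0.152    cloudstorage02.server.02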
[1] Install DRBD on CentOS 6. Perform these steps on both hosts.
CentOS 6 does not ship a DRBD repository, so add the ELRepo repository first.
(1) Add the DRBD repo
[root@cloudstorage01.server.01 ~]# rpm -Uvh http://elrepo.org/elrepo-release-6-4.el6.elrepo.noarch.rpm
(2) Edit the DRBD repo
[root@cloudstorage01.server.01 ~]# vi /etc/yum.repos.d/elrepo.repo
[elrepo]
name=ELRepo.org Community Enterprise Linux Repository - el6
baseurl=http://elrepo.org/linux/elrepo/el6/$basearch/
mirrorlist=http://elrepo.org/mirrors-elrepo.el6
enabled=1 # change to enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-elrepo.org
protect=0
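With enabled=0 the repository stays inactive unless you pass --enablerepo=elrepo, which keeps ELRepo packages out of routine updates. To confirm the repository is registered but disabled, one quick check (assuming the stock yum on CentOS 6) is:
[root@cloudstorage01.server.01 ~]# yum repolist all | grep elrepo
The elrepo repository should be listed with a status of "disabled".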
(3) Confirm you can search DRBD packages via yum
[root@cloudstorage01.server.01 ~]# yum --enablerepo=elrepo search drbd
Loaded plugins: fastestmirror, refresh-packagekit
Loading mirror speeds from cached hostfile
* base: www.ftp.ne.jp
* elrepo: elrepo.org
* extras: www.ftp.ne.jp
* updates: www.ftp.ne.jp
elrepo | 1.9 kB 00:00
elrepo/primary_db | 409 kB 00:01
====================N/S Matched: drbd ===============================
drbd83-utils.i686 : Management utilities for DRBD %{version}
drbd84-utils.i686 : Management utilities for DRBD
kmod-drbd83.i686 : drbd83 kernel module(s)
kmod-drbd84.i686 : drbd84 kernel module(s)
(4) Install DRBD
[root@cloudstorage01.server.01 ~]# yum --enablerepo=elrepo install -y drbd83-utils.i686 kmod-drbd83.i686
<snip>
Installed:
drbd84-utils.i686 0:8.4.1-1.el6.elrepo kmod-drbd84.i686 0:8.4.1-1.el6.elrepo
Complete!
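To double-check which DRBD packages actually ended up on the system (useful here because both drbd83 and drbd84 packages exist in ELRepo), query the RPM database:
[root@cloudstorage01.server.01 ~]# rpm -qa | grep -i drbd
The utilities package and the matching kmod package should both appear. Repeat the same installation on the second host.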
[2] Configure DRBD. Perform these steps on both hosts.
[root@cloudstorage01.server.01 ~]# vi /etc/drbd.d/global_common.conf
disk {
        # line 27: add ( detach the disk on I/O error )
        on-io-error detach;
}
syncer {
        # line 38: add ( bandwidth for synchronization )
        rate 300M;
}
[root@cloudstorage01.server.01 ~]# vi /etc/drbd.d/r0.res
# create new
resource r0 {
        # avoid split-brain in a Pacemaker cluster
        handlers {
                fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
                split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        }
        startup {
        }
        disk {
                fencing resource-only;
        }
        net {
                allow-two-primaries;
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
        }
        syncer {
                rate 10M;
                al-extents 257;
        }
        device /dev/drbd0;
        disk /dev/mapper/vg_cloudstorage0-LogVol07;
        meta-disk internal;
        on cloudstorage01.server.01 {
                address 192.168.0.151:7788;
        }
        on cloudstorage02.server.02 {
                address 192.168.0.152:7788;
        }
}
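Both hosts need identical copies of global_common.conf and r0.res. One way to verify the syntax and keep the files in sync (a sketch, assuming root SSH access from the first host to the second) is to dump the parsed configuration and then copy the files over:
[root@cloudstorage01.server.01 ~]# drbdadm dump r0   # parse and print the resource; errors here indicate config problems
[root@cloudstorage01.server.01 ~]# scp /etc/drbd.d/global_common.conf /etc/drbd.d/r0.res cloudstorage02.server.02:/etc/drbd.d/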
[root@cloudstorage01.server.01 ~]# modprobe drbd # load DRBD module
[root@cloudstorage01.server.01 ~]# lsmod | grep drbd
drbd 286064 0
[root@cloudstorage01.server.01 ~]# drbdadm create-md r0 # create DRBD resource
--== Thank you for participating in the global usage survey ==--
The server's response is:
you are the 198th user to install this version
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
Success
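The metadata has to be created on both nodes. Assuming the second host has the same backing device path, run the same command there before starting DRBD:
[root@cloudstorage02.server.02 ~]# drbdadm create-md r0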
[root@cloudstorage01.server.01 ~]# /etc/rc.d/init.d/drbd start
Starting DRBD resources: [ d(r0) block drbd0: Starting worker thread (from cqueue [5296])
block drbd0: disk( Diskless -> Attaching )
block drbd0: disk( Attaching -> Inconsistent )
block drbd0: attached to UUIDs 0000000000000004:0000000000000000:0000000000000000:0000000000000000
s(r0) n(r0) block drbd0: conn( StandAlone -> Unconnected )
block drbd0: Starting receiver thread (from drbd0_worker [5313])
]block drbd0: receiver (re)started
block drbd0: conn( Unconnected -> WFConnection )
..........
***************************************************************
DRBD's startup script waits for the peer node(s) to appear.
- In case this node was already a degraded cluster before the
reboot the timeout is 0 seconds. [degr-wfc-timeout]
- If the peer was available before the reboot the timeout will
expire after 0 seconds. [wfc-timeout]
(These values are for resource 'r0'; 0 sec -> wait forever)
To abort waiting enter 'yes' [ 18]:yes
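The WFConnection state above means this node is waiting for its peer. Starting DRBD on the second host lets the two nodes connect, after which both show cs:Connected:
[root@cloudstorage02.server.02 ~]# /etc/rc.d/init.d/drbd start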
Also enable DRBD to start automatically at boot:
[root@cloudstorage01.server.01 ~]# chkconfig drbd on
[root@cloudstorage01.server.01 ~]# echo "/sbin/modprobe drbd" >> /etc/rc.local
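After enabling the service, you can check the overall DRBD state at any time with the init script (or by reading /proc/drbd as in the next step):
[root@cloudstorage01.server.01 ~]# service drbd status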
[3] Make this host the primary; the initial synchronization then starts.
[root@cloudstorage01.server.01 ~]# cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
GIT-hash: 0de839cee13a4160eed6037c4bddd066645e23c5 build by root@cloudstorage01.server.01, 2011-07-15 15:26:20
0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:20970844
[root@cloudstorage01.server.01~]# drbdsetup /dev/drbd0 primary -o # set primary
[root@cloudstorage01.server.01~]# cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
GIT-hash: 0de839cee13a4160eed6037c4bddd066645e23c5 build by root@cloudstorage01.server.01, 2011-07-15 15:26:20
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:940444 nr:0 dw:0 dr:948480 al:0 bm:56 lo:136 pe:3130 ua:1979 ap:0 ep:1 wo:b oos:20042916
[>....................] sync'ed: 4.5% (19572/20476)M
finish: 0:03:57 speed: 84,356 (84,356) K/sec
# after a few minutes, synchronization completes and the status turns to the following.
[root@cloudstorage01.server.01~]# cat /proc/drbd
version: 8.3.11 (api:88/proto:86-96)
GIT-hash: 0de839cee13a4160eed6037c4bddd066645e23c5 build by root@cloudstorage01.server.01, 2011-07-15 15:26:20
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:20970844 nr:0 dw:0 dr:20971508 al:0 bm:1277 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
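Instead of parsing /proc/drbd, drbdadm can report the role, disk state, and connection state directly, which is handy in scripts. The comments note the values that correspond to the state shown above:
[root@cloudstorage01.server.01 ~]# drbdadm role r0     # Primary/Secondary
[root@cloudstorage01.server.01 ~]# drbdadm dstate r0   # UpToDate/UpToDate
[root@cloudstorage01.server.01 ~]# drbdadm cstate r0   # Connected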
[4] Configuration is complete. You can now create a filesystem on the DRBD
device and mount it on the primary host to use it.
[root@cloudstorage01.server.01 ~]# mkfs -t ext4 /dev/drbd0
[root@cloudstorage01.server.01 ~]# mkdir /mnt/drbd0
[root@cloudstorage01.server.01 ~]# mount /dev/drbd0 /mnt/drbd0
[root@cloudstorage01.server.01~]# touch /mnt/drbd0/test.txt
# create a test file
[root@cloudstorage01.server.01~]# ll /mnt/drbd0
total 16
drwx------ 2 root root 16384 Jul 15 15:53 lost+found
-rw-r--r-- 1 root root 0 Jul 15 15:54 test.txt
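A quick way to confirm the mount and see the usable capacity of the replicated volume:
[root@cloudstorage01.server.01 ~]# df -h /mnt/drbd0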
[5] If you'd like to mount the DRBD device on the secondary host, first
unmount the DRBD device on the primary host and demote the primary
host to secondary. Then promote the secondary host to primary and
mount the DRBD device.
########### on Primary Host ###########
[root@cloudstorage01.server.01 ~]# umount /mnt/drbd0
[root@cloudstorage01.server.01 ~]# drbdadm secondary r0
# set secondary
########### on Secondary Host ###########
[root@cloudstorage02.server.02 ~]# drbdadm primary r0
# set primary
[root@cloudstorage02.server.02 ~]# mount /dev/drbd0 /mnt/drbd0
[root@cloudstorage02.server.02 ~]# ll /mnt/drbd0
total 16
drwx------ 2 root root 16384 Jul 15 15:53 lost+found
-rw-r--r-- 1 root root 0 Jul 15 15:54 test.txt
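Switching back to the original layout follows the same pattern in reverse (a sketch, assuming no cluster manager is controlling the roles yet):
########### on cloudstorage02.server.02 ###########
[root@cloudstorage02.server.02 ~]# umount /mnt/drbd0
[root@cloudstorage02.server.02 ~]# drbdadm secondary r0
########### on cloudstorage01.server.01 ###########
[root@cloudstorage01.server.01 ~]# drbdadm primary r0
[root@cloudstorage01.server.01 ~]# mount /dev/drbd0 /mnt/drbd0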
SOLVING ERRORS
[6] If an error message like the following appears:
Command 'drbdmeta /dev/drbd0 v08 /dev/sdaX 0 create-md' terminated with exit
code 40
1) umount /dev/sdaX
2) dd if=/dev/zero of=/dev/sdaX bs=1M count=128
** Wait until the dd finishes and be patient. There is no need to create an ext3 or ext4 filesystem; just leave the device as dd left it.
3) Run drbdadm create-md r0 again
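In this appendix's setup the backing device is the logical volume from r0.res rather than a plain /dev/sdaX partition, so (assuming that LV is the device that failed) the same recovery would look like:
[root@cloudstorage01.server.01 ~]# umount /dev/mapper/vg_cloudstorage0-LogVol07   # only needed if it happens to be mounted
[root@cloudstorage01.server.01 ~]# dd if=/dev/zero of=/dev/mapper/vg_cloudstorage0-LogVol07 bs=1M count=128
[root@cloudstorage01.server.01 ~]# drbdadm create-md r0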
[7] If DRBD does not sync and the message "Split-Brain
detected but unresolved, dropping connection!" is recorded in
the logs, the cluster is in split-brain.
Recover as follows:
########### on Secondary Host ###########
[root@cloudstorage02.server.02 ~]# drbdadm -- --discard-my-data connect r0
########### on Primary Host ###########
[root@cloudstorage01.server.01 ~]# drbdadm connect r0
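After reconnecting, verify on either host that the resource has left StandAlone/WFConnection and is replicating again:
[root@cloudstorage01.server.01 ~]# drbdadm cstate r0   # should report Connected, or SyncSource/SyncTarget while catching up
[root@cloudstorage01.server.01 ~]# cat /proc/drbd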