
DRBD Performance Problem

Hello,

I have the following setup:

Two Proxmox 2.2 nodes, each with exactly the same HDDs in a software RAID 10. On top of the software RAID I configured DRBD, and on top of that LVM2.
The network configuration looks like this (Server 1 and Server 2 are identical except for their IPs):
eth1 <--> Switch (Internet)
eth0 and eth2 are bonded to bond0, connected directly to the other node with CAT 5e network cables.
eth0 and eth1 are Realtek NICs and eth2 is an Intel NIC. (A sketch of the bonding configuration follows below.)
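For illustration, the bond on Debian/Proxmox is defined in /etc/network/interfaces roughly like this (a minimal sketch; the bond-mode balance-rr is an assumption, the address matches the DRBD IP below):
Code:

# /etc/network/interfaces on "bedrock" (sketch; bond-mode is an assumption)
auto bond0
iface bond0 inet static
        address 10.0.0.2
        netmask 255.255.255.0
        slaves eth0 eth2
        bond-mode balance-rr
        bond-miimon 100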

I created two LVM LVs named "ovz-glowstone" and "ovz-bedrock", which I mount on the nodes "glowstone" and "bedrock" respectively.
For example, on node "bedrock": on boot, activate the LV ovz-bedrock and mount it to /var/container/.
The same happens on node "glowstone" with "ovz-glowstone".
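The boot-time mount is essentially just an LV activation plus a mount, roughly like this (a minimal sketch; the VG name "vg0" is an assumption):
Code:

# on node "bedrock" at boot (VG name "vg0" is an assumption)
lvchange -ay /dev/vg0/ovz-bedrock
mount /dev/vg0/ovz-bedrock /var/container/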

dd with oflag=direct shows about 30 MB/s, which is much lower than I expected from this setup.
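The test was a plain direct-I/O dd, roughly like this (block size and count are assumptions, not the exact command):
Code:

dd if=/dev/zero of=/var/container/testfile bs=1M count=1024 oflag=direct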

I have already tried the following to fix the problem (but replication is still very slow):

echo 127 > /proc/sys/net/ipv4/tcp_reordering
ifconfig bond0 mtu 2000 (an MTU of 4000 does not work because the Realtek NICs do not handle such a high MTU)
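To keep the tcp_reordering setting across reboots, it can also be put into /etc/sysctl.conf:
Code:

# /etc/sysctl.conf
net.ipv4.tcp_reordering = 127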

My DRBD configuration:

global_common.conf:
Code:

global {
    usage-count no;
    # minor-count dialog-refresh disable-ip-verification
}

common {
    protocol C;

    handlers {
        #pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        #pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        #local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        split-brain "/usr/lib/drbd/notify-split-brain.sh root@dedilink.eu";
        out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root@dedilink.eu";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }

    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
    }

    disk {
        # on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
        # no-disk-drain no-md-flushes max-bio-bvecs
    }

    net {
        # sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
        # max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
        # after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
    }

    syncer {
        # rate after al-extents use-rle cpu-mask verify-alg csums-alg
    }
}

r0.res:
Code:

resource r0 {
        protocol C;
        syncer {
                rate 2G;
        }
        startup {
                wfc-timeout 60;
                degr-wfc-timeout 60;
                become-primary-on both;
        }
        net {
                cram-hmac-alg sha1;
                shared-secret "*****";
                allow-two-primaries;
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
        }
        on bedrock {
                device /dev/drbd0;
                disk /dev/md0p1;
                address 10.0.0.2:7788;
                meta-disk internal;
        }
        on glowstone {
                device /dev/drbd0;
                disk /dev/md0p1;
                address 10.0.0.3:7788;
                meta-disk internal;
        }
}

To explain the storage stack again:

/dev/sd[abcd] -MDRAID-> /dev/md0
/dev/md0p1 (partition type "Linux LVM") -DRBD-> /dev/drbd0
LVM uses /dev/drbd0 as its physical volume.
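One common way to make LVM use /dev/drbd0 (and not grab the backing /dev/md0p1 directly) is a filter in /etc/lvm/lvm.conf; a minimal sketch (the exact regex is an assumption):
Code:

# /etc/lvm/lvm.conf (sketch)
filter = [ "a|^/dev/drbd0$|", "r|.*|" ]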


And sorry for my bad English.
