
Unable to activate storage (GlusterFS) in certain conditions

Hello.

I have four servers.
I built a two-node Proxmox cluster and a two-server GlusterFS cluster (NAS 1, NAS 2).
There are 3 different GlusterFS volumes (see the storage.cfg sketch below):
- VOL_1 and VOL_2, which are replicated on both NAS servers. They host the VM hard drives.
- VOL_3, which is present only on NAS 2 (it is not an essential volume, but it is sometimes used for various purposes).
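
For reference, the storage definitions in /etc/pve/storage.cfg look roughly like this. The server addresses and the volume names for VOL_2/VOL_3 are placeholders; only gv0 (backing VOL_1) appears in the kvm command line quoted further down:

glusterfs: VOL_1
        server 10.0.0.1
        volume gv0
        content images

glusterfs: VOL_2
        server 10.0.0.1
        volume gv1
        content images

glusterfs: VOL_3
        server 10.0.0.2
        volume gv2
        content images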

Everything works rather well: migration, fencing, GlusterFS failover, UPS shutdown (everything needed to manage the cluster properly).
But I was rather disappointed by one of my tests.

I cut off NAS 2 (the one hosting VOL_3) while the servers were running; the VMs just kept running as expected. Then I put NAS 2 back online, and GlusterFS synchronized the files as needed.
Then I stopped all the servers and started only my two Proxmox servers with a single NAS (NAS 1, the one hosting VOL_1/VOL_2).

No VM could start, each failing with the following error (whether started automatically, with the qm start command, or even directly with the kvm command):
got timeout
I also noticed this error in /var/log/syslog: pvedaemon[3603]: WARNING: unable to activate storage 'VOL_3' - directory '/mnt/pve/VOL_3' does not exist

I am just wondering why VMs whose hard drives are hosted only on VOL_1/VOL_2 cannot start.
As soon as I started NAS 2, everything went back to normal.
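
I suppose I could disable the VOL_3 storage whenever NAS 2 is down, something like this (an untested sketch relying on the standard disable option of storage.cfg):

# tell pvedaemon to skip VOL_3 during storage activation
pvesm set VOL_3 --disable 1
# ... start the VMs, then re-enable once NAS 2 is back
pvesm set VOL_3 --disable 0

But I would rather understand why a missing, unrelated storage blocks VM startup at all.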

Furthermore, I could browse files from the command line through the /mnt/pve/VOL_1 and /mnt/pve/VOL_2 directories. But listing the root directory /mnt/pve took a very long time and gave the following result:
drwxr-xr-x 5 0 0 4.1K Mar 29 10:42 VOL_1
drwxr-xr-x 5 0 0   51 Mar 28 09:12 VOL_2
d????????? ? ? ?    ?            ? VOL_3

Usually, when you try to mount an offline share, the mount is supposed to fail rather than leave an unreachable inode behind, right?
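
If the dead mount is still registered in /proc/mounts, a stale FUSE endpoint would explain that unreadable inode. Something like this should show it and, if needed, clear it (a sketch; I have not tried the lazy unmount yet):

# check whether the stale GlusterFS FUSE mount is still listed
grep VOL_3 /proc/mounts
# lazily detach it so /mnt/pve becomes browsable again
umount -l /mnt/pve/VOL_3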

These are the versions of the different components:
GlusterFS 3.4.2-1 (on the server side).

proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

This is an example of one of my VM configurations:
bootdisk: virtio0
cores: 1
cpu: host
ide2: none,media=cdrom
memory: 1024
name: srv-ad-bis
net0: e1000=XX:XX:XX:XX:XX:XX,bridge=vmbr0,tag=60
ostype: win7
sockets: 1
virtio0: VOL_1:156/vm-156-disk-1.raw,format=raw,size=50G

Looking at the kvm command line of the running process, I can see that there is only one drive specified, pointing to the expected volume: -drive file=gluster://xxx.xxx.xxx.xxx/gv0/images/156/vm-156-disk-1.raw
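
For completeness, the full command line can also be printed without starting the VM:

# dump the kvm command Proxmox would run for VMID 156
qm showcmd 156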

Any help would be much appreciated.
If I forgot any information, please ask.
Thank you.
