
GlusterFS: storage: invalid format - storage ID '' contains illegal characters

Hi all,

first of all: Proxmox is great! Keep it up!

While executing a backup task I see the following error message.
Code:

Parameter verification failed.  (400)

storage: invalid format - storage ID '' contains illegal characters

I couldn't find anything in the mailing list or the forum, so I'm trying here.
What's strange is that the storage ID in the message is empty.
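To check whether only the GUI-scheduled job is affected, the same backup can be triggered manually from the CLI with the storage ID given explicitly (the VM ID 100 below is just a placeholder, not from my actual setup):

```shell
# Run the backup by hand with an explicit storage ID
# (100 is a placeholder VM ID)
vzdump 100 --storage backup1glusterfs --mode snapshot --compress lzo
```

If this works, the scheduled job is probably passing an empty storage parameter.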

Code:

pveversion -v
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1

I set up a GlusterFS system with two nodes in replicate mode. Details below.

Code:

Server1:/var/log/glusterfs# gluster volume info

Volume Name: datastore
Type: Replicate
Volume ID: 3dcd805e-d289-443d-9cba-5bd03269c0b5
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: backup1:/data/gfs_block
Brick2: backup2:/data/gfs_block
Options Reconfigured:
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on

and

Code:

Server1:/var/log/glusterfs# gluster volume status
Status of volume: datastore
Gluster process                                        Port    Online  Pid
------------------------------------------------------------------------------
Brick backup1:/data/gfs_block                          49153  Y      414959
Brick backup2:/data/gfs_block                          49153  Y      852056
NFS Server on localhost                                2049    Y      415221
Self-heal Daemon on localhost                          N/A    Y      415228
NFS Server on backup2                                  2049    Y      852134
Self-heal Daemon on backup2                            N/A    Y      852141

There are no active volume tasks

GlusterFS itself is up and working.

It was integrated successfully via the GUI. Here is storage.cfg from one of the clients.

Code:

Server4:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,rootdir
        maxfiles 0

glusterfs: backup1glusterfs
        volume datastore
        path /mnt/pve/backup1glusterfs
        content backup
        server 10.3.2.112
        maxfiles 10
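To rule out a typo or stray whitespace in the storage ID, the configured storages can also be listed with pvesm (just a sketch of the check, the exact output columns may differ):

```shell
# List all configured storages with their IDs, type and status;
# 'backup1glusterfs' should appear exactly as spelled in storage.cfg
pvesm status
```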

The mount point also looks good:
Code:

10.3.2.112:datastore on /mnt/pve/backup1glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
Does anybody know where the issue is located? In the mailing list I found a patch, but that was only for restoring VMs.
Thanks for any help!

Br
Mr.X
