
Problem with Backups

Hi there:

I'm having a problem when I run backups on a Proxmox cluster. About 2/3 of the time, whether I run them from the command line or in the GUI, backups will fail; the errors look like this:
-----
May 13 15:45:50 INFO: Starting Backup of VM 107 (openvz)
May 13 15:45:50 INFO: CTID 107 exist mounted running
May 13 15:45:50 INFO: status = running
May 13 15:45:50 INFO: backup mode: snapshot
May 13 15:45:50 INFO: ionice priority: 7
May 13 15:45:50 INFO: creating lvm snapshot of /dev/mapper/pve-data ('/dev/pve/vzsnap-pr210-32-0-0')
May 13 15:45:50 INFO: Logical volume "vzsnap-pr210-32-0-0" created
May 13 15:45:50 INFO: creating archive '/mnt/pve/backups-green/dump/vzdump-openvz-107-2014_05_13-15_45_50.tar'
May 13 15:49:07 INFO: tar: -: Cannot close: Input/output error
May 13 15:49:07 INFO: Total bytes written: 839157760 (801MiB, 34MiB/s)
May 13 15:49:07 INFO: tar: Exiting with failure status due to previous errors
May 13 15:49:09 ERROR: Backup of VM 107 failed - command '(cd /mnt/vzsnap0/private/107;find . '(' -regex '^\.$' ')' -o '(' -type 's' -prune ')' -o -print0|sed 's/\\/\\\\/g'|tar cpf - --totals --sparse --numeric-owner --no-recursion --one-file-system --null -T -) >/mnt/pve/backups-green/dump/vzdump-openvz-107-2014_05_13-15_45_50.dat' failed: exit code 2
-----------
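For reference, this is roughly how I invoke the backup from the command line (a sketch from memory; the CTID and the storage name are taken from the dump paths in the log above):
--------
# snapshot-mode backup of CT 107 to the NFS-backed storage "backups-green"
vzdump 107 --mode snapshot --storage backups-green
--------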

The remaining 1/3 of the time, backups will run successfully and look like this:
--------
May 13 14:47:35 INFO: Starting Backup of VM 107 (openvz)
May 13 14:47:35 INFO: CTID 107 exist mounted running
May 13 14:47:35 INFO: status = running
May 13 14:47:35 INFO: backup mode: snapshot
May 13 14:47:35 INFO: ionice priority: 7
May 13 14:47:35 INFO: creating lvm snapshot of /dev/mapper/pve-data ('/dev/pve/vzsnap-pr210-32-0-0')
May 13 14:47:35 INFO: Logical volume "vzsnap-pr210-32-0-0" created
May 13 14:47:36 INFO: creating archive '/mnt/pve/backups-green/dump/vzdump-openvz-107-2014_05_13-14_47_35.tar'
May 13 14:49:14 INFO: Total bytes written: 839116800 (801MiB, 36MiB/s)
May 13 14:49:14 INFO: archive file size: 800MB
May 13 14:49:14 INFO: delete old backup '/mnt/pve/backups-green/dump/vzdump-openvz-107-2014_02_17-23_35_48.tar'
May 13 14:49:14 INFO: delete old backup '/mnt/pve/backups-green/dump/vzdump-openvz-107-2014_02_24-23_50_50.tar'
May 13 14:49:14 INFO: delete old backup '/mnt/pve/backups-green/dump/vzdump-openvz-107-2014_04_18-21_06_32.tar.lzo'
May 13 14:49:16 INFO: Finished Backup of VM 107 (00:01:41)
--------

I can't find any pattern to when this happens, and it has made automating backups of the cluster almost impossible. There are no space issues on the NFS mount the backups are written to.
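This is roughly how I'm verifying that (the mount path matches the dump paths in the logs above):
--------
# confirm the NFS-backed storage is still mounted and has free space
mount | grep backups-green
df -h /mnt/pve/backups-green
--------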

Thank you for any help you can provide. Please let me know if I can offer any additional details.



Output of `pveversion -v`:
------------
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-11-pve: 2.6.32-66
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1
--------
