Channel: Proxmox Support Forum

Backup hangs part way through

I have an Ubuntu-based VM which hangs every time I try to back it up. The VM's disk is a 10GB raw image, and it has 1 CPU, 2GB RAM and a virtio network interface. I'm backing up to an NFS share on a QNAP NAS on the same subnet. I'm running Proxmox 3.4-1/3f2d890e.

Here is all of the backup output from the last attempt when I ran it from the Proxmox host command line:

INFO: starting new backup job: vzdump 100 --dumpdir /mnt/nas_phill/Backup/VirtualMachines --mailto myname@example.com --mode stop --compress gzip --maxfiles 3
INFO: Starting Backup of VM 100 (qemu)
INFO: status = stopped
INFO: update VM 100: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: creating archive '/mnt/nas_phill/Backup/VirtualMachines/vzdump-qemu-100-2015_06_08-23_13_58.vma.gz'
INFO: starting kvm to execute backup task
INFO: started backup task '31b83deb-2a14-4da4-8175-730e3c2f14f8'
INFO: status: 1% (110231552/10737418240), sparse 0% (55889920), duration 3, 36/18 MB/s
INFO: status: 2% (224264192/10737418240), sparse 0% (65855488), duration 9, 19/17 MB/s
INFO: status: 3% (334495744/10737418240), sparse 0% (66498560), duration 15, 18/18 MB/s
INFO: status: 4% (437125120/10737418240), sparse 0% (67723264), duration 21, 17/16 MB/s
INFO: status: 5% (551157760/10737418240), sparse 0% (85438464), duration 27, 19/16 MB/s
INFO: status: 6% (649986048/10737418240), sparse 0% (91672576), duration 33, 16/15 MB/s
INFO: status: 7% (752615424/10737418240), sparse 0% (93126656), duration 39, 17/16 MB/s
INFO: status: 8% (859045888/10737418240), sparse 0% (93138944), duration 46, 15/15 MB/s
INFO: status: 9% (973078528/10737418240), sparse 0% (93323264), duration 54, 14/14 MB/s
INFO: status: 10% (1083310080/10737418240), sparse 0% (93507584), duration 62, 13/13 MB/s
INFO: status: 11% (1185939456/10737418240), sparse 0% (93507584), duration 69, 14/14 MB/s
INFO: status: 12% (1296171008/10737418240), sparse 0% (94552064), duration 76, 15/15 MB/s
INFO: status: 13% (1414004736/10737418240), sparse 0% (97779712), duration 82, 19/19 MB/s
INFO: status: 14% (1538392064/10737418240), sparse 0% (105041920), duration 85, 41/39 MB/s
INFO: status: 15% (1615462400/10737418240), sparse 1% (107458560), duration 88, 25/24 MB/s
INFO: status: 16% (1718091776/10737418240), sparse 1% (107520000), duration 95, 14/14 MB/s
INFO: status: 17% (1832124416/10737418240), sparse 1% (107548672), duration 102, 16/16 MB/s
INFO: status: 18% (1944518656/10737418240), sparse 1% (122744832), duration 107, 22/19 MB/s
INFO: status: 19% (2041184256/10737418240), sparse 1% (122744832), duration 111, 24/24 MB/s
INFO: status: 20% (2166358016/10737418240), sparse 1% (130015232), duration 118, 17/16 MB/s
INFO: status: 21% (2261647360/10737418240), sparse 1% (146534400), duration 122, 23/19 MB/s
INFO: status: 22% (2375680000/10737418240), sparse 1% (146882560), duration 128, 19/18 MB/s
INFO: status: 23% (2478309376/10737418240), sparse 1% (146952192), duration 133, 20/20 MB/s
INFO: status: 24% (2580938752/10737418240), sparse 1% (146976768), duration 139, 17/17 MB/s
INFO: status: 25% (2691170304/10737418240), sparse 1% (148230144), duration 146, 15/15 MB/s
INFO: status: 26% (2805202944/10737418240), sparse 1% (148828160), duration 151, 22/22 MB/s
INFO: status: 27% (2926837760/10737418240), sparse 1% (148959232), duration 155, 30/30 MB/s
INFO: status: 28% (3025666048/10737418240), sparse 1% (158138368), duration 159, 24/22 MB/s
INFO: status: 29% (3135897600/10737418240), sparse 1% (158343168), duration 164, 22/22 MB/s
INFO: status: 30% (3223322624/10737418240), sparse 1% (158343168), duration 168, 21/21 MB/s
INFO: status: 31% (3341156352/10737418240), sparse 1% (158384128), duration 175, 16/16 MB/s

There don't seem to be any errors anywhere, or any hints about why it hangs. Another VM, which uses a qcow2 disk image, backs up just fine. Any suggestions on how I can fix this backup issue?
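
One diagnostic that may help narrow this down is measuring raw sequential write throughput to the NFS share from the Proxmox host, independent of vzdump (a minimal sketch; the test filename is made up):
Code:

# Write 2 GiB to the NFS share, bypassing the page cache, to see
# whether the share itself stalls around the same point as the backup.
dd if=/dev/zero of=/mnt/nas_phill/Backup/test.img bs=1M count=2048 oflag=direct
rm /mnt/nas_phill/Backup/test.img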

BOUNTY: Custom kernel config for PVE KVMs

I've customized my linux-4.0.4 kernel for use with a KVM guest with virtio devices and a VMware display. The .config is here: http://pastebin.com/JNdZ8VnN

I'm posting it here in the hope that someone with more experience configuring kernels could edit the .config as I have it so that it's better suited to running in the KVM guest.

I'm just not familiar enough with all the options and the virtual hardware of our KVM VMs.
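
For anyone comparing configs: the fragment below lists the core virtio options a KVM guest typically wants built in, plus the VMware display driver (a sketch of the relevant options only, not a review of the pastebin config):
Code:

CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_SCSI_VIRTIO=y
CONFIG_DRM_VMWGFX=y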

If it works, I can PayPal or Bitcoin you a free coffee.

-Thanks,
-J

Shareable disk

Hello All!

I want to create a test cluster with two VMs, and for that I need shared storage.
Is that possible with Proxmox?
I currently use shared LVM storage for VM disks in Proxmox.
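
One common approach on PVE 3.x, sketched below with hypothetical VG/LV names: create a single logical volume and attach the same block device to both VMs by path. Note that the guests then need a cluster-aware filesystem (e.g. GFS2 or OCFS2) on that disk, or concurrent writes will corrupt it:
Code:

lvcreate -L 50G -n sharedlv pve
qm set 100 -virtio1 /dev/pve/sharedlv
qm set 101 -virtio1 /dev/pve/sharedlv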

Migrate all while logged in as administrator

Hi,

is there a way to migrate all machines from a node (Migrate All VMs) without logging in as root?

I've got an administrator group containing all our sysadmins, and I want to avoid giving out the root password for such operations.
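
For reference, permissions are granted with pveum, so something along these lines may be enough (a sketch; the group name is yours, and whether "Migrate All VMs" in the GUI honors it in your version is exactly the open question):
Code:

# grant the sysadmins group VM administration (includes VM.Migrate) cluster-wide
pveum aclmod / -group sysadmins -role PVEVMAdmin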

Keyboard stops working on Linux VMs

Hello!

I have some VMs using different linux distributions.

For some unknown reason the keyboard stops working, i.e., Backspace, the arrow keys, Ctrl and other keys have no effect. I can only type letters and numbers.

This only happens on Linux VMs; I also have several Windows VMs and the keyboard is fine there. If I switch to the console window of a Windows VM, the keyboard works!

Any ideas for resolving this issue?

Thanks! :)

VF

Monitoring VMs with Zabbix

I found a great way of monitoring virtual machines using Zabbix: https://github.com/tcpcloud/Zabbix-T...llectd_libvirt

For Proxmox/Debian it obviously needed some adjustments, such as removing the sudo command from the script and creating the missing directories. However, I still run into one error that I cannot get past. When I run a test query for parameter discovery I get this error:
Code:

root@proxmoxnode3:/etc/zabbix/scripts/collectd-libvirt# ./collect-libvirt-handler.pl /var/run/collectd-unixsock LISTVAL LIBVIRT-CPU
socket error: No error
ERROR: Command failed!
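
A quick way to check whether collectd's unixsock plugin is answering at all, independent of the Perl handler (a sketch, assuming socat is installed and the socket path matches the unixsock plugin configuration):
Code:

# The plain-text LISTVAL command should return the known value names.
echo "LISTVAL" | socat - UNIX-CONNECT:/var/run/collectd-unixsock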

Has anyone come across this and found the solution?

Issue with routing table and CT

Hi,

I have a really weird issue with one of my servers.

Currently I rent two physical servers at OVH. Both are using Proxmox (standard installation).

However, one CT runs perfectly on the first server (S1) but not on the second physical server (S2): the routing table becomes incorrect when I move the CT from S1 to S2.

Here is the route -n output on S1:
Quote:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
XXX.XX.XXX.254  0.0.0.0         255.255.255.255 UH    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth0
0.0.0.0         XXX.XX.XXX.254  0.0.0.0         UG    0      0        0 eth0
which is correct

On S2:
Quote:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
XXX.XX.XXX.254  0.0.0.0         255.255.255.255 UH    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth0
0.0.0.0         0.0.0.0         0.0.0.0         U     0      0        0 eth0
The last line (the default route, highlighted in red in the original post) is not correct.
If I delete it and add the right values manually with "route del -net x.x.x.x netmask y.y.y.y" and "route add", it works, but when the server restarts the previous values are back!
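
For reference, the manual fix looks roughly like this (a sketch using the placeholder gateway from the tables above); to survive a restart, the default route has to come from the CT's network configuration rather than be added by hand:
Code:

# one-off fix inside the CT
route del default
route add default gw XXX.XX.XXX.254 eth0

# persistent variant for a Debian-style guest, in /etc/network/interfaces:
#   post-up route add default gw XXX.XX.XXX.254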

From my point of view it seems that a value coming from S2 is overwriting the CT's values... but I don't know which one.

Any help would be appreciated.

Kind regards,


Michaël

SNMP of KVMs

Hi, a quick question: do KVM VMs have an SNMP API for monitoring, or does Proxmox have an API to query the VMs via SNMP? We would like to monitor bandwidth, for example...
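
As far as I know Proxmox does not expose guests via SNMP itself, but its REST API carries per-VM network counters that monitoring systems can poll (a sketch; node name and VM ID are placeholders):
Code:

# returns the VM's current status, including netin/netout byte counters
pvesh get /nodes/proxmox1/qemu/100/status/current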

best regards

Help calculating vzdump.conf size: value

Good morning!

I need some help calculating the max "size:" value I can assign in vzdump.conf to get my backups working again.

I know this could be resolved with an upgrade but things have been running so well that I'm too chicken to risk it.

I'm having a problem with snapshot backups of a KVM "stalling" here:
INFO: adding '/mnt/vzsnap0/images/103/vm-103-disk-1.raw' to archive ('vm-disk-ide0.raw')

I'm pretty sure it's because the logical volume for the snapshot is running out of space. Traditionally, when this happens I bump up the "size:" value in vzdump.conf and I'm good to go again. Currently the size: value in vzdump.conf is set to 7168 (bumped up from 6144) but it's still failing.

Combing through old threads I see that lvdisplay and vgdisplay output, as well as pvdisplay, are important pieces. I have included those below, all taken while the backup was still running. Can anyone help me determine the max size: value I can assign, and whether there are any negative impacts to increasing it?

Thanks!!!!

The outputs:

vgdisplay
Code:

root@proxvs1:~# vgdisplay
  /dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 1611283759104: Input/output error
  /dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 1611283816448: Input/output error
  /dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 0: Input/output error
  /dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 4096: Input/output error
  --- Volume group ---
  VG Name              pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  19706
  VG Access            read/write
  VG Status            resizable
  MAX LV                0
  Cur LV                4
  Open LV              4
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size              1.64 TiB
  PE Size              4.00 MiB
  Total PE              428703
  Alloc PE / Size      426400 / 1.63 TiB
  Free  PE / Size      2303 / 9.00 GiB
  VG UUID              lENCBy-879J-R6Np-s22Z-TI8O-HNp0-imdBbx


lvdisplay
Code:

root@proxvs1:~# lvdisplay
  /dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 1611283759104: Input/output error
  /dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 1611283816448: Input/output error
  /dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 0: Input/output error
  /dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 4096: Input/output error
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                aJ0zA3-DKMb-zTWD-Oc3h-dmfW-fDxG-O8ei6S
  LV Write Access        read/write
  LV Creation host, time proxmox, 2012-08-03 10:41:04 -0400
  LV Status              available
  # open                1
  LV Size                62.00 GiB
  Current LE            15872
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:1


  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                QuO2hP-m7HU-msVz-4L3j-R16l-QXjE-jhtCo4
  LV Write Access        read/write
  LV Creation host, time proxmox, 2012-08-03 10:41:04 -0400
  LV Status              available
  # open                1
  LV Size                96.00 GiB
  Current LE            24576
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:0


  --- Logical volume ---
  LV Path                /dev/pve/data
  LV Name                data
  VG Name                pve
  LV UUID                L0BisB-W2tx-YzXe-0ztb-0prU-6jFA-7oqsnq
  LV Write Access        read/write
  LV Creation host, time proxmox, 2012-08-03 10:41:04 -0400
  LV snapshot status    source of
                        vzsnap-proxvs1-0 [INACTIVE]
  LV Status              available
  # open                1
  LV Size                1.47 TiB
  Current LE            384160
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:2


  --- Logical volume ---
  LV Path                /dev/pve/vzsnap-proxvs1-0
  LV Name                vzsnap-proxvs1-0
  VG Name                pve
  LV UUID                uSXgKj-aqmY-X7ON-MyEM-gaRU-MN07-nLjywI
  LV Write Access        read/write
  LV Creation host, time proxvs1, 2015-06-08 22:30:02 -0400
  LV snapshot status    INACTIVE destination for data
  LV Status              available
  # open                1
  LV Size                1.47 TiB
  Current LE            384160
  COW-table size        7.00 GiB
  COW-table LE          1792
  Snapshot chunk size    4.00 KiB
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:3


pvdisplay
Code:

root@proxvs1:~# pvdisplay
  /dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 1611283759104: Input/output error
  /dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 1611283816448: Input/output error
  /dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 0: Input/output error
  /dev/pve/vzsnap-proxvs1-0: read failed after 0 of 4096 at 4096: Input/output error
  --- Physical volume ---
  PV Name              /dev/sda2
  VG Name              pve
  PV Size              1.64 TiB / not usable 4.00 MiB
  Allocatable          yes
  PE Size              4.00 MiB
  Total PE              428703
  Free PE              2303
  Allocated PE          426400
  PV UUID              yS3gGf-67jl-Gmvy-fSVS-8Df9-mDnf-M4glE
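
For what it's worth, the upper bound can be read straight from the vgdisplay output above: size: is given in MB, and the snapshot's CoW volume cannot be bigger than the free space in the VG. A worked example with these numbers:
Code:

# Free PE in VG "pve":   2303 extents
# PE size:               4 MiB
# Max snapshot space:    2303 * 4 MiB = 9212 MiB (~9.0 GiB)
# So size: can be at most roughly 9212; a value like 8192 still
# leaves some free extents, while anything above 9212 cannot be allocated.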



Thanks again for any help you can provide.

Fencing for ONLINE (French hosting provider)

Hi all,

I'm writing a command to fence an ONLINE server using the ONLINE API (based on FENCE_OVH): http://www.myweblan.net/index.php/sy...erveurs-online

If I understand correctly, fenced calls a reboot action by default, so I have added action="off" in cluster.conf,

but is it possible to:

1. Insert a delay between stop and restart via configuration (a parameter?), i.e. the time between action="off" and action="on"? (see the sketch after this list)
2. Insert a delay before HA VMs are restarted on the other nodes?
3. Execute a script when a VM (KVM) is moved from one node to another?
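
On point 1: many stock fence agents accept a delay attribute (seconds to wait before fencing) and some a power_wait (seconds between power actions); whether a custom agent honors them depends entirely on its implementation. A sketch of the kind of cluster.conf fragment involved (device and agent names are hypothetical):
Code:

<clusternode name="node1" nodeid="1" votes="1">
  <fence>
    <method name="1">
      <device name="online1" action="off" delay="30"/>
    </method>
  </fence>
</clusternode>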

Thanks,
Yannick

Question regarding resources (rgmanager / cluster.conf)

Hi all,

Is it possible to associate a script with a VM using resources in cluster.conf?
I read some docs about rgmanager and resources, but it seems that pvevm is a "specific" resource embedded in Proxmox.

So is it possible to modify cluster.conf to add a script resource linked to one or more HA VMs (KVM)?
reference: https://access.redhat.com/documentat...pt-resource-CA

My goal is to execute a script that runs some commands when a VM migrates (on node failure or manual migration).
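
For reference, a plain rgmanager script resource is declared like this (a sketch only; whether it can be combined with the pvevm resource is exactly the open question, since pvevm is managed by Proxmox's own tooling):
Code:

<rm>
  <resources>
    <script file="/usr/local/bin/vm-migration-hook.sh" name="vm-hook"/>
  </resources>
</rm>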

Thanks for help,
Yannick

Gaps in Proxmox GUI graph

Has anybody experienced frequent gaps in the GUI graphs, as in the following screenshot?
[screenshot: gui-gap.png]

It started after the last batch of updates on all Proxmox nodes. The gaps are consistent across all nodes, though not in the same time periods; they seem to occur at random times, but on all nodes. Any idea how I can pinpoint the cause?

It also seems to interact with storage somehow. For example, when I try to create a new VM, the storage drop-down menu in the Hard Disk step appears disabled for some time, then gets enabled on its own. Syslog says nothing.

Cheap Cloud iSCSI/NFS Service Provider?

I'd like to add a 2TB raw disk image via NFS or iSCSI for approximately $40/month US. The cheapest I can find by googling is $80/month at SoftLayer: http://www.softlayer.com/file-storage

I'm here to see if any PVE users know of a better option.

Thanks,
-J

Move Guest to another account

Hello,
e.g. Tom creates a guest with his account on one of the nodes; is it possible to move the guest to another account, e.g. Jerry's?
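
As far as I know there is no explicit "owner" field; access is governed by ACLs on the VM path, so something along these lines may amount to moving the guest (a sketch; user names and realm are hypothetical):
Code:

# give Jerry full VM rights on VM 100, then remove Tom's
pveum aclmod /vms/100 -user jerry@pve -role PVEVMAdmin
pveum acldel /vms/100 -user tom@pve -role PVEVMAdmin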

thanks

hebbet

Mail Gateway Web Interface SSL Certificate

Hello,

How can I replace the self-signed certificate?
Is it as easy as overwriting the file /etc/apache2/apache.pem?
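
If the Mail Gateway treats that file like a stock Apache combined PEM, the approach would be roughly the following (a sketch, not verified against the Mail Gateway specifically; back up the original first):
Code:

cp /etc/apache2/apache.pem /etc/apache2/apache.pem.bak
cat my-server.key my-server.crt > /etc/apache2/apache.pem
/etc/init.d/apache2 restart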

Best regards,

Rodolphe

Question about the latest ZFS release and your new ISO

Hi Martin,


I would like to ask you something.


You published a new ISO with the latest ZFS release.
But if we have already installed using the older one, can we upgrade ZFS from the repository?
I tried apt-get upgrade, but the latest ZFS does not seem to be upgradable from the repository.
I assume you will update it later. Could you confirm?
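
(For what it's worth, Proxmox generally recommends dist-upgrade rather than plain upgrade, since upgrade holds back packages whose dependencies have changed; that alone may explain why the new ZFS packages don't come in:)
Code:

apt-get update
apt-get dist-upgrade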


best regards




Proxmox 4 and LXC issue

I have installed Debian Jessie and Proxmox from the pvetest repo (LXC has to be installed separately from pvetest). I created an LVM partition for /var/lib/vz (why "vz", if Proxmox 4 is based on LXC?) and downloaded some templates: CentOS 6 x86_64, CentOS 7 x86_64 and Debian 7 x86_64.

I created a container with Debian (when I try to create a container with CentOS, I get the error: TASK ERROR: unable to detect OS disribution). When I start the container it comes up, but the network interface is not added to the bridge I specified, and loop:/var/lib/vz/images/102/vm-102-rootfs.raw gets mounted over /var/lib/vz, so my partition on /var/lib/vz is unmounted and I cannot start another container.

So, my questions:
1. What's wrong with my templates?
2. How can I get the interface added to the bridge? (see the sketch at the end of this post)
3. How can I change the mount path (Proxmox does not understand the LXC parameter lxc.rootfs.mount)?

Any logs can be provided. Thanks.
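
On question 2, referenced above: in PVE 4 the container NIC and its bridge membership are normally configured through pct rather than raw lxc.* keys (a sketch; VM ID, bridge name and addressing are placeholders):
Code:

pct set 102 -net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 102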

Accidental removal of a VM or virtual HDD

Hello all,

I would like to draw more attention to this usability issue: https://bugzilla.proxmox.com/show_bug.cgi?id=360

Please have a look and share your opinion. I would like to know whether other users also struggle to avoid removing something accidentally, or whether this is just my own problem.

Best regards,
Stanislav

PVE freezes during NFS backup

[screenshot: PVE-freeze.jpg]

Backup storage is a FreeNAS ZFS share with the highest compression and dedup enabled, on a VirtualBox sparse-allocated default HDD.

VM 100 is a firewall/gateway, and it keeps dropping the internet connection, perhaps affecting the NFS share's availability (my guess; I don't know).

The PVE host is up to date (aptitude upgrade).

Proxmox won't set storage to Active state

$
0
0
I have Proxmox running on one server, using an NFS share from an Ubuntu box. I can mount the NFS share via the command line without issue, but it never shows as Active. Are there logs specific to this that might help? I have searched for similar situations but haven't found any answers.
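
Two places worth checking (a sketch; assumes the standard pvesm tool and default syslog location): pvesm reports the status daemon's view of each storage, and syslog usually carries the mount error when activation fails:
Code:

pvesm status                          # active/inactive per storage
grep -i nfs /var/log/syslog | tail -20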