
vzdump CT error "Too many levels of symbolic links" when one storage is involved..?

I have a strange error with vzdump. I searched past threads, and this is the only similar (yet unsolved) issue that has happened to others before:
http://forum.proxmox.com/threads/915...1889#post51889

...sorry, this is a long, quite detailed post...

I did some investigation and created some test cases in order to find a solution myself (which unfortunately I have not been able to do):

The issue is that vzdump fails to back up a fairly basic and absolutely idle CT depending on the CT storage or the backup storage. Specifically, the backup fails if either:
1) the CT is created on a particular storage, or
2) the backup of the CT is written to a particular storage,
and the "failing" storage is always the same one. So I think it must be related to that storage or how it is mounted.

Before anything else: this is on PVE 3.1-24 (I will append the pveversion -v output and other data at the bottom of the post).

In more detail, I have:
a 2-node cluster (pve1 and pve2, identical nodes, IBM x3650 M2), and
2 network NFS storages:
- pve_ts809, an old QNAP NAS (Core2Duo CPU) with old QNAP firmware/kernel/software <-- this seems to work well
- pve_ts879, a new QNAP NAS (Xeon CPU) with new QNAP firmware/kernel/software <-- this seems to cause trouble

I can't figure out what is not working and why. In storage.cfg and the mount output (see the bottom of the post) I can see some differences, but I didn't configure any of these parameters myself; they appeared when I created the storages from the PVE GUI, so I have no idea whether some of them could be wrong, or at least be causing trouble, and why. I just noted some differences...

E.g., as far as I can see:
in storage.cfg
pve_ts879 has "options vers=3,tcp,nolock,rsize=262144,wsize=262144"
while
pve_ts809 has "options vers=3"

and in the mount output, both as NFS:
/mnt/pve/pve_ts879 has (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<ts879 IP>,mountvers=3,mountport=53850,mountproto=tcp,local_lock=all,addr=<ts879 IP>)
while
/mnt/pve/pve_ts809 has (rw,relatime,vers=3,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<ts809 IP>,mountvers=3,mountport=905,mountproto=udp,local_lock=none,addr=<ts809 IP>)
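One experiment that might show whether the mount options themselves matter (just a sketch: the option string roughly copies the working pve_ts809 mount, and /mnt/test879 is a throwaway mount point I made up):

Code:

# mount the "problem" export by hand with roughly the options the working NAS gets
mkdir -p /mnt/test879
mount -t nfs -o vers=3,proto=tcp,rsize=262144,wsize=262144 ts879:/PVE /mnt/test879

# check whether the symlink error is reproducible outside of vzdump
ls -la /mnt/test879/private/113/etc/alternatives | head
umount /mnt/test879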

That said, to test, I created two really basic brand-new CTs from the same template: ubuntu-12.04-standard_12.04-1_i386.tar.gz

CT 113 filesystem is on pve_ts879
CT 114 filesystem is on pve_ts809
both seem to be running fine

Backup from the GUI for CT 114 works both to local and to pve_ts809, but FAILS to pve_ts879 (tried with the CT running, LZO compression, snapshot mode).
Backup from the GUI for CT 113 always FAILS: it fails to local, to pve_ts809 and to pve_ts879 (tried with the CT running or not, LZO/GZIP/no compression, all modes).

In the logs of the failing jobs there is always the same kind of recurring text (I can provide more examples, but from what I see it always looks like the excerpts below):

If the CT is down (or in stop mode), e.g.:

Code:

INFO: starting new backup job: vzdump 113 --remove 0 --mode stop --compress lzo --storage local --node pve2
...
INFO: creating archive '/var/lib/vz/dump/vzdump-openvz-113-2014_03_13-12_10_03.tar.lzo'
INFO: tar: ./etc/alternatives/: Cannot savedir: Too many levels of symbolic links
INFO: Total bytes written: 459663360 (439MiB, 58MiB/s)
INFO: tar: Exiting with failure status due to previous errors
INFO: Total bytes written: 459683840 (439MiB, 38MiB/s)
INFO: tar: Exiting with failure status due to previous errors
ERROR: Backup of VM 113 failed - command '(cd /mnt/pve/pve_ts879/private/113;find . '(' -regex '^\.$' ')' -o '(' -type 's' -prune ')' -o -print0|sed 's/\\/\\\\/g'|tar cpf - --totals --sparse --numeric-owner --no-recursion --one-file-system --null -T -|lzop) >/var/lib/vz/dump/vzdump-openvz-113-2014_03_13-12_12_50.tar.dat' failed: exit code 2
INFO: Backup job finished with errors
TASK ERROR: job errors

When the CT is running, e.g.:

Code:

INFO: starting new backup job: vzdump 113 --remove 0 --mode snapshot --storage local --node pve2
...
INFO: starting first sync /mnt/pve/pve_ts879/private/113/ to /var/lib/vz/dump/vzdump-openvz-113-2014_03_13-12_11_34.tmp
INFO: rsync: readdir("/mnt/pve/pve_ts879/private/113/etc/alternatives"): Too many levels of symbolic links (40)
INFO: IO error encountered -- skipping file deletion
...
INFO: total size is 441560783 speedup is 1.00
INFO: rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1070) [sender=3.0.9]
ERROR: Backup of VM 113 failed - command 'rsync --stats -x --numeric-ids -aH --delete --no-whole-file --inplace '/mnt/pve/pve_ts879/private/113/' '/var/lib/vz/dump/vzdump-openvz-113-2014_03_13-12_11_34.tmp'' failed: exit code 23
INFO: Backup job finished with errors
TASK ERROR: job errors

Code:

INFO: starting new backup job: vzdump 114 --remove 0 --mode snapshot --compress lzo --storage pve_ts879 --node pve1
...
INFO: starting final sync /mnt/pve/pve_ts809/private/114/ to /mnt/pve/pve_ts879/dump/vzdump-openvz-114-2014_03_13-11_24_12.tmp
INFO: rsync: readdir("/mnt/pve/pve_ts879/dump/vzdump-openvz-114-2014_03_13-11_24_12.tmp/etc/alternatives"): Too many levels of symbolic links (40)
INFO: IO error encountered -- skipping file deletion
...
INFO: total size is 441532565 speedup is 688.27
INFO: rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1070) [sender=3.0.9]
INFO: resume vm
INFO: Resuming...
INFO: vm is online again after 4 seconds
ERROR: Backup of VM 114 failed - command 'rsync --stats -x --numeric-ids -aH --delete --no-whole-file --inplace '/mnt/pve/pve_ts809/private/114/' '/mnt/pve/pve_ts879/dump/vzdump-openvz-114-2014_03_13-11_24_12.tmp'' failed: exit code 23
INFO: Backup job finished with errors
TASK ERROR: job errors

Code:

INFO: starting new backup job: vzdump 114 --remove 0 --mode snapshot --compress lzo --storage pve_ts809 --node pve1
...
INFO: starting final sync /mnt/pve/pve_ts809/private/114/ to /mnt/pve/pve_ts809/dump/vzdump-openvz-114-2014_03_13-11_34_15.tmp
...
INFO: total size is 441532565 speedup is 688.27
INFO: final sync finished (4 seconds)
INFO: resume vm
INFO: Resuming...
INFO: vm is online again after 4 seconds
INFO: creating archive '/mnt/pve/pve_ts809/dump/vzdump-openvz-114-2014_03_13-11_34_15.tar.lzo'
INFO: Total bytes written: 459786240 (439MiB, 31MiB/s)
INFO: archive file size: 227MB
INFO: Finished Backup of VM 114 (00:01:10)
INFO: Backup job finished successfully
TASK OK

Can anyone help me to sort out this issue?

Marco

hosts/storage details
===========================================================================

Code:

#pveversion -v
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-24 (running version: 3.1-24/060bd5a6)
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-11-pve: 2.6.32-66
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-9
libpve-access-control: 3.0-8
libpve-storage-perl: 3.0-18
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1

Code:

#cat /etc/pve/storage.cfg:
...
nfs: pve_ts879
        path /mnt/pve/pve_ts879
        server ts879
        export /PVE
        options vers=3,tcp,nolock,rsize=262144,wsize=262144
        content images,iso,vztmpl,rootdir,backup
        maxfiles 2

nfs: pve_ts809
        path /mnt/pve/pve_ts809
        server <ts809 IP>
        export /PVE
        options vers=3
        content images,iso,vztmpl,rootdir,backup
        maxfiles 2
...

Code:

#mount
...
ts879:/PVE on /mnt/pve/pve_ts879 type nfs (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<ts879 IP>,mountvers=3,mountport=53850,mountproto=tcp,local_lock=all,addr=<ts879 IP>)
...
<ts809 IP>:/PVE on /mnt/pve/pve_ts809 type nfs (rw,relatime,vers=3,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<ts809 IP>,mountvers=3,mountport=905,mountproto=udp,local_lock=none,addr=<ts809 IP>)
...


Proxmox Node running for 327 Days!

Just wanted to share this awesome accomplishment. One of our three Proxmox nodes has been running for the last 327 days without any reboot!

About one more month and we can celebrate one year of uptime! :)
Yes, we have not done any updates on it in the last 327 days, but it has been running great.

Attached image: proxmox-1yr.png

Disk Performance Monitoring in Proxmox host

How do I monitor disk performance on the Proxmox host? Tools like iotop, iostat, perf, htop and nmon are not installed and do not seem to be included in the Proxmox repo (IMHO); I'm using the apt-cache search command and it returns nothing. Or is there another tool already included in Proxmox that I'm not aware of?
I'm using Proxmox VE 3.2.
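A possible route, since Proxmox VE 3.x is based on Debian Wheezy, is to install these tools from the standard Debian repositories rather than the pve repository; a minimal sketch, assuming the Debian repos are enabled in /etc/apt/sources.list:

Code:

# iostat is shipped in the sysstat package; iotop and htop have their own packages
apt-get update
apt-get install sysstat iotop htop

# extended per-device statistics, refreshed every 2 seconds
iostat -x 2

# per-process I/O, showing only processes that are actually doing I/O
iotop -o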

Regards,
Samuel Sappa

VM crash if CPU Socket is > 1 Filesystem corrupted

Hello,

I've tried to add more sockets and cores to a VM with Debian 7.2 inside.
With the configuration of 1 socket and 4 cores, the system ran stable for over 6 months.
Now I've changed the configuration to 2 sockets, each with 4 cores.
A week after this change, the VM hangs, crashes and the root filesystem gets corrupted.
I've no idea what the problem is. A bug?
Any ideas?
Any ideas?

The server is an HP DL380 G7, 2x Xeon E5645 with HT -> 24 threads, and 64 GB RAM.

thanks for help,
s.gruner

pveversion -v:
running kernel: 2.6.32-22-pve
proxmox-ve-2.6.32: 3.0-107
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-19-pve: 2.6.32-93
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-23
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1

Using a NIC for a KVM

We want to use a VM as a firewall. Is it possible to link a NIC to a single VM, either by means of a VLAN or by really linking the NIC to it directly? How would that work, and what would the config look like? A sketch of what I have in mind is below.
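One approach I am considering (just a sketch; eth1 and vmbr1 are placeholder names for a spare NIC and a new bridge) would be to give the NIC its own bridge with no host IP and attach only the firewall VM to it:

Code:

# /etc/network/interfaces (excerpt)
auto vmbr1
iface vmbr1 inet manual
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0

The firewall VM would then get a second virtual NIC on vmbr1 from the GUI, so it effectively owns that physical port. Would that be the right way, or is PCI passthrough of the NIC the better option?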

Several VM in error 3.2

Hello,

I have had several issues since 3.2 on one node.
Some VMs (Windows 2008) install correctly but then randomly end up paused, or display just a cursor in the top left corner with a black screen.
Trying stop/start does not help.
Reinstalling fixes the problem (remove the VM and install it again).
No SeaBIOS display (!).
Nothing in messages/dmesg/syslog.
There are several hundred VMs with the same configuration installed via the API, so it's not a human setup error.

3.2-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.2-1 (running version: 3.2-1/1933730b)
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-14
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

Could it be a kernel issue?
Any idea how I can debug this and get more info?

Thanks

need help with proxmox

Hey, I'm a student working with Proxmox now, and I'm wondering if you can remote control the server; for example, hosting the server on your own internet connection and then getting access to it from another network.

Cpanel in Proxmox

cPanel does not support NAT, so I am obliged to have a Proxmox cluster with public IPs, but I am concerned about security.


With this configuration (a public cloud with Proxmox), if you set up a proper firewall, would cPanel be fine and work correctly as a Proxmox virtual machine in the cluster?


And how can I set up a Proxmox cluster with virtual machines running cPanel?

Thanks.

Convert command from CLI to Proxmox/GUI to add serial telnet

Hi,

I'm trying to run a Cisco VM, which is supported on Qemu. The image starts, but I need a (virtual) serial port in order to connect to the 'console' port of the router.

This would mean I need to get this;

-serial telnet::13101,server,wait

into Proxmox somehow. I've added it to the <vm no>.conf file, but that doesn't work; with netstat -tan on the Proxmox host I don't see the port open.
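For reference, my understanding is that qemu-server can pass raw QEMU arguments through an 'args:' line in the VM config; this is roughly what I would expect to need (a sketch only; 101 is just an example VMID):

Code:

# /etc/pve/qemu-server/101.conf (excerpt)
# 'nowait' keeps QEMU from blocking at startup until a telnet client connects;
# the command line above used 'wait'
args: -serial telnet::13101,server,nowait

A full stop and start of the VM (not just a reboot from inside the guest) should be needed for the extra argument to take effect.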

Can someone help me out ?

High Availability

I haven't really been able to find much on this, other than that it is/was what is used. Is Proxmox still based on hardware fencing for reliable HA? I understand why, but I'm not really sure it's needed when the same thing can be done in software, e.g. during the startup process checking that any VMs that are supposed to run on this node aren't running elsewhere, and if they are, not starting those VMs.

I was checking out oVirt; it looks like it's coming along really nicely, but it still lacks the features Proxmox provides with its interface, and Citrix doesn't use hardware fencing and works out well. But having just seen the release of Proxmox 3.2, I've been thinking about removing oVirt and using Proxmox again at home, since oVirt doesn't support OpenVZ, only KVM, though I'm sure that will change quickly.

I have a managed switch, so it's not hard for me to integrate hardware fencing for HA. I was just curious.

Thanks.

Rootdelay kernel parameter needed with Proxmox VE 3.2 and Adaptec 6405 with ZMCP

Hello, I am new to Proxmox. I installed the latest version and discovered that very often the system cannot boot (init cannot find the root device). rootdelay=<seconds> (I set 20 to be sure) fixes this issue. Is there any default delay value or not?
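In case it helps someone with the same controller, this is roughly how the parameter can be made persistent (a sketch; the value 20 is just the one that happened to work for me):

Code:

# /etc/default/grub (excerpt)
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=20"

# then regenerate the grub configuration
update-grub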

My Isos have disappeared from the "Content" tab.

Hi, as the title says, I have a weird problem: all my ISOs have disappeared from the "Content" tab, but they are still in the /vz/template/iso folder.

Can someone tell me how to get my ISOs back? I tried rebooting the whole dedicated server with no effect; I'm lost...

Here are some screenshot:

Putty:

http://www.upimg.fr/ih/k85s.jpg

Proxmox:

http://www.upimg.fr/ih/owx5.jpg


Thank you in advance, and sorry for my english.

EDIT: Updated to 3.2, seems to be resolved.

Update error.

One upgrade attempt did not succeed, so I rebooted the node and tried again. But this time:

Code:

Setting up gdisk (0.8.5-1) ...
dpkg: error processing pve-manager (--configure):
 Package is in a very bad inconsistent state - you should
 reinstall it before attempting configuration.
dpkg: error processing vzctl (--configure):
 Package is in a very bad inconsistent state - you should
 reinstall it before attempting configuration.
dpkg: dependency problems prevent configuration of proxmox-ve-2.6.32:
 proxmox-ve-2.6.32 depends on pve-manager; however:
  Package pve-manager is not configured yet.
 proxmox-ve-2.6.32 depends on vzctl (>= 3.0.29); however:
  Package vzctl is not configured yet.

dpkg: error processing proxmox-ve-2.6.32 (--configure):
 dependency problems - leaving unconfigured
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-2.6.32-27-pve
Errors were encountered while processing:
 pve-manager
 vzctl
 proxmox-ve-2.6.32
E: Sub-process /usr/bin/dpkg returned an error code (1)
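A possible way out (only a sketch based on the usual dpkg recovery steps, so please double-check before running this on a production node):

Code:

# finish configuring whatever can still be configured
dpkg --configure -a

# reinstall the packages dpkg reports as being in a bad state
apt-get install --reinstall pve-manager vzctl

# let apt resolve the remaining dependency problems
apt-get -f install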

After update to 3.2: VM crashing during backup

Hi,

after upgrading to PVE 3.2, one of our VMs is crashing during backup. We are using LVM storage and snapshot mode.
Code:

  101: Mar 13 00:42:11 INFO: Starting Backup of VM 101 (qemu)
  101: Mar 13 00:42:11 INFO: status = running
  101: Mar 13 00:42:11 INFO: update VM 101: -lock backup
  101: Mar 13 00:42:12 INFO: exclude disk 'ide1' (backup=no)
  101: Mar 13 00:42:12 INFO: backup mode: snapshot
  101: Mar 13 00:42:12 INFO: ionice priority: 7
  101: Mar 13 00:42:12 INFO: creating archive '/mnt/pve/nas-backup/dump/vzdump-qemu-101-2014_03_13-00_42_11.vma.lzo'
  101: Mar 13 00:42:12 INFO: started backup task 'e3947a88-205a-42c1-9e7c-a352bcef5837'
  101: Mar 13 00:42:15 INFO: status: 0% (806354944/343597383680), sparse 0% (33640448), duration 3, 268/257 MB/s
  101: Mar 13 00:42:26 INFO: status: 1% (3516399616/343597383680), sparse 0% (105631744), duration 14, 246/239 MB/s
  101: Mar 13 00:45:53 INFO: status: 2% (7066157056/343597383680), sparse 0% (204587008), duration 221, 17/16 MB/s
  101: Mar 13 00:46:07 INFO: status: 3% (10373693440/343597383680), sparse 0% (258056192), duration 235, 236/232 MB/s
  101: Mar 13 00:49:00 ERROR: VM 101 not running
  101: Mar 13 00:49:00 INFO: aborting backup job
  101: Mar 13 00:49:00 ERROR: VM 101 not running
  101: Mar 13 00:49:02 ERROR: Backup of VM 101 failed - VM 101 not running

As the storage lies directly on LVM, there are no image files involved. Any ideas where we can start to look for the cause of the error?
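In case it helps to narrow things down, this is where one could start looking (a sketch; the grep patterns are just guesses at the usual suspects, and 101 is the VMID from the log above):

Code:

# anything the kernel or qemu logged around the time the VM died
grep -iE 'kvm|qemu|oom|segfault' /var/log/syslog

# the VM configuration, to see which disks and controllers are involved
qm config 101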

FYI:
Code:

# pveversion -v
proxmox-ve-2.6.32: 3.2-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.2-1 (running version: 3.2-1/1933730b)
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-24-pve: 2.6.32-111
pve-kernel-2.6.32-25-pve: 2.6.32-113
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-14
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

--
- Jens -

GlusterFS: storage: invalid format - storage ID '' contains illegal characters

Hi all,

First of all: Proxmox is great! Keep it up!

While executing a backup task I can see the following error message.
Code:

Parameter verification failed.  (400)

storage: invalid format - storage ID '' contains illegal characters

I couldn't find anything in the mailing list or the forum, so I am trying it here.
The strange part is the missing storage ID.

Code:

pveversion -v
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1

I set up a GlusterFS system with two nodes in replicate mode. Details below.

Code:

Server1:/var/log/glusterfs# gluster volume info

Volume Name: datastore
Type: Replicate
Volume ID: 3dcd805e-d289-443d-9cba-5bd03269c0b5
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: backup1:/data/gfs_block
Brick2: backup2:/data/gfs_block
Options Reconfigured:
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on

and

Code:

Server1:/var/log/glusterfs# gluster volume status
Status of volume: datastore
Gluster process                                        Port    Online  Pid
------------------------------------------------------------------------------
Brick backup1:/data/gfs_block                          49153  Y      414959
Brick backup2:/data/gfs_block                          49153  Y      852056
NFS Server on localhost                                2049    Y      415221
Self-heal Daemon on localhost                          N/A    Y      415228
NFS Server on backup2                                  2049    Y      852134
Self-heal Daemon on backup2                            N/A    Y      852141

There are no active volume tasks

GlusterFS is ok and working.

GlusterFS was successfully integrated via the GUI. Here is storage.cfg from one of the clients:

Code:

Server4:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,rootdir
        maxfiles 0

glusterfs: backup1glusterfs
        volume datastore
        path /mnt/pve/backup1glusterfs
        content backup
        server 10.3.2.112
        maxfiles 10

The mount points also look good:
Code:

10.3.2.112:datastore on /mnt/pve/backup1glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
Does anybody know where the issue is located? On the mailing list I found a patch, but that was just for restoring VMs.
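For what it's worth, the empty storage ID in the error suggests the storage parameter never reached vzdump at all; running the backup manually with the storage named explicitly should at least show whether the storage definition itself is fine (a sketch; VMID 100 is just an example, the storage name is the one from my storage.cfg above):

Code:

vzdump 100 --storage backup1glusterfs --mode snapshot --compress lzo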
Thanks for any help !

Br
Mr.X

Is running pve inside openvz possible?

It just came to my mind; maybe it is a stupid question, but is it possible? I know I can install PVE in a KVM guest, and I have done that a few times for testing, but since PVE on top of Wheezy is supported, is it technically possible to install PVE inside an OpenVZ Debian container? And if not, why not: what is missing or incompatible? Just to know.

Thanks,
Marco

Backup takes longer as usual logfile: INFO: file has vanished

Suddenly the backups take much longer. Looking into the log files I see the message "INFO: file has vanished". What is the meaning of this, and why does the backup take much longer now?

Code:

Mar 14 23:00:02 INFO: Starting Backup of VM 3100 (openvz)
Mar 14 23:00:02 INFO: CTID 3100 exist mounted running
Mar 14 23:00:02 INFO: status = running
Mar 14 23:00:02 INFO: mode failure - unable to dump into snapshot (use option --dumpdir)
Mar 14 23:00:02 INFO: trying 'suspend' mode instead
Mar 14 23:00:02 INFO: backup mode: suspend
Mar 14 23:00:02 INFO: ionice priority: 7
Mar 14 23:00:02 INFO: starting first sync /home/private/3100/ to /home/dump/vzdump-openvz-3100-2014_03_14-23_00_02.tmp
Mar 14 23:00:11 INFO: file has vanished: "/home/private/3100/tmp/7IS6SPopWf"
Mar 14 23:00:11 INFO: file has vanished: "/home/private/3100/tmp/xeNOUKyHIi"

restoring proxmox 3.1 node configuration to 3.2 version after reinstall?

Hello,

I want to reinstall my node. The original version was 3.1-21/93bf03d4.
According to https://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster I have backups made with:
tar -czf /root/pve-cluster-backup.tar.gz /var/lib/pve-cluster
tar -czf /root/ssh-backup.tar.gz /root/.ssh

Can I restore those files onto a new installation of version 3.2 without risk? Will everything work OK?
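In case it matters for the answer, this is roughly how I would restore them onto the fresh install (just a sketch reversing the backup commands above; whether a 3.1 config database drops cleanly onto 3.2 is exactly what I am unsure about):

Code:

# stop the cluster filesystem before putting the old database back
service pve-cluster stop
tar -xzf /root/pve-cluster-backup.tar.gz -C /
tar -xzf /root/ssh-backup.tar.gz -C /
service pve-cluster start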

Regards
RH

SPICE, Browser configuration and Remote configuration Proxmox 3.2

Hi,

I already started a conversation in another thread, but I'm starting a clean one here because I'm sure other people will face the same problems, and none of the existing threads about SPICE refer to Proxmox 3.2.

So, hardware/software:

Proxmox 3.2, upgraded from 3.1, which was working like a charm (the only must-have feature missing for me was the full-screen console)
VM: Windows 7 64-bit (like in the training video http://www.youtube.com/watch?v=thVmhIw4-jU)
Machine connecting to Proxmox: Windows 7 64-bit
No Linux anywhere

First problem :

After installing virt-viewer on my machine, the only browser capable of running the SPICE console without any modification or custom configuration is Firefox; Chrome and Internet Explorer do not support virt-viewer by default.

IE 8, 9: do not react.
Chrome: downloads a .vv file and does not open the viewer directly the way Firefox does.

So if anyone knows how to point Chrome directly at virt-viewer, please post it here.

The second problem is about networking.

On my local network (https://192.168.1.XXX:8006), when I open SPICE (with Firefox) it works fine; I have my long-awaited full-screen support.

On the external network (https://108.XXX.XXX.XXX:8006), the VNC console works fine as usual, BUT unfortunately the SPICE console shows "Unable to connect to graphic server".

I'm happy to hear from you and ready to provide as much information as you want.

regards
Mhamed

[SOLVED] Proxmox at Hetzner

Hi, does anyone here have a server at Hetzner on which they run two VMs? Could you give me an example of how your network settings are configured so that both VMs are accessible as "individual" servers with their own IPs on the net?
Thanks
Ole
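For illustration, the sort of configuration being asked about is a routed setup along these lines (only a sketch; all addresses are placeholders, and the exact netmask, gateway and additional-IP values depend on what Hetzner assigns):

Code:

# /etc/network/interfaces on the Proxmox host (excerpt, placeholder IPs)
# IP forwarding must also be enabled (net.ipv4.ip_forward=1 in /etc/sysctl.conf)
auto eth0
iface eth0 inet static
        address <main IP>
        netmask 255.255.255.255
        pointopoint <gateway IP>
        gateway <gateway IP>

auto vmbr0
iface vmbr0 inet static
        address <main IP>
        netmask 255.255.255.255
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        up ip route add <additional IP 1>/32 dev vmbr0
        up ip route add <additional IP 2>/32 dev vmbr0

# inside each VM: configure its additional IP and use the host's main IP as gateway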