Quantcast
Channel: Proxmox Support Forum
Viewing all 173547 articles
Browse latest View live

Feeling useless right now. Can you help?

$
0
0
Hi there,

I'm very confused with Proxmox.

Let me explain: I just bought a book about network security, where I'm meant to set up proxmox as a virtual pentest lab. Well, I installed proxmox but when I'm supposed to finish the install (i.e. go to the ip address on port 8006 through my web browser), I'm lost.

I'm running proxmox on a machine, let's call it PC2, and I'm writing this post from my main machine, let's call it PC1, which is running on windows. So, when I launch PC1's browser to ipaddress:8006 (didn't forget to type httpS), nothing worked. Duh, I was using wireless internet, so I plugged a RJ45 cable from PC1 to PC2, and still no luck.

I KNOW I'm probably doing something wrong. The book (and the wiki) just say "next step, point your browser to ipaddress:8006 and log in to the web interface for the first configuration". I'd love to, man, I just don't know how to do it. It's obviously not from PC2, but I don't know how to link PC1 to PC2 and make it work.

I feel useless right now, I thought I could handle installing it but it's getting obvious that I can't. And I guess this is one simple thing compared to the rest of the book... So yeah, guys, I might need a little help.

Please?

How to Shrink Memory baloon in guest ?

$
0
0
Hi.
About balloonin.
Guest had min 2GB and max 15GB of ram.
Over high usage it grows ram up to 15GB of max size. And it it working ok.
But now i want so shrink this balloon back, how can i do this ?
i am dooing
echo 1 > /proc/sys/vm/drop_caches
echo 2 > /proc/sys/vm/drop_caches
echo 3 > /proc/sys/vm/drop_caches

in gues machine. and the real usage of memory is 4GB / 15 GB

How can i shrink balloon so that Proxmox will only use 4GB and free up the rest 11GB ?

node red and vm's as black but running ok

$
0
0
Hi

I have a proxmox runnning on online.net and today a very strange thing started happening. All the sudden all the vm's keep working but the web UI shows the node as red and then all vm's as black without any reason.
As I said they all keep working but the web ui is completely useless and then a server reboot is the only option.
However if I wait 10-20 minutes sometimes it comes back to life...

Has anyone experience this issue ?

Thanks
Eduardo

Proxmox VE 4.0.7 - Backup process not ok

$
0
0
Hi all,

I am running
Code:

pveversion -vproxmox-ve: 4.0-7 (running kernel: 4.1.3-1-pve)
pve-manager: 4.0-26 (running version: 4.0-26/5d4a615b)
pve-kernel-4.1.3-1-pve: 4.1.3-7
lvm2: 2.02.116-pve1
corosync-pve: 2.3.4-2
libqb0: 0.17.1-3
pve-cluster: 4.0-14
qemu-server: 4.0-15
pve-firmware: 1.1-6
libpve-common-perl: 4.0-14
libpve-access-control: 4.0-6
libpve-storage-perl: 4.0-13
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-1
pve-container: 0.9-7
pve-firewall: 2.0-6
pve-ha-manager: 1.0-4
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2
lxc-pve: 1.1.2-4
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2

I had no issues with further version but since I upgraded to 4.x I am facing an issue while running backups

KVM no issue at all

Code:

108: Aug 31 20:00:01 INFO: Starting Backup of VM 108 (qemu)108: Aug 31 20:00:01 INFO: status = running
108: Aug 31 20:00:02 INFO: Use of uninitialized value $raw in pattern match (m//) at /usr/share/perl5/PVE/AccessControl.pm line 711.
108: Aug 31 20:00:02 INFO: Use of uninitialized value $raw in pattern match (m//) at /usr/share/perl5/PVE/JSONSchema.pm line 1153.
108: Aug 31 20:00:02 INFO: update VM 108: -lock backup
108: Aug 31 20:00:02 INFO: backup mode: snapshot
108: Aug 31 20:00:02 INFO: ionice priority: 7
108: Aug 31 20:00:02 INFO: creating archive '/BACKUP/dump/vzdump-qemu-108-2015_08_31-20_00_01.vma.lzo'
108: Aug 31 20:00:02 INFO: started backup task '3c9dbf1b-f9bd-429c-9e40-6aefd9ec534c' 108: Aug 31 20:00:05 INFO: status: 1% (520749056/42949672960), sparse 0% (44384256), duration 3, 173/158 MB/s
[...]
108: Aug 31 20:04:49 INFO: status: 100% (4294962960/42949672960), sparse 3% (1655775232), duration 287, 1054/98 MB/s
108: Aug 31 20:04:49 INFO: transferred 42949 MB in 287 seconds (149 MB/s)108: Aug 31 20:04:49 INFO: archive file size: 27.77GB
108: Aug 31 20:04:49 INFO: Use of uninitialized value $raw in pattern match (m//) at /usr/share/perl5/PVE/AccessControl.pm line 711. 108: Aug 31 20:04:49 INFO: Finished Backup of VM 108 (00:04:48)

but the same job on the same backup device is not able to run backups for LXC

Code:

181: Aug 31 20:06:40 INFO: Starting Backup of VM 181 (lxc)181: Aug 31 20:06:40 INFO: status = running
181: Aug 31 20:06:40 INFO: mode failure - storage does not support snapshots
181: Aug 31 20:06:40 INFO: trying 'suspend' mode instead
181: Aug 31 20:06:40 INFO: backup mode: suspend 181: Aug 31 20:06:40 INFO: ionice priority: 7
[...]

Backup device is a LVM volume.

This is a big issue for me as it is impackting the SLA while shutting down the VM instead of using snapshot mode...

I don't know why it is failing. Maybe you have an idea.

Thanks,
Mr.X

interfaces.d/ifcfg-vmbr0 not "seen" by ProxMox 3.4 (ansible ?)

$
0
0
Hi there,

Using ProxMox 3.4 and making use of an Ansible playbook/role to create the Debian interface information in /etc/network/interfaces.d/ifcfg-<interfacename>, but proxmox does not pick it up, unless I move ifcfg-vmbr0's contents to /etc/network/interfaces.

Anybody have a preferred Ansible role/playbook to configure ProxMox network interfaces?

enable numa post install

$
0
0
If I said no to numa in the installer, is there a way to torn it on post-install?

Backup Optimization and Deduplication

$
0
0
Hi everyone,

I'd like to know how you handle backup optimization and deduplication. Is there some golden rule for that?

Currently, I optimize my VMs once a month with cleaning the harddisk of temporary files and zeroing the filesystem for minimal harddisk footprint. Some linux VMs are also equipped with virtio-scsi-adapter to use fstrim/discard, but I use clustered LVM, so there is no real space saving benefit of this yet. It is planned to use gfs for cluster filesystem, but I had no time to try this in my test cluster environment.

I plan to backup my machines without compression and write it to a volume with internal deduplication (ZFS or OpenDedup) over network. Any suggestions on software (e.g. FreeBSD ZFS over Linux-ZFS, etc.)? I know that I need at least 4 GB of RAM per 1 TB of backup storage with a 4k block size for deduplication - depending on the used software.

Best,
LnxBil

Problem with USB pass-thru using a symlink

$
0
0
Hi there,

First time poster and pretty green Proxmox user, but have been very impressed in my experiences so far.

I have managed to migrate my entire home automation server to a new box running Proxmox with debian containers. The last thing I am really struggling with is getting my two USB dongles (Z-Wave and RFXCOM) to pass-thru to my openHAB container (which accesses the dongles).

I have setup a udev rule on the host to symlink each dongle to /dev/zwave and /dev/rfxcom (which is what I had on the original Ubuntu server). This is working fine.

I then added these two devices to my container config file;

Code:

DEVNODES="rfxcom:rw zwave:rw "
When I restart the container and enter it, I can't see either /dev/zwave or /dev/rfxcom. But I can see the actual devices from the host, i.e. /dev/ttyUSB1 and /dev/ttyUSB2.

Any ideas why these symlinks are not being persisted when I restart?

Another thing to note, if I run;

Code:

vzctl set 104 --devnode zwave:rw --save
on the host and enter the container I can see /dev/zwave no worries - but as soon as I restart the container it is lost, and I only have the ttyUSB device available.

I have read so many threads about this and can't seem to find anything obvious which leads me to think I am doing something stupid but I have tried everything I have come across.

If anyone has any suggestions or tips I will be all ears!!

Thanks in advance,
Ben

exit code 100 / timeout

$
0
0
Hi all,


Im recently getting this on my backups.

INFO: trying to get global lock - waiting...
ERROR: can't aquire lock '/var/run/vzdump.lock' - got timeout

and this on updates, manual it doesnt do this


TASK ERROR: command 'apt-get update' failed: exit code 100


What could cause this?

iscsi multipath not working on pve4.0beta

$
0
0
I'm test iscsi multipath with PVE4.0 iso file.I used Freenas9.3 as iscsi provider, and build 2 subnetworks 192.168.40.0/24 and 192.168.41.0/24 as iscsi multipath.After I finished iscsi config and multipath config, I ran multipath -ll, and I got "error parsing config file". moreover, dm seemed now working, because mpath0 and mpath1 have not been built under /dev/mapper.does any encouter similar problem? can any one gave any clue? thanks first.

OpenVZ containers to LXC

$
0
0
May be this question already has been answered. What will happen to the existing OpenVZ containers when upgraded to Proxmox VE 4? Will they simply stop working or will need to be converted to LXC? Or will LXC and OpenVZ will co exist in Proxmox VE 4 and up?

[SOLVED] Headers for linux-headers-4.1.3-1-pve

$
0
0
Hello,

I'm on proxmox 4 beta with the kernel "4.1.3-1-pve" and I need to use gcc but the headers appears to be unavailable.

Is it because of the beta status or is there a way to download the headers ?

Thanks

cluster HA LVM service fails beacuse of a bug

$
0
0
Hello,

I was playing with proxmox cluster for active/passive SAN configuration using HA LVM and ext4.

I configured cluster.conf and lvm.conf but my service was not starting with error;
Code:

Sep 02 00:04:34 rgmanager Starting disabled service service:GlusterHA
Sep 02 00:04:35 rgmanager [lvm] HA LVM:  Improper setup detected
Sep 02 00:04:35 rgmanager [lvm] * initrd image needs to be newer than lvm.conf
Sep 02 00:04:35 rgmanager start on lvm "lvmSAS" returned 1 (generic error)
Sep 02 00:04:35 rgmanager #68: Failed to start service:GlusterHA; return value: 1
Sep 02 00:04:35 rgmanager Stopping service service:GlusterHA
Sep 02 00:04:35 rgmanager [ip] 10.0.0.81/24 is not configured
Sep 02 00:04:35 rgmanager [fs] unmounting /SANSAS
Sep 02 00:04:35 rgmanager [lvm] HA LVM:  Improper setup detected
Sep 02 00:04:35 rgmanager [lvm] * initrd image needs to be newer than lvm.conf
Sep 02 00:04:35 rgmanager [lvm] WARNING: An improper setup can cause data corruption!
Sep 02 00:04:35 rgmanager [lvm] Deactivating sansas/sansas01
Sep 02 00:04:35 rgmanager [lvm] Making resilient : lvchange -an sansas/sansas01
Sep 02 00:04:35 rgmanager [lvm] Resilient command: lvchange -an sansas/sansas01 --config devices{filter=["a|/dev/mapper/sasvol-part1|","a|/dev/mapper/saSep 02 00:04:36 rgmanager [lvm] Removing ownership tag (node01) from sansas/sansas01
Sep 02 00:04:36 rgmanager Service service:GlusterHA is recovering
Sep 02 00:04:36 rgmanager #71: Relocating failed service service:GlusterHA
Sep 02 00:04:36 rgmanager #70: Failed to relocate service:GlusterHA; restarting locally
Sep 02 00:04:36 rgmanager Recovering failed service service:GlusterHA
Sep 02 00:04:36 rgmanager [lvm] HA LVM:  Improper setup detected
Sep 02 00:04:36 rgmanager [lvm] * initrd image needs to be newer than lvm.conf
Sep 02 00:04:36 rgmanager start on lvm "lvmSAS" returned 1 (generic error)
Sep 02 00:04:36 rgmanager #68: Failed to start service:GlusterHA; return value: 1
Sep 02 00:04:36 rgmanager Stopping service service:GlusterHA
Sep 02 00:04:36 rgmanager [ip] 10.0.0.81/24 is not configured
Sep 02 00:04:36 rgmanager [fs] stop: Could not match /dev/mapper/sansas-sansas01 with a real device
Sep 02 00:04:36 rgmanager stop on fs "fsSAS" returned 2 (invalid argument(s))
Sep 02 00:04:36 rgmanager [lvm] HA LVM:  Improper setup detected
Sep 02 00:04:36 rgmanager [lvm] * initrd image needs to be newer than lvm.conf
Sep 02 00:04:36 rgmanager [lvm] WARNING: An improper setup can cause data corruption!
Sep 02 00:04:36 rgmanager [lvm] Deactivating sansas/sansas01
Sep 02 00:04:37 rgmanager [lvm] Making resilient : lvchange -an sansas/sansas01
Sep 02 00:04:37 rgmanager [lvm] Resilient command: lvchange -an sansas/sansas01 --config devices{filter=["a|/dev/mapper/sasvol-part1|","a|/dev/mapper/saSep 02 00:04:37 rgmanager [lvm] Removing ownership tag (node01) from sansas/sansas01
Sep 02 00:04:37 rgmanager #12: RG service:GlusterHA failed to stop; intervention required
Sep 02 00:04:37 rgmanager Service service:GlusterHA is failed
Sep 02 00:04:37 rgmanager #2: Service service:GlusterHA returned failure code.  Last Owner: node01
Sep 02 00:04:37 rgmanager #4: Administrator intervention required.

I updated initrd images again and rebooted but nothing changed.

After a bit debugging i found the failing command in "/usr/share/cluster/lvm.sh"
Code:

...
if [ "$(find /boot -name *.img -newer /etc/lvm/lvm.conf)" == "" ]; then
...

this command did not find my initrd images but some other files.
In default proxmox install the file naming "initrd.img-<VERSION>-pve".

So the correct command needs to be "find /boot -maxdepth 1 -name initrd.img*"

I wanted to report this, but where ?
is this a bug of Proxmox, RedHat or Debian ?

Esxi as VM in Proxmox4

$
0
0
Hello,

Is it possible to run an ESXi-server as a VM in proxmox4?
When I start the VM I get some kind of "error", see attachment.

Can someone help please to solve the issue?

Thanks,

Rudi
Attached Images

glusterfs + live migrate causes split brain

$
0
0
Hi,
I have strange problem in my 2 locations.
2 node + quorum on NAS
glusterf on both nodes as shared storage

When i do live migration or HA kicks in when node is going down. i see that glusterfs is doing splitbrain.
Anybody know why ?

root@node1:~# pveversion -v
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.4-1

Resize iscsi volume

$
0
0
Hello, use starwind as iscsi target. In starwind i resize imagefile from 100 to 110 Gb, but Proxmox see only 100 Gb. How i can resize iscsi storage in proxmox?

failed to add node to new 4.0 cluster

$
0
0
Just created a new 4.0 test cluster with boxes that had authority keys in place and password-less ssh root access between clean installed+patched boxes.
On first node I created a new cluster with 'pvecm create clustername' and adding 2. node seemed to take like forever and eventually I interrupted that,
wonder how to recover from this 'Activity blocked' state and my nodes?

Code:

root@n1:~# pvecm status
Quorum information
------------------
Date:            Wed Sep  2 09:49:13 2015
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          4
Quorate:          No


Votequorum information
----------------------
Expected votes:  2
Highest expected: 2
Total votes:      1
Quorum:          2 Activity blocked
Flags:           


Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.3.0.1 (local)

On node 2 I got this:

Code:

root@n2:~# pvecm add n1
copy corosync auth key
stopping pve-cluster service
backup old database
waiting for quorum...^C
root@n2:~# pvecm add 10.3.0.1
can't create shared ssh key database '/etc/pve/priv/authorized_keys'
authentication key already exists
root@n2:~# ping n1
PING n1 (10.3.0.1) 56(84) bytes of data.
64 bytes from n1 (10.3.0.1): icmp_seq=1 ttl=64 time=0.171 ms
64 bytes from n1 (10.3.0.1): icmp_seq=2 ttl=64 time=0.158 ms
^C


This is found in /var/log/daemon.log:

Code:

Sep  2 09:42:22 n2 pmxcfs[1406]: [main] notice: teardown filesystem
Sep  2 09:42:24 n2 pmxcfs[1406]: [main] notice: exit proxmox configuration filesystem (0)
Sep  2 09:42:24 n2 pmxcfs[3204]: [quorum] crit: quorum_initialize failed: 2
Sep  2 09:42:24 n2 pmxcfs[3204]: [quorum] crit: can't initialize service
Sep  2 09:42:24 n2 pmxcfs[3204]: [confdb] crit: cmap_initialize failed: 2
Sep  2 09:42:24 n2 pmxcfs[3204]: [confdb] crit: can't initialize service
Sep  2 09:42:24 n2 pmxcfs[3204]: [dcdb] crit: cpg_initialize failed: 2
Sep  2 09:42:24 n2 pmxcfs[3204]: [dcdb] crit: can't initialize service
Sep  2 09:42:24 n2 pmxcfs[3204]: [status] crit: cpg_initialize failed: 2
Sep  2 09:42:24 n2 pmxcfs[3204]: [status] crit: can't initialize service
Sep  2 09:42:24 n2 pve-ha-crm[1800]: ipcc_send_rec failed: Transport endpoint is not connected
Sep  2 09:42:24 n2 pve-ha-crm[1800]: ipcc_send_rec failed: Connection refused
Sep  2 09:42:24 n2 pve-ha-crm[1800]: ipcc_send_rec failed: Connection refused
Sep  2 09:42:24 n2 pve-ha-lrm[1810]: ipcc_send_rec failed: Transport endpoint is not connected
Sep  2 09:42:24 n2 pve-ha-lrm[1810]: ipcc_send_rec failed: Connection refused
Sep  2 09:42:24 n2 pve-ha-lrm[1810]: ipcc_send_rec failed: Connection refused
Sep  2 09:42:25 n2 corosync[3220]:  [MAIN  ] Corosync Cluster Engine ('2.3.4.22-8252'): started and ready to provide service.
Sep  2 09:42:25 n2 corosync[3220]:  [MAIN  ] Corosync built-in features: augeas systemd pie relro bindnow
Sep  2 09:42:25 n2 corosync[3222]:  [TOTEM ] Initializing transport (UDP/IP Multicast).
Sep  2 09:42:25 n2 corosync[3222]:  [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Sep  2 09:42:25 n2 corosync[3222]:  [TOTEM ] The network interface [10.3.0.2] is now up.
Sep  2 09:42:25 n2 corosync[3222]:  [SERV  ] Service engine loaded: corosync configuration map access [0]
Sep  2 09:42:25 n2 corosync[3222]:  [QB    ] server name: cmap
Sep  2 09:42:25 n2 corosync[3222]:  [SERV  ] Service engine loaded: corosync configuration service [1]
Sep  2 09:42:25 n2 corosync[3222]:  [QB    ] server name: cfg
Sep  2 09:42:25 n2 corosync[3222]:  [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Sep  2 09:42:25 n2 corosync[3222]:  [QB    ] server name: cpg
Sep  2 09:42:25 n2 corosync[3222]:  [SERV  ] Service engine loaded: corosync profile loading service [4]
Sep  2 09:42:25 n2 corosync[3222]:  [QUORUM] Using quorum provider corosync_votequorum
Sep  2 09:42:25 n2 corosync[3222]:  [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Sep  2 09:42:25 n2 corosync[3222]:  [QB    ] server name: votequorum
Sep  2 09:42:25 n2 corosync[3222]:  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Sep  2 09:42:25 n2 corosync[3222]:  [QB    ] server name: quorum
Sep  2 09:42:25 n2 corosync[3222]:  [TOTEM ] A new membership (10.3.0.2:4) was formed. Members joined: 2
Sep  2 09:42:25 n2 corosync[3222]:  [QUORUM] Members[1]: 2
Sep  2 09:42:25 n2 corosync[3222]:  [MAIN  ] Completed service synchronization, ready to provide service.
Sep  2 09:42:25 n2 corosync[3214]: Starting Corosync Cluster Engine (corosync): [  OK  ]
Sep  2 09:42:26 n2 corosync[3222]:  [TOTEM ] A new membership (10.3.0.2:8) was formed. Members
Sep  2 09:42:26 n2 corosync[3222]:  [QUORUM] Members[1]: 2
Sep  2 09:42:26 n2 corosync[3222]:  [MAIN  ] Completed service synchronization, ready to provide service.
Sep  2 09:42:28 n2 corosync[3222]:  [TOTEM ] A new membership (10.3.0.2:12) was formed. Members
Sep  2 09:42:28 n2 corosync[3222]:  [QUORUM] Members[1]: 2
Sep  2 09:42:28 n2 corosync[3222]:  [MAIN  ] Completed service synchronization, ready to provide service.
Sep  2 09:42:29 n2 corosync[3222]:  [TOTEM ] A new membership (10.3.0.2:16) was formed. Members
Sep  2 09:42:29 n2 corosync[3222]:  [QUORUM] Members[1]: 2
Sep  2 09:42:29 n2 corosync[3222]:  [MAIN  ] Completed service synchronization, ready to provide service.
Sep  2 09:42:30 n2 pmxcfs[3204]: [status] notice: update cluster info (cluster name  test-pmx, version = 2)
Sep  2 09:42:31 n2 corosync[3222]:  [TOTEM ] A new membership (10.3.0.2:20) was formed. Members
Sep  2 09:42:31 n2 corosync[3222]:  [QUORUM] Members[1]: 2
Sep  2 09:42:31 n2 corosync[3222]:  [MAIN  ] Completed service synchronization, ready to provide service.
Sep  2 09:42:31 n2 pmxcfs[3204]: [dcdb] notice: members: 2/3204
Sep  2 09:42:31 n2 pmxcfs[3204]: [dcdb] notice: all data is up to date
Sep  2 09:42:31 n2 pmxcfs[3204]: [status] notice: members: 2/3204
Sep  2 09:42:31 n2 pmxcfs[3204]: [status] notice: all data is up to date
Sep  2 09:42:32 n2 corosync[3222]:  [TOTEM ] A new membership (10.3.0.2:24) was formed. Members
Sep  2 09:42:32 n2 corosync[3222]:  [QUORUM] Members[1]: 2
Sep  2 09:42:32 n2 corosync[3222]:  [MAIN  ] Completed service synchronization, ready to provide service.
Sep  2 09:42:34 n2 corosync[3222]:  [TOTEM ] A new membership (10.3.0.2:28) was formed. Members
Sep  2 09:42:34 n2 corosync[3222]:  [QUORUM] Members[1]: 2
Sep  2 09:42:34 n2 corosync[3222]:  [MAIN  ] Completed service synchronization, ready to provide service.
Sep  2 09:42:35 n2 corosync[3222]:  [TOTEM ] A new membership (10.3.0.2:32) was formed. Members
Sep  2 09:42:35 n2 corosync[3222]:  [QUORUM] Members[1]: 2
Sep  2 09:42:35 n2 corosync[3222]:  [MAIN  ] Completed service synchronization, ready to provide service.
Sep  2 09:42:36 n2 corosync[3222]:  [TOTEM ] A new membership (10.3.0.2:36) was formed. Members
Sep  2 09:42:36 n2 corosync[3222]:  [QUORUM] Members[1]: 2
Sep  2 09:42:36 n2 corosync[3222]:  [MAIN  ] Completed service synchronization, ready to provide service.
...
Sep  2 09:46:46 n2 corosync[3222]:  [TOTEM ] A new membership (10.3.0.2:732) was formed. Members
Sep  2 09:46:46 n2 corosync[3222]:  [QUORUM] Members[1]: 2
Sep  2 09:46:46 n2 corosync[3222]:  [MAIN  ] Completed service synchronization, ready to provide service.



root@n2:~# grep -c 'Members\[1\]: 2' /var/log/daemon.log
183

After a reboot n2 says:

Code:

from daemon.log:
...
Sep  2 10:27:31 n2 networking[1159]: done.
Sep  2 10:27:31 n2 pmxcfs[1416]: [quorum] crit: quorum_initialize failed: 2
Sep  2 10:27:31 n2 pmxcfs[1416]: [quorum] crit: can't initialize service
Sep  2 10:27:31 n2 pmxcfs[1416]: [confdb] crit: cmap_initialize failed: 2
Sep  2 10:27:31 n2 pmxcfs[1416]: [confdb] crit: can't initialize service
Sep  2 10:27:31 n2 pmxcfs[1416]: [dcdb] crit: cpg_initialize failed: 2
Sep  2 10:27:31 n2 pmxcfs[1416]: [dcdb] crit: can't initialize service
Sep  2 10:27:31 n2 pmxcfs[1416]: [status] crit: cpg_initialize failed: 2
Sep  2 10:27:31 n2 pmxcfs[1416]: [status] crit: can't initialize service
...


root@n2:~# pvecm status
Quorum information
------------------
Date:            Wed Sep  2 10:28:39 2015
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000002
Ring ID:          892
Quorate:          No


Votequorum information
----------------------
Expected votes:  2
Highest expected: 2
Total votes:      1
Quorum:          2 Activity blocked
Flags:           


Membership information
----------------------
    Nodeid      Votes Name
0x00000002          1 10.3.0.2 (local)


Any hints appreciated, TIA

ESXi to PVE migration

$
0
0
Hello everybody,
I'm planning the migration from our current virtual environment from VMware ESXi 5.5 to Proxmox.
I need to migrate VMs from main server to a temporary server, install proxmox on the main server, and then move VMs back in there.

Since now I have a FC-SAN available, here's what I would like to do:
1) Install PVE on temporary server
2) Create a RAID10 volume on the SAN and export a LUN
3) Attach the SAN LUN from the temporary server and create LVM partition on it for storing VMs data.
4) Migrate all VMs from ESXi to Proxmox
5) Shutdown ESXi host, reinstall it with Proxmox and reconfigure network and storage.
6) Reconfigure VMs on the new host
7) Shutdown the temporary server.

At point 6: How do I migrate VMs from the temporary host to the new one? Is copying config files from /etc/pve/nodes/nodename/qemu-server to the same path in the new server enough? (after, of course, have reconfigured network and storage the same way as the temp node)

I would like to avoid creating a cluster just to move vms from an host to another...

Thanks

High Availability Cluster with PVE-zsync

$
0
0
Hello,

Is it possible to use the High Availability Cluster feature in combination with zsync instead of a shared storage? Any idea's on the approach?

And when i click on the tab "HA" than its all grayed out, any thoughts?

Best regards,

Rens

[Proxmox 4b1] ERROR: online migrate failure - aborting

$
0
0
Hi all,I've installed a mini Cluster running on 2 NUC's, it's meant to be my test-env! Installed yesterday, upgraded and running fine so far, except the migration... EDIT: And it seems I'm not allowed to post:
Quote:

You are not allowed to post any kinds of links, images or videos until you post a few times.
But as you'll see in my paste, there's no such thing's in my post!!!
Viewing all 173547 articles
Browse latest View live


Latest Images

<script src="https://jsc.adskeeper.com/r/s/rssing.com.1596347.js" async> </script>