Channel: Proxmox Support Forum

sheepdog 0.6 cache and live migration

Hi, I'm trying to test sheepdog and see what it can do for me, so I'm a newbie in this regard.
I've read that the sheepdog cache (that is, per-node caching of objects from the rest of the sheepdog cluster, e.g. sheep -w object,size=20000 /mnt/sheepdog) gives a big performance boost.
As far as I've been able to decode the Proxmox sheepdog script, it's not activated, nor is it possible to enable it via the /etc/default/sheepdog config file.
So I ask: is activation planned?
If so, will live migration take it into account?
As far as I've understood, with the cache enabled a migration requires:
- pausing the VM with a qm monitor command
- collie vdi cache flush test -a target_node
- collie vdi cache delete vdiname
- then migrating (see the sketch below)
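A minimal sketch of how that sequence could look, assuming VM 101 backed by a vdi named vm-101-disk-1 and a target node called prox02 (all names are placeholders, and the exact collie cache syntax may differ between sheepdog releases):
Code:

# pause the guest via the QEMU monitor (leave the monitor with Ctrl-D)
qm monitor 101
qm> stop

# flush the node-local object cache to the cluster, then drop it
collie vdi cache flush vm-101-disk-1
collie vdi cache delete vm-101-disk-1

# finally trigger the live migration to the other node
qm migrate 101 prox02 --online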
Thanks a lot

2-node cluster, shared storage, no HA, one dies, how to run its VM on the other?

Hi, I've created a 2-node cluster, NO HA (I don't want automatic stuff going on, nor do I have clear ideas about fencing, network redundancy, etc.), with shared storage.
Let's say I have VM 101 running on node 1 on shared storage, and VM 102 running on node 2 on the same shared storage.
Now node 2 dies. I would like to send node 2 off for repair and run VM 102 on node 1.
I thought I could do it this way, but I got an error (running from node 1, of course):
Code:

# mv /etc/pve/nodes/prox02/qemu-server/102.conf  /etc/pve/nodes/prox01/qemu-server/102.conf
mv: cannot move `/etc/pve/nodes/prox02/qemu-server/102.conf' to `/etc/pve/nodes/prox01/qemu-server/102.conf': Permission denied

The same happens with the 'cp' command. The config file is accessible, since a "cat" works fine.
Any idea? Is there a better approach?

SOLUTION:
OK, it was due to a quorum problem; the cluster was inactive.
After running
pvecm expected 1
I was able to move the config file.
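For reference, a minimal sketch of the whole recovery on the surviving node, using the node and VM names from this example:
Code:

# with node 2 dead, a 2-node cluster has no quorum, so tell it one vote is enough
pvecm expected 1

# /etc/pve is writable again, so the config can be reassigned to node 1 and started
mv /etc/pve/nodes/prox02/qemu-server/102.conf /etc/pve/nodes/prox01/qemu-server/102.conf
qm start 102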
I don't understand this quorum stuff when there is NO HA involved. I've turned the second node back on, 'expected' jumped to 2 again (same as quorum), and things seem to work smoothly (the second node no longer shows the VM config, so it has somehow synced it... magic?).

Thanks a lot

Proxmox fails when in production

Hi friends,

I have a big problem.

My configuration is a cluster with 2 Proxmox 3 nodes (Dell 1950, 2 CPUs and 16 GB RAM each).

This is the partition table on both servers:

root@proxmox2:/var/log# df -h
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 1.6G 336K 1.6G 1% /run
/dev/mapper/pve-root 95G 2.5G 88G 3% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.2G 41M 3.1G 2% /run/shm
/dev/mapper/pve-data 791G 46G 746G 6% /var/lib/vz
/dev/sda1 495M 34M 436M 8% /boot
/dev/fuse 30M 24K 30M 1% /etc/pve
172.16.0.2:/var/lib/vz/storage-proxmox2 791G 46G 746G 6% /mnt/pve/storage-proxmox2
172.16.0.1:/var/lib/vz/storage-proxmox1 791G 156G 636G 20% /mnt/pve/storage-proxmox1


They are connected directly with a Gb NIC.


I installed the OS on the servers in my office with local IPs.
I tested multiple clonings, migrations, etc.; the CPU load went up to 3 or 4 and the VMs didn't even notice. All perfect!! Proxmox is incredible, it doesn't fail!


Afterwards, I installed the servers in the IDC and changed the IPs of the nodes in:

nano /etc/network/interfaces //Edit vmbr0
nano /etc/hosts
nano /etc/issue



I installed VM 100 (the VM runs an SMTP service and Apache) and created a template; everything worked correctly.

The next day I cloned the template to the other node (full clone); the CPU load went up to 9, 10, 11, 12... and VM 100 went down. The node's web interface stopped responding in Firefox. I forced a restart of the node and everything returned to normal.

The next day I did the same test with a template, and it failed again...


Today, VM 101 is running on the node, but its DNS points to another server (no customers are using this VM; it is just running for testing purposes).
Now I test cloning from a template again and it works correctly... the CPU load goes up to 2 or 3 and the clone is ready in 3 minutes.


Any explanation?


Thanks!! ;)

Web interface + login prompt + multi-node

Hey,

I'm trying Proxmox 3.0 on VMware Player to evaluate the product. I have a problem with the web interface.

1 - I log in to the web interface (which is on the first node)
2 - I navigate around the web interface
3 - when I click on the second node, it asks me to type my login/password
4 - I enter my account details, but it loops back to the same login prompt

I can't use the cluster configuration because of this problem.

Can anyone help me?

Thanks

Network Configuration

Hi, I have a question about the network model.
I want to use the following configuration: 10 VLANs trunked on the switch (an HP 1910), two tagged VLANs defined in /etc/network/interfaces on the server (one for storage and one for management), and virtual machines created with their own VLAN tags.
Is this possible? (See the sketch below.)
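A minimal sketch of the kind of /etc/network/interfaces layout being described, assuming eth0 is the trunk port, VLAN 10 is management and VLAN 20 is storage (interface names, VLAN IDs and addresses are placeholders, not a tested configuration):
Code:

auto lo
iface lo inet loopback

# trunk port carrying all tagged VLANs from the HP 1910
auto eth0
iface eth0 inet manual

# management VLAN, tagged 10 on the trunk
auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eth0.10
        bridge_stp off
        bridge_fd 0

# storage VLAN, tagged 20 on the trunk
auto vmbr1
iface vmbr1 inet static
        address 192.168.20.2
        netmask 255.255.255.0
        bridge_ports eth0.20
        bridge_stp off
        bridge_fd 0

# bridge on the raw trunk for the guests; each VM NIC then carries its own
# tag in the VM config, e.g. net0: virtio=...,bridge=vmbr2,tag=30
auto vmbr2
iface vmbr2 inet manual
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0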

PVE 3.0: Reverting to standalone server?

I removed a node from a cluster using pvecm delnode xxxxx. Now I can no longer connect to the web interface. How can I fix this short of re-installing PVE 3.0?

Thanks,
Wolf

Install Proxmox 3.0 on Debian wheezy - dependency problems?

I'm trying to install Proxmox 3.0 on Debian 7.1 and I'm getting some dependency problems. How can I solve this?

Code:

root@proxmox3wn02:~# apt-get install proxmox-ve-2.6.32 ntp ssh lvm2 postfix ksm-control-daemon vzprocps open-iscsi bootlogd
Reading package lists... Done
Building dependency tree
Reading state information... Done
ntp is already the newest version.
open-iscsi is already the newest version.
ssh is already the newest version.
postfix is already the newest version.
bootlogd is already the newest version.
lvm2 is already the newest version.
ksm-control-daemon is already the newest version.
proxmox-ve-2.6.32 is already the newest version.
vzprocps is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
11 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue [Y/n]? y
Setting up pve-cluster (3.0-4) ...
[....] Starting pve cluster filesystem : pve-cluster[main] crit: Unable to get local IP address
 (warning).
invoke-rc.d: initscript pve-cluster, action "start" failed.
dpkg: error processing pve-cluster (--configure):
 subprocess installed post-installation script returned error exit status 255
dpkg: dependency problems prevent configuration of redhat-cluster-pve:
 redhat-cluster-pve depends on pve-cluster; however:
  Package pve-cluster is not configured yet.

dpkg: error processing redhat-cluster-pve (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of fence-agents-pve:
 fence-agents-pve depends on redhat-cluster-pve; however:
  Package redhat-cluster-pve is not configured yet.

dpkg: error processing fence-agents-pve (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libpve-access-control:
 libpve-access-control depends on pve-cluster; however:
  Package pve-cluster is not configured yet.

dpkg: error processing libpve-access-control (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of clvm:
 clvm depends on redhat-cluster-pve; however:
  Package redhat-cluster-pve is not configured yet.

dpkg: error processing clvm (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libpve-storage-perl:
 libpve-storage-perl depends on clvm; however:
  Package clvm is not configured yet.

dpkg: error processing libpve-storage-perl (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of qemu-server:
 qemu-server depends on libpve-storage-perl; however:
  Package libpve-storage-perl is not configured yet.
 qemu-server depends on pve-cluster; however:
  Package pve-cluster is not configured yet.
 qemu-server depends on redhat-cluster-pve; however:
  Package redhat-cluster-pve is not configured yet.

dpkg: error processing qemu-server (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of resource-agents-pve:
 resource-agents-pve depends on redhat-cluster-pve; however:
  Package redhat-cluster-pve is not configured yet.

dpkg: error processing resource-agents-pve (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of pve-manager:
 pve-manager depends on qemu-server (>= 1.1-1); however:
  Package qemu-server is not configured yet.
 pve-manager depends on pve-cluster (>= 1.0-29); however:
  Package pve-cluster is not configured yet.
 pve-manager depends on libpve-storage-perl; however:
  Package libpve-storage-perl is not configured yet.
 pve-manager depends on libpve-access-control (>= 3.0-2); however:
  Package libpve-access-control is not configured yet.
 pve-manager depends on redhat-cluster-pve; however:
  Package redhat-cluster-pve is not configured yet.
 pve-manager depends on resource-agents-pve; however:
  Package resource-agents-pve is not configured yet.
 pve-manager depends on fence-agents-pve; however:
  Package fence-agents-pve is not configured yet.

dpkg: error processing pve-manager (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of vzctl:
 vzctl depends on pve-cluster; however:
  Package pve-cluster is not configured yet.
 vzctl depends on libpve-storage-perl; however:
  Package libpve-storage-perl is not configured yet.

dpkg: error processing vzctl (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of proxmox-ve-2.6.32:
 proxmox-ve-2.6.32 depends on pve-manager; however:
  Package pve-manager is not configured yet.
 proxmox-ve-2.6.32 depends on qemu-server; however:
  Package qemu-server is not configured yet.
 proxmox-ve-2.6.32 depends on vzctl (>= 3.0.29); however:
  Package vzctl is not configured yet.

dpkg: error processing proxmox-ve-2.6.32 (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 pve-cluster
 redhat-cluster-pve
 fence-agents-pve
 libpve-access-control
 clvm
 libpve-storage-perl
 qemu-server
 resource-agents-pve
 pve-manager
 vzctl
 proxmox-ve-2.6.32
E: Sub-process /usr/bin/dpkg returned an error code (1)
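For what it's worth, the "crit: Unable to get local IP address" message from pve-cluster usually means the node's hostname does not resolve to its real IP address. A minimal sketch of the kind of /etc/hosts entry it expects, using the hostname from the prompt above and a placeholder address:
Code:

127.0.0.1       localhost
# the node's hostname must map to its actual IP, not to 127.0.1.1
192.0.2.10      proxmox3wn02.example.com proxmox3wn02

Once that resolves correctly, re-running the install command (or apt-get -f install) should let the remaining packages configure.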

Display Inconsistency

I'm seeing a small inconsistency... nothing impacting usability. I have two servers that I installed at the same time, with the same version, and I'm seeing a GUI display inconsistency. If you look at the two screenshots below, just to the right of the PROXMOX logo in the top left corner you'll see that one shows the current version while the other does not. I included the Summary screen of each node so you can see that the version and kernel are 100% the same. I don't know what is causing the display inconsistency, but I thought I'd share it just in case anyone cares:

Attachments: Screen Shot 2013-06-19 at 2.32.45 PM.png, Screen Shot 2013-06-19 at 2.33.01 PM.png

Adding Hardware RAID

I initially had six 2 TB disks.
I was warned against SoftRaid and FakeRaid, so I installed Proxmox on a single disk, and rotated backups to the other disks every 12 - 24 hours.
I am ready to buy my RAID Card and add RAID to this mix. How can I establish RAID when there wasn't anything before?
If I install PROXMOX on a different disk, will it recognize my backups and 1 VM?
If possible, I would like to move it to completely different hardware that has ECC memory and RAID. Please advise.

Can't launch VMs

I just installed Proxmox to test it, and I got the error below when trying to start a VM with NAT networking, since I have not set up bridged networking rules yet.

WARNING: failed to find acpi-dsdt.aml

qemu: could not load PC BIOS 'bios.bin'
TASK ERROR: start failed: command '/usr/bin/kvm -id 101 -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/101.vnc,x509,password -pidfile /var/run/qemu-server/101.pid -daemonize -name tes2 -smp 'sockets=1,cores=1' -nodefaults -boot 'menu=on' -vga cirrus -k en-us -m 512 -cpuunits 1000 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -drive 'file=/var/lib/vz/template/iso/debian-7.1.0-amd64-netinst.iso,if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/var/lib/vz/images/101/vm-101-disk-1.qcow2,if=none,id=drive-ide0,format=qcow2,aio=native,cache=none' -device 'ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=100' -netdev 'type=user,id=net0,hostname=tes2' -device 'e1000,mac=B6:11:0E:BB:45:6F,netdev=net0,bus=pci.0 ,addr=0x12,id=net0,bootindex=300'' failed: exit code 1
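A minimal sketch of how this is often checked, assuming the default /usr/share/kvm data directory used by pve-qemu-kvm (the paths are an assumption, not taken from the post):
Code:

# verify that the firmware files named in the error are actually on disk
ls -l /usr/share/kvm/bios.bin /usr/share/kvm/acpi-dsdt.aml

# if they are missing, reinstalling the package that ships them may restore them
apt-get install --reinstall pve-qemu-kvm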

pveversion -v
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-19-pve: 2.6.32-96
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.14.3-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-22
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1


Windows 7 KVMs not starting

Hello
I have 3 Windows KVMs that are not working. In the console they are stuck at:
"Booting from Hard Disk..."

Here is the /etc/pve/qemu-server config file:
Code:

bootdisk: ide0
cores: 2
ide0: drbd-fbc241:vm-310-disk-1,size=6148M
ide2: none,media=cdrom
memory: 1024
name: Win7-template
net0: e1000=36:B2:54:4B:63:CD,bridge=vmbr0
ostype: win7
sockets: 1

and pveversion -v :
Code:

fbc241 /etc/pve/qemu-server # pveversion -v
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-17-pve: 2.6.32-83
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-18-pve: 2.6.32-88
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-22
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1




Any clues on how to debug or fix the issue?

'ipcc_send_rec failed' on a non-cluster installation

Hi,

I do not run a cluster, but I get 'ipcc_send_rec failed: Connection refused' error messages in /var/log/syslog.
What I did: I 'cloned' the PVE installation to a new hard disk (created LVM volumes and rsynced the old volumes onto them). The system boots up properly, but PVE is not running.
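A minimal sketch of how this is usually narrowed down; 'ipcc_send_rec failed: Connection refused' generally means the pve-cluster service (pmxcfs) is not running, so /etc/pve is not mounted (service names assume a stock PVE 3.x install):
Code:

# is the cluster filesystem mounted and the service running?
mount | grep /etc/pve
service pve-cluster status

# try to start it and watch syslog for the reason it refuses
service pve-cluster start
tail -n 50 /var/log/syslog

A common cause after cloning to a new disk is that the hostname no longer resolves to the node's IP in /etc/hosts.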

Code:

/etc/pve# pveversion -v
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-22
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1

Issue with rgmanager after upgrading 2.3 to 3.0

Hi, I upgraded from 2.3 to 3.0 and now I have a problem with rgmanager.

If I try to install a new cluster.conf I can't :( and if I try to use "fence_tool leave" I get this report:

found dlm lockspace /sys/kernel/dlm/rgmanager
fence_tool: cannot leave due to active systems
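A minimal sketch of the usual way out of that state, assuming the active lockspace really is rgmanager's (service names as on a stock PVE 3.x node):
Code:

# rgmanager still holds a dlm lockspace, so stop it before leaving the fence domain
service rgmanager stop
fence_tool leave

# then cman can be restarted so a corrected cluster.conf is picked up
service cman restart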



What do you recommend I do?

Thanks.

VMA archive restore outside of Proxmox

Hi,

After reading the (sparse) documentation in the wiki and in the repository, as well as a couple of threads here in the forum, a couple of pressing questions came to mind:
  • Is it possible to restore VMA archives outside of (that is, without) a recent PVE install, and if so, how? (See the sketch below.)
  • Is there a GUI/CLI option to fall back to the old 20th-century tar backup method?
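On the first question, a minimal sketch of what appears possible with the vma CLI tool shipped in the pve-qemu-kvm package, assuming that tool (or a self-built copy) is available on the restore machine (archive names are placeholders):
Code:

# decompress first if the backup was made with lzo compression
lzop -d backup.vma.lzo

# list the VM config and device streams contained in the archive
vma list backup.vma

# extract the config and the raw disk images into a directory
vma extract backup.vma /tmp/restore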


Thank you in advance for your enlightenments
Bests

Proxmox Mail Gateway: Hotfix 3.1-5773

We just released hotfix 3.1-5773 for our Proxmox Mail Gateway 3.1.

Release Notes

06.06.2013: Proxmox Mail Gateway 3.1-5773

  • ClamAV 0.97.8
  • Avira SAV bug fix (license check)
  • Spamassassin rule updates


19.04.2013: Proxmox Mail Gateway 3.1-5741

  • fix admin permissions for cluster nodes (admin can now reboot slaves)

Download
http://www.proxmox.com/downloads/category/service-packs
__________________
Best regards,

Martin Maurer

Sheepdog storage image migration/creation bug

Hello!

I have encountered a strange bug in the storage migration system (or the GUI?).

I have a VM with two HDDs:
vm-500-disk-1
vm-500-disk-2

I migrated "vm-500-disk-1" to sheepdog storage; now I click on "vm-500-disk-2" and start the migration.
Here is the error:
Code:

create full clone of drive virtio1 (local:500/vm-500-disk-2.raw)
TASK ERROR: storage migration failed: sheepdog create vm-500-disk-1' error: Failed to create VDI vm-500-disk-1: VDI exists already

It seems that Proxmox is trying to migrate the wrong image.

Update:

Also, I can't create a second HDD on sheepdog storage:
Code:

sheepdog create vm-112-disk-1' error: Failed to create VDI vm-112-disk-1: VDI exists already (500)
Here I add "vm-112-disk-2", but Proxmox tries to add "vm-112-disk-1".

mount a new LV as /var/lib/vz

Hi,

This is something I have never done before, and I need some info/help.
I installed PVE 3.0 over a Wheezy 7.1 that was partitioned like this:
- a boot partition (~500 MB)
- a swap partition (~5 GB)
- a free partition (~240 GB)

the free partition was added as a PV to LVM
the PV was added to a "pve" VG
two LVs were created on the "pve" VG
- "root" (~8GB)
- "data" (~210GB)

This was before installing PVE over Debian, so I thought I'd prepare those two typical PVE LVs to use after the PVE install:
I was able to put the root filesystem on pve-root, but since there was no /var/lib/vz folder yet, I left pve-data unused.

Then I installed PVE, which went smoothly, and now I have:
Code:

pveversion -v
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-22
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1

Now of course /var/lib/vz is too small, and I wish to mount it on the pve-data LV that is waiting there.

I currently have no VM/CT created, so what is the best procedure? I thought I could simply (see the sketch after this list):
- mount pve-data elsewhere,
- move everything from /var/lib/vz there,
- remove /var/lib/vz,
- mount pve-data as /var/lib/vz?
- update fstab/mtab?
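A minimal sketch of those steps, assuming pve-data is already formatted with ext4 (otherwise mkfs it first) and that no VM/CT is running:
Code:

# mount the waiting LV somewhere temporary
mkdir /mnt/newvz
mount /dev/pve/data /mnt/newvz

# move the current contents across, preserving ownership and permissions
cp -a /var/lib/vz/. /mnt/newvz/
rm -rf /var/lib/vz/*

# remount the LV in its final place and make it permanent
umount /mnt/newvz
mount /dev/pve/data /var/lib/vz
echo '/dev/pve/data /var/lib/vz ext4 defaults 0 1' >> /etc/fstab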

Should this work, or is there a better way?

Should I have done the whole procedure differently or better? If so, how?

Any suggestions welcome,

Thank you,
Marco

How to add SSD as additional local storage for OpenVZ?

We are running Proxmox VE 3.0 on several nodes, installed on HW RAID10 arrays. General performance is fine, but there are particular containers that use extremely high IO. We could very much benefit from write-back SSD caching (bcache), but it looks like it's not going to happen anytime soon (soonest is after RHEL7 gets released).

According to the storage wiki:
Quote:

OpenVZ containers must be on local storage or NFS
As these high-IO containers are very latency sensitive, we can forget NFS storage; gigabit is simply not enough.

We are looking for a reliable way to add an SSD (as a locally mounted volume or directory) to an already installed Proxmox VE node and place 1-2 containers on it. Some kind of backup (not necessarily snapshots) should work on it, and preferably migration as well (see the sketch below).
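For what it's worth, a minimal sketch of one way this is often done, assuming the SSD shows up as /dev/sdb and that the pvesm dir-storage options behave as in PVE 3.x (device, mount point and storage ID are placeholders):
Code:

# format the SSD and mount it at boot
mkfs.ext4 /dev/sdb1
mkdir -p /mnt/ssd
echo '/dev/sdb1 /mnt/ssd ext4 defaults,noatime 0 2' >> /etc/fstab
mount /mnt/ssd

# register it as directory storage usable for container root filesystems
pvesm add dir ssd-local -path /mnt/ssd -content rootdir

vzdump backups and offline migration should then treat it like any other directory storage, but that would need testing.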

Has anyone done this before successfully?

Request: Quick guide to migrate system drive

I'm running out of space on the system drive of a PMX3 installation, and I'd like to migrate it to a bigger drive. I have the new drive connected, and was looking for a quick guide on how to migrate everything to the new drive. I intend to remove the old drive and boot from the new drive. (I'm not an expert with LVM, so that's probably the piece I understand the least... the rest seems easy.)
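Not a full guide, but a minimal sketch of the LVM part, under two assumptions: the install uses the standard pve volume group on /dev/sda2, and the new drive is /dev/sdb (partitioning the new drive, /boot and the grub install are separate steps not covered here):
Code:

# turn a partition on the new drive into a PV and add it to the pve VG
pvcreate /dev/sdb2
vgextend pve /dev/sdb2

# move all allocated extents off the old drive; this can run while the system is up
pvmove /dev/sda2

# remove the old drive from the VG, then grow the data LV and its filesystem
vgreduce pve /dev/sda2
lvextend -l +100%FREE /dev/pve/data
resize2fs /dev/pve/data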

Thanks in advance.

Proxmox 3.x Two-Node Cluster

Hello All,

I am a new member who has been following this project, as well as a couple of others including OpenStack and OpenNebula, for more than a year, but I decided that Proxmox was more suitable for our needs. I have played with the older 1.x and 2.x versions briefly via VMware Fusion on my iMac, but decided to jump in properly at version 3.x to set up a production system, especially since there is better support for Ceph.

So I am looking to set up a two-node cluster and have a couple of questions:

1) Is the procedure exactly the same as for creating a version 2.x two-node cluster, as described at http://pve.proxmox.com/wiki/Two-Node...bility_Cluster ?

2) Both Proxmox 3.x nodes are on the same VLAN with a public IP range that can be moved/shared between the two nodes, as I thought this would be necessary for live migration of KVM VMs. Are there any special network configuration requirements I need to consider?

Any other comments or assistance would be greatly appreciated.

Thanks in Advance

Samwayne