Channel: Proxmox Support Forum

pvetest WebUI upload ISO error

Quote:

# pveversion -v
pve-manager: 2.3-10 (pve-manager/2.3/499c7b4d)
running kernel: 2.6.32-18-pve
proxmox-ve-2.6.32: 2.3-88
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-18-pve: 2.6.32-88
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-13
pve-firmware: 1.0-21
libpve-common-perl: 1.0-48
libpve-access-control: 1.0-25
libpve-storage-perl: 2.3-3
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-6
ksm-control-daemon: 1.1-1


[Attachment: proxmoxupload.png]

It's a two-node cluster, and I'm uploading an ISO to one node's local storage.

The first upload attempt fails with this error:
Quote:

starting file import from: /var/tmp/CGItemp33068
target node: proxmox20150
target file: /var/lib/vz/template/iso/virtio-win-0.1-52.iso
file size is: 58497024
command: cp /var/tmp/CGItemp33068 /var/lib/vz/template/iso/virtio-win-0.1-52.iso
TASK ERROR: import failed: cp: cannot stat `/var/tmp/CGItemp33068': No such file or directory
The second upload attempt succeeds.
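As a workaround when the upload fails, the ISO can also be copied straight into the storage path shown in the task log (a sketch, assuming SSH access to the target node):

Code:

# copy the ISO directly into the node's local ISO directory
scp virtio-win-0.1-52.iso root@proxmox20150:/var/lib/vz/template/iso/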

Networking - Two subnets

Hi,

I have two subnets, a /29 and a /27

xxx.xxx.199.208/29
xxx.xxx.221.64/27

The /29 IP block is available directly from the network drop
The /27 IP block is statically routed to xxx.xxx.199.210 of the /29

I will be using the /27 for containers, but I can't get it working with venet. Here are my interfaces:

Code:

# network interface settings
auto lo
iface lo inet loopback


auto eth0
allow-hotplug eth0
iface eth0 inet static
    address  xxx.xxx.199.210
    netmask  255.255.255.248
    gateway  xxx.xxx.199.209
    broadcast  xxx.xxx.199.215
    network xxx.xxx.199.208
    dns-nameservers 8.8.8.8 8.8.4.4


auto eth0.100
allow-hotplug eth0.100
iface eth0.100 inet static
    address xxx.xxx.221.65
    netmask 255.255.255.224
    broadcast xxx.xxx.221.95
    network xxx.xxx.221.64


auto eth0.101
allow-hotplug eth0.101
iface eth0.101 inet static
        address xxx.xxx.221.66
        netmask 255.255.255.224
        broadcast xxx.xxx.221.95
        network xxx.xxx.221.64


auto eth0.102
allow-hotplug eth0.102
iface eth0.102 inet static
        address xxx.xxx.221.67
        netmask 255.255.255.224
        broadcast xxx.xxx.221.95
        network xxx.xxx.221.64


auto eth0.103
allow-hotplug eth0.103
iface eth0.103 inet static
        address xxx.xxx.221.68
        netmask 255.255.255.224
        broadcast xxx.xxx.221.95
        network xxx.xxx.221.64


auto eth0.104
allow-hotplug eth0.104
iface eth0.104 inet static
        address xxx.xxx.221.69
        netmask 255.255.255.224
        broadcast xxx.xxx.221.95
        network xxx.xxx.221.64


auto eth0.105
allow-hotplug eth0.105
iface eth0.105 inet static
        address xxx.xxx.221.70
        netmask 255.255.255.224
        broadcast xxx.xxx.221.95
        network xxx.xxx.221.64


iface eth1 inet manual


iface vmbr0 inet manual
    bridge_ports none
    bridge_stp off
    bridge_fd 0
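For the containers themselves, venet is routed rather than bridged, so with the /27 statically routed to xxx.xxx.199.210 the container addresses can be assigned directly; a minimal sketch (the CTID and the forwarding sysctl are my assumptions, the address is one out of the /27):

Code:

# enable forwarding on the host so routed venet traffic can leave via eth0
sysctl -w net.ipv4.ip_forward=1

# assign an address from the routed /27 to a container
vzctl set 101 --ipadd xxx.xxx.221.66 --save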

NFS Write-Issue??

Hi there,
I've got a 3-node cluster running and an NFS storage server based on Debian Squeeze. The storage is mounted on the cluster and I have some machines running there.

The thing is, when a machine performs write transactions, I can see in iotop that they are not pushed to the NFS server right away; they lag behind by a few seconds. The NFS server then flushes the data and goes quiet again. During that time the rsync command doesn't transfer any data, until write I/O on the NFS server picks up again.

Is there something wrong in my configuration?

Here's the NFS part of storage.cfg:

nfs: NFS-Storage0
    path /mnt/pve/NFS-Storage0
    server 10.1.0.200
    export /mnt/drbd
    options vers=3
    content images,iso,vztmpl,backup
    maxfiles 3
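For what it's worth, the burst-then-quiet pattern usually points at write-back caching somewhere in the NFS path. A sketch of what could be tested, at the cost of write throughput (the export line and the mount option are generic NFS settings, not taken from this setup):

Code:

# on the NFS server, in /etc/exports: force synchronous writes
/mnt/drbd 10.1.0.0/24(rw,sync,no_subtree_check)

# on the PVE side, the storage options line can request synchronous writes too:
#   options vers=3,sync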


I'm running this version of PVE:
pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-17-pve: 2.6.32-83
pve-kernel-2.6.32-16-pve: 2.6.32-82
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-72
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1



thanks a lot

Host

Proxmox is/was available as a package install on Debian.
Great.

Proxmox is Perl, a web server, OpenVZ and KVM.

Those are available on FreeBSD (and there are even jails that could be added).

It would be cool to port Proxmox to the FreeBSD ports system (because of iptables vs. PF, and more).

Cluster with unicast

Hello,

As OVH doesn't support multicast, I decided to configure my cluster to use unicast.

So, I created a cluster on node 1, changed the config to enable unicast, added the following line to /etc/hosts: "IP.OF.NOD.2 nsXXXXXX.ovh.net nsXXXXXX", and rebooted the node.
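For reference, the usual way to switch cman to unicast is to set the udpu transport on the cman element of /etc/pve/cluster.conf; a sketch of that line (the keyfile path is the stock one):

Code:

<cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu"/>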

I reinstalled node 2 (to be sure) and ran "pvecm add IP.OF.NOD.1". As expected, I got "Waiting for quorum... Timed-out waiting for cluster". After that, I added the following line to /etc/hosts: "IP.OF.NOD.1 nsXXXXXX.ovh.net nsXXXXXX" and rebooted the node.

But even after that, if I run "/etc/init.d/cman restart" on node 2, I get the error: Waiting for quorum... Timed-out waiting for cluster.

What am I doing wrong?

-Ch@rlus

Executable files in /etc/pve

Hi,

I'd like to put an executable file in /etc/pve to share it easily with the other nodes of the cluster (an init.d script used as an HA resource), but I can't chmod +x it:

Code:

chmod: changing permissions of `/etc/pve/myscript': Function not implemented
That's frustrating because the OpenVZ action scripts in /etc/pve/openvz are marked as executable.

Is there a trick to mark a file as executable under /etc/pve?
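A possible workaround (a sketch; pmxcfs simply doesn't implement chmod, so the exec bit cannot be set there): keep the script in /etc/pve and either call it through the interpreter, or copy it into a local, executable location on each node.

Code:

# run it through the interpreter instead of relying on the exec bit
bash /etc/pve/myscript

# or install it to a local path with the exec bit set on each node
install -m 755 /etc/pve/myscript /usr/local/bin/myscript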

/etc/pve/nodes/****/openvz - frozen / not responding

None of the nodes in the cluster can be managed at this point. It seems to be some issue with the pve mount.

While I can access /etc/pve, I have issues once I get to the nodes/****/openvz directory: if I run an ls on that directory for any of my nodes, it hangs.

I cannot provision, delete, or modify any VMs on the cluster now, and I'm not sure what the issue is.


#############

Troubleshooting:

I looked at cluster.conf and saw that node 11 has nodeid 1, so I rebooted node 11, thinking there might be an issue reading the pve mount from the master node, but after the reboot the problem persists.

All nodes show as online (green), but we cannot make any adjustments to the cluster.


#############

output of cluster.conf

<?xml version="1.0"?>
<cluster config_version="49" name="FL-Cluster">
<cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>
<clusternodes>




<clusternode name="proxmox4" votes="1" nodeid="4"/>
<clusternode name="poxmox5" votes="1" nodeid="5"/>
<clusternode name="proxmox6" votes="1" nodeid="6"/>
<clusternode name="proxmox7" votes="1" nodeid="7"/>
<clusternode name="proxmox9" votes="1" nodeid="9"/>
<clusternode name="Proxmox10" votes="1" nodeid="10"/>
<clusternode name="proxmox8" votes="1" nodeid="8"/>


<clusternode name="proxmox11" votes="1" nodeid="1"/>
<clusternode name="proxmox2" votes="1" nodeid="2"/>


<clusternode name="proxmox3a" votes="1" nodeid="3"/></clusternodes>
<rm/>
</cluster>



output of mount:

root@proxmox2:~# mount
/dev/mapper/pve-root on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
beancounter on /proc/vz/beancounter type cgroup (rw,name=beancounter)
container on /proc/vz/container type cgroup (rw,name=container)
fairsched on /proc/vz/fairsched type cgroup (rw,name=fairsched)
none on /sys/kernel/config type configfs (rw)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,default_permissions,allow_other)

#############

nothing relevant in the syslog:

#############

root@proxmox2:~# cat /var/log/syslog | tail
Feb 28 13:15:00 proxmox2 pmxcfs[525189]: [status] notice: received log
Feb 28 13:15:00 proxmox2 pmxcfs[525189]: [status] notice: received log
Feb 28 13:17:01 proxmox2 /USR/SBIN/CRON[788634]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Feb 28 13:18:56 proxmox2 pvedaemon[525098]: <root@pam> successful auth for user 'root@pam'
Feb 28 13:27:12 proxmox2 pmxcfs[525189]: [status] notice: received log
Feb 28 13:28:24 proxmox2 rrdcached[1761]: flushing old values
Feb 28 13:28:24 proxmox2 rrdcached[1761]: rotating journals
Feb 28 13:28:24 proxmox2 rrdcached[1761]: started new journal /var/lib/rrdcached/journal//rrd.journal.1362076104.073732
Feb 28 13:28:24 proxmox2 rrdcached[1761]: removing old journal /var/lib/rrdcached/journal//rrd.journal.1362068904.074127
Feb 28 13:29:21 proxmox2 pmxcfs[525189]: [status] notice: received log
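For completeness, the next things I would collect or try on one of the affected nodes (standard PVE 2.x commands; restarting pve-cluster briefly interrupts the /etc/pve mount):

Code:

# quorum and membership as cman sees it
pvecm status
pvecm nodes

# restart the cluster filesystem daemon (pmxcfs) on the stuck node
/etc/init.d/pve-cluster restart

# check whether pmxcfs logged anything beyond the excerpt above
grep pmxcfs /var/log/syslog | tail -n 50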

Multicast

Am I correct in thinking that Proxmox uses multicast by default?

I am having issues with corosync retransmissions and decided to test my multicast traffic. On my DRBD/cluster network I have no multicast at all. How can this happen on a dedicated network with no switch in between? The interface on my LAN/management network has no issues with multicast, and that traffic crosses a number of my LAN switches. Then I have to wonder: how are my clusters even working if multicast doesn't work on the DRBD/cluster network?
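To rule out the test tool itself, the same path can also be checked with omping (the tool the Proxmox wiki suggests for multicast tests); a sketch, to be run on both nodes at the same time, using the cluster-network addresses from the output below:

Code:

# run simultaneously on proxmox1 and proxmox2
omping -c 60 -i 1 10.211.47.1 10.211.47.2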

Quote:

root@proxmox2:~# asmping 224.0.2.1 10.211.47.1
asmping joined (S,G) = (*,224.0.2.234)
pinging 10.211.47.1 from 10.211.47.2
unicast from 10.211.47.1, seq=1 dist=0 time=0.235 ms
unicast from 10.211.47.1, seq=2 dist=0 time=0.204 ms
unicast from 10.211.47.1, seq=3 dist=0 time=0.219 ms
unicast from 10.211.47.1, seq=4 dist=0 time=0.104 ms
unicast from 10.211.47.1, seq=5 dist=0 time=0.227 ms
Quote:

root@proxmox2:~# asmping 224.0.2.1 10.80.12.125
asmping joined (S,G) = (*,224.0.2.234)
pinging 10.80.12.125 from 10.80.12.130
unicast from 10.80.12.125, seq=1 dist=0 time=1.218 ms
multicast from 10.80.12.125, seq=1 dist=0 time=1.236 ms
unicast from 10.80.12.125, seq=2 dist=0 time=0.287 ms
multicast from 10.80.12.125, seq=2 dist=0 time=0.272 ms
unicast from 10.80.12.125, seq=3 dist=0 time=0.268 ms
multicast from 10.80.12.125, seq=3 dist=0 time=0.253 ms
Quote:

root@proxmox2:~# cat /etc/pve/cluster.conf
<?xml version="1.0"?>
<cluster config_version="10" name="proxmox">
<cman expected_votes="3" keyfile="/var/lib/pve-cluster/corosync.authkey"/>
<quorumd allow_kill="0" interval="3" label="proxmox_qdisk" master_wins="1" tko="10"/>
<totem token="54000"/>
<fencedevices>
<fencedevice agent="fence_ipmilan" ipaddr="10.80.12.126" lanplus="1" login="USERID" name="ipmi1" passwd="PASSW0RD" power_wait="5"/>
<fencedevice agent="fence_ipmilan" ipaddr="10.80.12.131" lanplus="1" login="USERID" name="ipmi2" passwd="PASSW0RD" power_wait="5"/>
</fencedevices>
<clusternodes>
<clusternode name="proxmox1" nodeid="1" votes="1">
<fence>
<method name="1">
<device name="ipmi1"/>
</method>
</fence>
</clusternode>
<clusternode name="proxmox2" nodeid="2" votes="1">
<fence>
<method name="1">
<device name="ipmi2"/>
</method>
</fence>
</clusternode>
</clusternodes>
<rm>
<pvevm autostart="1" vmid="105"/>
<pvevm autostart="1" vmid="103"/>
<pvevm autostart="1" vmid="101"/>
<pvevm autostart="1" vmid="100"/>
<pvevm autostart="1" vmid="104"/>
</rm>
</cluster>

Quote:

root@proxmox1:~# pvecm s
Version: 6.2.0
Config Version: 10
Cluster Name: proxmox
Cluster Id: 14330
Cluster Member: Yes
Cluster Generation: 652
Membership state: Cluster-Member
Nodes: 2
Expected votes: 3
Quorum device votes: 0
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 8
Flags:
Ports Bound: 0 177 178
Node name: proxmox1
Node ID: 1
Multicast addresses: 239.192.55.50
Node addresses: 10.211.47.1
Quote:

root@proxmox2:~# pvecm s
Version: 6.2.0
Config Version: 10
Cluster Name: proxmox
Cluster Id: 14330
Cluster Member: Yes
Cluster Generation: 652
Membership state: Cluster-Member
Nodes: 2
Expected votes: 3
Quorum device votes: 1
Total votes: 3
Node votes: 1
Quorum: 2
Active subsystems: 8
Flags:
Ports Bound: 0 177 178
Node name: proxmox2
Node ID: 2
Multicast addresses: 239.192.55.50
Node addresses: 10.211.47.2

Multicast

I am having issues with corosync totem retransmissions. I decided to test multicast on my dedicated DRBD/cluster network. To my surprise, multicast is broken on the 10 Gb DRBD/cluster network. This is a dedicated 10 Gb network with absolutely no switch in between. From what I have read, multicast issues are typically caused by a switch, which obviously isn't the case here. My next question is how my cluster has continued to operate at all without multicast traffic.
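On a back-to-back link the usual switch/IGMP-snooping suspects do not apply, so the first thing worth checking is whether multicast frames reach the wire at all; a sketch (the interface name is an assumption, the group address is the one corosync reports below):

Code:

# watch for corosync multicast traffic on the DRBD/cluster NIC while cman is running
tcpdump -ni eth1 host 239.192.55.50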

Quote:

root@proxmox2:~# asmping 224.0.2.1 10.211.47.1
asmping joined (S,G) = (*,224.0.2.234)
pinging 10.211.47.1 from 10.211.47.2
unicast from 10.211.47.1, seq=1 dist=0 time=0.156 ms
unicast from 10.211.47.1, seq=2 dist=0 time=0.111 ms
unicast from 10.211.47.1, seq=3 dist=0 time=0.193 ms
unicast from 10.211.47.1, seq=4 dist=0 time=0.209 ms
unicast from 10.211.47.1, seq=5 dist=0 time=0.219 ms
unicast from 10.211.47.1, seq=6 dist=0 time=0.147 ms
Quote:

root@proxmox2:~# asmping 224.0.2.1 10.80.12.125
asmping joined (S,G) = (*,224.0.2.234)
pinging 10.80.12.125 from 10.80.12.130
unicast from 10.80.12.125, seq=1 dist=0 time=1.363 ms
multicast from 10.80.12.125, seq=1 dist=0 time=1.342 ms
unicast from 10.80.12.125, seq=2 dist=0 time=0.301 ms
multicast from 10.80.12.125, seq=2 dist=0 time=0.282 ms
unicast from 10.80.12.125, seq=3 dist=0 time=0.183 ms
multicast from 10.80.12.125, seq=3 dist=0 time=0.198 ms
unicast from 10.80.12.125, seq=4 dist=0 time=0.216 ms
multicast from 10.80.12.125, seq=4 dist=0 time=0.197 ms

Quote:

<?xml version="1.0"?>
<cluster config_version="10" name="proxmox">
<cman expected_votes="3" keyfile="/var/lib/pve-cluster/corosync.authkey"/>
<quorumd allow_kill="0" interval="3" label="proxmox_qdisk" master_wins="1" tko="10"/>
<totem token="54000"/>
<fencedevices>
<fencedevice agent="fence_ipmilan" ipaddr="10.80.12.126" lanplus="1" login="USERID" name="ipmi1" passwd="PASSW0RD" power_wait="5"/>
<fencedevice agent="fence_ipmilan" ipaddr="10.80.12.131" lanplus="1" login="USERID" name="ipmi2" passwd="PASSW0RD" power_wait="5"/>
</fencedevices>
<clusternodes>
<clusternode name="proxmox1" nodeid="1" votes="1">
<fence>
<method name="1">
<device name="ipmi1"/>
</method>
</fence>
</clusternode>
<clusternode name="proxmox2" nodeid="2" votes="1">
<fence>
<method name="1">
<device name="ipmi2"/>
</method>
</fence>
</clusternode>
</clusternodes>
<rm>
<pvevm autostart="1" vmid="105"/>
<pvevm autostart="1" vmid="103"/>
<pvevm autostart="1" vmid="101"/>
<pvevm autostart="1" vmid="100"/>
<pvevm autostart="1" vmid="104"/>
</rm>
</cluster>

Quote:

root@proxmox1:~# pvecm s
Version: 6.2.0
Config Version: 10
Cluster Name: proxmox
Cluster Id: 14330
Cluster Member: Yes
Cluster Generation: 652
Membership state: Cluster-Member
Nodes: 2
Expected votes: 3
Quorum device votes: 0
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 8
Flags:
Ports Bound: 0 177 178
Node name: proxmox1
Node ID: 1
Multicast addresses: 239.192.55.50
Node addresses: 10.211.47.1
Quote:

root@proxmox2:~# pvecm s
Version: 6.2.0
Config Version: 10
Cluster Name: proxmox
Cluster Id: 14330
Cluster Member: Yes
Cluster Generation: 652
Membership state: Cluster-Member
Nodes: 2
Expected votes: 3
Quorum device votes: 1
Total votes: 3
Node votes: 1
Quorum: 2
Active subsystems: 8
Flags:
Ports Bound: 0 177 178
Node name: proxmox2
Node ID: 2
Multicast addresses: 239.192.55.50
Node addresses: 10.211.47.2

permissions issue in centos templates /etc/cron.daily

Hey folks,

I'm not sure this is the right place to report this, but it seems the daily cron jobs in http://download.proxmox.com/applianc...1_amd64.tar.gz (except logrotate) are set to 644 instead of 755. It's easy enough to fix up after an install with a chmod 755 /etc/cron.daily/*, though. Are there other channels I should report this through?

Regards,
eswood

Apticron unable to connect to the Proxmox repository

Hi all,


I have installed the apticron package to be warned about new updates, but tonight apticron could not connect to the Proxmox repository.

Am I the only one?

Here is the mail from apticron:

Code:

/etc/cron.daily/apticron:
W: Failed to fetch
http://download.proxmox.com/debian/dists/squeeze/Release.gpg  Could not connect todownload.proxmox.com:80 (188.165.151.222). - connect (110: Connection timed out)

W: Failed to fetch
http://download.proxmox.com/debian/dists/squeeze/pve/i18n/Translation-en.bz2  Unable to connect to download.proxmox.com:http:

W: Failed to fetch
http://download.proxmox.com/debian/dists/squeeze/pve/i18n/Translation-en_US.bz2  Unable to connect to download.proxmox.com:http:

W: Failed to fetch
http://download.proxmox.com/debian/dists/squeeze/pve/binary-amd64/Packages.gz  Unable to connect to download.proxmox.com:http:

E: Some index files failed to download, they have been ignored, or old ones used instead.


In my sources.list file I have this :


Code:

deb http://ftp.fr.debian.org/debian squeeze main contrib


# PVE packages provided by proxmox.com
deb http://download.proxmox.com/debian squeeze pve


# security updates
deb http://security.debian.org/ squeeze/updates main contrib


So, looking at the details, it's only the Proxmox repository that is unreachable.
Is the connection timeout caused by the network, or was Proxmox working on something tonight?
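A quick way to tell a local network problem from a temporary outage on the repository side is to retry the same fetch by hand (a sketch, reusing one of the URLs from the apticron mail):

Code:

# try to fetch the Release.gpg file that apt could not reach
wget -O /dev/null http://download.proxmox.com/debian/dists/squeeze/Release.gpg

# or simply re-run the index update and watch for the same timeout
apt-get update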


Regards

Templates and LVM Storage - Can I create a VM from another .raw?

Hi there,

I have LVM storage, so I can only create KVM machines with raw disks.
If I have a machine VM-101 (raw) with Apache or something similar, and I want to create 7 more machines like VM-101, how can I do this?

I think the only way is to create a VM and then overwrite its vm-XXX-disk-1 with:

dd if=/dev/datastore/vm-101-disk-1 of=/dev/datastore/vm-102-disk-1

Is it the only way?
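The dd approach does work on LVM storage; a slightly fleshed-out sketch (the stop step and the block size are my additions, and both VMs must be stopped while copying):

Code:

# create VM 102 in the GUI with a raw disk of the same size, then stop both VMs
qm stop 101
qm stop 102

# block-copy the source disk onto the new logical volume
dd if=/dev/datastore/vm-101-disk-1 of=/dev/datastore/vm-102-disk-1 bs=1M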

Thanks

TIPS: Install Proxmox on >= 2 TB HDD

It took me 6 hours to get Proxmox to install and boot on a 2 TB HDD. I'd like to share it here, because I see many tutorials using complex solutions, such as:
  1. Use a small HDD first as the boot device
  2. Manually create partitions <= 1 TB
  3. Use Windows to create the partitions
  4. Modify the boot info to make it bootable after a successful installation


Here are the steps; I am using a 2 TB WDC Green.


  1. Make sure your BIOS supports >= 2 TB HDDs. In this case I am using a Gigabyte GA-970A-D3 Rev 3.0; since I'm using an AM3+ CPU I needed to upgrade my BIOS.
  2. Now go to BIOS - HDD settings: select AHCI, then select "As SATA", not "As IDE".
  3. The BIOS screen shows which ports are "As SATA"; in my case ports 4/5.
  4. So I moved my HDD cable from port 0 to port 5.
  5. Boot using the Proxmox 2.2 CD and it works!
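To double-check afterwards that the installer really used the whole disk, the partition table can be inspected (a sketch; /dev/sda is an assumption):

Code:

# verify the partition table type (gpt vs msdos) and the reported disk size
parted /dev/sda print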


HTH

Hostname not updating after re-templating

Hi all,

I have a question about hostnames when re-templating an OpenVZ container.

Up until now I have been going into a container's file system from the host
Code:

cd /var/lib/vz/private/101
and issuing a tar command to create a template after I have installed a few things.
Code:

tar -cvzpf /var/lib/vz/template/cache/test1_amd64.tar.gz .
This seems to work, apart from the hostname not updating. In particular, I'm using the centos-6-x86_64.tar.gz from http://openvz.org/Download/template/precreated

So if I make a container from centos-6-x86_64.tar.gz called test1, the hostname will be test1 and there will be entries in /etc/hosts to reflect this. If I then create a new template from that machine by the method above and use it to create a new container called test2, the hostname will still be test1, but the /etc/hosts file is updated to test2.

Is there any way I can make the hostname update automatically, since the old hostname no longer matches what's in the hosts file?
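One thing worth trying before running the tar (an assumption on my part, not a verified fix): on CentOS 6 the hostname is stored in /etc/sysconfig/network, and if the template still carries HOSTNAME=test1 baked in, the new container may keep it. A sketch of the cleanup, run inside the container before it is tarred:

Code:

# strip the baked-in hostname so it can be set fresh for the new container
sed -i '/^HOSTNAME=/d' /etc/sysconfig/network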

Proxmox is perfect, to be wonderful, just need...

For over four months, I have been testing virtualization technologies, software programs, integration with billing modules, performance, and everything else.

Proxmox is really perfect in all aspects, but there are a few missing items that keep us from using it as our main virtualization software:

1) Monthly bandwidth / monthly traffic accounting (bandwidth is expensive in all datacenters)
2) IP management - so that a VM can only use the IPs that were configured for it (by MAC address or some other mechanism), and if you don't set these IPs up inside the guest, it gets the IP from the Proxmox configuration automatically (like DHCP)
3) A good WHMCS module developed by the Proxmox staff - I believe that if it's developed by the Proxmox staff, it will work like a boss!

These 3 items work very well with SolusVM, and its WHMCS module was also developed by their team, which makes the module work perfectly, without bugs.

I believe that with this, Proxmox will be the best virtualization software/manager.

Who agrees with me?

(Proxmox staff, it's only a suggestion)

And thank you for the great Proxmox!

Proxmox multi-node cluster with single IP (IPv4)

(How) is it possible to have a two-node cluster with only a single external IP (IPv4)?
I would like to get a second node as a failover for the first, but I only have the single IPv4 address.

Each node could connect externally over IPv6. Is IPv6 support in Proxmox stable enough to set up failover this way?
Could the two nodes connect directly through their second NICs on a private IPv4 network, for instance using a crossover cable?
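The back-to-back second NIC part is how I would sketch it in /etc/network/interfaces on each node (the 10.10.10.x addresses and eth1 are assumptions; node 2 would mirror this with 10.10.10.2):

Code:

auto eth1
iface eth1 inet static
    address 10.10.10.1
    netmask 255.255.255.252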

Is all this making any sense?

Can log in via ssh but not via the web console

Hi

I have a Proxmox installation that I can SSH into, but I cannot log in via the web console. I carefully reset the root password via SSH and am certain I have it right, as I can log in on the CLI with it. I get a wrong-username-or-password error when I try to log in via the web console.
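Things worth checking first (a sketch; these are the standard PVE 2.x services): make sure "Linux PAM standard authentication" is selected as the realm on the login screen, and that the daemons behind the web console are running.

Code:

# restart the web/API daemons and watch the log during a login attempt
/etc/init.d/pvedaemon restart
/etc/init.d/apache2 restart
tail -f /var/log/syslog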

Has anyone encountered this before?

Thanks

Fergus

iSCSI as distributed (shared) storage on Proxmox

Hello,

What I'm trying to achieve is attaching a single iSCSI target to 2 Proxmox servers using a clustered file system.

1. Has anyone ever done this with Proxmox? If so, what clustered file system did you use?
2. If I do this, would I be able to live migrate or use HA with Proxmox, just like with an NFS share?
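For context, the setup described most often is iSCSI with LVM on top rather than a clustered file system: the volume group is created on the iSCSI LUN, the LVM storage is marked shared, PVE handles the locking, and live migration works much like with NFS. A sketch of what the storage.cfg entries might look like (all names, the portal, the target and the addressing are hypothetical, and the VG is assumed to have been created on the LUN beforehand):

Code:

iscsi: san0
    portal 192.168.100.10
    target iqn.2013-02.com.example:storage.lun0
    content none

lvm: san0-lvm
    vgname vg_san0
    shared
    content images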

Thanks,
Oktay

HA with DRBD without fencing is working - can Proxmox do it - perhaps with hacks?

Hello

For years we have been running HA (Heartbeat, DRBD, OpenVZ, 2 NICs and a serial cable) on two-node clusters in a master-master configuration. Every machine has an active DRBD master partition and a passive DRBD slave partition. Both servers run VMs on their active DRBD partitions (which are replicated in real time to the other machine). In case of a hardware failure, the surviving machine mounts the inactive slave partition as a second DRBD master and starts the contained VMs.

Despite all the comments here that this shouldn't work reliably, I can only say that it does. The two connections, Ethernet and serial, ensure that no false hardware failure is detected.

Also, a split-brain situation cannot happen, because the surviving master IS the new master and has the active partition. DRBD must be configured so that when the failed node comes back up, its partitions stay in passive mode and VMs do not migrate back without admin intervention.

We have had this config running for years, we have had hardware failures, and HA worked just fine. It is a highly reliable and cheap setup.

This also seems to be confirmed by the DRBD documentation:

"Even though DRBD-based clusters utilize no shared storage resources and thus fencing is not strictly required from DRBD’s standpoint, Red Hat Cluster Suite still requires fencing even in DRBD-based configurations."
http://www.drbd.org/users-guide/ch-rhcs.html

We would love to use Proxmox. Can Proxmox be hacked to support our configuration?

Thanks for any hints.

Geejay

One physical NIC multiple subnets

Hi all,

I'm trying to configure the network on my Proxmox host, but I'm running into some problems.

My idea is to configure 3 subnets on the same NIC.


subnet 1: 172.16.10.0/24
subnet 2: 172.16.15.0/24
subnet 3: 172.16.20.0/24


Router: 192.168.1.1

I created four bridge interfaces


vmbr0
vmbr10
vmbr15
vmbr20

My network configuration on the Proxmox server is this:

Name    Active  Autostart  Ports/slaves  IP address     Subnet mask    Gateway
eth0    yes     no
vmbr0   yes     yes        eth0          192.168.1.254  255.255.255.0  192.168.1.1
vmbr10  yes     yes        vmbr0         172.16.10.254  255.255.255.0
vmbr15  yes     yes        vmbr0         172.16.15.254  255.255.255.0
vmbr20  yes     yes        vmbr0         172.16.20.254  255.255.255.0
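In /etc/network/interfaces terms, one consistent way to write that layout is sketched below (addresses are the ones from the table; note the internal bridges use bridge_ports none instead of being stacked on vmbr0, since the firewall VM is meant to route between the subnets):

Code:

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.254
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto vmbr10
iface vmbr10 inet static
    address 172.16.10.254
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0

# vmbr15 and vmbr20 follow the same pattern with 172.16.15.254 and 172.16.20.254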





First, I installed a virtual firewall in my infrastructure, the ASTARO virtual appliance.


The firewall has two interfaces; this is the Astaro guest config:


External eth1: 192.168.1.100/24 GW:192.168.1.1
Internal eth0: 172.16.10.100/24


This is the Proxmox config for the Astaro guest:

Network Device (net0), bridge=vmbr0
Network Device (net1), bridge=vmbr10

My idea is that all the subnets go through the firewall to reach the internet and the other networks.

I've managed to get physical computers on the 172.16.10.0/24 network out to the internet, but I have not gotten it working with a CT.

In the CT I have tried configuring the interface both as venet and as a bridge, and neither works.


In summary:


- Multiple subnets on the same physical interface
- A firewall that controls the traffic on my network


If someone could help I would be very grateful; I've spent several days on this and cannot get it working.


If you need more information, such as the routing table, network configuration, etc., no problem.