Channel: Proxmox Support Forum

[BUG REPORT] VNC Console disconnects every few seconds

After disconnecting, it reconnects automatically.

Add IPv6 address to container VE 3.3

Hi,

I just got IPv6 at home, and I can't figure out how to add IPv6 addresses to containers. (Me + networks = bad combination :()

Changes I have made:
1. Added net.ipv6.conf.all.forwarding=1 and net.ipv6.conf.all.proxy_ndp=1 to /etc/sysctl.conf, then rebooted.
2. Added the IP (2001:xxxx:xxxx:0:21e:67ff:1:2) to the container in the web GUI.

What else do I have to do to make this work? :)

There are a couple of posts like this, but most of them are "this is what I've got now" without a clear explanation of how they got there.

ping6:
Code:

node -> container - Works.
container -> node - Works.
node -> Any IPv6 - Works.
Any IPv6 -> node - Works.
container -> Any IPv6 - Fails. (no reply, just PING ipv6.google.com(arn02s06-in-x06.1e100.net) 56 data bytes)
Any IPv6 -> container - Fails (Destination unreachable: Address unreachable)

node ip route list:
Code:

192.168.1.5 dev venet0  scope link
192.168.1.6 dev venet0  scope link
192.168.1.9 dev venet0  scope link
192.168.1.14 dev venet0  scope link
192.168.1.0/24 dev vmbr0  proto kernel  scope link  src 192.168.1.3
10.0.0.0/8 dev vmbr1  proto kernel  scope link  src 10.0.0.3
default via 192.168.1.1 dev vmbr0

container ip route list:
Code:

default dev venet0  scope link

node route -6:
Code:

Kernel IPv6 routing table
Destination                    Next Hop                  Flag Met Ref Use If
2001:xxxx:xxxx:0:21e:67ff:1:2/128 ::                        U    1024 0    0 venet0
2001:xxxx:xxxx::/64            ::                        UAe  256 0    58 vmbr0
fd6b:xxxx:xxxx::/64            ::                        UAe  256 0    58 vmbr0
fe80::1/128                    ::                        U    256 0    0 venet0
fe80::/64                      ::                        U    256 0    0 vmbr0
fe80::/64                      ::                        U    256 0    0 vmbr1
fe80::/64                      ::                        U    256 0    0 eth0
fe80::/64                      ::                        U    256 0    0 venet0
::/0                          fe80::e091:f5ff:fecc:a5a1  UGDAe 1024 0  113 vmbr0
::/0                          ::                        !n  -1  1  335 lo
::1/128                        ::                        Un  0  1    3 lo
2001:xxxx:xxxx:0:21e:67ff:fe6e:6101/128 ::                        Un  0  1    3 lo
fd6b:xxxx:xxxx:0:21e:67ff:fe6e:6101/128 ::                        Un  0  1    76 lo
fe80::1/128                    ::                        Un  0  1    0 lo
fe80::21e:67ff:fe6e:6101/128  ::                        Un  0  1    0 lo
fe80::21e:67ff:fe6e:6101/128  ::                        Un  0  1    7 lo
fe80::ccfd:d7ff:fefc:f48f/128  ::                        Un  0  1    0 lo
ff00::/8                      ::                        U    256 0    0 vmbr0
ff00::/8                      ::                        U    256 0    0 vmbr1
ff00::/8                      ::                        U    256 0    0 eth0
ff00::/8                      ::                        U    256 0    0 venet0
::/0                          ::                        !n  -1  1  335 lo

container route -6:
Code:

Kernel IPv6 routing table
Destination                    Next Hop                  Flag Met Ref Use If
2001:xxxx:xxxx:0:21e:67ff:1:2/128 ::                        U    256 0    0 venet0
fe80::/64                      ::                        U    256 0    0 venet0
::/0                          ::                        U    1  0    0 venet0
::/0                          ::                        U    1024 0    0 venet0
::/0                          ::                        !n  -1  1  150 lo
::1/128                        ::                        Un  0  1    23 lo
2001:xxxx:xxxx:0:21e:67ff:1:2/128 ::                        Un  0  1    7 lo
ff00::/8                      ::                        U    256 0    0 venet0
::/0                          ::                        !n  -1  1  150 lo
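
For reference, the step that is usually missing in this kind of venet + proxy-NDP setup is an explicit neighbor-proxy entry on the external bridge, so the node answers neighbor solicitations for the container's address. A hedged sketch (the CT ID 101 is hypothetical; the address is the one from above):

Code:

# enable forwarding and proxy NDP on the bridge facing the router
sysctl -w net.ipv6.conf.all.forwarding=1
sysctl -w net.ipv6.conf.vmbr0.proxy_ndp=1
# answer neighbor solicitations for the container's address on vmbr0
ip -6 neigh add proxy 2001:xxxx:xxxx:0:21e:67ff:1:2 dev vmbr0
# (re)assign the address to the container (CT ID 101 is hypothetical)
vzctl set 101 --ipadd 2001:xxxx:xxxx:0:21e:67ff:1:2 --save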

Help with my first HA CEPH Setup

Hi

I'd like your help building my first HA Ceph setup, and I have some questions about the general operation of Ceph.

For my first basic test I only had 3 old workstations available to try out the basic functionality of Ceph. It worked quite flawlessly except for the poor performance, but that is down to my test setup with only one OSD and GBit Ethernet per server.

My currently planned setup would look like this (3 of them, of course):
Dell PowerEdge R720
2x Xeon E5-2630v2, 2.6GHz
128GB RAM
Perc H710 Controller
Dual hot-plug 750W power supplies
iDRAC Enterprise for HA fencing

The first big question is whether it would be possible, or even advisable, to install Proxmox VE on the Dell Internal Dual SD Module.
Of course I would replace the default 2x2GB SD cards with 2x32GB SDHC cards (http://geizhals.at/a1131370.html).
The background is that this would spare me 2 HDD caddies.

Should I mirror the recommended journal SSD?
What happens if a non-mirrored journal SSD fails?

For the storage I'm quite unsure what to take: either 4x4TB 7.2k SAS disks (http://geizhals.at/a860321.html) or rather 6x1TB 10k SATA disks (http://geizhals.at/a764000.html).
I think the 6x1TB disks would provide better performance because of the higher spindle count and rotation rate.
And 6TB of storage would be perfectly adequate for us now and for the foreseeable future.

The Ceph storage network would be implemented with 2x 10GBase-T interfaces; here is a simple Visio diagram of how I imagined it.

[Attached diagram: ceph-ha.JPG]


What do you think, will I get proper performance? In the end I think the disks, not the Ceph network, should be the limiting factor.


Furthermore, I have a few short questions about Ceph and some situations that may occur:

- How is the loss of one HDD handled? Delete the associated OSD, swap the disk, and create a new OSD?
- When creating the 4 or 6 OSDs with a dedicated journal SSD, do I have to create 6 partitions of e.g. 10GB each and then install with -journal_dev /dev/sdb1, -journal_dev /dev/sdb2, and so on, or can I just use "-journal_dev /dev/sdb" and the system creates the partitions for every OSD by itself? (See the sketch after this list.)
- If I add a fourth server later, what happens to the redundancy? Is it increased to 4, or does it stay at 3 with the data redistributed across all 4 servers and their OSDs?
- If I want to swap the disks for larger ones in the future, do I just replace them one at a time, deleting and recreating each OSD and waiting for the rebuild after every single change?
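
A hedged sketch of the two journal options from the second question, using the pveceph tooling of this release (device names are hypothetical):

Code:

# option 1: pre-partition the SSD and hand one partition to each OSD
pveceph createosd /dev/sdc -journal_dev /dev/sdb1
pveceph createosd /dev/sdd -journal_dev /dev/sdb2
# option 2: hand over the whole SSD; ceph-disk then carves out one
# journal partition per OSD by itself
pveceph createosd /dev/sdc -journal_dev /dev/sdb
pveceph createosd /dev/sdd -journal_dev /dev/sdb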


I'm very grateful for any help and advice I can get here.
Regards

Proxmox VE 3.3 does not Boot after install

Hi,

I would like to test Proxmox VE 3.3.
The test environment looks like this:
  • ASUS B85M-G
  • Intel i5
  • 16 GB RAM
  • a SanDisk Ultra Plus 128 GB SSD as the disk


I disabled the UEFI Secure Boot option in the BIOS.


I put the "proxmox VE 3.3 ISO installer" on a USB stick and installed from USB. All went fine during installation (no errors), but afterward the system does not boot. Nothing, no boot disk.

I installed a plain Debian 7 using the Debian netinstall image on USB; that installation works fine and boots afterwards. A VMware ESXi installation boots too.

I tried the "proxmox VE 3.2 ISO installer" but with the same result.

I would be happy if someone could give me a hint on how to fix this.

Thanks in advance
best regards
carsten

ASP+MSSQL is 2x slower than in other environments.

Hi, I have an ASP.NET + MSSQL 2012 application on a Windows 2012 R2 VM (LVM storage, virtio drivers). The problem is that this application is about 2x slower than on an ESXi VM, on a real hardware server (similar config), or even on a work notebook. I've tried the suggestions from https://pve.proxmox.com/wiki/Performance_Tweaks and disabling the Balloon service driver, but no luck. Does anybody run high-load ASP+MSSQL apps? Do you have any problems?
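
For reference, the tweaks that tend to matter most for this kind of guest, as a hedged sketch (VMID 100 and the storage/volume names are hypothetical):

Code:

qm set 100 -cpu host    # expose the host CPU flags to the guest
qm set 100 -balloon 0   # disable ballooning entirely
# make sure the system disk really is virtio, with an explicit cache mode
qm set 100 -virtio0 lvmstore:vm-100-disk-1,cache=none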

Code:

pveversion --verbose
proxmox-ve-2.6.32: 3.3-138 (running kernel: 2.6.32-33-pve)
pve-manager: 3.3-2 (running version: 3.3-2/995e687e)
pve-kernel-2.6.32-33-pve: 2.6.32-138
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-35
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-9
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

Moved from private LAN to Root-Server

Hello,

I moved my VMs from a privately hosted Proxmox instance to an externally hosted root server. The situation before was:

  • Proxmox itself and all my VMs had an IP in my LAN
  • the internet router in my LAN did port forwarding to my Apache VM
  • my Apache VM was a reverse proxy for all other VMs (wiki, Redmine, calendar, ...)


The situation now:
  • Proxmox is running on the root server with a static internet IP
  • with vmbr0 I created an internal network, similar to the former LAN


Question: how do I connect my Apache reverse-proxy VM to the public static internet IP (instead of Proxmox itself) and let Apache route to Proxmox, the wiki, the calendar, ...?

Here is my current configuration (/etc/network/interfaces):

Code:

### Hetzner Online AG
auto lo
iface lo inet loopback

auto  eth0
iface eth0 inet static
  address  176.9.15.182
  broadcast 176.9.15.187
  netmask  255.255.255.248
  gateway  176.9.15.176
  # default route to access subnet
  up route add -net 176.9.15.175 netmask 255.255.255.248 gw 176.9.15.176 eth0

auto vmbr0
iface vmbr0 inet static
  address 192.168.6.1
  netmask 255.255.255.0
  network 192.168.6.0
  broadcast 192.168.6.255

How do I proceed from here?
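
One common approach on a single-IP box like this is NAT on the host: forward the public ports to the proxy VM and masquerade outbound traffic from the internal bridge. A hedged sketch (192.168.6.10 as the Apache VM's address is hypothetical):

Code:

echo 1 > /proc/sys/net/ipv4/ip_forward
# forward HTTP/HTTPS arriving on the public IP to the Apache VM
iptables -t nat -A PREROUTING -d 176.9.15.182 -p tcp --dport 80 -j DNAT --to-destination 192.168.6.10:80
iptables -t nat -A PREROUTING -d 176.9.15.182 -p tcp --dport 443 -j DNAT --to-destination 192.168.6.10:443
# let the internal network reach the internet via the host's public IP
iptables -t nat -A POSTROUTING -s 192.168.6.0/24 -o eth0 -j MASQUERADE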

Ghost VM responding to ping when is stopped

Hello, this is my first post in this great forum. Please excuse my English, it's not my first language.

I run a two-node Proxmox HA cluster with a quorum disk and fencing devices: two Fujitsu servers (Proxmox01 and Proxmox02), each with a bond of 2 Intel Ethernet interfaces, each bond connected to a LAG on a 2-unit Netgear GST24TS switch stack.

I created a KVM guest on Proxmox02 with two bridged virtual NICs, each on a different VLAN tag over the same bonding interface:

eth0: VLAN tag 301, type Intel e1000, MAC address BA:FE:3A:4A:4C:6D, bridge vmbr0
eth1: VLAN tag 206, type Intel e1000, MAC address 62:95:2E:C4:F9:3C, bridge vmbr0

Then I installed CentOS Linux in the guest VM and configured the network with:

eth0: 10.140.131.170/27
eth1: 5.10.206.199/26

When the guest VM is completely stopped, I can still ping 10.140.131.170 and get a response:


ping 10.140.131.170

PING 10.140.131.170 (10.140.131.170) 56(84) bytes of data.
64 bytes from 10.140.131.170: icmp_seq=1 ttl=63 time=43.1 ms
64 bytes from 10.140.131.170: icmp_seq=2 ttl=63 time=3.94 ms
64 bytes from 10.140.131.170: icmp_seq=3 ttl=63 time=2.93 ms
64 bytes from 10.140.131.170: icmp_seq=4 ttl=63 time=2.57 ms
64 bytes from 10.140.131.170: icmp_seq=5 ttl=63 time=2.88 ms
64 bytes from 10.140.131.170: icmp_seq=6 ttl=63 time=2.54 ms
64 bytes from 10.140.131.170: icmp_seq=7 ttl=63 time=3.48 ms
64 bytes from 10.140.131.170: icmp_seq=8 ttl=63 time=5.22 ms
^C
--- 10.140.131.170 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7010ms
rtt min/avg/max/mdev = 2.540/8.340/43.136/13.177 ms

If I arping 10.140.131.170, I get a response from the MAC address of the guest's virtual NIC: BA:FE:3A:4A:4C:6D. So it's not another host with a duplicate IP address in my network answering the ping.

I've cleared the ARP cache of my computer and the ARP caches of the switch and the router, and I still get responses to ping and arping while the guest VM is stopped. I also tried rebooting the Proxmox02 server, and once rebooted I still get responses to ping and arping.

Correct me if I'm wrong, but if the guest VM is stopped (totally shut down) I shouldn't get any response to ping or arping.

The same thing happens with eth1, with one difference: when the guest VM is stopped, ping to 5.10.206.199 gets no response, but arping does.

I've tried the same with another guest VM on Proxmox01 and it works as it's supposed to: when that guest is stopped I can't get any ping or arping response from its IP address (192.168.100.200). Then I migrated this guest VM to Proxmox02 and stopped it, and I get the same problem: ping and arping respond with the guest shut down and stopped. I can even run nmap against it, and it shows the same open ports the guest has when it is running:

nmap 192.168.100.200

Starting Nmap 6.40 ( http://nmap.org ) at 2014-11-04 17:26 CET

Nmap scan report for 192.168.100.200
Host is up (0.0083s latency).

Not shown: 996 filtered ports
PORT STATE SERVICE

80/tcp open http
222/tcp open rsh-spx
443/tcp open https
9080/tcp open glrpc


I can even connect to the Tomcat server running on the stopped guest VM and get a 404 error:


HTTP Status 404 - /debug.jsp;jsessionid=3C7377341682232DBEE7A5214104F 503
type Status report
message /debug.jsp;jsessionid=3C7377341682232DBEE7A5214104F 503
description The requested resource is not available.

Or connect to SSH on port 222 while the VM is stopped:


ssh 192.168.100.200 -p222

root@192.168.100.200's password:

Permission denied, please try again.
root@192.168.100.200's password:
Permission denied, please try again.
root@192.168.100.200's password:
Connection closed by 192.168.100.200


It's like a ghost VM! :confused:

What is happening? Maybe a Proxmox bug?

It's as if, when I stop a guest VM, Proxmox only unmounts the filesystem and the system keeps running in memory instead of shutting down completely.

Is there any command-line tool to show the ARP table of the VM guest?

I've tried arp -n on the Proxmox host, but it only shows the ARP entries for the IPs assigned to the physical interfaces, not entries related to the guests' virtual interfaces.
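
For reference, a hedged sketch of commands that show where the bridge thinks that MAC lives, and whether a stray KVM process is still holding the guest's tap interface:

Code:

# list the MAC addresses the bridge has learned, and on which port
brctl showmacs vmbr0
# check for a leftover KVM process or tap device for the stopped guest
ps aux | grep [k]vm
ip link | grep tap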

This is a big problem for me for two reasons:


  1. If a server goes down, my monitoring system still gets ping responses and doesn't send me the alert.
  2. If I want to assign a public IP address to a new guest VM and the same address was previously used by another guest, I have no connectivity, because the old VM's MAC address is still responding and generating stale ARP entries.


Proxmox details:

root@proxmox01:~# pveversion -v

proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1


root@proxmox02:~# pveversion -v
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1


Thank you.

New VMs are retaining lvm data from previously removed VMs

Somewhere some data is being retained; I'm just not 100% sure where.

A little while back I was restoring VM backups, one of which I cancelled, as shown:

Code:

/var/log/pve/tasks# cat B/UPID:labhost:00041992:01758D18:5453CAAB:qmrestore:130:xxx@ad:
restore vma archive: zcat /labbackups/dump/vzdump-qemu-134-2014_10_17-10_51_08.vma.gz|vma extract -v -r /var/tmp/vzdumptmp268690.fifo - /var/tmp/vzdumptmp268690
CFG: size: 278 name: qemu-server.conf
DEV: dev_id=1 size: 34359738368 devname: drive-ide0
CTIME: Fri Oct 17 10:51:10 2014
  Logical volume "vm-130-disk-1" created
new volume ID is 'lvmGroupLAB:vm-130-disk-1'
map 'drive-ide0' to '/dev/xxxVolGrp/vm-130-disk-1' (write zeros = 1)
progress 1% (read 343605248 bytes, duration 3 sec)
progress 2% (read 687210496 bytes, duration 7 sec)
progress 3% (read 1030815744 bytes, duration 12 sec)

This was an Ubuntu 14.04 server VM. I'm finding now that when I create new VMs and try to install Ubuntu on them, the new "vm-###-disk" volume has VG and LV data corresponding to the VM I was restoring on Oct 17th. This causes the install to fail with: "Because the volume group(s) on the selected device consist of physical volumes on other devices, it is not considered safe to remove its LVM data automatically. If you wish to use this device for partitioning, please remove its LVM data first."

There are ways around this, but newly created VMs shouldn't be retaining data from old VMs or cancelled qmrestores.

I've tried updating Proxmox and rebooting to clear whatever file this data is being retained in, but no success. Where could I look?
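
For what it's worth, the data is most likely not retained in any file: a new LV in the same volume group can reuse physical extents that still hold the old guest's bytes, so the stale signatures come straight from the disk. A hedged workaround sketch (VM ID 105 and the VG name are hypothetical):

Code:

# zero the start of a freshly created guest disk before booting the
# installer, wiping stale partition/LVM signatures from reused extents
dd if=/dev/zero of=/dev/xxxVolGrp/vm-105-disk-1 bs=1M count=100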

As it's likely to be requested:

Code:

proxmox-ve-2.6.32: 3.3-138 (running kernel: 2.6.32-33-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-33-pve: 2.6.32-138
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-1
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1


CPU pinning in KVM

Hi to all.

I have a server with 2 processors, each with 10 cores and 20 threads (Hyper-Threading).

My concern is CPU pinning in KVM: I want to reach the maximum performance possible by taking advantage of the processor cache, and I need to know what configuration to apply in PVE to get this best performance.

Please, can anybody explain how to apply this configuration in PVE? (I will be running a database inside the KVM VM.)
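
For reference, this release has no built-in pinning option, but the QEMU/KVM process can be pinned by hand. A hedged sketch (VMID 100 and the core range are hypothetical; the idea is to keep the VM on one socket so it keeps its caches warm):

Code:

# find the VM's QEMU/KVM process and pin it to the cores of socket 0
PID=$(cat /var/run/qemu-server/100.pid)
taskset -pc 0-9 "$PID"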

Best regards
Cesar

Unable to create volume group at /usr/bin/proxinstall line 630

error exit code 139(500)

Hello Proxmox user,

I am running Proxmox 3.1-3 and hit a bug when I save notes; see the attachment.

Please give me a clue for fixing this problem :(

Thanks

Unable to get ceph-fuse working

I have a 3-node Proxmox Ceph cluster working, with VM images on it.

I'm trying to mount the Ceph filesystem with FUSE, but it just hangs at "starting ceph client":

Code:

root@vng:/etc/pve# ceph-fuse /mnt/ceph
ceph-fuse[28275]: starting ceph client

or

Code:

root@vng:/etc/pve# ceph-fuse -m 10.10.10.241:6789 /mnt/ceph
ceph-fuse[28275]: starting ceph client

It doesn't seem to be a problem for anyone else...
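
A hedged checklist of things worth verifying (CephFS needs at least one running metadata server, which a plain PVE Ceph setup does not create by default):

Code:

ceph mds stat   # is any MDS up at all? ceph-fuse blocks without one
ceph -s         # overall cluster health
# mount with an explicit monitor and keyring to rule out auth problems
ceph-fuse -m 10.10.10.241:6789 -k /etc/ceph/ceph.client.admin.keyring /mnt/ceph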

Any thoughts?

thanks.

Capture packets with VLAN tag (Virtio / E1000 / VMXNET3) on Windows box

Hi,

My Proxmox host is set up with OVS, with port mirroring configured to send traffic to a Windows VM where I have installed my analysis product.
I am unable to see the VLAN tags.
To double-check, I changed the NIC driver to the old RTL8139 and now I can see the VLANs (but it is not a good solution, as that driver is very old, has a lot of limitations, and is only Fast Ethernet).

I have tried to play with some tricks from Intel for the E1000, but couldn't make it work.

Has anyone successfully captured packets with VLAN tags using VMXNET3 on a Windows box?

Update:
For VirtIO, you just need to disable 'Priority and VLAN tagging' in the NIC properties to make it work.
I tried the same for VMXNET3 (plus playing with some offload options), but the problem remains the same.
I would prefer to use this card on this VM for compatibility reasons.


Many thanks for your help :-)

belette

Proxmox and systemd?

Hello,

As I'm already struggling with systemd and/or its ugly siblings on 2 systems, I was wondering whether the Proxmox team plans to adopt systemd with the upcoming release of Debian Jessie, which will most likely ship with systemd as the default.

To prepare for the worst, I just set up a test system which, upon switching to systemd, did exactly what I expected: it completely failed to boot, dropping to the systemd console after a few minutes, just like both other systems that switched to systemd. At least apt now gives a proper warning about switching the init system, which wasn't the case for the other two systems, which pulled in systemd on dist-upgrade without further warning...

I deeply hope Proxmox will stay on sysvinit; production virtualization servers with a broken (or "work in progress", as the systemd developers call it) init system would be a major disaster. As far as I can see, there are currently no packages used by the default Proxmox setup that depend on systemd (and I hope that won't change), so there seems to be no reason to change the init system...

If there are plans to adopt systemd, is there any schedule?

Thanks

migration gotcha

Not a fault or bug, just something to look out for... (at least I don't think it's a bug; maybe the failure message could indicate the cause in some way?)

I had a VM migration that kept failing because of a missing ISO for the CD-ROM.

The GUI indicated that no ISO or CD hardware was in use... argh.

Anyway, I catted the conf file for the VM in question and could see that it was indeed tied to the missing ISO.

But mentally I was expecting the conf file to contain certain lines, and that's what I missed in the rush/confusion: I was only looking at the last few lines.

Come to find out, there was a snapshot attached to the VM. After deleting that, the migration worked.
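
For reference, a hedged sketch of how to spot this (VMID 100 and the snapshot name are hypothetical; snapshot sections in the conf file keep their own cdrom line even after the current config drops it):

Code:

cat /etc/pve/qemu-server/100.conf   # look for [snapname] sections near the end
qm listsnapshot 100
qm delsnapshot 100 before-upgrade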

YMMV

Parallels container migration and Proxmox Repositories

Hello Community!

I have only two questions:

1. Is it possible to migrate OpenVZ containers created by Parallels Cloud Server 6 (the successor of Virtuozzo) to Proxmox?

2. I installed Proxmox successfully, but I would like to test it first (and I will surely buy a license after the test). I disabled the Enterprise repository according to the wiki, but even after a server reboot the "no valid subscription" message still appears when I log in. What is the correct way to use the no-subscription repository so that Proxmox "understands" this?
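
For reference: the login notice is independent of which repository is configured, so it keeps appearing even when the repositories are set up correctly. A hedged sketch of the usual repository setup for this wheezy-era release:

Code:

# /etc/apt/sources.list.d/pve-enterprise.list -- comment out the enterprise repo:
# deb https://enterprise.proxmox.com/debian wheezy pve-enterprise
# and add the no-subscription repo, e.g. in /etc/apt/sources.list:
deb http://download.proxmox.com/debian wheezy pve-no-subscription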


Thank you!

Bridge does not exist

Hey everyone. I am having trouble using an OVS bridge with a CT. My interfaces config is the following:
Code:

allow-vmbr1 eth1
iface eth1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1

auto vmbr1
allow-ovs vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports eth1

When I start the CT the error message is: bridge 'vmbr1' does not exist.
What am I missing? Thanks in advance.

Proxmox web GUI VNC console stopped working on all VMs - Solved PC issue

Proxmox Virtual Environment version: 3.2-4/e24a91c1, system running for 5 months. We were adding more VMs and (I don't know if this is a coincidence) when we reached the 25th VM, the VNC console through the web GUI stopped working for all VMs. The following error shows up in syslog:

Code:

Nov  5 11:10:48 proxmox pvedaemon[5044]: <root@pam> end task UPID:proxmox:00019E85:007A8631:545A681E:vncproxy:103:root@pam: command '/bin/nc -l -p 5900 -w 10 -c '/usr/sbin/qm vncproxy 103 2>/dev/null'' failed: exit code 1

I shut down each VM individually, rebooted the host, then started each one up yesterday, and the consoles started working. Came in today and the VNC consoles had stopped working again, with the same error message. Is there a limit on the number of VNC sessions, or on the ports available to nc? I can SSH into the VMs without a problem and access their web GUIs. :(
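
For reference, a hedged sketch to check whether the 5900+ proxy ports are exhausted or held open by stuck nc processes:

Code:

# which VNC proxy ports (5900 and up) are currently bound, and by what?
netstat -tlnp | grep ':59'
# any leftover nc/vncproxy processes holding ports open?
ps aux | grep [v]ncproxy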

Need some advice for my little proxmox project please :)

Hi everybody, I've been playing with Proxmox for some months and really appreciate it! So I decided to go with this solution for my home server :)

Now I have two ideas and I'm trying to find the best one; I'm sure you can help :cool:

The server will be used for many VMs (firewall, proxy, Plex, ownCloud, ...) and as a NAS (OpenMediaVault).
I have 6 HDDs, no RAID controller, and no ZFS because I don't have ECC RAM yet... so the 6 HDDs are connected directly to my motherboard.

I have found two solutions for my needs; please help me choose one.

First idea :

I just managed to do this today: I installed Proxmox on two good USB sticks in RAID 1; on this array I have Proxmox running plus a small raw file for the OpenMediaVault VM, so Proxmox and OMV both run from the USB keys. I passed the 6 HDDs directly to OMV because I have VT-d capable hardware. From the OMV VM I created a RAID array that I want to use as storage for my raw or qcow files from Proxmox, so I set up an NFS share on OMV and added it to Proxmox. When Proxmox boots, OMV starts too, and Proxmox finds its NFS storage within a minute.

I have done some benchmarks with dd and found that I reach 370 MB/s writing directly on the OMV VM that has the HDDs passed through, and I lose 70 MB/s when I run the same command on the Proxmox host via the NFS share. I have already tuned NFS (mount options, MTU, ...).
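
A hedged aside on the benchmark itself: plain dd writes mostly land in the page cache, so numbers from inside the OMV VM and from the NFS client aren't directly comparable unless caching is bypassed. A sketch (the target path is hypothetical):

Code:

# write 4 GiB with O_DIRECT and a final fdatasync, bypassing the page cache
dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=4096 oflag=direct conv=fdatasync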

Questions:

Does a passed-through HDD have the same performance as it would on a physical machine, or do we lose some I/O?
How can I reach 300 MB/s through a single virtual e1000 network card?!

Second idea (simpler):

Just make a RAID 10 array with my 6 drives, put Proxmox on it, and add LVM on top as storage.

Questions:

Will the performance be FAR better, or just a little?
Is it a problem to have very large raw or qcow files? As I said, I want to set up an OpenMediaVault VM and this VM will have 2.5 TB of disk. That's why I wanted to pass the HDDs directly to it; I don't know whether a 2.5 TB qcow2 or raw file is a good idea...



Please help me with this ^^


Sorry for my English, I do my best; it's easy for me to read but more complicated to write...

VNC issues ( proxmox 3.3 )

Hello

I installed Proxmox 3.3 and I have issues running VNC from an external webpage.

I saw that it now uses noVNC, but is there any way to embed it in an external page, or even a way to bring back the old Java applet?



I used this before:


PHP:
$response = $pve2->post("/nodes/$nodename/qemu/$id/vncproxy",$parameters);



HTML (Java applet):
<div class="alert alert-block alert-warning fade in">

<applet id='pveKVMConsole-1018-vncapp' border='false' code='com.tigervnc.vncviewer.VncViewer' archive='https://<?=$hostname?>:8006/vncterm/VncViewer.jar' width='100%' height='100%' style='width: 800px; height: 700px; '>
<param name='id' value='pveKVMConsole-1018-vncapp'>
<param name='PVECert' value='<?php echo str_replace("\n", '|', $response["cert"]); ?>'>
<param name='HOST' value='<?php echo $hostname; ?>'>
<param name='PORT' value='<?php echo $response["port"]; ?>'>
<param name='USERNAME' value='<?php echo $response["user"]; ?>'>
<param name='password' value='<?php echo $response["ticket"]; ?>'>
<param name='Show Controls' value='Yes'>
<param name='Offer Relogin' value='No'>
</applet> <br>
</div>




It works on old Proxmox versions but not on the new one. Is there any way to do the same on the new version, or at least a way to bring the old VNC applet back to the new version of Proxmox? I tried downgrading Proxmox, but it didn't help.
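
For reference, a hedged sketch of the noVNC direction (the URL parameters are an assumption for this release and need verifying against your build): after requesting /vncproxy via the API as above, the built-in console can be framed directly:

Code:

# the noVNC console shipped with the web GUI can be embedded in an iframe,
# e.g. (VM 1018 on node $nodename, both from the snippet above):
#   https://$hostname:8006/?console=kvm&novnc=1&vmid=1018&node=$nodename
# the browser session must carry a valid PVEAuthCookie for authentication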



thanks