Channel: Proxmox Support Forum

vmbr0 exists, but VM/CT cannot be connected to it

Dear Community,

I am facing a weird issue after installing Proxmox. I set up the bridge vmbr0 like this:
Code:

auto vmbr0
iface vmbr0 inet static
    address 192.168.238.2
    netmask 255.255.255.240
    gateway 192.168.238.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

So far everything works, I can connect to the machine with these settings.

Now I am trying to create CTs and VMs bridged to vmbr0, but when I reach the step to select a bridge, the drop-down menu doesn't show any bridges.

I tried to create a new bridge to see if this issue is specific to vmbr0, but when I try to create one via the Proxmox web interface it tells me:
Code:

Parameter verification failed.  (400)

type: property is not defined in schema and the schema does not allow additional properties

I have no idea what this issue could be related to. I would be really happy about any hint.
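In case it helps narrow things down: a quick sanity check (assuming bridge-utils is installed, which it is on a default Proxmox install) is to confirm that the bridge actually exists at the OS level, so the problem can be isolated to the web interface:

```shell
# Verify the bridge exists and eth0 is enslaved to it
brctl show vmbr0

# Confirm the interface is up and has the expected address
ip addr show vmbr0

# The GUI only offers bridges defined in this file
grep -A 7 'iface vmbr0' /etc/network/interfaces
```

If brctl and ip show the bridge working, the issue is on the GUI/API side rather than in the network configuration itself.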

Code:

proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1


noVNC/HTML5 not working

In fact, none of the HTML5 consoles are working.

I have Firefox 32.0.3, the latest version. When I open the JavaScript console in a noVNC console window, the following error shows up:

Code:

Skipping unsupported websocket binary sub-protocol
The status line (at the top) of the noVNC console window is

Code:

noVNC ready: WebSocket emulation, canvas rendering
but nothing happens.

omnios napp-it no_root_squash

I am unable to create an OpenVZ container on an NFS share. I read somewhere that the no_root_squash parameter needs to be added to the NFS share on OmniOS/napp-it. I cannot find a way to do it from the napp-it web interface. Does it have to be done through a terminal command? If yes, please let me know how.
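For what it's worth, on Solaris-derived systems like OmniOS the equivalent of Linux's no_root_squash is the root= option of the NFS share, set via the ZFS sharenfs property. A hedged sketch from the shell (the pool/share name and the subnet below are placeholder example values):

```shell
# Grant root access from the Proxmox host's subnet on a ZFS-shared NFS export
# (tank/share and 192.168.1.0/24 are example values; adjust to your setup)
zfs set sharenfs=rw,root=@192.168.1.0/24 tank/share

# Verify the resulting share options
zfs get sharenfs tank/share
```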
Thanks in advance.

omnios napp-it poor NFS performance

Hi all,
I have just configured OmniOS with napp-it on an Intel Xeon with 16 GB RAM, one 500 GB SATA drive where the OS is installed, and 2 x 2 TB SATA 7200 rpm drives.
I configured the ZFS pool with no mirroring or RAID-Z, just the default setup, so my effective capacity is 3.89 TB.
When I ran the dd benchmark in napp-it it only gave me about 400 MB/s for read and write.
I have seen people achieve 1300 to 1400 MB/s. How do I improve the performance? Is there some tweaking to be done?
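For comparison, a simple sequential write test can be run by hand; the target path, file size, and block size below are just example values (with GNU dd, conv=fdatasync forces a flush to disk before the rate is reported, which keeps the number honest):

```shell
# Sequential write: 256 MiB in 1 MiB blocks, flushed before the rate is printed
dd if=/dev/zero of=/tank/ddtest bs=1M count=256 conv=fdatasync

# Clean up the test file afterwards
rm -f /tank/ddtest
```

Note that a dd of /dev/zero over ZFS can be misleading if compression is enabled on the dataset, since zeros compress almost completely.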
Thanks in advance.

Proxmox cluster traffic on separate NICs?

Hello all,

we have a three-node Proxmox cluster.
The nodes each have four Gbit NICs.
NIC 1 handles the management traffic.
Is it necessary to run the cluster traffic on a dedicated NIC (eth1?) in a different IP range?

Thank you in advance.

Regards,

Dirk Adamsky

Passthrough RAMDISK to Guest for direct access

Hello,

is it possible to prepare a ramdisk (tmpfs/ramfs) for a KVM guest so that it has direct access to it, even though it's not a block device?
I want something like a local shared ramdisk for all VMs where plain data (where fast access is needed) can be stored temporarily.

I can pass through a normal disk or partition in the VM conf with "virtio0: /disk/by/.../sdxx". Any suggestion how I can/should do this for a ramdisk, or is it not possible? I hope you know what I mean. I don't want to assign something like 128 GB of RAM to a single VM and share it via NFS/CIFS, because in case of a backup the whole ramdisk would get backed up as well.
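One workaround that comes to mind (an untested sketch; the mount point, size, and VMID are made-up example values): back a raw disk image with tmpfs on the host and attach it to the guest. Inside the VM it appears as a normal block device, but the contents live in host RAM:

```shell
# Create a tmpfs mount on the host to hold the image (size is an example)
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=8G tmpfs /mnt/ramdisk

# Create a raw image on the tmpfs and attach it to VM 100 as virtio1
qemu-img create -f raw /mnt/ramdisk/vm-100-ram.raw 8G
qm set 100 -virtio1 /mnt/ramdisk/vm-100-ram.raw
```

Two caveats: attaching the same image to several running VMs at once is not safe without a cluster filesystem inside the guests, and everything on the tmpfs is lost when the host reboots.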

Thx,
Kind Regards
ekin

How to mount a .qcow2 image as my first (ide0) hdd?

Hello there!

New to Proxmox, I have the following question.

A tutorial shows me these steps:

Quote:

01 create a new KVM machine with 1 CPU (kvm64) ... 512 MB RAM ... 1 network card (e1000)
02 copy the nanoboot image into the newly created KVM folder (if the VM has the number 010, replace the *** in the next steps with 010)
03 qemu-img convert -f raw -O qcow2 /var/lib/vz/images/***/IMAGE.img /var/lib/vz/images/***/vm-***-disk-1.qcow2
04 mount the vm-***-disk-1.qcow2 as ide0 disk and boot from this disk
05 create a new disk vm-***-disk-2.qcow2 (ide1) as data disk
06 start the vm
I don't know how to mount the newly created qcow2 image. Can anybody help? Is it just a matter of uploading it and then creating a new disk using this image? :)
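If I read the tutorial right, "mount as ide0" just means attaching the converted image file to the VM as its first disk. A sketch using VMID 100 (the storage name "local" and the paths are the Proxmox defaults; adjust to your setup):

```shell
# Convert the nanoboot image into the VM's image directory (step 03 above)
qemu-img convert -f raw -O qcow2 \
    /var/lib/vz/images/100/IMAGE.img /var/lib/vz/images/100/vm-100-disk-1.qcow2

# Attach it as the ide0 disk and make the VM boot from disk
qm set 100 -ide0 local:100/vm-100-disk-1.qcow2 -boot c -bootdisk ide0
```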

Thanks

Proxmox 3.3 PCI Passthrough

Hello guys,
I was always having trouble when I tried to pass through my NVIDIA graphics card; now with the new update I'm making another attempt.
The only problem is that I'm getting this error when everything is configured. It says: "TASK ERROR: Cannot find vfio-pci module!"
I did everything in the tutorial on the new express passthrough. One thing I noticed is that it says to activate IOMMU in the BIOS; maybe that's the problem?
I thought they added vfio in the new version.
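For what it's worth, the module is spelled vfio-pci, and (treat this as an assumption, not a confirmed fact) it only ships with the optional 3.10 kernel, not the default 2.6.32 OpenVZ kernel. A quick check from the host shell:

```shell
# Which kernel is actually running?
uname -r

# Does the running kernel provide the module at all?
modinfo vfio-pci

# Try loading it and confirm
modprobe vfio-pci
lsmod | grep vfio
```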
Thank you for your time.

NoVNC code: 1006 error message

I created a Windows 2008 VM on a freshly installed Proxmox 3.3-1 server,
and after some minutes of working perfectly I got a "Server disconnected (code: 1006)"
error message, and since then I couldn't connect to the server.


I had to reboot the host server to regain access to the VM. Why is that?

CVE-2014-7169 shellshock

http://web.nvd.nist.gov/view/vuln/de...=CVE-2014-7169

The patch for CVE-2014-6271 did not prevent all exploits, so CVE-2014-7169 was assigned to another attack method.

Debian released another updated bash package to address CVE-2014-7169 about 15 hours ago.

If you updated your bash before or around that time, you likely need to update again.

To update only bash, this command works well:
Code:

apt-get update && apt-get install --only-upgrade bash -y
I've seen numerous exploit attempts in many web server logs, so patch quickly!
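After upgrading, you can check whether your bash is still vulnerable to the original CVE-2014-6271 with the classic one-liner. A patched bash prints only the echo text; a vulnerable one also prints "vulnerable" first:

```shell
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
```

Note this only tests the original flaw, not the CVE-2014-7169 follow-up, so a clean result here is necessary but not sufficient.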

HP ProLiant MicroServer Gen8

Hi guys

I saw this server in a shop and want to know if anybody here has experience with this hardware, mainly regarding the Smart Array RAID controller.
Is this another fakeraid?
Another thing: can I use LIO as a fence device to provide HA between two servers like this one?

Any advice would help!

Thanks a lot

2 network cards

Hello,

I installed 2 network cards in the computer where Proxmox is installed.

- Proxmox will have access to 2 networks and to the Internet:
10.0.0.0/24 (gateway: 10.0.0.1)
192.168.0.0/24 (gateway: 192.168.0.254)

I have 4 VMs:

- 3 will access the network 10.0.0.0/24
- The last will access the network 192.168.0.0/24 (IP 192.168.0.199/24)

This is my Proxmox interface:

[screenshot: proxmo10.png]

What is the procedure to do this?

If I try to add vmbr1 with:
IP: 192.168.0.198
Netmask: 255.255.255.0
Gateway: 192.168.0.254

I get this error message:
gateway: Default gateway already exists on interface 'vmbr0'.

How can I do this?
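A hedged sketch of what could go in /etc/network/interfaces: since Proxmox allows only one default gateway across all interfaces, the second bridge is normally defined without a gateway line (assuming the second NIC is eth1):

```
auto vmbr1
iface vmbr1 inet static
    address 192.168.0.198
    netmask 255.255.255.0
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
```

The VMs on 192.168.0.0/24 can still use 192.168.0.254 as their own gateway; the gateway line in the host config only sets the host's default route.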

LDAP authentication with binddn

Hi,
I want to set up LDAP authentication in Proxmox VE 3.3.
Our LDAP server does not accept anonymous requests and requires a bind DN.
How can I set that up?
I had a look at /etc/pve/domains.cfg and tried to add binddn and bindpw lines, but it doesn't work.
Thanks.
F.

Recipient address rejected

I'm trialing PMG and I have two domains set up. I'm watching the real-time logs and I keep seeing a lot of "Recipient address rejected: Service is unavailable (try later)" messages. Does anyone know what this could be about?

no outside network for bridged VM

Hello.

I am unable to get a VM in bridge configuration to access the network outside of the host.

I have one host, with one network card. That card is set up with a static IP. I also set up a bridge on that host, vmbr0.
I have one virtual machine, set up with one network card, DHCP, in bridge mode connected to vmbr0. This interface receives an IP address from our DHCP server on the same LAN as the host. However, the VM is not reachable from outside the host, and it cannot reach any machine other than the host.

Any ideas what could be wrong?

Here are the config files (I masked the IPv6 addresses):

Host :
Quote:

root@proxmox1:/etc/apt# cat /etc/network/interfaces
auto lo
iface lo inet loopback
iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
address 192.168.0.107
netmask 255.255.255.0
gateway 192.168.0.1
bridge_ports eth0
bridge_stp off
bridge_fd 0

Quote:

root@proxmox1:/# ifconfig
eth0 Link encap:Ethernet HWaddr 00:50:56:9c:c2:f7
inet6 addr: xxxx::xxx:xxxx:xxxx:xxxx/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2281388 errors:0 dropped:0 overruns:0 frame:0
TX packets:1801639 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:472034264 (450.1 MiB) TX bytes:487168628 (464.6 MiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:212910 errors:0 dropped:0 overruns:0 frame:0
TX packets:212910 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:93460747 (89.1 MiB) TX bytes:93460747 (89.1 MiB)

tap100i0 Link encap:Ethernet HWaddr 72:5f:47:df:f7:11
inet6 addr: xxxx::xxxx:xxxx:xxxx:xxxx/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:1218 errors:0 dropped:0 overruns:0 frame:0
TX packets:26924 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:74142 (72.4 KiB) TX bytes:2978665 (2.8 MiB)

venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet6 addr: fe80::1/128 Scope:Link
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:3 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

vmbr0 Link encap:Ethernet HWaddr 00:50:56:9c:c2:f7
inet addr:192.168.0.107 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: xxxx::xxx:xxxx:xxxx:xxxx/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2243082 errors:0 dropped:0 overruns:0 frame:0
TX packets:1786671 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:433063058 (413.0 MiB) TX bytes:485574977 (463.0 MiB)
Quote:

root@proxmox1:/# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 vmbr0
0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 vmbr0

Virtual machine (ubuntu) :

/etc/network/interfaces
Quote:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
route -n
Quote:

root@ubuntu-proxmox:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.1 0.0.0.0 UG 100 0 0 eth0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0

ifconfig :
Quote:

root@ubuntu-proxmox:~# ifconfig
eth0 Link encap:Ethernet HWaddr 56:37:b8:09:95:3c
inet addr:192.168.0.125 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: xxxx::xxxx:xxxx:xxxx:xxxx/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:29621 errors:0 dropped:49 overruns:0 frame:0
TX packets:1388 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3261163 (3.2 MB) TX bytes:88411 (88.4 KB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:57 errors:0 dropped:0 overruns:0 frame:0
TX packets:57 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:5424 (5.4 KB) TX bytes:5424 (5.4 KB)
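A few things worth checking from the host shell in a setup like this (just a diagnostic sketch; interface names are taken from the dumps above):

```shell
# The VM's tap100i0 device should be listed as a port of vmbr0 alongside eth0
brctl show vmbr0

# Watch whether the VM's ARP requests actually leave on the physical NIC
tcpdump -ni eth0 arp

# Check that nothing on the host is filtering bridged/forwarded traffic
iptables -L FORWARD -n -v
```

If ARP from the VM shows up on vmbr0 but never on eth0, the frames are being dropped between bridge and NIC; if they leave eth0 but get no reply, look at the physical switch (e.g. port security limiting MAC addresses per port).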


rgmanager marking VMs as failed

On a fairly regular basis, but unpredictably, when I attempt to migrate a VM from one host to another, I get failures with no useful error information.
Digging deeper reveals that the problem is at the rgmanager level somehow, with the VM resources being marked as "failed" even though they're still running!
The only 100% guaranteed way I've found to clear the problem is to manually kill the KVM process, then run "clusvcadm -d" to disable the offending resource, then "clusvcadm -e" to re-enable it, which then automatically migrates it to another host.
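For reference, that manual recovery sequence written out for a hypothetical VM 101 (resource names follow the pvevm: convention of PVE's rgmanager integration; the PID file path is the qemu-server default):

```shell
# Stop the stuck KVM process for VM 101
kill "$(cat /var/run/qemu-server/101.pid)"

# Disable the resource that rgmanager marked as failed...
clusvcadm -d pvevm:101

# ...then re-enable it; rgmanager starts it on an available node
clusvcadm -e pvevm:101
```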

I have a 4-node cluster, all configured identically: four 1 Gb Ethernet NICs bonded together, the management interface on a VLAN interface, VMs mostly on other VLANs. I'm using PVE-hosted Ceph on the same 4 nodes as the underlying data store for all VMs.

Network uses (as of a few weeks ago) OVS, but this problem has been happening both before and after switching to OVS.

What should I be looking for to further diagnose this problem?

FYI, corosync remains happy throughout this problem; only rgmanager appears to be affected.

(Previously reported as bug #297, but Martin doesn't think it's a bug... and I don't know where to look next.)

-Adam

Doubt with image file

Hello

If I have 3 different storages and create 3 disks for a single VM, the image name can be the same on each: vm-101-disk-01.qcow2.

My question is: if I move the image from the first storage to the second storage, will the image file be overwritten, or will Proxmox return some kind of error?

Thanks

omnios napp-it NFS share second thoughts. KVM OK, OpenVZ too slow (creating CT)

After I created an NFS share on the pool (named omnios), I did the following in order to make it work as OpenVZ storage:
Code:

chmod 777 -R poolname/sharename

for example:
Code:

chmod 777 -R omnios/prox3ovz

This gives recursive permission to all the subfolders under the omnios pool.
Next I did the following:
Code:

zfs set aclmode=passthrough omnios/prox3ovz

zfs set aclinherit=passthrough-x omnios/prox3ovz

zfs set sharenfs=rw,noaclfab,root_mapping=0,nosuid,root=@192.168.1.0/24 omnios/prox3ovz


KVM works just fine.
OpenVZ has issues. I created a CT from an Ubuntu template (size 122 MB). It took 35 minutes; with the same template stored on local storage it took 20 seconds.
Then I backed up the container. It took 30 minutes.

When I backed up a 230 GB KVM guest to the NFS share it only took 14 minutes (backup size 26 GB).
So my question is: why is it so slow when it comes to OpenVZ stored on an NFS share?
By the way, once the container was created, it worked flawlessly from the NFS share. It is only backup, restore, and creation of a CT from a template that take extremely long.
Any ideas?

Ability to view a VM filesystem with Proxmox GUI

I was wondering if it is possible to access/view a VM's filesystem from within the Proxmox web GUI, with ways to archive and back up/restore specific directories within a VM. Also, providing disk usage stats for individual VM filesystems would be nice. Any thoughts on this?

A question about Ceph on Proxmox through external USB

I have 3 Dell 760 servers with Proxmox on them. If I attach a 4-port USB hub with 1 x 250 GB SSD and 3 x 2 TB HDDs to each, can I configure them as Ceph storage on Proxmox?
What is the downside to it, assuming I connect it to a USB 3.0 port?

