Channel: Proxmox Support Forum

Apply common rules to all VMs

Hi,

I'm trying to set up the new 3.3 firewall to get rid of our dedicated firewall box and get faster access to all the CTs.
I would like to have a common set of rules that apply to all VMs/CTs, for example enabling ping and web access and limiting SSH to a group of management IP addresses.
All our VMs/CTs are on subnets separate from the hardware nodes, and I've tried to define these rules in the Datacenter view, setting those subnets as the destination address (via an IPSet).
However, after some trial and error I've found that this doesn't work; the rules do not apply. Has anyone tried this setup?
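
In case it is useful, here is a sketch of how I understand such a shared rule set would look as a security group in /etc/pve/firewall/cluster.fw (IPs and names are hypothetical, and the group then has to be referenced from each guest's firewall config as 'GROUP common'):

Code:

# /etc/pve/firewall/cluster.fw -- hypothetical sketch
# management workstations allowed to use SSH
[IPSET management]
192.168.100.10
192.168.100.11

# shared rules, attached to each VM/CT via its own firewall config
[group common]
IN Ping(ACCEPT)
IN HTTP(ACCEPT)
IN SSH(ACCEPT) -source +management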

Thanks a lot.

no network on vms after restart

I ended up having to move one of my Proxmox servers to make room for another server. When I powered it back up I could connect to the Proxmox server and the web interface, but when I try to access the VMs over the network I keep getting "host unreachable", and I can't ping the VMs.
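
For anyone wanting to help diagnose, here is a minimal set of checks on the node, assuming the default vmbr0 bridge setup (output omitted):

Code:

# is the bridge up and does it have the expected ports?
ip link show vmbr0
brctl show
# does the network config still match what it was before the move?
cat /etc/network/interfaces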

Open vSwitch VLAN tagging on guest port

Hi,

I can configure a network port attached to a VM to be a VLAN trunk using the following commands (when using Open vSwitch):

Code:

ovs-vsctl set port tap103i1 trunks=10,20,30
ovs-vsctl set port tap103i1 vlan_mode=trunk
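
Incidentally, both settings can also be applied in a single call (same port name as above):

Code:

ovs-vsctl set port tap103i1 trunks=10,20,30 vlan_mode=trunk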

How can I do this in the web interface?

Cheers,
Tobias

fog.io driver for Proxmox

Permissions for a Proxmox user

Hello, how can I edit or assign permissions to a Proxmox user?
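
For context, on the command line this is normally done with pveum; a minimal sketch, with the user, path and role hypothetical:

Code:

# grant the PVEVMAdmin role on VM 100 to user john@pve
pveum aclmod /vms/100 -user john@pve -role PVEVMAdmin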
Thank you.

pvesh set /storage issues

The Ruby snippet that builds the command:
Code:

storageName = 'Backups'
exportedMountPoint = '/Volumes/Storage/.../ProxmoxBackups'
cmd = "pvesh set /storage/#{storageName} -server '#{ip}' -storage #{storageName} \
      -export #{exportedMountPoint} \
      -content 'images' -type 'nfs' -options 'vers=3'
      "

This should work, according to http://pve.proxmox.com/pve2-api-doc/
(See https://d2oawfjgoy88bd.cloudfront.ne...EJJBIZWFB73RSA)

I tried tracing what the browser does to set this up, but my attempts fail.

    Then I try various permutations...
    Code:

Running: ssh root@10.0.1.26 pvesh set /storage/Backups -server '10.0.1.26' -storage Backups -export /Volumes/Storage/martincleaver/ProxmoxBackups -content 'images' -type 'nfs' -options 'vers=3'
Unknown option: storage
    Unknown option: export
    Unknown option: type
    400 unable to parse option
    set storage/Backups  [OPTIONS]
    mount-nfs command ran
    martincleaver@MartinCleaversMBP.local:~/SoftwareDevelopment/proxmox-setup 14:48:44 784$ bundle exec bin/proxmox-setup mountnfs --ip 10.0.1.26
    IP=10.0.1.26
    Running: ssh root@10.0.1.26 pvesh set /storage -server '10.0.1.26' -storage Backups                                  -export /Volumes/Storage/martincleaver/ProxmoxBackups                                  -content 'images' -type 'nfs' -options 'vers=3'
    no 'set' handler for 'storage'
    mount-nfs command ran
    martincleaver@MartinCleaversMBP.local:~/SoftwareDevelopment/proxmox-setup 14:50:08 785$ bundle exec bin/proxmox-setup mountnfs --ip 10.0.1.26
    IP=10.0.1.26
    Running: ssh root@10.0.1.26 pvesh create /storage -server '10.0.1.26' -storage Backups                                  -export /Volumes/Storage/martincleaver/ProxmoxBackups                                  -content 'images' -type 'nfs' -options 'vers=3'
    create storage failed: mount error: mount.nfs: requested NFS version or transport protocol is not supported
    mount-nfs command ran
    martincleaver@MartinCleaversMBP.local:~/SoftwareDevelopment/proxmox-setup 14:50:29 786$ bundle exec bin/proxmox-setup mountnfs --ip 10.0.1.26
    IP=10.0.1.26
    Running: ssh root@10.0.1.26 pvesh create /storage -server '10.0.1.26' -storage Backups                                  -export /Volumes/Storage/martincleaver/ProxmoxBackups                                  -content 'images' -type 'nfs'


    create storage failed: mount error: got lock timeout - aborting command
    mount-nfs command ran
    martincleaver@MartinCleaversMBP.local:~/SoftwareDevelopment/proxmox-setup 14:55:10 787$
    martincleaver@MartinCleaversMBP.local:~/SoftwareDevelopment/proxmox-setup 14:55:10 787$ bundle exec bin/proxmox-setup mountnfs --ip 10.0.1.26
    IP=10.0.1.26
    Running: ssh root@10.0.1.26 pvesh create /storage -server '10.0.1.26' -storage Backups                                  -export /Volumes/Storage/martincleaver/ProxmoxBackups                                  -content 'images' -type 'nfs'
    create storage failed: mount error: got lock timeout - aborting command
    mount-nfs command ran
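
For what it's worth, the logs above suggest the API distinguishes between updating and creating a storage: pvesh set /storage/<id> rejects -storage, -export and -type because those can only be given at creation time, while pvesh create /storage takes the full definition. A sketch of the create form, reusing the names from above:

Code:

pvesh create /storage -storage Backups -type nfs -server 10.0.1.26 \
    -export /Volumes/Storage/martincleaver/ProxmoxBackups \
    -content images -options vers=3

The later "requested NFS version or transport protocol is not supported" error then points at the NFS server side rather than at the API call itself.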



    Hints please.

Ceph OSD failures on good drives

I have a three-node, up-to-date (no-subscription) Proxmox cluster home lab with AMD 8-core CPUs, an 8-port SATA RAID card in JBOD mode and 1 SSD for the system. The network is 1 Gbps on unmanaged switches. Each node is also a 4-OSD Ceph node with an isolated, redundant link of 2 x 1 Gbps NICs bonded round-robin, using separate switches with STP active. Bandwidth is 1.7 Gbps on the bond and connectivity is fine to all three Ceph nodes. Each OSD is a WD Red 1TB 2.5" SATA drive, but my first node also has 3 other drives used for testing BTRFS (separate from Ceph) and 2 unused connected drives.

I have 3 Ceph pools configured: data (2/2) is 25% full, pve (3/2) is 10% full and metadata (3/2) is minimal. I use the data pool for CephFS testing and the pve pool for RBD. PGs and PGPs are configured equally at 512 for all pools. When all drives are Up/In and Ceph shows as healthy, access is acceptable. I can easily use data on CephFS to serve mp4 or mkv movies remotely in HD quality (LAN), and the KVM VM functionality is fine.

To get the Ceph system stable again, I had to replace one OSD on node 1 with another unused drive connected to the same controller, and I had to try two drives before Ceph stabilized.

This is what happened:

  • One of the original Ceph OSDs failed after a few hours of having a healthy Ceph system. The first spare drive (Fujitsu 1 TB 3.5") I tried also failed. The second unused drive (also a Fujitsu 1 TB 3.5") is working.


This is what I tried:

  • The two failing OSD drives pass all error checks (SMART, gparted, gdisk) and nothing shows up in any logs.
  • Every time I attempt to add them as OSDs (through the Proxmox web interface), creation is successful. They are Up/In and the entire Ceph system attempts to re-balance.
  • After a few hours, the re-balance is almost complete but stops at less than 1% remaining, and the OSD goes Down/Out. Any attempt to start it fails with an "error 1" dialog.
  • I have attempted to re-configure these two drives as OSDs at least 3 times, zapping the drives in between attempts, with no change.


I cannot understand why these two drives do not work as OSDs. The behavior is repeatable, and the re-balance usually stalls between 20 and 30%. Ceph logs show slow progress with many PGs unhealthy.
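
For reference, the places I have been checking, with nothing obvious so far (OSD id and device name hypothetical):

Code:

# overall state and which PGs are stuck
ceph health detail
ceph osd tree
# the failing OSD's own log on its node
tail -n 100 /var/log/ceph/ceph-osd.2.log
# extended SMART attributes for the suspect drive
smartctl -a /dev/sdX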

Can anyone give me a hint on where to look?

Serge

Change email address for software update notifications and so on

Hello, can anyone help me change the email address for software update notifications (and all other notifications except backups)?
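
If it points in the right direction: as far as I know these notifications go to the email address configured for root@pam, so presumably it can be changed either under Datacenter -> Users in the GUI or with pveum (a sketch, address hypothetical):

Code:

pveum usermod root@pam -email admin@example.com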

Thanks in advance!

Why can't this container mount an NFS share?

I am trying to mount an NFS share inside a container.

Code:

# modprobe nfs
libkmod: ERROR ../libkmod/libkmod.c:554 kmod_search_moddep: could not open moddep file '/lib/modules/2.6.32-32-pve/modules.dep.bin'

Meanwhile, on the proxmox node:
Code:

root@proxmox:~# pveversion -v
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
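
For anyone hitting the same thing: an OpenVZ container cannot load kernel modules itself, so NFS support has to be prepared on the host. A hedged sketch, with CTID 100 hypothetical:

Code:

# on the Proxmox node, not inside the container
modprobe nfs
vzctl set 100 --features "nfs:on" --save
vzctl restart 100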

Odd installation issue on ThinkServer TS430

Hi guys,

Not really sure if anyone has run into this, but after the 4th day of banging my head against the same box, I thought I would reach out to you guys :)

I have 2 ThinkServer TS430 boxes, more or less the same except for the BIOS: one runs a 3.x version and one runs a 2.x version. I am having the issue with the one on BIOS 2.7 (and no, I can't upgrade it to 3.x, because they use different chips; this model shipped with two different BIOS types).

Everything works as expected until the reboot after the install (I tried Proxmox versions 3.3 and 3.2). Once the bootloader hands over to the OS, all I get is a blinking cursor in the top left corner of the screen.

Things I have tried:
- Created an MBR partition table on the disk prior to install, but the installer recreates GPT anyway.
- Installed Ubuntu 12.04 desktop on the server (worked fine), though I noticed Ubuntu used an msdos partition table.
- Updated the BIOS to the latest version for that model (2.7).

The strange thing is that it installs and runs just fine on the other server (with the 3.x BIOS chip); that BIOS detects the drive as legacy (CSM), while the server I am having issues with detects the drive as UEFI.
I have even tried moving the drive from one server into the other to perform the install and then switching back, with exactly the same result, which leads me to conclude that the issue is definitely in the UEFI boot process.

One solution would be to force the installer to use MBR instead of GPT, which the BIOS would then detect as legacy and boot properly, but I'm not sure if that's an option I can pass to the installer at boot.
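
One hedged idea, assuming gdisk is available from the debug shell: its recovery and transformation menu can convert an existing GPT label to MBR in place, which the BIOS might then treat as legacy (disk name hypothetical, and destructive if it goes wrong):

Code:

gdisk /dev/sdb
# then: r (recovery and transformation menu), g (convert GPT into MBR), w (write and quit)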

Before someone suggests it: there is no way to force Legacy/CSM or to disable UEFI in the BIOS. It just scans the drive, and if it finds EFI information on it, it automatically sets that mode; if not, it automatically goes to Legacy.

Proxmox 2.3 worked fine on both of these machines; I've had this issue since I decided to upgrade.

Thanks guys!

Two node broken pipe (596)

Hello, I need help.
I added a node exactly as described at http://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster,
but the node shows as offline, and the only thing the browser shows is "Broken pipe (596)". Here is my pvecm nodes output:

Code:

root@KPNO-PRD-TEST:~# pvecm nodes
Node  Sts  Inc  Joined               Name
   1   X     0                       KPNO-PRD-PRMX02
   2   M    12  2014-12-02 18:45:43  KPNO-PRD-TEST

Code:

root@KPNO-PRD-PRMX02:~# pvecm nodes
Node  Sts  Inc  Joined               Name
   1   M   472  2014-12-02 18:48:00  KPNO-PRD-PRMX02
   2   X     0                       KPNO-PRD-TEST
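
If it helps, the usual first check for this symptom is whether multicast works between the nodes, e.g. with omping (hostnames taken from above; run it on both nodes at the same time):

Code:

omping KPNO-PRD-PRMX02 KPNO-PRD-TEST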

Somebody please help me.

Urgent question re vlan setup

We run all our Windows dev, test and production servers on our Proxmox hosts, with weekly onsite DR backups and monthly offsite DR backups. And we just got hammered with a rootkit virus that is proving extremely difficult to remove.

I'm proposing that we restore the servers one by one from last month's DR backups onto VLAN tag 1, check that each is clean, then change it to VLAN tag 2.

As you may have guessed, I'm a complete novice when it comes to things like VLANs.


- Is it sufficient to just set the VLAN for a VM via the Proxmox network device GUI? (see the config sketch after these questions)
- Will that isolate it from the main (infected) network?
- Can I keep the same subnet? (192.168.5.0)
- Will the VMs be able to access the outside internet?
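
For reference on the first question: from what I understand, setting the tag in the GUI just ends up as a tag= option on the VM's network device in its config file, along these lines (MAC hypothetical):

Code:

# /etc/pve/qemu-server/<vmid>.conf
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,tag=2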


Thanks.

Wikipedia Entry (English)

Invalid Server ID

Dear Proxmox,

I have had to replace the HDD on one of my Proxmox servers. However, when I try to install the subscription key I have, it says "invalid server ID". Do I need a re-issued key? All other hardware is the same apart from the hard disk.

(The old hard disk from the server will be reformatted and reused for non-mission-critical purposes, which don't involve Proxmox or hosting)

GRUB loading. error no such device. USB stick used once on different machine.

Hi, Thank you for creating a wonderful product and sharing your work and knowledge. Perhaps someone can help me understand this situation?

I am using the Proxmox 3.3 ISO installer from a USB stick.

Installation proceeds perfectly for the first machine.

Then if I try to use the same USB stick to install on a second machine I get this error:

Quote:

Grub loading.
Welcome to GRUB!

error: no such device: d669a004-5a3d-438c-871f-c7f19fe18784.
Entering rescue mode...
grub rescue>

As a workaround, I just re-create the USB stick from the ISO... and it works?!

The problem is that it takes a long time to burn the ISO back onto the USB stick.

Is it possible that the Proxmox installation procedure somehow changes the USB stick during the first (successful) installation?
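
One way to check that, assuming the stick shows up as /dev/sdX: snapshot its first sectors before and after a successful install and compare:

Code:

# before the install
dd if=/dev/sdX of=usb-before.bin bs=512 count=2048
# ... run the install, then again on another machine ...
dd if=/dev/sdX of=usb-after.bin bs=512 count=2048
cmp usb-before.bin usb-after.bin && echo unchanged || echo modified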

How can I avoid having to re-burn the ISO onto the USB stick? Are there some files I can just change on the stick after a successful install?

Here is the procedure I am currently using, which works every time on a fresh USB stick:

Quote:

1. On a Win7 PC, download the Proxmox VE 3.3 ISO from http://proxmox.com/downloadsproxmox-...a06c9f73-2.iso

2. Insert a USB stick into the Win7 PC

3. Use Rufus on the Win7 PC to burn the ISO image to the USB stick
a. use default settings, except choose ISO ... and select the image from the downloads directory.

4. When Rufus is done, copy proxmox-ve_3.3-a06c9f73-2.iso from the downloads directory to the root of the USB stick

5. Eject the USB stick from the Win7 PC and insert it into the server that will be used for Proxmox

6. Set the BIOS on the server to boot from USB first and restart the server

7. At the Proxmox splash screen, at the boot: prompt, type 'debug'

8. Note down the /dev/sd<x> name of the USB drive from the console messages (e.g. /dev/sda => usb)

9. Wait for the message saying it can't find the CD-ROM. At the root prompt, type the following to mount the USB stick and loop-mount the ISO:
a. # mount -t vfat /dev/sda /mnt
b. # mount -o loop -t iso9660 /mnt/proxmox-ve_3.3-a06c9f73-2.iso /mnt
c. # chroot /mnt sbin/unconfigured.sh

10. At Proxmox GUI, fill in 4 screens and proceed.
a. the target destination is usually /dev/sdb1 which is a RAID-1 mirrored device using 9550-SXU-4LP raid controller with 2 drives attached.

11. After the installer finishes, I hit the reboot button but leave the USB stick in the server

12. After the server reboots, I can login to Proxmox on the server.

13. Next I install grub on /dev/sd<x> where sd<x> is my mirrored raid device
a. grub-install /dev/sdb

14. After the success message, I change the Proxmox repo to no-subscription in /etc/apt/sources...:
a. vi /etc/apt/sources.list.d/pve*
b. comment out the line beginning with 'deb http://...' and type ":wq!" to exit vi
c. vi /etc/apt/sources.list
d. add the line 'deb http://download.proxmox.com/debian wheezy pve-no-subscription'

15. Now update the apt repository and upgrade the distribution
a. apt-get update
b. apt-get -y dist-upgrade

16. Then shut down the server, pull out the USB stick and restart:
a. # shutdown -h now
b. pull out the USB stick
c. power the server back on.

17. At this point Server 1 works great!

But if I try to re-use the same USB stick on a different server (with the exact same hardware/BIOS configuration), I get the GRUB error:

Quote:

Grub loading.
Welcome to GRUB!

error: no such device: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx.
Entering rescue mode...
grub rescue>


How can I avoid having to re-burn the USB stick each time I want to install proxmox on a new Server?

Thank you for your time and great work!

Respectfully,
Windsor

1Gbit nic = 100 Mbit speed negotiated

Hi.

I have a NIC that is gigabit-capable, but it only negotiates 100 Mbit.

This network card is connected directly to the internet uplink, which is 250/100 Mbit; there is no router or switch between the NIC and the uplink. Any ideas how to get it to negotiate gigabit?
I have tried ethtool -s eth1 speed 1000 autoneg off, but then I cannot get a link at all and ethtool eth1 reports the speed as unknown.
Quote:

Settings for eth1:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Speed: 100Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
MDI-X: on
Supports Wake-on: pumbg
Wake-on: g
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes
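
A note that may apply here: 1000BASE-T requires auto-negotiation and all four wire pairs, so forcing the speed with autoneg off cannot produce a gigabit link, and a cable with a damaged pair will negotiate down to exactly this 100 Mb/s full duplex. If forcing anything, keep autoneg on (a sketch):

Code:

ethtool -s eth1 speed 1000 duplex full autoneg on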

Bonding on two different link speeds

I am about to deploy a new cluster using 10 Gb Intel NICs.
Is it possible to have a single 10 Gb NIC and a single 1 Gb NIC in an active-backup bond?
I want full 10 Gb throughput on the active link and fail-over to 1 Gb on failure of the primary NIC.
I am NOT looking for a way to get 11 Gb/s.
I want a 10 Gb full duplex connection unless it fails, at which point it becomes a 1 Gb full duplex connection.
Is this possible?
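
Active-backup is the one bond mode where mixed link speeds are unproblematic, since only one slave carries traffic at a time; the 10 Gb port just needs to be set as the primary so it is preferred whenever it has link. A sketch of /etc/network/interfaces, with interface names and addresses hypothetical:

Code:

# eth0 = 10 Gb primary, eth1 = 1 Gb fallback
auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_mode active-backup
        bond_primary eth0
        bond_miimon 100

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0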

CentOS container cannot enable network

I installed CentOS 6 Standard (also tried minimal) from a template and cannot seem to enable eth0. Bridged mode to vmbr0 was selected during container creation, and this is the device info:
Code:

ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: venet0: <BROADCAST,POINTOPOINT,NOARP> mtu 1500 qdisc noop state DOWN
    link/void
3: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 02:86:47:81:c0:49 brd ff:ff:ff:ff:ff:ff

I've defined /etc/sysconfig/network-scripts/ifcfg-eth0 with:
Code:

    auto eth0
    ONBOOT=yes
    allow-hotplug eth0
    iface eth0 inet dhcp

But when I try to bring the device up or restart networking, I see
Code:

# ifup eth0
/etc/sysconfig/network-scripts/ifcfg-eth0: line 1: auto: command not found
/etc/sysconfig/network-scripts/ifcfg-eth0: line 2: allow-hotplug: command not found
/etc/sysconfig/network-scripts/ifcfg-eth0: line 3: iface: command not found
/etc/sysconfig/network-scripts/ifcfg-eth0: line 1: auto: command not found
/etc/sysconfig/network-scripts/ifcfg-eth0: line 2: allow-hotplug: command not found
/etc/sysconfig/network-scripts/ifcfg-eth0: line 3: iface: command not found
Device does not seem to be present, delaying initialization.

Code:

# service network restart
./ifcfg-eth0: line 1: auto: command not found
./ifcfg-eth0: line 3: allow-hotplug: command not found
./ifcfg-eth0: line 4: iface: command not found
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface eth0:  /etc/sysconfig/network-scripts/ifcfg-eth0: line 1: auto: command not found
/etc/sysconfig/network-scripts/ifcfg-eth0: line 3: allow-hotplug: command not found
/etc/sysconfig/network-scripts/ifcfg-eth0: line 4: iface: command not found
/etc/sysconfig/network-scripts/ifcfg-eth0: line 1: auto: command not found
/etc/sysconfig/network-scripts/ifcfg-eth0: line 3: allow-hotplug: command not found
/etc/sysconfig/network-scripts/ifcfg-eth0: line 4: iface: command not found
Device does not seem to be present, delaying initialization.
                                                          [FAILED]
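
For comparison: RedHat-style ifcfg files contain only KEY=value lines, while the auto/allow-hotplug/iface directives above are Debian /etc/network/interfaces syntax, which is why the init scripts try to execute them as commands and fail. A minimal DHCP version of /etc/sysconfig/network-scripts/ifcfg-eth0:

Code:

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp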

can't upgrade from 3.1 to 3.3

I want to upgrade from 3.1 to 3.3, but I get "system up-to-date", even after an apt-get update.

Code:

root@node1:~# apt-get dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Code:

root@node1:~# pveupgrade
Starting system upgrade: apt-get dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Your System is up-to-date


Code:

root@node1:~# pveversion -v
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1


Code:

root@node1:~# cat  /etc/apt/sources.list
deb http://http.at.debian.org/debian wheezy main contrib
#deb http://debian.mirrors.ovh.net/debian/ wheezy main contrib

# PVE packages provided by proxmox.com
deb http://download.proxmox.com/debian wheezy pve-no-subscription

# security updates
deb http://security.debian.org/ wheezy/updates main contrib

Did I miss a step, or is there a command I can use to trace the problem?
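
If useful, one way to see which package versions apt can actually reach, and from which repository (a sketch; pve-manager picked as the indicator package):

Code:

apt-get update
apt-cache policy pve-manager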

SCO Openserver 6 VM / Fail to start with screenshots

Good day to everyone. It is a blessing to know about Proxmox VE, and it is an amazing solution for businesses nowadays. I have a question regarding moving our SCO Openserver 6 VMs, currently running on top of VirtualBox, to Proxmox.

I converted the SCO Openserver 6 VMs from .vmdk to qcow2 successfully and imported them into Proxmox VE. A VM ran successfully until I got the general protection trap panic shown below. Has anyone encountered this issue? Please shed some light; I have been scratching my head over this for a few weeks.
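
For completeness, the conversion itself was of the usual qemu-img form (paths hypothetical):

Code:

qemu-img convert -f vmdk -O qcow2 sco-openserver6.vmdk sco-openserver6.qcow2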

Thank you and much appreciated.

Screenshot below.

[Attached image: Selection_072.png]