Channel: Proxmox Support Forum

Bugzilla vs. Github or JIRA

Bugzilla is seriously behind the times.

If you want people to contribute to this open source project, how about opening issue trackers on the github project?

Better still, install the Atlassian suite (JIRA, Stash, etc.). As an open source project, Proxmox VE would get it for free.

M.

Windows 2003 - network and disk drive

What is the best option for Windows 2003: a VirtIO disk and network, or IDE + E1000?

Windows Server 2012 / Proxmox 3.x

Microsoft is announcing end of life for Windows Server 2008, and we have not been able to install any flavor of Windows Server 2012 because the installer does not recognize any of the drives/storage. What version of Windows Server later than 2008 would you recommend, and is there any resolution for this problem?

Thanks

Set bond primary

I'm trying to figure out how to set the primary slave in an active-backup bond.

Code:

root@tapecephhost2:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None              <--------- There has to be a way to set this?
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a0:36:9f:4e:b7:50
Slave queue ID: 0

Slave Interface: eth4
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a0:36:9f:52:16:c0
Slave queue ID: 0

From what I have read, I should be able to simply set bond_primary and it should work, but it doesn't seem to. Does anyone know how to achieve this? I want one NIC in the active-backup bond to be primary whenever its link is up. This should be quite simple.
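For what it's worth, on a Debian-based PVE host the primary is normally declared alongside the other bond options in /etc/network/interfaces. A minimal sketch, assuming the two slaves from the output above and that the rest of the bond stanza matches your existing setup:

```
auto bond0
iface bond0 inet manual
        slaves eth2 eth4
        bond_miimon 100
        bond_mode active-backup
        bond_primary eth2
```

The primary can also usually be changed at runtime through sysfs (echo eth2 > /sys/class/net/bond0/bonding/primary), which is handy for testing the behavior before committing the file and restarting networking.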

Base volume doesn't show up

Hello!

I'm currently testing a server with Proxmox 3.4 with an iSCSI device.

The solution is provided by ONLINE.NET, a French provider.

http://www.online.net/en/dedicated-server/rpn-san

The iSCSI mount/target is OK.

But the LVM part doesn't want to work: the "Base volume" selector doesn't show up in the interface (screenshot attached: base_volume.png).

Does someone know where to look in the logs to try to debug this problem?

Sincerely,

Stephane

Techniques for monitoring cluster 'health'?

Hi there,

We've been using Proxmox in production for some time now and are looking to expand our cluster. One thing I'm keen on monitoring is the health of the cluster, quorum and all. We use PRTG for monitoring, so we can use SSH scripts and other techniques. We already monitor tons of metrics on the nodes themselves (network traffic, uptime, disk health et al.).

Are there any guidelines on the best commands to run or metrics to read to ensure the fabric of the cluster is in good shape (as opposed to the hosts themselves)? I have used 'clustat' quite a bit interactively, but I'm interested in whether its output could be scraped to give useful stats.
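As a rough illustration of the scraping idea, here is a minimal shell sketch that greps a clustat-style header for the quorum state. The exact "Member Status: Quorate" wording is an assumption based on typical clustat output; in a real PRTG sensor you would feed it `clustat` or `pvecm status` over SSH instead of the embedded sample:

```shell
# Sample clustat-style header (assumption: your output matches this shape).
sample_output='Cluster Status for mycluster @ Fri May 22 14:07:01 2015
Member Status: Quorate'

# Reduce the output to a single OK/DEGRADED value a monitoring tool can eat.
if printf '%s\n' "$sample_output" | grep -q 'Member Status: Quorate'; then
    cluster_state="OK"
else
    cluster_state="DEGRADED"
fi
echo "cluster quorum: $cluster_state"
```

The same pattern extends to counting member lines or checking `pvecm status` fields; the point is to collapse the human-readable output into one value per check.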

Any pointers much appreciated on this.

How to route VPS traffic via host server interface?

Hello, I have a host server and a KVM guest VPS running CentOS 5.11. The host server has one IP assigned plus a gateway IP. I bought one additional IP (a /32); this IP has no gateway and is on a very different subnet than my host server. When I asked the server provider how to use that IP on my VPS without a gateway IP, he told me: "You have to route the traffic via your host interface." How exactly can I do that? Is there a tutorial?
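For context, the "routed" setup such providers usually mean looks roughly like this on the host. This is only a sketch: 203.0.113.10 stands in for the extra /32 and vmbr0 for the host bridge (both placeholders), and your provider's exact requirements may differ:

```
# on the host: allow forwarding between the uplink and the VM bridge
echo 1 > /proc/sys/net/ipv4/ip_forward

# on the host: steer traffic for the extra IP onto the bridge the VM uses
ip route add 203.0.113.10/32 dev vmbr0

# in the VM: assign 203.0.113.10/32, then use the host's IP as gateway
# by first adding it as an on-link route, e.g.:
#   ip route add <host-ip>/32 dev eth0
#   ip route add default via <host-ip>
```

The key idea is that the host answers for the /32 on its uplink and forwards it to the guest, so no gateway in the /32's own subnet is needed.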

Help... upgraded Proxmox from 3.2 to 3.4

OK, so I updated from 3.2 to 3.4. Everything looked like it went well; I got the GRUB problems sorted and the server rebooted.

I logged into the web UI, went to start a VM, and was hit by an error. Long story short, after 5 hours of cursing and hitting my keyboard, I've found some packages are really messed up.

Code:

proxmox-ve-2.6.32: not correctly installed (running kernel: 2.6.32-39-pve)
pve-manager: not correctly installed (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-39-pve: 2.6.32-156
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-17
qemu-server: not correctly installed
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: not correctly installed
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

I've even gone as far as trying to install each module on its own, which doesn't work. I tried apt-get -f install and all I get is a list of errors. I'm not great with Linux, and stuff like this is why I stick to Windows. :) The sooner I can get this sorted the better. :)

NFS vs iSCSI - Confusing results

I'm trying to decide which is the better option performance-wise, but I'm getting some unreliable results when using NFS.

The storage server is running OpenMediaVault with a 4 disk Raid10 array. It's connected via gigabit ethernet on the same switch as the Proxmox node.

For my NFS share I created an LVM logical volume on top of the RAID10 array, formatted it with ext4, and created a new NFS export. I set this up in Proxmox as NFS storage and installed Windows 2012 R2 cleanly on a 256GB partition. I then ran CrystalDiskMark to compare the two storage types.

Initial results varied wildly, starting off with reads of over 1000 MB/sec. After a few runs it levelled out, but was still showing results greater than the available network bandwidth (CrystalDiskMark screenshot omitted).

Then I removed all the NFS config and exports, removed the LVM volumes on the storage server, and created an iSCSI target pointing at the raw RAID10 array. I then added the iSCSI target in Proxmox and created an LVM group on top of it. These results are more in line with what I expected (CrystalDiskMark screenshot omitted).

Can anyone shed some light on why the NFS storage appears to be faster?

TIA

Shutdown issue

Hi all,

I'm testing a shutdown script to run during an extended power outage. It's really basic: it sends a shutdown -h +0 to each of the PVE hosts. This works fine to an extent, but some VMs don't shut down properly; in one test pass, one VM took 3 minutes to shut down and another was still going at 6 minutes before I went in and killed the process. Is there a way to just kill VMs that are improperly shutting down after X minutes? (I'm sure there is a way via scripting, but I can't say I'm the best bash scripter.) I need the VMs shut down before I can instruct the Ceph cluster to shut itself down, and right now the batteries would die before the Ceph cluster could be properly shut down.
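The "kill after X minutes" part can be done with the coreutils `timeout` command. Here is a minimal sketch of the pattern, with `sleep 5` standing in for a guest that is slow to shut down (that stand-in, and the surrounding logic, are assumptions for the demo):

```shell
# Grace period pattern: let a clean shutdown run for GRACE seconds,
# then fall through to a forced stop if it overruns.
GRACE=2
if timeout "$GRACE" sleep 5; then
    action="clean"          # guest powered off within the grace period
else
    action="force-stop"     # grace period expired; hard-kill the guest
fi
echo "shutdown result: $action"
```

In the real script the `sleep 5` would be a `qm shutdown <vmid>` call and the force-stop branch would run `qm stop <vmid>`; looping that over the VMIDs from `qm list` covers every guest on the host before you bring down Ceph.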

Timeout when creating qcow on NFS storage

I have an NFS share from a Windows box on my PVE box, currently holding backups and ISOs. Today I tried to add an export for virtual disk storage over NFS. I can create raw disks fine, but if I try to create a qcow2 disk, it times out every time, no matter what I do. The network isn't saturated, but it's not gigabit either. See the attached error screenshot.

Here's the relevant storage.cfg entry with the options I tried to no avail.

Code:

nfs: disks
    path /mnt/pve/disks
    server x.x.x.x
    export /disk
    options vers=3,timeo=7200,retrans=9999
    content images
    maxfiles 1

I'd like to be able to use snapshots, so if someone could let me know how to make this work, I'd appreciate it.
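One thing that might be worth trying as a diagnostic (a sketch, not a confirmed fix): create the qcow2 file manually on the mounted share with qemu-img, which takes the GUI task timeout out of the picture. The VMID, path, and size below are placeholders:

```
qemu-img create -f qcow2 -o preallocation=metadata \
    /mnt/pve/disks/images/100/vm-100-disk-1.qcow2 32G
```

If the manual create also stalls for minutes, that would point at the NFS server side (e.g. slow sparse-file handling on the Windows export) rather than at a Proxmox timeout setting.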

[SOLVED] How to bond eth0.1 w/ vmbr5 to put a vlan on an internal-only bridge

I want to give eth0.1 its own internal bridge so I can give a KVM guest an eth1 that can reach eth0.1.

What I tried already broke networking completely and I had to fix it with a Live CD; all of that will remain unsaid... (I should've known it wouldn't work.)

I'd like to run my next idea past the community before I try it, to prevent another Live CD adventure. (I learned the physical machine's login screen resets to the 'login:' prompt after approximately 25 of the 75 characters of the password are entered.) :-)

Here's what I have in mind:

Code:

auto vmbr5   
   
auto bond0
iface bond0 inet manual
        slaves eth0 vmbr5
        bond_miimon 100
        bond_mode 4


auto vmbr5
iface vmbr5 inet static
    address  x.x.x.x
    netmask  255.255.255.0
    bridge_ports bond0.1
    bridge_stp off
    bridge_fd 0
       
auto bond0.1
iface bond0.1 inet manual
        vlan-raw-device bond0

Will it work? If not, what can I do to get this?
-
Thanks.

KVM VPSes cannot start on ZFS RAID0

My server has two 2TB hard disks but no RAID card.
When I tried to install Proxmox 3.4, I chose ZFS RAID0.
The software RAID0 seems to work correctly, but none of the KVM VPSes can start.
Please help!

Create cluster error: cman will not start

Hi,
I have a problem creating a test cluster.
Now, at reboot, cman will not start.


Code:

root@pve:~# /etc/init.d/cman start
root@pve:~# pvecm status
cman_tool: Cannot open connection to cman, is it running ?

Node 1:

Code:

pvecm create testcluster
Use of uninitialized value $wb in numeric eq (==) at /usr/bin/pvecm line 119.
Use of uninitialized value $wb in concatenation (.) or string at /usr/bin/pvecm line 119.
writing key failed - short write

pvecm status
cman_tool: Cannot open connection to cman, is it running ?

Node 2:

Code:

pvecm add 37.xxx.xxx.xxx
unable to copy ssh ID

ls -l /etc/pve
total 3
-rw-r----- 1 root www-data  451 May 22 14:07 authkey.pub
-rw-r----- 1 root www-data   13 May 22 14:06 datacenter.cfg
lrwxr-x--- 1 root www-data    0 Jan  1  1970 local -> nodes/pve
drwxr-x--- 2 root www-data    0 May 22 14:07 nodes
lrwxr-x--- 1 root www-data    0 Jan  1  1970 openvz -> nodes/pve/openvz
drwx------ 2 root www-data    0 May 22 14:07 priv
-rw-r----- 1 root www-data 1350 May 22 14:07 pve-root-ca.pem
-rw-r----- 1 root www-data 1679 May 22 14:07 pve-www.key
lrwxr-x--- 1 root www-data    0 Jan  1  1970 qemu-server -> nodes/pve/qemu-server
-rw-r----- 1 root www-data   34 May 22 14:06 user.cfg
-rw-r----- 1 root www-data  119 May 22 14:07 vzdump.cron
root@pve:~# pvecm status
cman_tool: Cannot open connection to cman, is it running ?
root@pve:~#




Thanks,
Achim

Semi-Dedicated system

I'm considering Proxmox as an alternative to dual booting because of the hardware support it offers. I'm curious whether the arrangement I have in mind is even possible, or if there is a better way to achieve what I'm attempting.
Components are standard: 1 monitor, 1 computer (with 2 video cards), 1 keyboard, 1 mouse.
Obviously the computer would host the Proxmox hypervisor.
One video card would be dedicated to a Linux VM and the other to a Windows VM (graphically intense applications run in both).
The video card outputs would be switched to the monitor through an HDMI selector switch.
Proxmox would be controlled from both the Windows and Linux VMs through the standard interface installed on each.
What I'm not sure about is how to control/direct the mouse and keyboard to only one VM at a time in a configuration like this.

Anyone think this is feasible/possible?
Anyone got any ideas about an arrangement like this?
Anyone done anything like this?

Thanks

Confused about proxmox-ve versions and kernels

If apt-get currently shows:

Code:

The following packages have been kept back:
  proxmox-ve-2.6.32

While pveversion -v shows:

Code:

proxmox-ve-2.6.32: 3.4-150 (running kernel: 2.6.32-37-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-37-pve: 2.6.32-150
pve-kernel-2.6.32-34-pve: 2.6.32-140

Is this its way of telling me that the held-back package "proxmox-ve-2.6.32" contains a PVE kernel update? The GUI seems to imply that, but I don't want to use the GUI to update things.

If so, and I run apt-get dist-upgrade to install it, do I need to reboot the host?
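As a side note, one quick way to see whether a reboot is pending is to compare the running kernel with the newest installed pve-kernel package. A sketch (the package-name pattern is taken from the pveversion output above; this only reports, it changes nothing):

```shell
# Running kernel vs. newest installed pve-kernel package.
running=$(uname -r)
newest=$( { dpkg -l 'pve-kernel-*' 2>/dev/null || true; } \
          | awk '/^ii/ {print $2}' | sort -V | tail -n 1)
echo "running kernel:   $running"
echo "newest installed: ${newest:-none found}"
```

If the newest installed pve-kernel is newer than the one `uname -r` reports after a dist-upgrade, the host is still on the old kernel until it reboots.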

ProxMox backup on different ProxMox

Hi all,

I would like to know if I could use a different Proxmox machine for backing up my virtual machine. I have Proxmox installed on two different servers. On one of them I have a Windows 7 virtual machine, and I would like to create a backup job for that VM targeting the second Proxmox machine, which has no Windows (or any SMB service) installed. Is it possible to create a backup job with no SMB server, i.e. just copy the virtual machine every week or so to the second Proxmox server's disk?

Thank you in advance!!!
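One common approach, with no SMB involved, is to export a directory from the second Proxmox node via NFS and add it as backup storage on the first. This is only a sketch: the paths, subnet, and server address are placeholders, and the storage.cfg layout follows the same format as the nfs entry shown earlier in this thread list:

```
# on the second Proxmox node: export a directory (line in /etc/exports)
/srv/backups  192.168.1.0/24(rw,no_root_squash,sync)

# on the first node: add it as storage in /etc/pve/storage.cfg
nfs: backups
    path /mnt/pve/backups
    server 192.168.1.20
    export /srv/backups
    content backup
    maxfiles 3
```

With the storage in place, a weekly vzdump backup job pointing at "backups" can be scheduled from the Datacenter's Backup tab.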

Mounting physical drives for VMs to use

Hello,

I'm brand new to Proxmox (I just got myself a MicroServer Gen8 to mess around with a semi-pro server) and as soon as I tested it, I got hooked! It's so flexible and intuitive that even though I'm still tempted to go with a regular server OS, I just want to make it work with Proxmox.

I'm a newbie and I'm facing a problem with, I guess, multiple solutions.

My installation occupies an SSD dedicated solely to Proxmox and its VMs.
However, I'd like to make the 3 other physical drives in my server available to these VMs (OMV + Debian for a start), and I'm having difficulty making it work properly.

I tried the "qm set <vmid> --ideX /dev/sdX" method, which works in that the OMV VM sees the drives, but I can't do anything with them (I can't change their file systems or share them).
I also tried adding them using the "Add" option in the VM's Hardware tab after adding the disks in the Storage menu of the Datacenter, after preparing them following this tutorial. The disks can be mounted in OMV with that technique, but again, shares don't work. Besides, I get really weird results, with some drives showing only 10% of their storage available even though they are all empty!

What is the cleanest and most flexible way to mount existing hard drives in Proxmox and make them available to all the VMs?

Thank you !
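For reference, a variant of the qm set approach that is often recommended (a sketch; the VMID and disk ID below are placeholders) is to pass the whole disk by its stable /dev/disk/by-id path, so the mapping survives device reordering across reboots, and to attach it as VirtIO rather than IDE:

```
# list stable identifiers for the physical disks
ls -l /dev/disk/by-id/

# attach one whole disk to VM 100 as a VirtIO disk (placeholder ID)
qm set 100 -virtio1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL
```

The guest then sees a plain block device it fully owns, which it can partition, format, and share itself; a disk should only ever be given to one VM at a time this way.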

copy Proxmox VM to cloud then back to Proxmox

Sounds weird, I know. I tried to upgrade Proxmox 3.2 to 3.4 and something went bad. I've been poking and prodding Proxmox all weekend to get it sorted (I don't know enough about Proxmox or Linux to sort it), so I'm going to do a fresh install, tweak it, and upgrade again.

My Question

Thanks to hubiC (10TB, yay!), I'm going to back all my servers up (4TB) to hubiC and then restore to the new install. Do I need anything other than the VM disk images?

ProxMox v3.4 not bootable after successful install.

Hello,

I am having a problem getting Proxmox VE 3.4 to boot. The server has 6 x 2TB SATA drives, configured via the hardware controller as one RAID 5 logical volume (10 terabytes total):

- The CD installation process went through without any problem (Proxmox sees 9.3TB).

- When I tried to boot, it kept saying no boot device found.

- I reinstalled, and at the Proxmox partition prompt I clicked on "Options" and changed "ext3" to "1000.0 MB" (1TB).

- Same problem: it cannot boot and still reports "no boot device found".

- Installations on other servers with 136GB boot drives were successful with no problem.

Is this a partition-size limitation of some sort, or is it an issue with Proxmox 3.4? I would appreciate any help. Thanks.

Andrew