Channel: Proxmox Support Forum

increase qcow2 Hard disk

Hello everyone,

First, sorry for my bad English. I created a VM in Proxmox VE with a 500GB hard disk. I installed a web server on the VM and I want to increase the hard disk from 500GB to 160GB. I need your help, please.


Thank you!
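
A minimal sketch of how a qcow2 disk is typically grown in PVE ("100" and "virtio0" are hypothetical VM ID and disk name; note that PVE only supports growing a disk, not shrinking it):

Code:

# grow the disk by an extra 100GB (hypothetical VM ID and disk name)
qm resize 100 virtio0 +100G

# afterwards the partition and filesystem inside the guest still have to be enlarged,
# e.g. grow the partition and then run resize2fs on an ext4 filesystem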

Suspend a VM and save its state to disk for a reboot

Is there any way to suspend my 50 VMs and save their state across a host reboot?
After the reboot the VMs should simply resume. This would reduce the shutdown time and keep my VMs somewhat "running". This is important for long-duration tests of our software...
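
Not a built-in feature as far as I know, but one possible approach to sketch (untested, and it assumes all VM disks are on snapshot-capable storage such as qcow2): take a snapshot that includes the RAM state for each VM before the reboot, then roll back afterwards.

Code:

# before the host reboot: snapshot every running VM including its RAM state, then shut it down
for vmid in $(qm list | awk '/running/ {print $1}'); do
    qm snapshot "$vmid" prereboot --vmstate 1
    qm shutdown "$vmid"
done

# after the reboot: roll back to the snapshot, which restores the saved RAM state
# (depending on the PVE version the VM resumes automatically or on the next qm start;
#  errors for VMs that never had the snapshot are harmless)
for vmid in $(qm list | awk 'NR > 1 {print $1}'); do
    qm rollback "$vmid" prereboot
    qm start "$vmid"
done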

Subscription: Invalid response from server: 500 connect failed: no route to host

Hi, I just checked my Proxmox web interface and was told that the subscription info is too old.
And when I press Check I get a timeout: Invalid response from server: 500 connect: failed connect no route to host.


What's wrong? In the DNS settings of Proxmox I have my VM guest's pfSense DNS server (bind9) as the first option and 8.8.8.8 as the second.

I just tried rebooting the whole thing. It still doesn't work.

EDIT: I just changed the DNS settings to only 8.8.8.8 and now I get this error:
Invalid response from server: 500 Can't connect to shop.maurer-it.com:443 (Bad hostname) (500)
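
For what it's worth, a few quick checks from the node's shell to see whether name resolution and outbound HTTPS work at all (nothing Proxmox-specific):

Code:

# which resolvers the node is actually using
cat /etc/resolv.conf

# can the node resolve the subscription server at all?
getent hosts shop.maurer-it.com

# and can it reach it over HTTPS?
wget -O /dev/null https://shop.maurer-it.com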

Removal of VM

When removing a KVM VM using the GUI I noticed that /var/lib/vz/images/&lt;vmid&gt; does not get removed. So after removing the VM with ID 101, the directory /var/lib/vz/images/101 still exists, although it is empty. It might not be a problem, I'm only describing what I see :-)

Gijsbert
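
If the leftover directory bothers you, it can simply be removed by hand; rmdir only succeeds when the directory really is empty (101 being the example ID from above):

Code:

# fails harmlessly if anything is still inside the directory
rmdir /var/lib/vz/images/101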

Fresh Install fails to start up networking

I have the IP, netmask and gateway from my provider,
and Proxmox is installed,
but I cannot get networking to work (networking was working under CentOS, so I think the hardware side is fine).

After googling I tried this:
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto bond0
iface bond0 inet manual
slaves eth0 eth1
bond_miimon 100
bond_mode balance-alb

auto vmbr0
iface vmbr0 inet static
address x.x.x.42
netmask 255.255.255.248
gateway YY.YY.YY.254
bridge_ports bond0
bridge_stp off
bridge_fd 0


Still no luck.



PS: BTW, the gateway is on a totally different network; I've never seen this kind of config before.
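
Since the gateway sits outside the host's /29, one common pattern (a sketch, untested here, reusing the placeholder addresses from the post) is to drop the plain gateway line and add an explicit host route to the gateway before the default route:

Code:

auto vmbr0
iface vmbr0 inet static
    address x.x.x.42
    netmask 255.255.255.248
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
    # YY.YY.YY.254 is not inside the local /29, so reach it via an explicit host route first
    post-up ip route add YY.YY.YY.254/32 dev vmbr0
    post-up ip route add default via YY.YY.YY.254 dev vmbr0
    pre-down ip route del default via YY.YY.YY.254 dev vmbr0
    pre-down ip route del YY.YY.YY.254/32 dev vmbr0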

My Proxmox attacking an NFS server? SPAM warning!

Hello,

Today I got a warning that my Proxmox is attacking an NFS server (one of my clients is running a brute-force attack against an NFS server). It looks like this:

Nov 29 06:25:21 SRC=xxx.xxx.xxx.xx DST=xxx.xx.xx.xx SPT=978 DPT=2049

I'm using IP forwarding, with one IP for all my clients. That makes it hard for me to work out which VM is causing the brute-force attack.

I need help solving this issue. Maybe there is a rule to block the outgoing traffic to that IP via the firewall or iptables?

Thanks for your help!
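
A rough sketch of what such rules could look like (203.0.113.10 is a placeholder for the NFS server's address from the log). The LOG rule is there so the originating VM's internal IP shows up in syslog before the traffic is NATed:

Code:

# log forwarded packets towards the NFS server so the source VM's internal IP appears in syslog
iptables -I FORWARD 1 -d 203.0.113.10 -p tcp --dport 2049 -j LOG --log-prefix "NFS-abuse: "

# then drop the traffic
iptables -I FORWARD 2 -d 203.0.113.10 -p tcp --dport 2049 -j DROP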

GUI: view section doesn't scroll

Hello,

there seems to be a problem with the overview section on the left side of the GUI.
Mode: Server View, 7 nodes, last node expanded, 20 VMs.
I can move the scrollbar down, but it doesn't move the content,
so I'm not able to see the last VMs or the storage part.
On my 24" display this doesn't matter, but right now I'm on the road with my laptop.

Regards
Gerd

Network Problem after networking restart

Hello Proxmox Freaks,

sometimes we have issues with our Windows VMs losing all network connectivity.
What we do to fix it: shut the VM down and start it again.

Is there a better solution? Can we restart only the network device of the VM from the Proxmox host?


thanks in advance

tempes
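
One thing that may be worth trying from the host before a full VM restart (a sketch; "101" and "net0" are hypothetical VM ID and NIC name): toggle the virtual link via the QEMU monitor.

Code:

# open the QEMU monitor of the VM from the PVE host (101 is a hypothetical VM ID)
qm monitor 101

# then, at the monitor prompt, toggle the virtual link of the first NIC (assumed to be net0):
# set_link net0 off
# set_link net0 on
# quit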

Win2008R2 extremely slow with 256GB RAM and strange behaviour in PVE

Hi to all.

Can anybody help me?

My setup:
Hardware: a Dell R720 with two Intel Xeon E5-2690 v2 processors @ 3.00GHz and 256GB RAM.
In the hardware BIOS I have enabled NUMA and set the processor optimization for random access to RAM.

Recently installed software: PVE 3.3-5 (installed from the ISO and then upgraded), and as a VM, Win2008R2 SP1 (nothing else installed).

The problem:
When I start the Win2008R2 VM, every CPU thread configured for this VM goes to 100% usage; when the VM is configured with, for example, 4GB RAM, it works perfectly.

I did some tests, always without the tablet pointer device (I use VMware's vmmouse).

Here are the symptoms and the actions I performed:

1) Doing some tests, I manually changed the Windows Server page file size to 10GB and configured the VM with 62GB RAM (63488 MB). Now Windows Server starts at 100% CPU, and after roughly two minutes the processor returns to a normal state.

2) Looking at the task manager of the Win2008R2 VM, I see:
Image Name: System
User Name: System
CPU: 99%
Memory (private working set): 52 KB
Description: NT Kernel & System

All other processes are consuming 0% CPU.


Here is the more important information from the PVE side:

3) In parallel with this strange behaviour, htop shows memory usage growing every second, and once its memory bar reads "63964/257912MB" the CPU threads used by this VM return to a normal state.

4) As a second test, after seeing all this behaviour, I logged in to Windows Server; htop showed high usage on many CPU threads (around 50%), but once the session had started the CPU usage returned to normal.

5) In htop I see the same CPU usage pattern while a Windows session is being closed, so I guess anything I do in this VM will needlessly consume extra CPU resources.

6) Moreover, while the task manager of the Win2008R2 VM says about 60GB is free, the PVE "Summary" tab says exactly the opposite.

Maybe KVM or PVE has problems managing large amounts of RAM with NUMA enabled in the server BIOS, but I am not sure.
Just as a reference, here is a link:
https://bugzilla.redhat.com/show_bug.cgi?id=872524
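
As a diagnostic idea rather than a fix (the package name below is just the Debian default, nothing PVE-specific): comparing the host's NUMA layout with the RAM assigned to the VM may show whether the guest spans NUMA nodes, which is what the linked Red Hat bug describes.

Code:

# numactl/numastat are in the "numactl" package on Debian
apt-get install numactl

# show the host's NUMA nodes and how much memory each one has
numactl --hardware

# per-NUMA-node memory usage of the VM's kvm process while the guest boots
# (the PID of a PVE guest is stored in /var/run/qemu-server/<vmid>.pid)
numastat -p $(cat /var/run/qemu-server/<vmid>.pid)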

I will be very grateful to anyone who can help me.

Best regards
Cesar

Fairly high CPU load after passing through USB serial device

Hi folks,

I'm running Proxmox VE 3.3 on my home server (Intel NUC i5) with very high satisfaction :D

I had to connect a USB serial dongle to one of my virtual KVM guests, so I added "usb0: host=2-4" to its configuration file as described in the wiki.
It's working fine in general.
However, CPU load is now 20-30%, so I guess the interrupt load caused by the USB passthrough is fairly high.

Is there any way to bring it down? The USB device is actually a fairly low-performance device that only transfers a few bits via serial commands.

Appreciate any help.


Cheers,
Julian
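
As a first diagnostic step (just a sketch), it can be worth checking whether the load really comes from USB interrupts and which process is burning the CPU:

Code:

# watch the interrupt counters of the USB host controllers (ehci/xhci/uhci/ohci lines)
watch -n1 "grep -iE 'hci|usb' /proc/interrupts"

# check whether the CPU time is spent in the VM's kvm process or in a host kernel thread
top -c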

HOWTO: Changing SSH Port with Proxmox Cluster

This has been a common question, which I know because I have searched for a solution myself without much luck. It's known that a Proxmox cluster requires SSH on port 22 to work, and as far as I'm aware changing that is not supported. Under certain circumstances this can be an issue for a variety of reasons.

I think I've found a workaround: don't change it, just restrict it with iptables and add another port. OpenSSH can listen on as many ports as you want, so you can leave the default port 22 enabled and also add a non-standard port to the config file; it will then listen on both.

This assumes you're starting with an empty ruleset

Read & understand the entire thing before changing anything!

Edit /etc/ssh/sshd_config & add the additional Port line under Port 22. Save & close the file (Note: this is just a snippet of the top of the file, not the entire file):
Code:

# Package generated configuration file
# See the sshd_config(5) manpage for details
 
 # What ports, IPs and protocols we listen for
 Port 22
 Port 2222
 # Use these options to restrict which interfaces/protocols sshd will bind to
 #ListenAddress ::
 #ListenAddress 0.0.0.0

From there, restart ssh & verify you can get in over port 2222 before continuing:

Code:

service ssh restart
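
Before locking anything down, it also doesn't hurt to confirm sshd really is listening on both ports (a quick sanity check):

Code:

netstat -tlnp | grep sshd
# the output should show sshd in LISTEN state on both :22 and :2222
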
For administrative use, you could use the non-standard port 2222, which could stay open to the world. For Proxmox's cluster purposes, we can restrict port 22 with iptables.

When creating firewall rules, I always use these as a standard baseline (allow all to/from loopback & allow all established connections):
Code:

iptables -I INPUT 1 -i lo -j ACCEPT
iptables -I INPUT 2 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

Assume 10.0.1.10 is the other Proxmox cluster node. If you have multiple nodes, add one rule per node:
Code:

iptables -I INPUT 3 -m conntrack --ctstate NEW -m tcp -p tcp --source 10.0.1.10 --dport 22 -j ACCEPT
Then drop the rest of the connections to port 22:
Code:

iptables -I INPUT 4 -m tcp -p tcp --dport 22 -j DROP
Note that I'm specifying a rule number after -I INPUT (for example "iptables -I INPUT 2") so each rule goes exactly where I want it in the INPUT chain. iptables processes rules from the top down. If your chain is empty you can copy &amp; paste; if not, you'll need to adjust accordingly.

These rules are meant to be added on a running system, hence the -I flag with a rule number after INPUT. If the system is rebooted, the rules will be lost; it is up to you to make them permanent. I haven't gotten that far myself, but there are plenty of howtos out there for doing that on Debian. If you're using a script to load the rules, you'll want to change the -I flag to -A.
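
For reference, one common way to make the rules persistent on Debian (a sketch; I haven't verified this on a PVE node myself) is the iptables-persistent package:

Code:

apt-get install iptables-persistent
# save the rules currently loaded so they are restored at boot
iptables-save > /etc/iptables/rules.v4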

It's also possible you could use the new PVE Firewall in 3.3 for this, but I haven't given it a look yet.

If anyone can see a reason why this is a bad idea, please let me know. Thanks!

Proxmox in VirtualBox - github project

Possible problem with a Windows 2008 R2 VM on Proxmox 3.3, or a hardware issue?

Hi there.

I have 2 servers working with Proxmox.

Server 1 = Proxmox 3.2
Server 2 = Proxmox 3.3

If I take a copy of a VM from 3.2 and run it on 3.3, after some hours the Windows 2008 guest freezes completely.
On 3.2 it has been running without any restart or issue for more than 60 days, but the same VM running on 3.3 never lasts more than a day without freezing.
When it is frozen I log in to the console and hit Ctrl+Alt+Del, and the password screen does not appear.
Everything inside the server is frozen; I know this because some online services inside this VM stop working.

Are there any logs or debug files where I can start searching for a problem on the node?

Thank you for your help!
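
Not the root cause, but the usual places to start digging on the node (standard Debian/PVE locations; replace &lt;vmid&gt; with the real VM ID):

Code:

# general and kernel messages on the node around the time of the freeze
tail -n 200 /var/log/syslog
tail -n 200 /var/log/kern.log

# current state and configuration of the guest
qm status <vmid>
qm config <vmid>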

Proxmox in Dell PowerEdge R720 bcm5720

Problem with a network driver.
How can I install the driver for the Broadcom Gigabit Ethernet BCM5720 network card?
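
The BCM5720 is normally covered by the in-kernel tg3 driver, so before installing anything it may be worth checking whether the card is detected and which driver is bound to it (a quick diagnostic sketch):

Code:

# show the NIC together with the kernel driver currently bound to it
lspci -nnk | grep -iA3 ethernet

# the BCM5720 is handled by the in-kernel tg3 module; load it manually if it is not bound
modprobe tg3
dmesg | tail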

Node still in GUI after deleting it

Hello,

I removed one node from a 7-node cluster. The node was in a shutdown state during the operation.
The GUI still shows the node, although the deleted node is no longer in /etc/pve/cluster.conf.
"pvecm nodes" shows the correct nodes on all nodes, and "pvecm status" is OK as well.
Maybe one of the other nodes was unintentionally down during the removal and only started afterwards;
I don't know, because I only noticed this behaviour some time later.
Everything works as expected; only the presence of the deleted node unsettles me a little.

Regards
Gerd
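
For reference, the stale entry in the GUI usually corresponds to a leftover directory under /etc/pve/nodes. A sketch of the commonly suggested cleanup (only once you are sure the node will never rejoin; "deletednode" is a placeholder name):

Code:

# make sure the removed node really is gone from the cluster membership first
pvecm nodes

# then remove its leftover configuration directory
rm -rf /etc/pve/nodes/deletednode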

Quorum Disk down causes odd GUI behaviors

I have found in all my clusters (2.x and 3.x) that when the quorum disk goes down for an extended period of time, the Proxmox GUI ends up in a strange state. The little monitor icon next to each VM is black and shows only the VMID, not the description. If I get on any of the clusters and look at clustat, everything is fine: all the VMs are running and everything is good. If I click on one of the VMs in the GUI, it does show a status of "running", I can open a console, and everything is 100% apart from the VMs lacking a description and the monitor icon being black. I also noticed that the GUI reports both nodes as "red". Clustat reports the cluster as quorate and everything seems OK. Once I bring the quorum disk back online, everything comes back to life in the GUI. This is happening across all 10-15 in-house clusters; I haven't had a chance to test the ones out in the field.
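
A diagnostic sketch for comparing what the GUI shows with what the cluster stack itself reports while the quorum disk is down:

Code:

# cluster view from the Red Hat cluster stack and from PVE
clustat
pvecm status

# the VM status shown in the GUI is delivered by pvestatd; restarting it is harmless
# and sometimes refreshes stale icons
service pvestatd restart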

Updating all Proxmox Nodes Simultaneously

I have Proxmox 3.3 cluster, and I've been updating each node individually using the web GUI.

I've noticed that when I update one node, it takes a little while before the GUI will let me update another node. The noVNC console fails to show the update command line if I try to update a node too soon after finishing the previous one.

I'm wondering why a certain amount of time must pass between updating each node. I'm also wondering how people who have many more nodes than me perform node updates. Are there methods for propagating updates to all nodes at once?

For me, it is not too much trouble to update each node individually, but I'm curious how administrators with several nodes administer updates, and I'm also curious as to why I must wait a little while between each node update (via the web GUI).

I'd appreciate your insight. Thanks.
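
One way this is often handled with more nodes (a rough sketch with hypothetical node names; a PVE cluster already distributes root SSH keys between nodes) is simply looping over them via SSH, one node after another:

Code:

# hypothetical node names; assumes key-based root SSH access to all nodes
for node in pve1 pve2 pve3; do
    echo "== updating $node =="
    ssh root@"$node" 'apt-get update && apt-get -y dist-upgrade'
done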

Four node HA cluster on Dell Optiplex Desktops

Hey guys,
Here's what I'm trying to accomplish. I have four Dell Optiplex desktops that have been configured for clustering, which worked just fine. I want to make them an HA cluster, and the fencing has me a little stumped. I followed the Proxmox wikis on both configuring two-node clusters and the specific one on fencing. First, I wanted to know if I could just alter <cman two_node="1" expected_votes="1"> </cman> to read <cman four_node="1" expected_votes="1"> </cman>. Second, all the information I have seen on fencing is for specific equipment, so I'm not sure whether this is possible on a desktop system to begin with. Also, I modified this entry with two more nodes; was that a good choice?
<fencedevices>
<fencedevice agent="fence_ilo" hostname="nodeA.your.domain" login="hpilologin" name="fenceNodeA" passwd="hpilopword"/>
<fencedevice agent="fence_ilo" hostname="nodeB.your.domain" login="hpilologin" name="fenceNodeB" passwd="hpilologin"/>
</fencedevices>

PS: I changed hostname to ipaddr and filled in my addresses, and set the login= and passwd entries to my own credentials, but I did not touch the fencedevice agent entry because I don't know what that should be.
Any advice would be greatly appreciated!!
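
For what it's worth on the first question: two_node is a special case that only applies to clusters of exactly two nodes, and there is no four_node attribute. A four-node cluster normally just drops it, roughly like this (a sketch only):

Code:

<!-- four nodes with one vote each; no two_node special case -->
<cman expected_votes="4"/>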

VM network only works using NAT

Why do my VMs on this Proxmox host only get network connectivity when I use NAT?


After a fresh install this is what I get from ifconfig


Quote:

root@sd-11111:~# ifconfig
eth0 Link encap:Ethernet HWaddr 0c:c4:7a:55:5e:58
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:72285 errors:0 dropped:0 overruns:0 frame:0
TX packets:25831 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:52472377 (50.0 MiB) TX bytes:5320733 (5.0 MiB)


lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:2340 errors:0 dropped:0 overruns:0 frame:0
TX packets:2340 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2696216 (2.5 MiB) TX bytes:2696216 (2.5 MiB)


venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)


vmbr0 Link encap:Ethernet HWaddr 0c:c4:7a:55:5e:58
inet addr:195.154.102.XXX Bcast:195.154.102.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:71007 errors:0 dropped:0 overruns:0 frame:0
TX packets:25095 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:51334834 (48.9 MiB) TX bytes:5280227 (5.0 MiB)


In the UI I never get the bridge to show as active; it always reports the bridge as Active: No.

I add a VM, but I get no network activity if I use bridge mode; I only get network activity in NAT mode. What am I doing wrong? Isn't the bridge supposed to give my VM network access?

I also tried to get dhcpd running, but it doesn't even start: "Starting ISC DHCP server: dhcpd ... check syslog for diagnostics ... failed! failed!"

:confused:
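
A couple of things worth checking on the host (a diagnostic sketch only): whether the physical NIC is actually attached to the bridge, and whether bridged traffic is being filtered by iptables:

Code:

# vmbr0 should list the physical NIC (eth0) plus a tap device for each running bridged VM
brctl show

# if this is 1, bridged traffic is passed through iptables and may be filtered there
sysctl net.bridge.bridge-nf-call-iptables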

Failed Backup stops VM

good morning,

for a few weeks now, I think since I upgraded to 3.3, I have had the problem that when a backup fails it kills the guest entirely, e.g.:
Code:

INFO: Starting Backup of VM 205 (qemu)
INFO: status = running
INFO: update VM 205: -lock backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/dailybackup/dump/vzdump-qemu-205-2014_12_02-02_04_55.vma.lzo'
INFO: started backup task '5c79cab1-a8e2-4f1b-94b4-6444e6fb5e29'
INFO: status: 0% (399114240/80530636800), sparse 0% (4812800), duration 3, 133/131 MB/s
INFO: status: 1% (847642624/80530636800), sparse 0% (7905280), duration 7, 112/111 MB/s
INFO: status: 2% (1664090112/80530636800), sparse 0% (9674752), duration 13, 136/135 MB/s
INFO: status: 3% (2512519168/80530636800), sparse 0% (43184128), duration 20, 121/116 MB/s
INFO: status: 4% (3274702848/80530636800), sparse 0% (52264960), duration 27, 108/107 MB/s
INFO: status: 5% (4139384832/80530636800), sparse 0% (56594432), duration 35, 108/107 MB/s
ERROR: VM 205 not running
INFO: aborting backup job
ERROR: VM 205 not running
ERROR: Backup of VM 205 failed - VM 205 not running

In the event log of the guest that fails, nothing can be seen except the information that the machine stopped working at a specific time (it looks just the same as if you had pulled the power plug).

Before the upgrade to 3.3 everything was fine, without a single failed backup or stopped machine for months.

Code:

pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-34-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-34-pve: 2.6.32-139
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

4-node cluster setup, non-HA.
The backup storage is a ZFS server connected via NFS.

Earlier it wasn't a problem that the four servers were backing up at the same time; for now I think I will have to set a separate backup timeslot for each cluster member to see whether that is the problem.
Are there any known problems with such a setup?

It would therefore be nice if the backup configuration had an option to run only one backup at a time across the whole cluster, rather than one on every cluster member.

thanks
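
Two workarounds that can be sketched until a cluster-wide "one backup at a time" option exists (both are assumptions about what might help here, not verified fixes): stagger the start times per node in /etc/pve/vzdump.cron, and/or cap the backup bandwidth so four parallel jobs cannot saturate the NFS server.

Code:

# /etc/pve/vzdump.cron - illustrative staggered job lines, one per node (times and node names are examples)
0 1 * * * root vzdump --quiet 1 --all 1 --mode snapshot --compress lzo --storage dailybackup --node node1
0 3 * * * root vzdump --quiet 1 --all 1 --mode snapshot --compress lzo --storage dailybackup --node node2

# /etc/vzdump.conf - optionally cap backup bandwidth (value in KB/s) so one job cannot saturate the NFS store
# bwlimit: 51200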