Channel: Proxmox Support Forum

CEPH poor performance (4 nodes) -> config error?

I am in the process of testing a new cluster. It has 4 nodes, each with an identical configuration:
2 x Xeon E5620 processors
32 GB RAM
160 GB SSD for Proxmox VE
3 x 4 TB WD Black WD4003FZEX disks for Ceph
2 x Intel Gigabit NICs, 1 for the main IP and 1 for the storage network

I have created the Ceph cluster and configured all nodes as monitors. Each disk is added as an OSD, for a total of 12 OSDs. The Ceph pool has a size of 3 and pg_num is 512.

I created one KVM guest to benchmark, and I think the performance is poor:

Code:

Run status group 0 (all jobs):
  WRITE: io=5120.0MB, aggrb=15889KB/s, minb=15889KB/s, maxb=15889KB/s, mint=329962msec, maxt=329962msec

Run status group 0 (all jobs):
  READ: io=5120.0MB, aggrb=47242KB/s, minb=47242KB/s, maxb=47242KB/s, mint=110977msec, maxt=110977msec


Ran the benchmarks with:
Code:

fio --max-jobs=1 --numjobs=1 --readwrite=write --blocksize=4M --size=5G --direct=1 --name=fiojob
And:


Code:

fio --max-jobs=1 --numjobs=1 --readwrite=read --blocksize=4M --size=5G --direct=1 --name=fiojob
In between the write and the read I ran:
Code:

echo 3 > /proc/sys/vm/drop_caches
On all hosts and the guest, as per instructions I found on other topics.
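For completeness: before dropping the caches I also flush dirty pages with sync first. The sync step is my own habit and not part of the instructions quoted above.
Code:

sync
echo 3 > /proc/sys/vm/drop_caches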

After this I tried different options in ceph.conf; I added:
Code:

        osd mkfs options xfs = "-f -i size=2048"
        osd mount options xfs = "rw,noatime,logbsize=256k,logbufs=8,inode64,al$
        osd op threads = 8
        osd max backfills = 1
        osd recovery max active = 1
        filestore max sync interval = 100
        filestore min sync interval = 50
        filestore queue max ops = 10000
        filestore queue max bytes = 536870912
        filestore queue committing max ops = 2000
        filestore queue committing max bytes = 536870912

I added these settings under [osd].

Unfortunately this only helped a little, and only with reads:
Code:

Run status group 0 (all jobs):
  WRITE: io=5120.0MB, aggrb=15548KB/s, minb=15548KB/s, maxb=15548KB/s, mint=337206msec, maxt=337206msec

And:
Code:

Run status group 0 (all jobs):
  READ: io=5120.0MB, aggrb=51013KB/s, minb=51013KB/s, maxb=51013KB/s, mint=102775msec, maxt=102775msec


The main issue is that the KVM guest feels sluggish; it is not performing well at all.

Questions:
1. Is this performance expected for the current config?
2. If the answer to 1 is no, what could be the problem? What can we do to get better performance, apart from adding more OSDs?
3. If the answer to 1 is yes, what would be the expected performance if we expand this cluster to 8 nodes with the same config? Double the current performance (the write performance would still be bad!) or more?

Can't log in with GUI but it works with telnet

Hello,
I changed something in the network settings.
Then I had to reboot. After this the server was no longer visible on the network.
On the console I changed the settings again: eth0, IP, network, netmask, gateway.
After this change I could reach the server with telnet and log in.
With the GUI (web browser) I could now also reach it, but when entering the password I got an error:
Login failed. Please try again.
The user name is root and the password is copy-pasted from a file, so I am sure the password is correct.

What could be the reason? Any help or suggestions?

Have a nice day,
Vincent

CLI commands 'qm status' and 'qm list' report an incorrect status if the VM is paused

In my opinion, the CLI command 'qm' has a bug: it reports the incorrect VM status 'running' if the VM is paused.

1) I am logged in as root.

2) Suspend VM 114:
~# qm suspend 114

Proxmox VE Web-GUI reports the correct status 'paused'

3) Query the current status via the CLI:
~# qm status 114
or
~# qm list
Both report the incorrect status 'running'.

But in the monitor it is correct:
~# qm monitor 114
Entering Qemu Monitor for VM 114 - type 'help' for help
qm> info status
VM status: paused
qm> quit

I saw the same effect when power management put the system into sleep mode.

I want to script the VM status query via the CLI, so this should be fixed.

Or is there another explanation?
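As a possible workaround for scripting: the web GUI gets its 'paused' state from the API, so querying the same endpoint with pvesh should expose it as well. This is only a sketch; <nodename> is a placeholder and the exact field names in the returned data may differ between PVE versions, so inspect the output on your own version first.

~# pvesh get /nodes/<nodename>/qemu/114/status/current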

Greetings,
Gordon

Are there non-root restrictions on the execution rights of the Proxmox CLI?

I have a question about a product feature.

Are there non-root restrictions on the execution rights of the Proxmox CLI? I have found nothing on this in the wiki or on the internet.

I've added an additional user 'jenkins' for automation, as described in the wiki. This remote login should not be 'root', especially because the login is done without entering a password, via pre-shared keys.

Since the CLI command 'qm' is in '/usr/sbin', it needs sudo rights.

1) First idea:
Configuration via '/etc/sudoers' with this line:

%jenkins ALL = NOPASSWD: /usr/sbin/qm

This means that users of the group 'jenkins' are allowed to execute the command '/usr/sbin/qm' without entering a password.

But this file does not exist. Is this kind of configuration not provided?
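If /etc/sudoers is missing, it is probably just because the sudo package is not installed on a stock Proxmox/Debian system. A sketch of what I would try (standard Debian tooling, nothing Proxmox-specific):

apt-get install sudo              # ships /etc/sudoers and /etc/sudoers.d/
visudo -f /etc/sudoers.d/jenkins  # add the rule as a drop-in file

with this single line as the file content:

%jenkins ALL = NOPASSWD: /usr/sbin/qm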

2) Second idea: temporary workaround

If it does not work without root privileges, then add 'jenkins' to the group 'root'.

Nevertheless, execution of the command 'qm' is denied:

jenkins@testbed:~$ /usr/sbin/qm --help
please run as root

But I should actually have this permission as a member of the root group. :(

In case it is relevant: in the Proxmox VE web GUI this user 'jenkins' is a member of group 'jenkins' with the group role 'Administrator'.


Hence the question: do the CLI commands work exclusively with the root account?

Thx for answers,
Gordon

Failed snapshot removal

Hi,

Using Ceph and Proxmox VE, removing a VM's snapshot that was taken with the memory snapshot option enabled fails miserably. A normal snapshot without a memory snapshot works splendidly.

The error at the end sounds scary; does it try to remove the base image?

Cheers,
Josef

dpkg -l pve*
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-=======================-================-================-================================================== =
ii pve-cluster 3.0-12 amd64 Cluster Infrastructure for Proxmox Virtual Environm
ii pve-firmware 1.1-2 all Binary firmware code for the pve-kernel
un pve-kernel <none> (no description available)
ii pve-kernel-2.6.32-27-pv 2.6.32-121 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-2.6.32-28-pv 2.6.32-124 amd64 The Proxmox PVE Kernel Image
un pve-kvm <none> (no description available)
ii pve-libspice-server1 0.12.4-3 amd64 SPICE remote display system server library
ii pve-manager 3.2-2 amd64 The Proxmox Virtual Environment
ii pve-qemu-kvm 1.7-6 amd64 Full virtualization on x86 hardware
un pve-qemu-kvm-2.6.18 <none> (no description available)



Removing all snapshots: 100% complete...done.
Removing image: 1% complete...
Removing image: 2% complete...
(... progress lines from 3% through 98% trimmed ...)
Removing image: 99% complete...
Removing image: 2014-07-28 14:26:28.831411 7fc4b7153760 -1 librbd: error removing header: (16) Device or resource busy99% complete...failed.
rbd: error: image still has watchers
TASK ERROR: rbd rm vm-103-disk-4' error: This means the image is still open or the client using it crashed. Try again after closing/unmapping it or waiting 30s for the crashed client to timeout.
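For what it is worth, when I hit "image still has watchers" I check who still has the image open before retrying. A sketch, assuming the default 'rbd' pool and a format 1 image (the header object is named differently for format 2 images):

# list clients still watching the image header object
rados -p rbd listwatchers vm-103-disk-4.rbd
# retry the removal once no watchers are listed
rbd -p rbd rm vm-103-disk-4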

New firewall for VM not working

hi,

I updated my Ceph test servers and the firewall is enabled. On the host it is working fine, but the firewall for the VMs is not. Everything is enabled, the option in Options as well as the rules themselves, but with iptables-save I can't see the new rules. I have stopped and started the machines, etc.
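One thing I would double-check (just a guess from here): the firewall also has to be enabled per network device in the VM config, and pve-firewall can show whether a ruleset is actually generated. A sketch, with vmid 100 as a placeholder and your existing MAC/bridge values kept:

# enable the firewall flag on the VM's NIC (keep your current MAC and bridge)
qm set 100 -net0 virtio=<existing MAC>,bridge=vmbr0,firewall=1

# check whether the firewall is active and what rules it generates
pve-firewall status
pve-firewall compile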

Proxmox VE 3.2 DRBD cluster plus Open vSwitch

Here is what I have:
a fully working 2-NIC cluster, basically following the wiki: http://pve.proxmox.com/wiki/DRBD

What I would like to have: Open vSwitch support added.

I tried to add an Open vSwitch bridge via the GUI for private VM connectivity. Only one VM should act as firewall/router with internet access for the whole cluster, using "vmbr0".
But this failed; afterwards the cluster was not usable anymore.

Setup before creating OVS Bridge:

/etc/network/interfaces
# primary interface
auto eth0
iface eth0 inet static
address 148.XXX.XXX.XXX
netmask 255.255.255.224
gateway 148.XXX.XXX.XXX
broadcast 148.XXX.XXX.XXX
up route add -net 148.XXX.XXX.XXX netmask 255.255.255.224 gw 148.XXX.XXX.XXX eth0

# bridge for routed communication (Hetzner)
# external connection router/firewall-vm, classic Linux Bridge
auto vmbr0
iface vmbr0 inet static
address 148.XXX.XXX.XXX
netmask 255.255.255.248
bridge_ports none
bridge_stp off
bridge_fd 0

# internal connection drbd / cluster
auto eth1
iface eth1 inet static
address 172.24.10.1
netmask 255.255.255.0
Network after adding OVS Bridge with private IP via GUI and a reboot (eth0 and vmbr0 were untouched):

/etc/network/interfaces

...
allow-vmbr1 eth1
iface eth1 inet static
address 172.24.10.1
netmask 255.255.255.0
ovs_type OVSPort
ovs_bridge vmbr1

auto vmbr1
iface vmbr1 inet static
address 192.168.20.1
netmask 255.255.255.0
ovs_type OVSBridge
ovs_ports eth1

Now node 1 can no longer ping node 2; the cluster (connected via 172.24.10.1 and 172.24.10.2) is down.
However, pinging the other OVS bridge IP (192.168.20.2) is possible.
AFAIK, something is wrong here:
once "iface eth1 inet static" is declared, you cannot use "ovs_ports eth1" in "vmbr1" anymore.
Maybe it would work to change "iface eth1 inet static" to "iface int1 inet static" and add "ovs_ports eth1 int1" to "vmbr1"; see the sketch below.
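For comparison, the layout I would expect to work (only a sketch, untested on this cluster) puts no IP on eth1 at all and moves the 172.24.10.1 cluster address onto the OVS bridge itself (an OVSIntPort holding the address would be the other common variant):

/etc/network/interfaces

# eth1 carries no IP; it is only an uplink port of the OVS bridge
allow-vmbr1 eth1
iface eth1 inet manual
ovs_type OVSPort
ovs_bridge vmbr1

# the cluster/DRBD address lives on the bridge
auto vmbr1
iface vmbr1 inet static
address 172.24.10.1
netmask 255.255.255.0
ovs_type OVSBridge
ovs_ports eth1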

Is there a howto for adding Open vSwitch support to a 2-node DRBD cluster? That would be much appreciated.

PM 3.1 Clustering - Reinstalling after master node failed

I have a PM 3.1 cluster with about 5 member nodes. Over the weekend, the master server for the cluster died and we had to wipe and re-install PM on it. It's back online now, but it's no longer part of the cluster. I did a cluster create on it, and that's done, but none of the child nodes are connected to it.

What do I need to do in order to re-attach a child node to a re-installed master node in a cluster? The cluster name is the same, but of course it has regenerated a new SSL key on re-installation.

Thanks in advance for any assistance.

Myles

Proxmox crashes / restarts after UDP: bad checksum.

Hello,
for the last 2 days my Proxmox host system has been crashing and going down for no apparent reason.
After taking a look inside the kernel log, I saw this:

Code:

Jul 27 06:03:33 prox01 kernel: UDP: bad checksum. From 92.97.115.148:8466 to "my server ip":6881 ulen 70
Jul 27 06:05:29 prox01 kernel: imklog 4.6.4, log source = /proc/kmsg started.
Jul 27 06:05:29 prox01 kernel: Initializing cgroup subsys cpuset
Jul 27 06:05:29 prox01 kernel: Initializing cgroup subsys cpu

This always happens a few minutes before the server restarts / crashes.
I googled this and found some posts about an exploit, so I guess that is what is causing the problem.
Is there any solution to get rid of it?
I am using Proxmox v2.3-13.
Would upgrading to the latest version solve this problem?

I am thankful for any advice/help.

ceph tuning

I'm going to build a new Ceph test cluster and have some tuning questions.

We'll be using a 4-disk RAID-10 on each node plus a hot spare. There will be one OSD per node. We do not need super fast I/O; our priority is high availability plus consistently good keyboard response. We'll use this SSD for the journal: Intel DC S3700 Series 200GB.

Does anyone know how we could set these (see the sketch below)?

* a replica count of 2
* OSD noout set permanently
* "mon osd downout subtree limit" set to "host"



thank you and best regards
Rob





SSD on RAID1 ... BEWARE of disk degradation over time!

Hello!

I've been using Proxmox on a dedicated server since Sept. 2012. My hardware includes 16 GB of RAM, two Intel 180 GB SSD 520 disks in RAID1 (using a 3ware 9650SE-2LP card) and an Intel 1230v2 quad-core Xeon processor.

Everything was lightning fast at first. Fsyncs/second were about 2500 ... Also I had about 70 GB of free space on the SSDs.

Recently I created some VM backups and almost filled up the SSD. Then I deleted those backup files once I had downloaded them to local storage.

I started noticing some slowdowns ... particularly in a Windows 2003 server I am running as a KVM guest inside my Proxmox: copying or deleting files inside that virtual machine was painfully slow.

Finally I started issuing pveperf commands to monitor my server, only to find a drastic decrease in the FSYNCS/SECOND value, which now oscillates between 23 and 40.

Also, the Proxmox status shows a high IOWAIT (I recall IOWAIT being virtually ZERO before things started going downhill).

I did my research and found out that hardware RAID1 cards do not pass through the TRIM commands that Linux may be sending, so you depend entirely on the internal garbage collection capabilities of the SSD itself, which usually only triggers when the SSD is "idle" (which won't happen under "Proxmox hammering", even with one virtual machine).

Hence, I realized that my SSD RAID1 has a hard time consolidating free space whenever Proxmox asks for a write operation. Maybe I am wrong. I would very much like to provide more information or perform some tests if anyone wants me to ...

I am trying to get my dedicated server provider to change the RAID1 disks to Intel S3700s, which seem to be much more resilient to write operations and also have a better garbage collection system that (if I understood correctly) runs constantly in the background (it does not wait for the SSD to be "idle" to perform its magic).

Also, this may be a "heads up" for SSD / RAID1 guys ... especially if those arrays have been stuffed with "desktop grade" SSDs like the Intel 520 series, which is my case. It may take months, or over a year, but IOWAIT will catch ya!

reset ceph cluster

We have a 3-node test Ceph cluster.

I'd like to redo Ceph from scratch.

Is there a simple way to destroy the Ceph cluster? I've tried to remove all OSDs, but one is stuck; I can't remove it using the PVE CLI:
Code:

0      1.82                    osd.0  DNE
and ceph -s :
Code:

ceph -s
    cluster 4267b4fe-78bb-4670-86e5-60807f39e6c1
    health HEALTH_ERR 435 pgs degraded; 45 pgs incomplete; 4 pgs inconsistent; 4 pgs recovering; 480 pgs stale; 45 pgs stuck inactive; 480 pgs stuck stale; 480 pgs stuck unclean; recovery 253909/390921 objects degraded (64.951%); 19/130307 unfound (0.015%); 25 scrub errors; no osds; 1 mons down, quorum 0,1,2 0,2,3
    monmap e16: 4 mons at {0=10.11.12.41:6789/0,1=10.11.12.182:6789/0,2=10.11.12.42:6789/0,3=10.11.12.46:6789/0}, election epoch 3844, quorum 0,1,2 0,2,3
    osdmap e8224: 0 osds: 0 up, 0 in
      pgmap v4085336: 480 pgs, 3 pools, 498 GB data, 127 kobjects
            0 kB used, 0 kB / 0 kB avail
            253909/390921 objects degraded (64.951%); 19/130307 unfound (0.015%)
                  1 stale+active+degraded+inconsistent
                271 stale+active+degraded+remapped
                  1 stale+active+recovering+degraded
                  3 stale+active+degraded+remapped+inconsistent
                  45 stale+incomplete
                  3 stale+active+recovering+degraded+remapped
                156 stale+active+degraded

So is there a way to remove a Ceph setup, or should we just reinstall PVE on the 3 hosts?
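In case it helps, the generic Ceph commands for forcing a dead OSD out of the maps usually look like this (using osd.0 from the output above; a sketch of the standard sequence, not PVE-specific tooling):
Code:

ceph osd out 0
ceph osd crush remove osd.0   # remove it from the CRUSH map
ceph auth del osd.0           # drop its auth key
ceph osd rm 0                 # remove the OSD id itself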

VM backup size increases...

Hi everyone,


I have a virtual machine and I do a vzdump of it every day with LZO compression.


But there is a problem: every day the size of the backup increases considerably.
E.g.: Monday -> 1.42 GB, Tuesday -> 2.20 GB, Wednesday -> 2.99 GB, Thursday -> 3.71 GB, Saturday -> 4.49 GB...


No files were added in the meantime.


Has anyone had the same problem and solved it?


Thanks,


Alex.

Proxmox and GlusterFS

Hello, dear community.

I've been using Proxmox for a while (5 years total :rolleyes: atm) and have now decided to attach GlusterFS storage to the Proxmox nodes.

Configuration of GlusterFS: a replicated volume of two bricks on different servers. The servers are connected over a 10G network, using LSI hardware RAID.
Configuration of the GlusterFS storage on Proxmox: mounted using the Proxmox GUI.
The problem:
after restarting the server that was used in the Proxmox GUI to add the GlusterFS volume, the guest qemu VM drops these messages into its logs:
Code:

[ 3462.988707] end_request: I/O error, dev vda, sector 25458896
[ 3462.989474] end_request: I/O error, dev vda, sector 25458896
[ 3763.435225] end_request: I/O error, dev vda, sector 25458896
[ 3987.913744] end_request: I/O error, dev vda, sector 26304696
[ 3987.917413] end_request: I/O error, dev vda, sector 26304720
[ 3987.917716] end_request: I/O error, dev vda, sector 26304752
[ 3987.917728] end_request: I/O error, dev vda, sector 26304792
[ 3987.917728] end_request: I/O error, dev vda, sector 26304848
[ 3987.917728] end_request: I/O error, dev vda, sector 26304880
[ 3987.917728] end_request: I/O error, dev vda, sector 26304896
[ 3987.917728] end_request: I/O error, dev vda, sector 26304912
[ 3987.917728] end_request: I/O error, dev vda, sector 26304960
[ 3987.917728] end_request: I/O error, dev vda, sector 26304992
[ 3987.917728] end_request: I/O error, dev vda, sector 26297448
[ 3987.917728] end_request: I/O error, dev vda, sector 26297408
[ 3987.917728] end_request: I/O error, dev vda, sector 26297384
[ 3987.917728] end_request: I/O error, dev vda, sector 26297312
[ 3987.917728] end_request: I/O error, dev vda, sector 26297272
[ 3987.921830] end_request: I/O error, dev vda, sector 26304696
[ 3997.914129] end_request: I/O error, dev vda, sector 17097768
[ 3997.914982] end_request: I/O error, dev vda, sector 17097768
[ 3997.915640] end_request: I/O error, dev vda, sector 17097768

and acts like this:

Code:

cat: /var/log/syslog: Input/output error
-bash: /sbin/halt: Input/output error

I know that the GlusterFS client basically only needs one server to fetch the volume configuration from, but it seems like Proxmox doesn't know about the second server?

storage config:
Code:

glusterfs: FAST-HA-150G
        volume HA-Proxmox-TT-fast-150G
        path /mnt/pve/FAST-HA-150G
        content images,rootdir
        server stor1
        nodes pve1
        maxfiles 1

vmconfig:
Code:

#debian7
bootdisk: virtio0
cores: 2
cpu: host
ide2: none,media=cdrom
memory: 512
name: cacti
net0: virtio=42:01:8D:5A:2C:6C,bridge=vmbr0
onboot: 1
ostype: l26
sockets: 1
virtio0: FAST-HA-150G:116/vm-116-disk-1.raw,size=17G

and pveversion:
Code:

root@pve1:~# pveversion -v
proxmox-ve-2.6.32: 3.2-132 (running kernel: 2.6.32-29-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-31-pve: 2.6.32-132
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

The GlusterFS server version is 3.4.4.



I was looking for answers on the forums, Google and the wiki, but was not able to find one.

http://www.jamescoyle.net/how-to/533...unt-in-proxmox says that one has to add the volume manually in order to use both servers.
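For reference, newer libpve-storage-perl versions have a 'server2' option for a backup volfile server. I am not sure whether it is already available in 3.0-19, and 'stor2' below is just a placeholder for the second GlusterFS server, so treat this as a sketch:
Code:

glusterfs: FAST-HA-150G
        volume HA-Proxmox-TT-fast-150G
        path /mnt/pve/FAST-HA-150G
        content images,rootdir
        server stor1
        server2 stor2
        nodes pve1
        maxfiles 1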

Could someone shed some light on this problem?

[Added later]
It seems like someone has the same problem:
http://forum.proxmox.com/threads/182...er-node-reboot

HELP with networking on a KVM CentOS guest

Hi guys,
I have been trying to get this working for a few hours and need some help please.
I have 2 IPs from my provider.

I am unable to ping out from the VM.
Is there someone who can help me out?


thanx

Migrate cluster from broadcast to multicast

I have a working cluster that uses broadcast for cluster communication. This method, plus the default cluster conf and the fact that there are 7 nodes, probably leads to many problems: nodes get fenced for no particular reason and HA VMs get restarted frequently and randomly, problems like these 2 posts:
http://forum.proxmox.com/threads/974...a8-ca9-caa-cab
http://forum.proxmox.com/threads/136...ed-erratically
I don't have hardware or network problems.

This past week I tried some conf changes and kept it to no more than 5 nodes, with no result.
The changes I made to the default conf:
Code:

<cman broadcast="yes" keyfile="/var/lib/pve-cluster/corosync.authkey"/>
<totem token="54000" window_size="150"/>
<rm status_child_max="20" status_poll_interval="20">

I spoke with my provider and we found a way to use multicast for my cluster communication.

So now I want to check whether the problems get solved with multicast.
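Before switching I would first verify that multicast actually works between the nodes; as far as I know the usual test is omping, run simultaneously on every node with all node names listed (node1 ... node7 are placeholders here):
Code:

omping -c 600 -i 1 -q node1 node2 node3 node4 node5 node6 node7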

This is a major change and I think I must restart the whole cluster to get the new conf to all machines. I guess that after the change, if I reboot one node it will never find quorum, yes? The remaining cluster nodes will still communicate via broadcast. So do I have to stop all machines and boot them one by one? I fear that I will never get quorum when they boot again.

Is this the proper way to make this change?

Also, should I now delete the token conf changes? :confused:

kvm hotplug info

Hi

I've learned about the "hotplug" configuration setting (http://pve.proxmox.com/wiki/Hotplug_...,cpu,memory%29).

Is there (or is there planned) any web GUI control for this switch?
Or could this setting at least be reflected in the GUI, so you can see whether a VM has it enabled without having to manually check its .conf?
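In the meantime, checking or setting it from the CLI works; a sketch, with vmid 100 as a placeholder (older versions take a boolean, newer ones a comma-separated list of subsystems):

qm config 100 | grep hotplug
qm set 100 -hotplug 1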

And what is the reason for this setting not being enabled by default?
I've read that the guest has to support it, but could enabling it by default, at least for new KVM guests, do any harm?

Thanks,
Marco

Problems with sparse files?

Hello list,

my English is not good, sorry!

I have a Proxmox 3.2 server. The VM is set up with Ubuntu 14.04.1.
The Ubuntu server is a Bacula server.

I deleted some backups on that server. Unfortunately, the disk size did not decrease.

My disk image is qcow2.

How can it be arranged so that data deleted inside the VM also frees space in the image?

There is also a Windows 2012R2 "file server" set up on Proxmox.
If the disk image does not shrink dynamically, my disk will eventually be full.

Is there a solution?
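What I would try (only a sketch, and the offline step needs the VM to be shut down): zero out the free space inside the guest so the deleted blocks really contain zeros, then recreate the image with qemu-img, which leaves the zeroed areas unallocated. The file names below are placeholders, and be careful with the dd step since it temporarily fills the guest filesystem:

# inside the guest: fill free space with zeros, then remove the file
dd if=/dev/zero of=/zerofile bs=1M; rm /zerofile

# on the host, with the VM stopped:
qemu-img convert -O qcow2 vm-100-disk-1.qcow2 vm-100-disk-1-compact.qcow2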

Regards hackmann

review of symmcom for proxmox support

$
0
0
Hello,

We are considering using symmcom for Proxmox support (found them through the Proxmox website under resellers).

Any comments or reviews of using them for support from this community?

Thank you.

IBM System Storage as shared storage?

I've inherited a legacy VMware install I'd like to convert over to Proxmox. It's three IBM x3650 M3s, using an IBM System Storage DS3500 as their shared backend, all connected via SAS cables. Each x3650 has 2 SAS ports in it, which feed to each of the controllers on the DS3500.

I'm wondering if there is any way to use the DS3500 with Proxmox. I've never touched one before, but it looks like a big JBOD array. If I'm not mistaken, wouldn't that just show up as local storage to the machines, and therefore not be available for HA and the like?

If budget was no object, I'd throw in a ZFS box on NFS and be done with it, but sadly, not an option.