Channel: Proxmox Support Forum

pvedaemon worker with pveproxy producing higher than expected CPU utilization

Hello,

With the most recent updates applied to Proxmox 3.3, both pvedaemon and pveproxy are consistently consuming roughly 6 percent CPU or more.

I looked in the syslog and there are a number of worker processes firing.

This server should be at near-zero CPU utilization with only brief peaks; it is running light-duty Apache Debian 7 CTs only.

I compared this to an identically configured server that has not been upgraded yet and runs two good-sized mail servers hosting lots of people, and that entire system is running a load of about 0.02.

Any place I can look to get an idea?

Thanks in advance


PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
40635 root 20 0 287m 62m 6488 S 5.6 0.2 0:13.85 pvedaemon worke
50790 www-data 20 0 285m 60m 4504 S 1.0 0.2 0:03.75 pveproxy worker
56026 root 20 0 23536 2000 1192 R 1.0 0.0 0:00.13 top


Syslog

Feb 15 09:28:27 exxxxx pvedaemon[2738]: worker exit
Feb 15 09:28:27 exxxxx pvedaemon[2737]: worker 2738 finished
Feb 15 09:28:27 exxxxx pvedaemon[2737]: starting 1 worker(s)
Feb 15 09:28:27 exxxxx pvedaemon[2737]: worker 49717 started
Feb 15 09:29:47 exxxxx pveproxy[32647]: worker exit
Feb 15 09:29:47 exxxxx pveproxy[2765]: worker 32647 finished
Feb 15 09:29:47 exxxxx pveproxy[2765]: starting 1 worker(s)
Feb 15 09:29:47 exxxxx pveproxy[2765]: worker 50790 started
Feb 15 09:30:19 exxxxx pveproxy[35329]: worker exit
Feb 15 09:30:19 exxxxx pveproxy[2765]: worker 35329 finished
Feb 15 09:30:19 exxxxx pveproxy[2765]: starting 1 worker(s)
Feb 15 09:30:19 exxxxx pveproxy[2765]: worker 51190 started


Output of pveversion
proxmox-ve-2.6.32: 3.3-147 (running kernel: 2.6.32-37-pve)
pve-manager: 3.3-19 (running version: 3.3-19/c4c740ea)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-33-pve: 2.6.32-138
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-37-pve: 2.6.32-147
pve-kernel-2.6.32-17-pve: 2.6.32-83
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-34-pve: 2.6.32-140
pve-kernel-2.6.32-31-pve: 2.6.32-132
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-18-pve: 2.6.32-88
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-16
qemu-server: 3.3-17
pve-firmware: 1.1-3
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-30
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-12
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
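
One way to see what a busy worker is actually doing (a sketch; the PID comes from the top output above and will differ on your system, and strace needs to be installed):

Code:

# attach to the busy pvedaemon worker for a few seconds and look at what it loops on
timeout 10 strace -f -tt -p 40635 2>&1 | tail -n 100
# check whether some task or API client is hitting the node repeatedly
tail -n 50 /var/log/pve/tasks/index
grep pvedaemon /var/log/syslog | tail -n 50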

after update today it comes a grub-pc question

Hi all,
we updated our nodes today and got the following option from grub (screenshot: Unbenannt.png). What do we need to select here:
all partitions
or only pve-root partition
or pve-root and ccis partition?

Any idea? Thanks and best regards
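
For what it's worth, grub-pc normally wants the whole boot disk (its MBR) rather than individual partitions. A sketch of how to check which disk holds the Proxmox installation and re-run the selection (the device names are only examples):

Code:

# list disks and partitions to identify the boot disk
parted -l
# re-run the grub-pc device selection and pick that disk (e.g. /dev/sda or /dev/cciss/c0d0)
dpkg-reconfigure grub-pc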

Computer Webcam for ProxmoxVM

Hello,

I was wondering if it is possible to allow a VM to access my computer's webcam?

Thanks
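
If it is a USB webcam plugged into the Proxmox host itself, USB passthrough may be an option (a sketch; the vendor:product ID 046d:0825 and VM ID 100 are only examples, and it assumes this qemu-server version supports the usbN: host= setting):

Code:

# find the webcam's vendor:product ID on the host
lsusb
# pass that USB device through to VM 100
qm set 100 -usb0 host=046d:0825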

Burn treatment

Third-degree burns destroy the skin covering and affect the subcutaneous tissue. The victim feels no pain because the nerve endings are damaged. Charring is occasionally observed on the burned areas. After a third-degree burn (how to treat a burn at home), a scar remains. The more extensive such a burn, the more


Simple network setup in KVM

Hi, I am trying to do something simple: put 2 NICs in the VM, each on a different subnet, for example

eth0 = 200.62.X.X
eth1 = 10.10.X.X

The /etc/network/interfaces file inside the KVM guest:

Code:

auto eth0
iface eth0 inet static
        address 200.62.X.X
        netmask 255.255.255.240
        network 200.62.X.X
        broadcast 200.62.X.X
        gateway 200.62.X.X
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 200.62.X.X
        dns-search domain.net


auto eth1
iface eth1 inet static
        address 10.10.1.9
        netmask 255.255.255.0
        bridge_ports vmbr0
        bridge_stp off
        bridge_fd 0

/etc/network/interfaces on the node:

Code:

auto vmbr0
iface vmbr0 inet static
        address  10.10.1.2
        netmask  255.255.255.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0


auto vmbr1
iface vmbr1 inet static
        address  200.62.X.X
        netmask  255.255.255.240
        gateway  200.62.X.X
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0


In the past this config worked, but now when I try:
Code:

ifup eth1
Cannot find device "eth1"
Failed to bring up eth1.
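
For what it's worth, two things stand out (a sketch of how one might check, with VM ID 100 as an example). "Cannot find device eth1" usually just means the guest has no second virtual NIC defined in its VM hardware, and the bridge_ports/bridge_stp/bridge_fd lines only belong in the host's bridge stanzas, not in a guest interface:

Code:

# on the node: see which virtual NICs the VM actually has
qm config 100 | grep ^net
# add a second NIC on the internal bridge if net1 is missing
qm set 100 -net1 virtio,bridge=vmbr0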

VZDUMP backup causes KVM/QEMU VMs to shut down if backup storage is full

Dear All,

I think I have discovered an issue with the VZdump backups which can cause a lot of headaches.

If a backup job includes KVM/QEMU guests with the backup mode set to SNAPSHOT and the backup storage runs out of space during the run, the backup process continues and all of the KVM/QEMU VMs in the backup schedule end up shut down.

Container backups that fail because the backup storage is full, by contrast, leave the containers running.

Here is a sanitised snippet from the backup log.


INFO: gzip: stdout: No space left on device
ERROR: Backup of VM 100 failed - command '(cd /mnt/vzsnap0/private/100;find . '(' -regex '^\.$' ')' -o '(' -type 's' -prune ')' -o -print0|sed 's/\\/\\\\/g'|tar cpf - --totals --sparse --numeric-owner --no-recursion --one-file-system --null -T -|gzip) >/mnt/pve/Backup-Storage/dump/vzdump-openvz-100-2015_02_14-23_59_02.tar.dat' failed: exit code 1
cp: closing `/mnt/pve/Backup-Storage/dump/vzdump-openvz-100-2015_02_14-23_59_02.log': No space left on device
INFO: Starting Backup of VM 104 (openvz)
INFO: CTID 104 exist mounted running
INFO: status = running
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating lvm snapshot of /dev/mapper/pve-data ('/dev/pve/vzsnap-host1-0')
INFO: /dev/sdc1: read failed after 0 of 4096 at 103743488: Input/output error
INFO: /dev/sdc1: read failed after 0 of 4096 at 103800832: Input/output error
INFO: /dev/sdc1: read failed after 0 of 4096 at 0: Input/output error
INFO: /dev/sdc1: read failed after 0 of 4096 at 4096: Input/output error
INFO: Logical volume "vzsnap-host1-0" created
INFO: creating archive '/mnt/pve/Backup-Storage/dump/vzdump-openvz-104-2015_02_14-23_59_39.tar.gz'
INFO: gzip: stdout: No space left on device
ERROR: Backup of VM 104 failed - command '(cd /mnt/vzsnap0/private/104;find . '(' -regex '^\.$' ')' -o '(' -type 's' -prune ')' -o -print0|sed 's/\\/\\\\/g'|tar cpf - --totals --sparse --numeric-owner --no-recursion --one-file-system --null -T -|gzip) >/mnt/pve/Backup-Storage/dump/vzdump-openvz-104-2015_02_14-23_59_39.tar.dat' failed: exit code 1
cp: closing `/mnt/pve/Backup-Storage/dump/vzdump-openvz-104-2015_02_14-23_59_39.log': No space left on device
INFO: Starting Backup of VM 111 (openvz)
INFO: CTID 111 exist mounted running
INFO: status = running
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating lvm snapshot of /dev/mapper/pve-data ('/dev/pve/vzsnap-host1-0')
INFO: /dev/sdc1: read failed after 0 of 4096 at 103743488: Input/output error
INFO: /dev/sdc1: read failed after 0 of 4096 at 103800832: Input/output error
INFO: /dev/sdc1: read failed after 0 of 4096 at 0: Input/output error
INFO: /dev/sdc1: read failed after 0 of 4096 at 4096: Input/output error
INFO: Logical volume "vzsnap-host1-0" created
INFO: creating archive '/mnt/pve/Backup-Storage/dump/vzdump-openvz-111-2015_02_15-00_00_17.tar.gz'
INFO: gzip: stdout: No space left on device
ERROR: Backup of VM 111 failed - command '(cd /mnt/vzsnap0/private/111;find . '(' -regex '^\.$' ')' -o '(' -type 's' -prune ')' -o -print0|sed 's/\\/\\\\/g'|tar cpf - --totals --sparse --numeric-owner --no-recursion --one-file-system --null -T -|gzip) >/mnt/pve/Backup-Storage/dump/vzdump-openvz-111-2015_02_15-00_00_17.tar.dat' failed: exit code 1
cp: closing `/mnt/pve/Backup-Storage/dump/vzdump-openvz-111-2015_02_15-00_00_17.log': No space left on device
INFO: Starting Backup of VM 113 (qemu)
INFO: status = running
INFO: update VM 113: -lock backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/Backup-Storage/dump/vzdump-qemu-113-2015_02_15-00_00_52.vma.gz'
ERROR: client closed connection
INFO: aborting backup job
ERROR: VM 113 not running
ERROR: Backup of VM 113 failed - client closed connection
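
One possible mitigation until the underlying behaviour is fixed (a sketch only): a vzdump hook script that aborts the whole job at job-start when the backup storage is nearly full. It assumes the DUMPDIR environment variable that vzdump exports to hook scripts, an arbitrary 90% threshold, and that the script is wired up via the script option in /etc/vzdump.conf (or vzdump --script):

Code:

#!/bin/bash
# abort the backup job if the target storage is almost full
phase="$1"
if [ "$phase" = "job-start" ] && [ -n "$DUMPDIR" ]; then
    used=$(df -P "$DUMPDIR" | awk 'NR==2 {gsub("%","",$5); print $5}')
    if [ "$used" -ge 90 ]; then
        echo "backup storage $DUMPDIR is ${used}% full - aborting backup job" >&2
        exit 1
    fi
fi
exit 0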

Seeking advice for vm with large amounts of data

I'm setting up a Linux box running Alfresco, and I anticipate 10 terabytes or so worth of data being added in the next month or so. I'm looking for opinions on setting up the main storage: directly mount an NFS share from my NAS inside the guest, or set up virtual disks (vmdk's) on LVM? Pros and cons appreciated.

Manual entry missing from ceph.conf during Proxmox node reboot

I added an MDS server manually to the Proxmox+Ceph cluster, along with some tuning under the [client] section in ceph.conf. Oddly, every time I restart a node, it seems to remove those entries from ceph.conf, but any entries under the [global] section seem to stay.

How can I prevent the Proxmox node from removing those manual entries, or is there no way to do that at the moment?

kernel 3.10: missing module (block device) cciss

Hello all !

In kernel 3.10 the cciss module for the older HP Smart Array controllers is missing.
I know that the cciss block device module was replaced by the hpsa module, but hpsa natively supports only controllers from the HP SA P212, P410, etc. onwards, not the old controllers (like the P200, P400, etc.).
I've tried adding hpsa_allow_any=1 (https://www.kernel.org/doc/Documentation/scsi/hpsa.txt) on the grub command line, adding hpsa directly to the initramfs and to the module autoload, but the device is not present (/dev/sda is missing).
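
For reference, the usual way to apply that parameter persistently looks roughly like this (a sketch; whether hpsa will actually drive a P200/P400 even with hpsa_allow_any=1 is exactly the open question here):

Code:

# set the module parameter via modprobe.d so it also applies inside the initramfs
echo "options hpsa hpsa_allow_any=1" > /etc/modprobe.d/hpsa.conf
# make sure the module itself is included in the initramfs
echo hpsa >> /etc/initramfs-tools/modules
update-initramfs -u -k all
# alternatively, via the kernel command line in /etc/default/grub:
#   GRUB_CMDLINE_LINUX="hpsa.hpsa_allow_any=1"
# followed by update-grub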

Thanks

Luca

Disable Network Device

Hi All,

Is it possible to disable a network device, or do I need to comment it out in the VM .conf file?

Thanks,
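
In case it helps, a sketch of the remove/re-add route from the CLI (VM ID 100, the MAC and the bridge are only examples; whether this PVE version also offers a disable-without-removing option such as link_down I am not certain):

Code:

# note the current NIC definition (model, MAC, bridge)
qm config 100 | grep ^net0
# remove the NIC from the VM
qm set 100 -delete net0
# later, add it back with the same MAC so the guest keeps the same interface name
qm set 100 -net0 virtio=DE:AD:BE:EF:00:01,bridge=vmbr0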

grub2 update problems (GUI only), leading to not correctly installed packages

Doing a package upgrade via the GUI does not work in some situations, due to the latest grub2 updates. The problem has already been found and fixed, but the fix is in the new packages, so installing them via the GUI can still run you into this issue.

Best way to work around the issue:
do NOT use the GUI for upgrading to the latest packages (from the pvetest and pve-no-subscription repos); just update/upgrade via the CLI (apt-get update && apt-get dist-upgrade).

If you already upgraded via the GUI, the window disappeared and you are left with unconfigured packages:
check your package status via 'pveversion -v' - if you see issues like "not correctly installed" packages, do the following:

> dpkg --configure -a

This command will probably not run, as there are background processes blocking access. You need to manually "kill" those processes (check your process list with 'ps aux | less' and kill them), for example as sketched below.
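
A hypothetical example of what that can look like (the grep pattern and the PIDs are placeholders; ignore the grep process itself in the output):

> ps aux | grep -E 'dpkg|apt-get'
> kill 12345 12346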

afterwards run again:

> dpkg --configure -a

This will install and configure all packages correctly, except for the grub2 installation in your MBR. In order to install grub2, you need to know the device where your Proxmox VE is installed; in most cases this is /dev/sda.

You can check your list of devices (hard disks) by analysing the output of:

> parted -l

Now install grub2 into your MBR (in most cases to /dev/sda):

> grub-install /dev/sda

Finally, reboot your host to activate the new kernel.

DRBD with SSD

Hi team,

I am new to Proxmox and this forum.
Is it possible to configure DRBD 8 between two SSDs?

DRBD in cluster

Hi,
What are the maximum and minimum numbers of nodes for configuring DRBD in a Proxmox cluster?

IPv6 bridging issue with latest pve-kernel-2.6.32-37-pve

(Whoops, just saw the dedicated PVE Networking forum section, sorry for the misplaced post…)

Hi Proxmox forums!

For my first post here, I'd like to report a rather peculiar issue (bug?) I hit on two separate Proxmox servers using the pve-no-subscription repository.

The first server is an online.net Dedibox XC with their custom PVE installation (nothing more than a base Debian with Proxmox installed afterwards, really), and the second server is a laptop with a fresh install of PVE 3.3.
I compared both servers at the dpkg level (dpkg -l) and got nearly identical results, so I consider them both valid Proxmox installations.

All VMs are configured with virtio devices (both block and net), running 2 cores with cpu=host.

Last Friday, I ran apt-get dist-upgrade on both of them and decided to reboot in order to ensure they were running the latest pve-kernel and that the VMs were running the latest qemu-kvm code.
All seemed right, except that 5 minutes after bootup, all low-network-activity VMs were unreachable over IPv6. IPv4 was OK.

Tcpdump inside a VM reveals nothing except ICMPv6 messages, without responses from either solicited neighbors or routers.
Tcpdumping the vmbr or the tap interfaces reveals nothing more.

Here is some info on the package pulling in the culprit kernel:

Code:

Package: proxmox-ve-2.6.32
Priority: optional
Section: admin
Maintainer: Proxmox Support Team <support@proxmox.com>
Architecture: all
Version: 3.3-147
Replaces: proxmox-ve, pve-kernel, proxmox-virtual-environment
Provides: proxmox-virtual-environment
Depends: libc6 (>= 2.7-18), pve-kernel-2.6.32-37-pve, pve-firmware, pve-manager, qemu-server, pve-qemu-kvm, openssh-client, openssh-server, apt, vncterm, vzctl (>= 3.0.29)

Please note that both kernels pve-kernel-2.6.32-34-pve and pve-kernel-3.10.0-7-pve do not show this faulty behavior. IPv6 bridging is OK with both of them.

I kept updating my systems during the weekend, since some packages had several updates (pve-qemu-kvm), rebooting afterwards. Nothing was fixed with pve-kernel-2.6.32-37-pve.
I decided to run pve-kernel-3.10.0-7-pve instead, since I don't need OpenVZ containers.
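
A couple of diagnostic angles that may be worth checking while on the affected kernel (a sketch; vmbr0 and tap100i0 are example interface names, and the multicast-snooping check is only a hunch, since broken bridge multicast snooping is a classic way to lose IPv6 neighbor discovery):

Code:

# watch neighbor discovery on the bridge and on one guest's tap device
tcpdump -i vmbr0 -n icmp6
tcpdump -i tap100i0 -n icmp6
# check whether multicast snooping is enabled on the bridge
cat /sys/class/net/vmbr0/bridge/multicast_snooping
# the host's view of IPv6 neighbors
ip -6 neigh show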

Thanks for reading through this!

How to advance version from 3.3-1 to latest

Hello forum,

we have one Proxmox cluster installed and functionally OK at version 3.3-1, as per the following pveversion -v output:

Code:

root@c00:~# pveversion -v
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
root@c00:~#

Although we have run:
Code:

root@c00:~# apt-get update
Hit http://ftp.it.debian.org wheezy Release.gpg
Hit http://ftp.it.debian.org wheezy Release   
Hit http://security.debian.org wheezy/updates Release.gpg           
Hit http://security.debian.org wheezy/updates Release               
Hit http://ftp.it.debian.org wheezy/main amd64 Packages             
Hit http://ftp.it.debian.org wheezy/contrib amd64 Packages         
Hit http://ftp.it.debian.org wheezy/contrib Translation-en       
Hit http://security.debian.org wheezy/updates/main amd64 Packages
Hit http://ftp.it.debian.org wheezy/main Translation-en         
Hit http://security.debian.org wheezy/updates/contrib amd64 Packages
Ign https://enterprise.proxmox.com wheezy Release.gpg           
Hit http://security.debian.org wheezy/updates/contrib Translation-en
Hit http://security.debian.org wheezy/updates/main Translation-en
Ign https://enterprise.proxmox.com wheezy Release
Hit http://ceph.com wheezy Release.gpg     
Hit http://ceph.com wheezy Release         
Hit http://ceph.com wheezy/main amd64 Packages
Ign http://ceph.com wheezy/main Translation-en_US 
Err https://enterprise.proxmox.com wheezy/pve-enterprise amd64 Packages
  The requested URL returned error: 401 Authorization Required
Ign http://ceph.com wheezy/main Translation-en
Ign https://enterprise.proxmox.com wheezy/pve-enterprise Translation-en_US
Ign https://enterprise.proxmox.com wheezy/pve-enterprise Translation-en
W: Failed to fetch https://enterprise.proxmox.com/debian/dists/wheezy/pve-enterprise/binary-amd64/Packages  The requested URL returned error: 401 Authorization Required

E: Some index files failed to download. They have been ignored, or old ones used instead.
root@c00:~# apt-get dist-upgrade
Reading package lists... Done
Building dependency tree     
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@c00:~# pveversion -v
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
root@c00:~#

the cluster did not advance to version 3.3-20.
Any hint/advice?

Thank you in advance

Pasquale
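
The 401 from enterprise.proxmox.com suggests the nodes only have the enterprise repository configured, without a subscription key, so apt has no PVE repository it can actually fetch updates from. A sketch of switching to the no-subscription repository for PVE 3.x on wheezy (file names as used by a default installation):

Code:

# disable the enterprise repo (it requires a valid subscription)
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
# add the no-subscription repo
echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt-get update && apt-get dist-upgrade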

Apache ProxyPass Proxmox WebUI - noVNC WebSockets

Hi,

I've managed to set up Apache on one of my VMs to show the Proxmox WebUI on port 443 (https://example.com/proxmox/).

Everything works fine except for noVNC: when I try to connect I get a message saying "Server disconnection (code: 1006)", and in the Google Chrome developer console I receive:

Code:

WebSocket connection to 'wss://hostIP/api2/json/nodes/hostName/qemu/100/vncwebsocket?port=5900&vncticket=PVEVNC%.....' failed: Error during WebSocket handshake: Unexpected response code: 200
When I use noVNC normally through the :8006 WebUI I noticed that it connects the WebSocket to port 8006.

Code:

wss://hostIP:8006/api2/json/nodes/hostName/qemu/100/vncwebsocket?port=5900&vncticket=PVEVNC......
So I understand that I need to proxy the WebSocket to port 8006, but everything I try doesn't seem to work. I've never proxied a WebSocket, so I have no idea whether what I have in my config should work; any help would be great!

Here are the relevant parts of my apache config:

(I've enabled the mods: proxy, proxy_http and proxy_wstunnel)

Code:

        ProxyPass /wss/ wss://192.168.1.1:8006/
        ProxyPassReverse /wss/ wss://192.168.1.1:8006/


        ProxyPass /proxmox/ https://192.168.1.1:8006/
        ProxyPassReverse /proxmox/ https://192.168.1.1:8006/


        ProxyPass /pve2/ https://192.168.1.1:8006/pve2/
        ProxyPassReverse /pve2/ https://192.168.1.1:8006/pve2/


        ProxyPass /api2/ https://192.168.1.1:8006/api2/
        ProxyPassReverse /api2/ https://192.168.1.1:8006/api2/


        ProxyPass /novnc/ https://192.168.1.1:8006/novnc/
        ProxyPassReverse /novnc/ https://192.168.1.1:8006/novnc/


        ProxyPass /vncterm/ https://192.168.1.1:8006/vncterm/
        ProxyPassReverse /vncterm/ https://192.168.1.1:8006/vncterm/

Thanks!
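
One thing stands out (a sketch, not tested against this exact setup): the browser opens the WebSocket on the /api2/.../vncwebsocket path, so the ProxyPass /wss/ rule never matches it, and the generic /api2/ rule forwards it as a plain HTTPS request instead of a WebSocket upgrade. With mod_proxy_wstunnel loaded, a match-based rule for that path placed before the generic /api2/ rule might work; the backend IP, the SSLProxyEngine line and the regex are assumptions based on the config above, and a recent Apache 2.4 is assumed for wss:// backends:

Code:

        SSLProxyEngine On
        # must come before "ProxyPass /api2/ ..." so the WebSocket path is matched first
        ProxyPassMatch "^/api2/json/nodes/([^/]+)/(qemu|openvz)/([0-9]+)/vncwebsocket(.*)$" "wss://192.168.1.1:8006/api2/json/nodes/$1/$2/$3/vncwebsocket$4"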

Sporadic Buffer I/O error on device vda1 inside guest, RAW on LVM on top of DRBD

Hello!

I have several Proxmox clusters separated geographically.
Each cluster contains a pair of servers sharing LVM over DRBD; as the DRBD interlink I have Intel 10G Ethernet cards. All servers have top-level RAID controllers with BBU.
All servers have pve-enterprise repository access via a community subscription.

Since an update in December, some guests sporadically get messages like the ones below, independent of filesystem type. Some VMs have xfs, some ext4.

Code:

[2015-02-14 16:37:06]  end_request: I/O error, dev vda, sector 15763032
[2015-02-14 16:37:06]  Buffer I/O error on device vda1, logical block 1970123
[2015-02-14 16:37:06]  EXT4-fs warning (device vda1): ext4_end_bio:250: I/O error -5 writing to inode 398637 (offset 0 size 4096 starting block 1970380)
[2015-02-14 16:37:06]  end_request: I/O error, dev vda, sector 15763064
[2015-02-14 16:37:06]  Buffer I/O error on device vda1, logical block 1970127
[2015-02-14 16:37:06]  EXT4-fs warning (device vda1): ext4_end_bio:250: I/O error -5 writing to inode 398637 (offset 16384 size 4096 starting block 1970384)
[2015-02-14 16:37:06]  end_request: I/O error, dev vda, sector 15763144
[2015-02-14 16:37:06]  Buffer I/O error on device vda1, logical block 1970137
[2015-02-14 16:37:06]  EXT4-fs warning (device vda1): ext4_end_bio:250: I/O error -5 writing to inode 398637 (offset 57344 size 4096 starting block 1970394)
[2015-02-14 16:37:06]  end_request: I/O error, dev vda, sector 15763176
[2015-02-14 16:37:06]  Buffer I/O error on device vda1, logical block 1970141
[2015-02-14 16:37:06]  EXT4-fs warning (device vda1): ext4_end_bio:250: I/O error -5 writing to inode 398637 (offset 73728 size 4096 starting block 1970398)
[2015-02-14 16:37:06]  end_request: I/O error, dev vda, sector 15763256
[2015-02-14 16:37:06]  Buffer I/O error on device vda1, logical block 1970151
[2015-02-14 16:37:06]  EXT4-fs warning (device vda1): ext4_end_bio:250: I/O error -5 writing to inode 398637 (offset 114688 size 4096 starting block 1970408)

A filesystem check doesn't find any corruption, but some Windows VMs suffered data loss. BTW, a filesystem check on the Windows VMs isn't able to find any inconsistencies either.
Code:

proxmox01:~# pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-34-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-34-pve: 2.6.32-140
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1


proxmox01:~# cat /etc/drbd.d/r0.res
resource r0 {
  protocol C;
  startup {
    wfc-timeout  0;    # non-zero wfc-timeout can be dangerous (http://forum.proxmox.com/threads/346...-configuration)
    degr-wfc-timeout 60;
    become-primary-on both;
  }
  net {
    cram-hmac-alg sha1;
    shared-secret "lai8IezievuCh0eneiph0eetaigaiMee";
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
    max-buffers 8000;
    max-epoch-size 8000;
    sndbuf-size 0;
  }

  syncer {
    al-extents 3389;
    verify-alg crc32c;
  }

  disk {
    no-disk-barrier;
    no-disk-flushes;
  }

  on proxmox01 {
    device /dev/drbd0;
    disk /dev/sdb;
    address 192.168.1.147:7788;
    meta-disk internal;
  }
  on proxmox02 {
    device /dev/drbd0;
    disk /dev/sdb;
    address 192.168.1.148:7788;
    meta-disk internal;
  }
}

Any ideas?
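
A few checks that might help narrow this down (a sketch; r0 and the device names are taken from the resource file above, and how safe no-disk-barrier/no-disk-flushes are here depends entirely on the BBU-backed controllers):

Code:

# DRBD connection and disk state on both nodes
cat /proc/drbd
# online verification of the replicated device (verify-alg crc32c is already configured above)
drbdadm verify r0
# afterwards, look for blocks the verify run reported as out of sync
dmesg | grep -i "out of sync"
# state of the LVM layer sitting on top of drbd0
vgs; lvs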

ceph show custom rulesets in GUI

hi,

I have two minor suggestions for people that use custom rulesets. I can understand if you argue that you don't want to support customization of the Ceph stack.

1. Show the mounted storage size based on "ceph df" rather than "rados df"; this covers rule patterns such as a different root bucket.

2. Show the whole OSD directory/bucket tree with all OSDs, even with multiple root buckets, exactly as shown in "ceph osd tree".

ceph failed ulimit -n 32768; /usr/bin/ceph-mon -i 0 --pid-file /var/run/ceph/mon.0.pi

Hi guys,

I am trying to create a cluster with Ceph, but when I create a monitor with
pveceph createmon it hangs, and I start receiving the message
failed: 'ulimit -n 32768; /usr/bin/ceph-mon -i 0 --pid-file /var/run/ceph/mon.0.pid -c /etc/pve/ceph.conf --cluster ceph '. I have already raised the process and open-file limits for everyone:
* soft nofile 50000
* hard nofile 50000
root soft nofile 50000
root hard nofile 50000
* soft nproc 50000
* hard nproc 50000


in /etc/security/limits.conf, and I also rebooted the machine.

Any hint on what's going on?
Thanks in advance.
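
To see why the monitor refuses to start, it may help to run it in the foreground and check its log (a sketch; mon ID 0 comes from the error above, and the log path is the standard Ceph default):

Code:

# run the monitor in the foreground, logging to stderr
ulimit -n 32768
/usr/bin/ceph-mon -i 0 -c /etc/pve/ceph.conf --cluster ceph -d
# or check the monitor log for the failure reason
tail -n 50 /var/log/ceph/ceph-mon.0.log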