Channel: Proxmox Support Forum

SSD partitions best practice?

Hi all,

I have ordered the following:
- HP Microserver Gen8
- 16 GB RAM
- 4x 3 TB WD Red
- 1x Crucial MX200 250 GB

I want to use ZFS on the four hard disks (mirrored + striped).

But I also want to use the SSD for the Proxmox installation AND for L2ARC (and maybe a ZIL?).

How can I create an EXT3 partition of, let's say, 32 GB for Proxmox and give the rest to ZFS?

I have found a URL (which I cannot post due to forum restrictions) for resizing the boot and swap partitions, but it doesn't cover ZFS.
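
Roughly what I have in mind is the following, as a hedged sketch only: device paths, partition numbers and the pool name "tank" are placeholders, and the Proxmox installer would need to be limited to the first ~32 GB of the SSD (e.g. via its hdsize option) so the rest stays free.

Code:

# sketch: sda = the 250 GB SSD; "tank" = the ZFS pool on the four WD Reds (placeholder names)
# leave ~32 GB at the start for the Proxmox root, then carve out L2ARC and an optional SLOG
parted -s /dev/sda mkpart l2arc 34GB 220GB
parted -s /dev/sda mkpart slog  220GB 228GB

# attach the new partitions to the data pool
zpool add tank cache /dev/disk/by-id/ata-Crucial_MX200_250GB-...-part4
zpool add tank log   /dev/disk/by-id/ata-Crucial_MX200_250GB-...-part5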

Thanks in advance!

ZFS Datasets not included in backups

Hello, I have:

Code:

pve-manager/4.0-50/d3a6b7e5 (running kernel: 4.2.2-1-pve)
I am running some MySQL instances and have begun modifying the ZFS layout underneath them to get this result:

Code:

NAME                                        RECSIZE    LOGBIAS  PRIMARYCACHE
rpool                                          128K    latency          all
rpool/ROOT                                    128K    latency          all
rpool/ROOT/pve-1                              128K    latency          all
rpool/subvol-1000-disk-1                      128K    latency          all
rpool/subvol-1000-disk-1/mysql                128K    latency      metadata
rpool/subvol-1000-disk-1/mysql/innodb-data      16K  throughput      metadata
rpool/subvol-1000-disk-1/mysql/innodb-logs    128K    latency      metadata
rpool/swap                                        -    latency          all

But when I back up the container (via the GUI or the console) and then restore that backup, I get the following dataset layout:
Code:

NAME                      RECSIZE    LOGBIAS  PRIMARYCACHE
backup                      128K    latency          all
rpool                        128K    latency          all
rpool/ROOT                  128K    latency          all
rpool/ROOT/pve-1            128K    latency          all
rpool/subvol-1000-disk-1    128K    latency          all
rpool/swap                      -    latency          all

All modifications to the primarycache and recordsize properties are gone. Is there some way around that, e.g. to restore the backup with the appropriate recordsizes?
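
If it helps, here is a hedged workaround sketch, assuming the restore only recreates the top-level subvol: recreate the tuned child datasets by hand afterwards and move the MySQL files back into them (dataset names taken from the listing above).

Code:

# after the restore, recreate the tuned child datasets
zfs create -o primarycache=metadata rpool/subvol-1000-disk-1/mysql
zfs create -o recordsize=16K  -o logbias=throughput -o primarycache=metadata rpool/subvol-1000-disk-1/mysql/innodb-data
zfs create -o recordsize=128K -o logbias=latency    -o primarycache=metadata rpool/subvol-1000-disk-1/mysql/innodb-logs

# if the restored files already occupy those paths, move them aside first so the new
# datasets can mount, then move the MySQL data and log files back into the datasets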

PVE 4.0 DNS settings disappearing

I can configure DNS settings via the GUI (DNS / Edit dialog, 'Search Domain' and 'DNS Server'),
and DNS resolution then works for a time:
Code:

root@deb82:~# cat /etc/resolv.conf 
search wnr2200.lan 
nameserver 192.168.11.1 
nameserver 192.168.5.1

However at some point /etc/resolv.conf is regenerated and the details are lost.
I have searched the forums and Google, and the two most helpful answers were:
Thread: Hostname Issue
https://forum.proxmox.com/threads/8715-Hostname-Issue
and
How do I include lines in resolv.conf that won't get lost on reboot? https://askubuntu.com/questions/1571...lost-on-reboot

There are two NICs, for which I configured two Linux bridges, and I have tried adding the following to the bridge iface stanzas in /etc/network/interfaces:
dns-search wnr2200.lan
dns-nameservers 192.168.5.1

But I cannot resolve this problem.
Code:

root@deb82:~# ls -al /etc/network/interfaces.d/ 
total 8 
drwxr-xr-x 2 root root 4096 Mar 13  2015 . 
drwxr-xr-x 7 root root 4096 Oct 16 21:44 ..

root@deb82:~# pveversion   
pve-manager/4.0-50/d3a6b7e5 (running kernel: 4.2.2-1-pve)

I have been testing PVE 4 since Beta 1 and Beta 2, and DNS worked fine on other boxes, but now I see this problem on a box that I recently installed from scratch.

I followed the guide "Install Proxmox VE on Debian Jessie" after doing a fresh install of Debian 8.2.
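
In case it is useful, these are the two workarounds I keep seeing suggested, as a sketch only (the address below is a placeholder, and the dns-* options only take effect if the resolvconf package is installed):

Code:

# /etc/network/interfaces -- put the dns-* options in the stanza that carries the management IP
auto vmbr0
iface vmbr0 inet static
    address 192.168.11.10
    netmask 255.255.255.0
    gateway 192.168.11.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    dns-search wnr2200.lan
    dns-nameservers 192.168.11.1 192.168.5.1

# or, as a blunt last resort, stop anything from rewriting the file:
# chattr +i /etc/resolv.conf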

Building Unified Communications Servers with Proxmox 4

Startup/Shutdown Order - LXC and KVM mix not working

Hi.

I have three guests:
101 - Debian LXC -> Start at boot -> order=1
102 - Ubuntu LXC -> Start at boot -> order=3
201 - Windows -> Start at boot -> order=2,up=240

Expected:
101 + no delay
201 + 240 sec. delay
102 + no delay

Actual:
201 + 240 sec. delay
101 + no delay
102 + no delay

pveversion:
proxmox-ve: 4.0-16 (running kernel: 4.2.2-1-pve)
pve-manager: 4.0-50 (running version: 4.0-50/d3a6b7e5)
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-23
qemu-server: 4.0-31
pve-firmware: 1.1-7
libpve-common-perl: 4.0-29
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-27
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-10
pve-container: 1.0-10
pve-firewall: 2.0-12
pve-ha-manager: 1.0-10
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1

What am I missing?
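
For reference, this is roughly what the relevant lines should look like in the guest configs (a sketch; the file paths assume a standard install), which may help rule out a config problem versus a startup-ordering bug:

Code:

# /etc/pve/lxc/101.conf
onboot: 1
startup: order=1

# /etc/pve/qemu-server/201.conf
onboot: 1
startup: order=2,up=240

# /etc/pve/lxc/102.conf
onboot: 1
startup: order=3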

How does the new watchdog work in Proxmox 4?

I have three identical nodes and bought three licenses:
Quote:

root@cluster-2-1:~# pvecm nodes

Membership information
----------------------
    Nodeid      Votes Name
         1          1 cluster-2-1 (local)
         2          1 cluster-2-2
         3          1 cluster-2-3
root@cluster-2-1:~#
Quote:

root@cluster-2-1:~# dmesg |grep -i watch
[ 0.097129] NMI watchdog: enabled on all CPUs, permanently consumes one hw-PMU counter.
[ 2.586248] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11
[ 2.636518] IPMI Watchdog: Unable to register misc device
[ 2.636540] IPMI Watchdog: set timeout error: -22
[ 2.636542] IPMI Watchdog: driver initialized
root@cluster-2-1:~#


NODE1
Quote:

root@cluster-2-1:~# ipmitool mc watchdog get
Watchdog Timer Use: Reserved (0x00)
Watchdog Timer Is: Stopped
Watchdog Timer Actions: No action (0x00)
Pre-timeout interval: 0 seconds
Timer Expiration Flags: 0x00
Initial Countdown: 0 sec
Present Countdown: 0 sec
root@cluster-2-1:~#
Quote:

root@cluster-2-1:~# cat /etc/modprobe.d/impi_watchdog.conf
options ipmi_watchdog action=power_cycle start_now=1
root@cluster-2-1:~#
NODE2:
Quote:

root@cluster-2-2:~# ipmitool mc watchdog get
Watchdog Timer Use: SMS/OS (0x44)
Watchdog Timer Is: Started/Running
Watchdog Timer Actions: Power Cycle (0x03)
Pre-timeout interval: 0 seconds
Timer Expiration Flags: 0x00
Initial Countdown: 10 sec
Present Countdown: 9 sec
root@cluster-2-2:~#
Quote:

root@cluster-2-2:~# cat /etc/modprobe.d/impi_watchdog.conf
options ipmi_watchdog action=power_cycle start_now=1

NODE3:
Quote:

root@cluster-2-3:~# ipmitool mc watchdog get
Watchdog Timer Use: SMS/OS (0x04)
Watchdog Timer Is: Stopped
Watchdog Timer Actions: Hard Reset (0x01)
Pre-timeout interval: 0 seconds
Timer Expiration Flags: 0x00
Initial Countdown: 900 sec
Present Countdown: 900 sec
root@cluster-2-3:~#
Quote:

root@cluster-2-3:~# cat /etc/modprobe.d/impi_watchdog.conf
options ipmi_watchdog action=power_cycle start_now=1


These three nodes are fully identical and have no differences apart from the hostname and the Proxmox license.

And how can I diagnose whether the watchdog module is working?

Quote:

Base Board Information
Manufacturer: Supermicro
Product Name: X10SRi-F
I don't understand why the watchdog does not work on nodes 1 and 3.
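
For diagnosing it, a hedged sketch of where to look: the HA stack selects its watchdog module in /etc/default/pve-ha-manager (softdog is used when nothing is set there), and the watchdog-mux service is what should be holding /dev/watchdog.

Code:

# which module is the HA watchdog multiplexer configured to use?
cat /etc/default/pve-ha-manager
# expected for IPMI: WATCHDOG_MODULE=ipmi_watchdog

# is the multiplexer running and holding /dev/watchdog?
systemctl status watchdog-mux
lsof /dev/watchdog

# the dmesg above shows "IPMI Watchdog: Unable to register misc device" on node 1,
# which usually means another driver (here iTCO_wdt, loaded earlier) already claimed
# /dev/watchdog; blacklisting the competing module is one thing to test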

How to build an ISO from the git packages?

Hello, all!
I want to fetch all the packages from git and build an installation ISO from these packages plus the default Debian packages.
Thanks!
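
For reference, a rough sketch of the usual starting point, hedged because I have not verified a full ISO build: the individual packages can be built from the public repositories on git.proxmox.com (most ship a Makefile with a deb/dinstall target), while the installer itself lives in the pve-installer repository.

Code:

# build one component from source (example: pve-manager)
git clone git://git.proxmox.com/git/pve-manager.git
cd pve-manager
make deb        # check the repo's Makefile for the exact target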

TASK ERROR: VM is locked (snapshot)

Unable to take new snapshots/backups because there seems to be a snapshot in progress.
I've rebooted the hypervisor; no change.
The snapshots tab shows a 'vzdump' snapshot with a status of 'prepare'.

Also...
Code:

root@hv2:~# lxc-snapshot -L --name 224
No snapshots
root@hv2:~# lxc-snapshot --name 224 
lxc-snapshot: lxccontainer.c: do_lxcapi_clone: 2812 error: Original container (224) is running
lxc-snapshot: lxccontainer.c: do_lxcapi_snapshot: 3127 clone of /var/lib/lxc:224 failed
lxc-snapshot: lxc_snapshot.c: do_snapshot: 55 Error creating a snapshot
root@hv2:~# qm unlock 224
no such VM ('224')
root@hv2:~#
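
Since 224 is a container, qm will not see it; on 4.0 the lock is just a line in the container config, so a hedged way to clear it by hand looks like this:

Code:

# the lock is recorded in the container config rather than anywhere qm looks
grep lock /etc/pve/lxc/224.conf
# lock: snapshot

# remove the "lock: snapshot" line (and any stale [vzdump] snapshot section) with an
# editor, then retry the backup
nano /etc/pve/lxc/224.conf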


error:lxc-checkpoint: criu.c: criu_ok: 331 lxc.tty must be 0

Running:
lxc-checkpoint -n 100 -D /var/liv/vz/dump
fails with:
lxc-checkpoint: criu.c: criu_ok: 331 lxc.tty must be 0
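
A hedged sketch of the usual workaround: the CRIU integration in this LXC version refuses to checkpoint containers with ttys configured, so the tty count has to be 0 in the container config before retrying (the config path below is an assumption).

Code:

# in the container's LXC config (e.g. /var/lib/lxc/100/config)
lxc.tty = 0

# then retry the dump
lxc-checkpoint -n 100 -D /var/lib/vz/dump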

VE 4.0 Kernel Panic on HP Proliant servers

We have two labs set up with Proxmox VE 4.0 from the latest ISO download.

In one lab we have HP ProLiant servers with massive kernel panics in the hpwdt.ko module.

Unfortunately we do not have the trace yet due to HP's damned iLO :-( but I will provide more info once I have captured it.

We have a Ceph cluster with 3 hosts and 3 monitors up and running in this lab, and everything seems to be working quite well.

We can start VMs and also migrate them, but as soon as you activate HA for any VM we get a kernel panic in the hpwdt.ko module.

We have a DL360 G6 (latest BIOS patches) and a DL380 G( running in this lab.

These are the versions we are running:

proxmox-ve: 4.0-16 (running kernel: 4.2.2-1-pve)
pve-manager: 4.0-50 (running version: 4.0-50/d3a6b7e5)
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-23
qemu-server: 4.0-31
pve-firmware: 1.1-7
libpve-common-perl: 4.0-32
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-27
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-10
pve-container: 1.0-10
pve-firewall: 2.0-12
pve-ha-manager: 1.0-10
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
zfsutils: 0.6.5-pve4~jessie

Is anything known about these kernel panics?

I found some hints while googling around:

- Blacklisting hpwdt was suggested, but that is not a solution for VE since we need the watchdog interface.
- I also tried the GRUB parameters:
-- noautogroup
-- intel_idle.max_cstates=0

with no success.

Since we have no debug symbols for the kernel (I did not find any package for this....), I could not use kdump to capture the panic.

Any advice that could help, or anyone having a problem like this?
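
One more hedged idea worth testing: the PVE 4 HA stack defaults to softdog when no hardware watchdog module is configured, so blacklisting hpwdt does not necessarily leave HA without a watchdog. A sketch:

Code:

# prevent the HP watchdog driver from loading
echo "blacklist hpwdt" > /etc/modprobe.d/blacklist-hpwdt.conf
update-initramfs -u
reboot

# after the reboot, confirm what the HA stack is actually using
lsmod | grep -E 'hpwdt|softdog'
systemctl status watchdog-mux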

LSI Raid Prox4/Jessie - preferred install method hint ?

Hi, a fun small question (I hope). In the past I've used the lovely third-party repository http://hwraid.le-vert.net/wiki/DebianPackages
to get LSI hardware RAID card health monitoring, status, and email notification in case of disk failures / bad events.

It seems that now, with Proxmox 4.x based on Debian Jessie, we no longer have this option (i.e., that repo does not yet support Jessie).

Just curious if anyone has an easy recommended / preferred route to resolve this?

- I believe I can get the statically compiled binary RPM, then blast it open and extract the megactl / MegaCli type of app binaries from there. But this is kind of gross.
- AFAIK LSI does not officially provide a .deb version of the package, so that is a bit of a hassle / drama.

So I was curious if anyone has already got a nice, elegant solution to this, or not :-)?
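
Not elegant, but here is a sketch of the "blast the RPM open" route on Jessie (the RPM file name is just an example of whatever you download from LSI/Avago):

Code:

# unpack the vendor RPM without converting or installing it
apt-get install rpm          # rpm2cpio ships with the rpm package on Debian
rpm2cpio MegaCli-8.07.14-1.noarch.rpm | cpio -idmv

# the static binary lands under opt/MegaRAID/MegaCli/
cp opt/MegaRAID/MegaCli/MegaCli64 /usr/local/sbin/megacli
megacli -AdpAllInfo -aALL | less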

Thanks!

Tim

PVE 4.0 PERC 5/E Kernel panic

Hi, I installed PVE 4 on a Dell PE2850 with a PERC 5/E adapter without any problem or warning during installation. On boot, just after 'Loading Linux 4.2.2-1-pve... Loading initial ramdisk ...', I get a 'Kernel panic - not syncing: Fatal exception in interrupt' and the system halts.

If I remove the PERC 5/E adapter, PVE 4 boots correctly. Putting the adapter back gives the same error...

I installed Proxmox on Debian Jessie following the instructions on the wiki, rebooting before installing Proxmox. No problem.

Then I installed the Proxmox VE packages, and after the reboot... kernel panic...

With Proxmox 3.4 everything works perfectly.

Is it possible that the 4.2.2-1-pve kernel has a driver problem with the PERC 5/E, or no longer supports it?


Thanks, have a nice day


Eric.

LXC Networking and IPv6

I created a CentOS 7 LXC container and assigned IPv4 and IPv6 addresses to it. IPv6 did not work. Looking inside the container, it added the IPv4 address to /etc/sysconfig/network-scripts/ifcfg-eth0 and the IPv6 address to /etc/sysconfig/network-scripts/ifcfg-eth0:0. Moving the IPv6 address to ifcfg-eth0 allowed it to work. Any idea what would be causing that? It does not appear to be picking up any of the settings in the :0 file.
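
For comparison, the layout that ends up working is roughly this (placeholder addresses); as far as I can tell the :0 alias files are an IPv4-only construct and their IPv6 keys are ignored, so the IPv6 settings have to live in ifcfg-eth0 itself.

Code:

# /etc/sysconfig/network-scripts/ifcfg-eth0  (placeholder addresses)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.0.2.10
PREFIX=24
GATEWAY=192.0.2.1
IPV6INIT=yes
IPV6ADDR=2001:db8::10/64
IPV6_DEFAULTGW=2001:db8::1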

LXC Inodes

I am installing DirectAdmin in an LXC container. How do I ensure there are enough inodes available? As I recall, on OpenVZ you could specify this. Also, how can you specify the number of users a container is allowed to have? I believe under OpenVZ this was called UGID?

Docs not up to date? There is no /etc/pve/cluster.conf in 4.0 - how to change to unicast?

Hi!

The wiki says that there should be an /etc/pve/cluster.conf, but on my Proxmox 4.0 there is no such file.

Can you please tell me how to change to unicast in 4.0? Do I just edit corosync.conf directly before joining an additional node?
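
For reference, a hedged sketch of what the unicast change typically looks like in /etc/pve/corosync.conf (everything except the transport line is a placeholder, and config_version has to be incremented on every edit):

Code:

totem {
  version: 2
  cluster_name: mycluster     # placeholder
  config_version: 3           # increment on every change
  transport: udpu             # switch corosync to unicast UDP
  interface {
    ringnumber: 0
    bindnetaddr: 192.168.1.0  # placeholder
  }
}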

thx

ps: sorry for not posting the link, and shell output, but the forum blocks that with the error message

"You are not allowed to post any kinds of links, images or videos until you post a few times."

TASK OK but LXC container not started in 4.0, no disk capacity value

Dear all,

I am testing the conversion of a CentOS-based OpenVZ container from 3.4 to 4.0.

The CT works fine on 3.4, but the backed-up/restored CT does not start on 4.0.

In both cases, it looks like "/etc/pve/nodes/proc40/lxc/XXX.conf" has no disk capacity value.

How can I add the disk capacity value to XXX.conf?
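
On my understanding (hedged, not verified on 4.0), the disk capacity shown in the GUI comes from a size option on the rootfs line in the config, something like the following (the volume name is a placeholder):

Code:

# /etc/pve/nodes/proc40/lxc/XXX.conf
rootfs: local:XXX/vm-XXX-disk-1.raw,size=8G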


Can someone advise me?




yugawara

Proxmox 3 Bridge config not working

Hi all,

I installed Proxmox 3 on a Dedicated Solutions cloud server. I have problems configuring the network on the VM; here is my config.

/etc/network/interfaces on the Proxmox host

Code:

auto lo
iface lo inet loopback

iface eth0 inet manual
    broadcast  67.219.147.223
    network 67.219.147.216
    dns-nameservers 8.8.8.8
    dns-search localhost.local
# dns-* options are implemented by the resolvconf package, if installed

iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
    address  67.219.147.218
    netmask  255.255.255.248
    gateway  67.219.147.217
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet static
    address  67.219.147.219
    netmask  255.255.255.248
    bridge_ports eth1
    bridge_stp off

On the VM

Code:

iface eth0 inet static
    address 67.219.147.219
    netmask  255.255.255.248
    gateway 67.219.147.217
    dns-nameservers 8.8.8.8

Here are the routes

On the host
Code:

Kernel IP routing table
Destination    Gateway        Genmask        Flags Metric Ref    Use Iface
localnet        *              255.255.255.248 U    0      0        0 vmbr0
localnet        *              255.255.255.248 U    0      0        0 vmbr1
default        67.219.147.217  0.0.0.0        UG    0      0        0 vmbr0

On the vm

Code:

Kernel IP routing table
Destination     Gateway          Genmask          Flags Metric Ref    Use Iface
default         67.219.147.217   0.0.0.0          UG    0      0        0 eth0
localnet        *                255.255.255.248  U     0      0        0 eth0

How do I get my VM to reach the internet?
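
One thing that stands out in the configs above: the VM is using 67.219.147.219, which is also assigned to vmbr1 on the host. A hedged sketch of a non-conflicting setup, assuming the VM's virtual NIC is attached to vmbr0 and .220 is free:

Code:

# inside the VM: /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 67.219.147.220      # any free address in 67.219.147.216/29 not used on the host
    netmask 255.255.255.248
    gateway 67.219.147.217
    dns-nameservers 8.8.8.8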

Cheers,

What does 'iSCSI target' mean? How do I fill it in?

What does the 'iSCSI target' field mean, and how do I fill it in?
(Screenshot attached: 1.jpg)

Delete old backup prior to new backup

Hi

I have Proxmox doing snapshot backups to a series of 1 TB hot-swap USB disks which get rotated each night. This was working fine until the backup job grew beyond 500 GB. As vzdump deletes the old backup only after a successful new backup, we run out of space each night and have to go in manually and delete the old backup each day to get a successful backup.

We currently have Max Backups set to 1 on the USB storage. If we set that to 0, does that disable the Max Backups threshold and not remove any backups, or would that delete the old backup file first?

I have come across the vzdump hook script option, so I guess I could create a script that runs at job-start to remove the old backup prior to running the new one (something along the lines of the sketch below).

I just wondered if there is another way to automate this, and whether it will break if somebody tweaks the backup job in the GUI?
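
Along those lines, a minimal hook-script sketch (untested; it assumes the dump directory is exported as $DUMPDIR during the job-start phase and that the script is referenced via the script: option in /etc/vzdump.conf):

Code:

#!/bin/bash
# /usr/local/bin/vzdump-prune-first.sh  (hypothetical path)
# delete the previous run's archives before the new backup starts
phase="$1"

if [ "$phase" = "job-start" ]; then
    rm -f "${DUMPDIR}"/vzdump-*.log "${DUMPDIR}"/vzdump-*.tar* "${DUMPDIR}"/vzdump-*.vma*
fi

exit 0

# referenced from /etc/vzdump.conf with a line like:
#   script: /usr/local/bin/vzdump-prune-first.sh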

Many thanks

Proxmox 4 and Ceph

I installed a new cluster with Proxmox 4 and Ceph, created a new pool, connected the RBD pool in the Proxmox GUI and created a new VM.
When I started the VM, I got this error message:

Quote:

root@cluster-2-1:~# qm start 1002
Running as unit 1002.scope.
libust[20838/20838]: Error: Error cancelling global ust listener thread: No such process (in lttng_ust_exit() at lttng-ust-comm.c:1592)
libust[20838/20838]: Error: Error cancelling local ust listener thread: No such process (in lttng_ust_exit() at lttng-ust-comm.c:1601)
kvm: -drive file=rbd:ST01/vm-1002-disk-1:mon_host=192.168.16.1,192.168.16.2,192.168.16.3: id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/ST01.keyring,if=none,id=drive-virtio0,format=raw,cache=none,aio=native,detect-zeroes=on: Block format 'raw' used by device 'drive-virtio0' doesn't support the option '192.168.16.3:id'
libust[20839/20839]: Error: Error cancelling global ust listener thread: No such process (in lttng_ust_exit() at lttng-ust-comm.c:1592)
start failed: command '/usr/bin/systemd-run --scope --slice qemu --unit 1002 -p 'CPUShares=1000' /usr/bin/kvm -id 1002 -chardev 'socket,id=qmp,path=/var/run/qemu-server/1002.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/1002.vnc,x509,password -pidfile /var/run/qemu-server/1002.pid -daemonize -smbios 'type=1,uuid=923dfa53-bf91-4f46-bc10-b1731ccad532' -name WinT -smp '1,sockets=1,cores=1,maxcpus=1' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000' -vga std -no-hpet -cpu 'kvm64,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_rel axed,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,-kvm_steal_time,enforce' -m 512 -k en-us -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:4dfda0936f97' -drive 'file=/var/lib/vz/template/iso/ru_windows_7_ultimate_with_sp1_x64_dvd_u_677391.is o,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=rbd:ST01/vm-1002-disk-1:mon_host=192.168.16.1,192.168.16.2,192.168.16.3: id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/ST01.keyring,if=none,id=drive-virtio0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=10 0' -netdev 'type=tap,id=net0,ifname=tap1002i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=72:12:21:32:1E:94,netdev=net0,bus=pci.0,ad dr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime' -global 'kvm-pit.lost_tick_policy=discard'' failed: exit code 1

It's a new Proxmox cluster installed from the ISO image this week; why does this not work?
Quote:

root@cluster-2-1:~# cat /etc/pve/nodes/cluster-2-1/qemu-server/1002.conf
bootdisk: virtio0
cores: 1
ide2: local:iso/ru_windows.iso,media=cdrom
memory: 512
name: WinT
net0: virtio=72:12:21:32:1E:94,bridge=vmbr159
numa: 0
ostype: win7
smbios1: uuid=923dfa53-bf91-4f46-bc10-b1731ccad532
sockets: 1
virtio0: ST01:vm-1002-disk-1,size=60G
root@cluster-2-1:~#
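
For comparison, a sketch of what the RBD storage definition normally looks like in /etc/pve/storage.cfg (values are placeholders matching the names in the log). It may be worth checking that the monhost line contains nothing unexpected, since the kvm command line above appears to show the monitor list being split apart at the '192.168.16.3:id' boundary.

Code:

rbd: ST01
        monhost 192.168.16.1;192.168.16.2;192.168.16.3
        pool ST01
        content images
        username admin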
