Channel: Proxmox Support Forum

kernel panic caused by kernel.pid_ns_hide_child=1 with kernel 2.6.32-166

Hello!

When "kernel.pid_ns_hide_child=1" sysctl flag is used, and one starts a new OpenVZ container it causes a crash into kernel panic on the latest and greatest "proxmox-ve-2.6.32: 3.4-166 (running kernel: 2.6.32-43-pve)". OpenVZ devs seem to know about the issue and will hopefully fix it very soon.

Very unpleasant; upgrading to this version is not advisable.

Is there any workaround to hide processes of containers from being visible on the host?
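(Until there is a fix, the only workaround I see is reverting the flag to its default, which of course makes container processes visible on the host again. A minimal sketch, assuming the default value is 0:)
Code:

# revert the flag so containers can start without panicking
sysctl -w kernel.pid_ns_hide_child=0
# and keep the =1 setting out of /etc/sysctl.conf until the fix lands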

Thanks and best regards
moz

PS: Stacktrace attached:
Code:

------------[ cut here ]------------
kernel BUG at kernel/workqueue.c:192!
invalid opcode: 0000 [#1] SMP
last sysfs file: /sys/kernel/uevent_seqnum
CPU 1
Modules linked in: ipt_addrtype nf_conntrack_ipv6 nf_defrag_ipv6 netconsole 8021q garp ip_set vhost_net tun macvtap macvlan kvm_intel kvm nfnetlink_log nfnetlink vzethdev vznetdev pio_nfs pio_direct pfmt_raw pfmt_ploop1 ploop simfs vzrst nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 vzcpt vzdquota vzmon vzdev ip6t_REJECT ip6table_mangle ip6table_filter ip6_tables xt_conntrack nf_conntrack ipt_LOG xt_length xt_hl xt_tcpmss xt_TCPMSS iptable_mangle iptable_filter xt_multiport xt_limit xt_dscp ipt_REJECT ip_tables dlm sctp configfs acpi_cpufreq mperf cpufreq_ondemand cpufreq_conservative cpufreq_powersave cpufreq_stats freq_table vzevent ib_iser rdma_cm iw_cm ib_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi nfsd nfs nfs_acl auth_rpcgss fscache lockd sunrpc bonding fuse snd_hda_codec_analog snd_hda_codec_generic tpm_infineon snd_hda_intel i915 iTCO_wdt drm_kms_helper iTCO_vendor_support snd_pcsp snd_hda_codec snd_hwdep drm i2c_algo_bit snd_pcm snd_page_alloc snd_timer tpm_tis i2c_core lpc_ich snd soundcore tpm tpm_bios serio_raw mfd_core video output wmi zfs(P) zunicode(P) zavl(P) zcommon(P) znvpair(P) spl zlib_deflate ata_generic sg usb_storage pata_acpi r8169 mii mvsas libsas scsi_transport_sas ata_piix e1000e ptp pps_core [last unloaded: scsi_wait_scan]
Pid: 7945, comm: salt-minion veid: 501 Tainted: P        W  -- ------------    2.6.32-43-pve #1 042stab112_15 Hewlett-Packard HP Compaq dc5800 Microtower/2820h
RIP: 0010:[<ffffffff810a186c>]  [<ffffffff810a186c>] queue_work_on+0x5c/0x70
RSP: 0018:ffff8801442a3cd8  EFLAGS: 00010003
RAX: ffffffff81ab4ee0 RBX: ffff88014e919ec0 RCX: 0000000000000001
RDX: ffffffff81ab4ed8 RSI: ffff88021b17cdc0 RDI: 0000000000000001
RBP: ffff8801442a3cd8 R08: 0000000000000000 R09: 00000000ffffffff
R10: 0000000000000000 R11: 0000000000000000 R12: ffffffff81ab4680
R13: 0000000000000001 R14: 0000000000000046 R15: ffff8802107c8140
FS:  00007f7749b91740(0000) GS:ffff880028280000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00007f773ea701a0 CR3: 0000000143151000 CR4: 00000000000427e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process salt-minion (pid: 7945, veid: 501, threadinfo ffff8801442a0000, task ffff8801ec6624c0)
Stack:
 ffff8801442a3ce8 ffffffff810a18bf ffff8801442a3cf8 ffffffff810a18e8
<d> ffff8801442a3d28 ffffffff810a30d6 ffff88020a4927c0 00000000000108e0
<d> ffff88020a492701 ffff8801576da680 ffff8801442a3d38 ffffffff810a324a
Call Trace:
 [<ffffffff810a18bf>] queue_work+0x1f/0x30
 [<ffffffff810a18e8>] schedule_work+0x18/0x20
 [<ffffffff810a30d6>] free_pid+0xd6/0x1f0
 [<ffffffff810a324a>] __change_pid+0x5a/0x60
 [<ffffffff810a3260>] detach_pid+0x10/0x20
 [<ffffffff8107fe59>] release_task+0x3c9/0x540
 [<ffffffff81080441>] wait_task_zombie+0x471/0x5e0
 [<ffffffff81080636>] wait_consider_task+0x86/0x4e0
 [<ffffffff81080b77>] do_wait+0xe7/0x220
 [<ffffffff81080d1f>] sys_wait4+0x6f/0xf0
 [<ffffffff8107f5a0>] ? child_wait_callback+0x0/0x80
 [<ffffffff8100b1e2>] system_call_fastpath+0x16/0x1b
Code: 03 04 fd e0 99 c1 81 48 89 c7 e8 20 ff ff ff b8 01 00 00 00 5d c3 66 0f 1f 84 00 00 00 00 00 31 c0 5d c3 8b 3d ae 1f b8 00 eb cb <0f> 0b 66 90 eb fc 66 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55
RIP  [<ffffffff810a186c>] queue_work_on+0x5c/0x70
 RSP <ffff8801442a3cd8>
Tainting kernel with flag 0x7
Pid: 7945, comm: salt-minion veid: 501 Tainted: P        W  -- ------------    2.6.32-43-pve #1
Call Trace:
 [<ffffffff8107bbe9>] ? add_taint+0x69/0x70
 [<ffffffff815ad5d3>] ? oops_end+0x53/0xf0
 [<ffffffff81011b28>] ? die+0x58/0x90
 [<ffffffff815acc70>] ? do_trap+0xc0/0x160
 [<ffffffff815af642>] ? __atomic_notifier_call_chain+0x12/0x20
 [<ffffffff8100ca2b>] ? do_invalid_op+0xab/0xc0
 [<ffffffff810a186c>] ? queue_work_on+0x5c/0x70
 [<ffffffff810a31ce>] ? free_pid+0x1ce/0x1f0
 [<ffffffff8100c19b>] ? invalid_op+0x1b/0x20
 [<ffffffff810a186c>] ? queue_work_on+0x5c/0x70
 [<ffffffff810a18bf>] ? queue_work+0x1f/0x30
 [<ffffffff810a18e8>] ? schedule_work+0x18/0x20
 [<ffffffff810a30d6>] ? free_pid+0xd6/0x1f0
 [<ffffffff810a324a>] ? __change_pid+0x5a/0x60
 [<ffffffff810a3260>] ? detach_pid+0x10/0x20
 [<ffffffff8107fe59>] ? release_task+0x3c9/0x540
 [<ffffffff81080441>] ? wait_task_zombie+0x471/0x5e0
 [<ffffffff81080636>] ? wait_consider_task+0x86/0x4e0
 [<ffffffff81080b77>] ? do_wait+0xe7/0x220
 [<ffffffff81080d1f>] ? sys_wait4+0x6f/0xf0
 [<ffffffff8107f5a0>] ? child_wait_callback+0x0/0x80
 [<ffffffff8100b1e2>] ? system_call_fastpath+0x16/0x1b
---[ end trace 6c8fcd470bbda8c5 ]---
Kernel panic - not syncing: Fatal exception
Pid: 7945, comm: salt-minion veid: 501 Tainted: P      D W  -- ------------    2.6.32-43-pve #1
Call Trace:
 [<ffffffff815a04d9>] ? panic+0xa7/0x167
 [<ffffffff815ad654>] ? oops_end+0xd4/0xf0
 [<ffffffff81011b28>] ? die+0x58/0x90
 [<ffffffff815acc70>] ? do_trap+0xc0/0x160
 [<ffffffff815af642>] ? __atomic_notifier_call_chain+0x12/0x20
 [<ffffffff8100ca2b>] ? do_invalid_op+0xab/0xc0
 [<ffffffff810a186c>] ? queue_work_on+0x5c/0x70
 [<ffffffff810a31ce>] ? free_pid+0x1ce/0x1f0
 [<ffffffff8100c19b>] ? invalid_op+0x1b/0x20
 [<ffffffff810a186c>] ? queue_work_on+0x5c/0x70
 [<ffffffff810a18bf>] ? queue_work+0x1f/0x30
 [<ffffffff810a18e8>] ? schedule_work+0x18/0x20
 [<ffffffff810a30d6>] ? free_pid+0xd6/0x1f0
 [<ffffffff810a324a>] ? __change_pid+0x5a/0x60
 [<ffffffff810a3260>] ? detach_pid+0x10/0x20
 [<ffffffff8107fe59>] ? release_task+0x3c9/0x540
 [<ffffffff81080441>] ? wait_task_zombie+0x471/0x5e0
 [<ffffffff81080636>] ? wait_consider_task+0x86/0x4e0
 [<ffffffff81080b77>] ? do_wait+0xe7/0x220
 [<ffffffff81080d1f>] ? sys_wait4+0x6f/0xf0
 [<ffffffff8107f5a0>] ? child_wait_callback+0x0/0x80
 [<ffffffff8100b1e2>] ? system_call_fastpath+0x16/0x1b
drm_kms_helper: panic occurred, switching back to text console
------------[ cut here ]------------


Incorrect data in Memory Usage

Hi, everyone.

Usually I install Windows with the English language pack and have never had problems with the balloon driver. But since installing Windows with the Spanish language pack, I've got an issue:
the balloon driver and service were installed successfully, but the reported memory usage data is wrong.

win2k8-x64-es.png

The stable virtio drivers at this time are virtio-win-0.1.102.iso. Windows in my case is Windows Server 2008 x64 Standard, but I think it can be reproduced on Windows 7, Windows 2008 R2 and so on.

Please, can anyone test this case with any other Language Pack (except English) and confirm or refute this issue?

Latest Proxmox 4 update

Hello Proxmox Team,

I downloaded the latest Proxmox v4 ISO (suffix -17) from the download section and installed it on my dedicated server.

My problem is: when I installed Proxmox v4 from OVH's install pool on a dedicated server, it came updated to the latest version, proxmox-ve: 4.0-21. But when I installed from the downloaded ISO and updated it, I still have version proxmox-ve: 4.0-16!

What can be wrong with the installation from the download section, and how do I update my Proxmox from 4.0-16 to the latest (currently 4.0-21)?
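(For reference, the standard update path is apt; a minimal sketch, assuming a package repository such as pve-no-subscription is configured:)
Code:

apt-get update
apt-get dist-upgrade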

Thanks for the all help in advance...

Problems with Proxmox VE 4

It's been a couple of years since I last played with Proxmox. I thought I'd give it a try again and went about installing Proxmox VE 4 onto a Dell CS24 Server. The hardware is as follows:
  • Dual quad-core 2.5 GHz CPUs
  • 16 GB of RAM
  • 3x 1 TB SATA drives in RAID 1 with hot spare, configured using the included LSI RAID controller.


My problems thus far are as follows:
  • Web Interface seems to freeze up and be inaccessible from time to time. Rebooting tends to fix this.
  • Sometimes, upon rebooting the server, I get errors when trying to connect to Virtual Machines using the VNC client in the web browser. If I reboot again it seems to work.


Anyone have any thoughts for making this stable? Where would I go to find logs about these issues without the web GUI?
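(On the log question: the web interface is served by the pveproxy daemon, so its journal plus the general system log are the first places to look. A sketch, assuming a systemd-based PVE 4 install:)
Code:

# state and recent log of the web interface daemon
systemctl status pveproxy
journalctl -u pveproxy --since today
# general system log
tail -n 100 /var/log/syslog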

DRBD9 mesh network

Hello,

I've read in the DRBD9 manual that the connections between nodes must form a "full mesh".
So for a three-node cluster, we have to use two dedicated links on each node, directly connected to the other two nodes.

On the Proxmox DRBD9 wiki page, I only see one dedicated connection per node in the three-node example.

Did I miss something?

Thanks for your help

KRBD - rbd image as backup storage

Hi All!

I want to implement a store for VM and CT backups inside the Ceph storage of my cluster, as an rbd image: from a cluster node, create a filesystem on the rbd image via krbd, mount it in a local folder, and use that local folder as backup storage for the VMs.

For testing I created an rbd image (size 40 GiB):

rbd create backup-store --size 40960 --pool ceph_stor

rbd ls -l ceph_stor


NAME            SIZE  PARENT  FMT  PROT  LOCK
backup-store  40960M            1
vm-100-disk-1  5120M            2
...

I'm trying to determine which disk corresponds to the created image via fdisk -l,
but I don't see any drive of this 40 GiB size...

Any idea? :confused:
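(For what it's worth, fdisk only lists block devices that actually exist, and an rbd image only shows up as /dev/rbdX after it has been mapped. A minimal sketch of the intended workflow, assuming the krbd module is loaded and the image maps to /dev/rbd0:)
Code:

# map the image so a kernel block device appears
rbd map backup-store --pool ceph_stor
# create a filesystem and mount it as local backup storage
mkfs.ext4 /dev/rbd0
mkdir -p /mnt/backup-store
mount /dev/rbd0 /mnt/backup-store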

Can I update OVS to 2.4 on PVE 4?

Proxmox VE automatic shutdown of VMs and host

Hello Proxmox community,
I have (hopefully) an easy question. I have installed a Proxmox server with 5 VMs. I can start and shut down the VMs manually, and I think this will also work with some kind of automatic shutdown service. After that, I want to shut down the Proxmox server itself, but I don't know how to implement this. I'm not a Linux freak, so if anyone has an idea, please explain it in an easy way; I haven't found information that helps me fix this problem.
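(A hypothetical sketch of such a shutdown script, assuming clean guest shutdowns via qm followed by a host poweroff:)
Code:

#!/bin/bash
# shut down every VM cleanly, then power off the host
for vmid in $(qm list | awk 'NR>1 {print $1}'); do
    qm shutdown "$vmid" -timeout 120   # wait up to 120 s per guest
done
shutdown -h now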


thx


Oliver Kern

Proxmox VE doesn't delete old backups

I've set up an NFS external backup storage and set a maximum of 10 backups, but the backup directory still contains all the old backups.
Where can I see why?
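(The retention limit is the maxfiles setting on the storage definition, so that is the first thing to check:)
Code:

# look for the "maxfiles" line on the NFS storage entry
cat /etc/pve/storage.cfg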

Storage Issues on Proxmox 4

I am running into a very odd issue with a new iSCSI storage solution we purchased. I will do my best to describe it.

Storage Hardware
Nimble CS300 iSCSI 10G

Servers
HP DL 380 Gen9

I currently have two hosts configured and connected to the Nimble CS300. One of the hosts is an updated Proxmox 3.4 install and the other is an updated Proxmox 4 install.

If I set up a fresh VM on the Proxmox 4 node and try to install any OS, it fails because the installer cannot create the filesystem properly. It looks as if it's an issue with creating the journal.

I was originally thinking this was an issue with the storage, but after testing with Proxmox 3.4 I don't feel that is the case.

I can move the VM config over to the Proxmox 3.4 host and, boom, it installs with no issues. This leads me to believe this is an issue with Proxmox 4, but I'm very stumped as to what the cause could be.

OS installs I have tried, all with the same issue (moved over to the Proxmox 3.4 host, they install with no problem):
CentOS6
CentOS7
Ubuntu
Windows 7

Attaching some screenshots of a CentOS 6 install when it fails.

Failed_CMD.png
Failed_GUI.png

Backup solutions other than vzdump ?

Hello,

Do you know of another backup solution (free or not) besides vzdump to back up directly from the nodes/hypervisors?

Currently, we have a cluster of several dedicated servers connected to a SAN at 10 Gbit/s.

We have tried vzdump with local, NFS and Samba targets, but we get instabilities on the node when a backup starts, and we would like to try another way to proceed.

Thanks.

Kernel-Panic on KVM-Guest on Proxmox 3.4

Hello

Sorry for my bad English...

We have problems with two KVM machines on a Proxmox 3.4 host: both freeze every few days with a kernel panic (see below). Apparently the guest no longer has access to its hard drive?

I've found this:
Our current values in the VM are:
vm.dirty_background_bytes = 0
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_writeback_centisecs = 500
vm.dirty_ratio = 20
vm.dirty_background_ratio = 10

Could this help?
vm.dirty_ratio = 10
vm.dirty_background_ratio = 5
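(If we try it, applying and persisting the values inside the guest would look like this:)
Code:

# apply at runtime
sysctl -w vm.dirty_ratio=10
sysctl -w vm.dirty_background_ratio=5
# persist across reboots
echo "vm.dirty_ratio = 10" >> /etc/sysctl.conf
echo "vm.dirty_background_ratio = 5" >> /etc/sysctl.conf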

Can anybody help?

Quote:

Host:
2x Xeon E5-2630v3 2.4 GHz
Supermicro X10DRI
128 GB RAM
4x 960 GB SSD SM863 Raid-10 (System on /var/lib/vz)
2x 2000 GB SAS Ultrastar 7k4000 Raid-1 (Backup on /mnt/sdb1)


Adaptec ASR8805
Firmware 7.5-0 (32033)

Controller Cache for Raid-10 (SSD)
Read-Cache Status Off
Write-Cache Status Off (write-through)
Write-Cache Mode Disabled (write-through)

Controller Cache for Raid-1 (SAS)
Read-Cache Status On
Write-Cache Status Off (write-through)
Write-Cache Mode Disabled (write-through)

-------------------------------------------

# pveversion --verbose
proxmox-ve-2.6.32: 3.4-164 (running kernel: 2.6.32-41-pve)
pve-manager: 3.4-11 (running version: 3.4-11/6502936f)
pve-kernel-2.6.32-39-pve: 2.6.32-157
pve-kernel-2.6.32-41-pve: 2.6.32-164
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-3
pve-cluster: 3.0-19
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-11
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

-------------------------------------------

One of the problem machines:
KVM with Debian Jessie 8.2

bootdisk: virtio0
cores: 8
ide2: backup:iso/debian-8.2.0-amd64-netinst.iso,media=cdrom,size=247M
memory: 49152
name: xxx
net0: e1000=7E:FF:6B:8D:0B:88,bridge=vmbr0
net1: e1000=2E:64:01:8D:E6:C2,bridge=vmbr1
numa: 0
onboot: 1
ostype: l26
smbios1: uuid=158680c0-7d14-4f4a-93fe-b3f0cebbb0cf
sockets: 1
virtio0: local:107/vm-107-disk-1.qcow2,format=qcow2,size=900G

-------------------------------------------

Nov 20 10:23:33 vm107 kernel: [294480.772112] INFO: task kworker/u16:0:6 blocked for more than 120 seconds.
Nov 20 10:23:33 vm107 kernel: [294480.773123] Not tainted 3.16.0-4-amd64 #1
Nov 20 10:23:33 vm107 kernel: [294480.773661] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 10:23:33 vm107 kernel: [294480.774672] kworker/u16:0 D ffff880c0eb6a4a8 0 6 2 0x00000000
Nov 20 10:23:33 vm107 kernel: [294480.775354] Workqueue: writeback bdi_writeback_workfn (flush-254:0)
Nov 20 10:23:34 vm107 kernel: [294480.775984] ffff880c0eb6a050 0000000000000046 0000000000012f00 ffff880c0eb83fd8
Nov 20 10:23:34 vm107 kernel: [294480.777026] 0000000000012f00 ffff880c0eb6a050 ffff880c3fc537b0 ffff880c3ff80060
Nov 20 10:23:34 vm107 kernel: [294480.778040] ffff880c0eb837f0 0000000000000002 ffffffff811d6b10 ffff880c09f07ab8
Nov 20 10:23:34 vm107 kernel: [294480.779052] Call Trace:
Nov 20 10:23:34 vm107 kernel: [294480.779496] [<ffffffff811d6b10>] ? generic_block_bmap+0x50/0x50
Nov 20 10:23:34 vm107 kernel: [294480.780113] [<ffffffff8150e159>] ? io_schedule+0x99/0x120
Nov 20 10:23:34 vm107 kernel: [294480.780696] [<ffffffff811d6b1a>] ? sleep_on_buffer+0xa/0x10
Nov 20 10:23:34 vm107 kernel: [294480.781282] [<ffffffff8150e5e1>] ? __wait_on_bit_lock+0x41/0xa0
Nov 20 10:23:34 vm107 kernel: [294480.781879] [<ffffffff811d6b10>] ? generic_block_bmap+0x50/0x50
Nov 20 10:23:34 vm107 kernel: [294480.782480] [<ffffffff8150e6b7>] ? out_of_line_wait_on_bit_lock+0x77/0x90
Nov 20 10:23:34 vm107 kernel: [294480.783123] [<ffffffff810a7a70>] ? autoremove_wake_function+0x30/0x30
Nov 20 10:23:34 vm107 kernel: [294480.783771] [<ffffffffa0141a20>] ? do_get_write_access+0x260/0x4e0 [jbd2]
Nov 20 10:23:34 vm107 kernel: [294480.784434] [<ffffffffa0141cc2>] ? jbd2_journal_get_write_access+0x22/0x40 [jbd2]
Nov 20 10:23:34 vm107 kernel: [294480.785471] [<ffffffffa018b066>] ? __ext4_journal_get_write_access+0x36/0x80 [ext4]
Nov 20 10:23:34 vm107 kernel: [294480.786478] [<ffffffffa0191a7a>] ? ext4_mb_mark_diskspace_used+0x6a/0x4c0 [ext4]
Nov 20 10:23:34 vm107 kernel: [294480.787472] [<ffffffffa018c986>] ? ext4_mb_use_preallocated+0x256/0x270 [ext4]
Nov 20 10:23:34 vm107 kernel: [294480.788479] [<ffffffffa018cf33>] ? ext4_mb_initialize_context+0x73/0x190 [ext4]
Nov 20 10:23:34 vm107 kernel: [294480.789474] [<ffffffffa01931d2>] ? ext4_mb_new_blocks+0x292/0x4f0 [ext4]
Nov 20 10:23:34 vm107 kernel: [294480.790112] [<ffffffffa0188923>] ? ext4_ext_map_blocks+0x653/0x10a0 [ext4]
Nov 20 10:23:34 vm107 kernel: [294480.790756] [<ffffffffa015e99c>] ? ext4_map_blocks+0x15c/0x530 [ext4]
Nov 20 10:23:34 vm107 kernel: [294480.791382] [<ffffffffa0161b86>] ? ext4_writepages+0x606/0xd00 [ext4]
Nov 20 10:23:34 vm107 kernel: [294480.792027] [<ffffffff811ce539>] ? __writeback_single_inode+0x39/0x220
Nov 20 10:23:34 vm107 kernel: [294480.792657] [<ffffffff811cf1a4>] ? writeback_sb_inodes+0x1a4/0x3e0
Nov 20 10:23:34 vm107 kernel: [294480.793268] [<ffffffff811cf476>] ? __writeback_inodes_wb+0x96/0xc0
Nov 20 10:23:34 vm107 kernel: [294480.793877] [<ffffffff811cf6e3>] ? wb_writeback+0x243/0x2d0
Nov 20 10:23:34 vm107 kernel: [294480.794463] [<ffffffff811d193c>] ? bdi_writeback_workfn+0x1bc/0x420
Nov 20 10:23:34 vm107 kernel: [294480.795099] [<ffffffff81081662>] ? process_one_work+0x172/0x420
Nov 20 10:23:34 vm107 kernel: [294480.795701] [<ffffffff81081cf3>] ? worker_thread+0x113/0x4f0
Nov 20 10:23:34 vm107 kernel: [294480.796305] [<ffffffff81081be0>] ? rescuer_thread+0x2d0/0x2d0
Nov 20 10:23:34 vm107 kernel: [294480.796900] [<ffffffff81087f7d>] ? kthread+0xbd/0xe0
Nov 20 10:23:34 vm107 kernel: [294480.797458] [<ffffffff81087ec0>] ? kthread_create_on_node+0x180/0x180
Nov 20 10:23:34 vm107 kernel: [294480.798079] [<ffffffff81511618>] ? ret_from_fork+0x58/0x90
Nov 20 10:23:34 vm107 kernel: [294480.798662] [<ffffffff81087ec0>] ? kthread_create_on_node+0x180/0x180
Nov 20 10:23:34 vm107 kernel: [294480.799301] INFO: task jbd2/vda1-8:156 blocked for more than 120 seconds.
Nov 20 10:23:34 vm107 kernel: [294480.799929] Not tainted 3.16.0-4-amd64 #1
Nov 20 10:23:34 vm107 kernel: [294480.800472] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 10:23:34 vm107 kernel: [294480.801480] jbd2/vda1-8 D ffff880c0a3cce38 0 156 2 0x00000000
Nov 20 10:23:34 vm107 kernel: [294480.802156] ffff880c0a3cc9e0 0000000000000046 0000000000012f00 ffff880c0e697fd8
Nov 20 10:23:34 vm107 kernel: [294480.803168] 0000000000012f00 ffff880c0a3cc9e0 ffff880c3fc137b0 ffff880c3ff861f8
Nov 20 10:23:34 vm107 kernel: [294480.804195] 0000000000000002 ffffffff811d6b10 ffff880c0e697c80 ffff880c0a1d2398
Nov 20 10:23:34 vm107 kernel: [294480.805219] Call Trace:
Nov 20 10:23:34 vm107 kernel: [294480.805657] [<ffffffff811d6b10>] ? generic_block_bmap+0x50/0x50
Nov 20 10:23:34 vm107 kernel: [294480.806260] [<ffffffff8150e159>] ? io_schedule+0x99/0x120
Nov 20 10:23:34 vm107 kernel: [294480.806828] [<ffffffff811d6b1a>] ? sleep_on_buffer+0xa/0x10
Nov 20 10:23:34 vm107 kernel: [294480.807409] [<ffffffff8150e4dc>] ? __wait_on_bit+0x5c/0x90
Nov 20 10:23:34 vm107 kernel: [294480.807980] [<ffffffff811d6b10>] ? generic_block_bmap+0x50/0x50
Nov 20 10:23:34 vm107 kernel: [294480.808597] [<ffffffff8150e587>] ? out_of_line_wait_on_bit+0x77/0x90
Nov 20 10:23:34 vm107 kernel: [294480.809222] [<ffffffff810a7a70>] ? autoremove_wake_function+0x30/0x30
Nov 20 10:23:34 vm107 kernel: [294480.809846] [<ffffffffa014450e>] ? jbd2_journal_commit_transaction+0x175e/0x1950 [jbd2]
Nov 20 10:23:34 vm107 kernel: [294480.810862] [<ffffffff810a2b01>] ? pick_next_task_fair+0x6e1/0x820
Nov 20 10:23:34 vm107 kernel: [294480.811477] [<ffffffffa0147bc2>] ? kjournald2+0xb2/0x240 [jbd2]
Nov 20 10:23:34 vm107 kernel: [294480.812087] [<ffffffff810a7a40>] ? prepare_to_wait_event+0xf0/0xf0
Nov 20 10:23:34 vm107 kernel: [294480.812700] [<ffffffffa0147b10>] ? commit_timeout+0x10/0x10 [jbd2]
Nov 20 10:23:34 vm107 kernel: [294480.813313] [<ffffffff81087f7d>] ? kthread+0xbd/0xe0
Nov 20 10:23:34 vm107 kernel: [294480.813867] [<ffffffff81087ec0>] ? kthread_create_on_node+0x180/0x180
Nov 20 10:23:34 vm107 kernel: [294480.814491] [<ffffffff81511618>] ? ret_from_fork+0x58/0x90
Nov 20 10:23:34 vm107 kernel: [294480.815082] [<ffffffff81087ec0>] ? kthread_create_on_node+0x180/0x180
Nov 20 10:23:34 vm107 kernel: [294480.815762] INFO: task mysqld:18992 blocked for more than 120 seconds.
Nov 20 10:23:34 vm107 kernel: [294480.816401] Not tainted 3.16.0-4-amd64 #1
Nov 20 10:23:34 vm107 kernel: [294480.816928] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 10:23:34 vm107 kernel: [294480.817919] mysqld D ffff880c0a8859c8 0 18992 691 0x00000000
Nov 20 10:23:34 vm107 kernel: [294480.818565] ffff880c0a885570 0000000000000086 0000000000012f00 ffff880a7d183fd8
Nov 20 10:23:34 vm107 kernel: [294480.819566] 0000000000012f00 ffff880c0a885570 ffff880c0a1d2000 0000000000128138
Nov 20 10:23:34 vm107 kernel: [294480.820592] ffff880c0a1d2088 ffff880c0a1d2024 ffff880a7d183ed0 ffff880c0a1d20a0
Nov 20 10:23:34 vm107 kernel: [294480.821587] Call Trace:
Nov 20 10:23:34 vm107 kernel: [294480.822021] [<ffffffffa0147605>] ? jbd2_log_wait_commit+0x95/0x100 [jbd2]
Nov 20 10:23:34 vm107 kernel: [294480.822651] [<ffffffff810a7a40>] ? prepare_to_wait_event+0xf0/0xf0
Nov 20 10:23:34 vm107 kernel: [294480.823264] [<ffffffffa0159770>] ? ext4_sync_file+0x280/0x310 [ext4]
Nov 20 10:23:34 vm107 kernel: [294480.823875] [<ffffffff811d53fb>] ? do_fsync+0x4b/0x70
Nov 20 10:23:34 vm107 kernel: [294480.824445] [<ffffffff811d566c>] ? SyS_fsync+0xc/0x10
Nov 20 10:23:34 vm107 kernel: [294480.825017] [<ffffffff815116cd>] ? system_call_fast_compare_end+0x10/0x15
Nov 20 10:23:34 vm107 kernel: [294480.825649] INFO: task master:1599 blocked for more than 120 seconds.
Nov 20 10:23:34 vm107 kernel: [294480.826261] Not tainted 3.16.0-4-amd64 #1
Nov 20 10:23:34 vm107 kernel: [294480.826778] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 10:23:34 vm107 kernel: [294480.827766] master D ffff880c0b51e7e8 0 1599 1 0x00000004
Nov 20 10:23:34 vm107 kernel: [294480.828433] ffff880c0b51e390 0000000000000082 0000000000012f00 ffff880c07c77fd8
Nov 20 10:23:34 vm107 kernel: [294480.829431] 0000000000012f00 ffff880c0b51e390 ffff880c3fd937b0 ffff880c3ff8f060
Nov 20 10:23:34 vm107 kernel: [294480.830425] ffff880c07c77be0 0000000000000002 ffffffff811d6b10 ffff880c09d2e9e8
Nov 20 10:23:34 vm107 kernel: [294480.831421] Call Trace:
Nov 20 10:23:34 vm107 kernel: [294480.831850] [<ffffffff811d6b10>] ? generic_block_bmap+0x50/0x50
Nov 20 10:23:34 vm107 kernel: [294480.832481] [<ffffffff8150e159>] ? io_schedule+0x99/0x120
Nov 20 10:23:34 vm107 kernel: [294480.833044] [<ffffffff811d6b1a>] ? sleep_on_buffer+0xa/0x10
Nov 20 10:23:34 vm107 kernel: [294480.833618] [<ffffffff8150e5e1>] ? __wait_on_bit_lock+0x41/0xa0
Nov 20 10:23:34 vm107 kernel: [294480.834210] [<ffffffff811d6b10>] ? generic_block_bmap+0x50/0x50
Nov 20 10:23:34 vm107 kernel: [294480.834814] [<ffffffff8150e6b7>] ? out_of_line_wait_on_bit_lock+0x77/0x90
Nov 20 10:23:34 vm107 kernel: [294480.835444] [<ffffffff810a7a70>] ? autoremove_wake_function+0x30/0x30
Nov 20 10:23:34 vm107 kernel: [294480.836070] [<ffffffffa0141a20>] ? do_get_write_access+0x260/0x4e0 [jbd2]
Nov 20 10:23:34 vm107 kernel: [294480.836701] [<ffffffff811d830a>] ? __getblk+0x2a/0x2d0
Nov 20 10:23:34 vm107 kernel: [294480.837256] [<ffffffffa0141cc2>] ? jbd2_journal_get_write_access+0x22/0x40 [jbd2]
Nov 20 10:23:34 vm107 kernel: [294480.842481] [<ffffffffa018b066>] ? __ext4_journal_get_write_access+0x36/0x80 [ext4]
Nov 20 10:23:34 vm107 kernel: [294480.843475] [<ffffffffa0161388>] ? ext4_reserve_inode_write+0x68/0x90 [ext4]
Nov 20 10:23:34 vm107 kernel: [294480.844449] [<ffffffffa01645cb>] ? ext4_dirty_inode+0x3b/0x60 [ext4]
Nov 20 10:23:34 vm107 kernel: [294480.845078] [<ffffffffa01613ef>] ? ext4_mark_inode_dirty+0x3f/0x1d0 [ext4]
Nov 20 10:23:34 vm107 kernel: [294480.845726] [<ffffffffa01645cb>] ? ext4_dirty_inode+0x3b/0x60 [ext4]
Nov 20 10:23:34 vm107 kernel: [294480.846341] [<ffffffff811cebc2>] ? __mark_inode_dirty+0x172/0x270
Nov 20 10:23:34 vm107 kernel: [294480.846939] [<ffffffff811c1771>] ? update_time+0x81/0xc0
Nov 20 10:23:34 vm107 kernel: [294480.847507] [<ffffffff8106b3a2>] ? current_fs_time+0x12/0x60
Nov 20 10:23:34 vm107 kernel: [294480.848099] [<ffffffff811c1970>] ? file_update_time+0x80/0xd0
Nov 20 10:23:34 vm107 kernel: [294480.848689] [<ffffffff811b0005>] ? pipe_write+0x3b5/0x460
Nov 20 10:23:34 vm107 kernel: [294480.849260] [<ffffffff811a7ad4>] ? new_sync_write+0x74/0xa0
Nov 20 10:23:34 vm107 kernel: [294480.849834] [<ffffffff811a8212>] ? vfs_write+0xb2/0x1f0
Nov 20 10:23:34 vm107 kernel: [294480.850394] [<ffffffff811a8d52>] ? SyS_write+0x42/0xa0
Nov 20 10:23:34 vm107 kernel: [294480.850952] [<ffffffff811bc68d>] ? SyS_poll+0x5d/0xf0
Nov 20 10:23:34 vm107 kernel: [294480.851507] [<ffffffff815116cd>] ? system_call_fast_compare_end+0x10/0x15
Nov 20 10:23:34 vm107 kernel: [294480.852161] INFO: task apache2:5693 blocked for more than 120 seconds.
Nov 20 10:23:34 vm107 kernel: [294480.852771] Not tainted 3.16.0-4-amd64 #1
Nov 20 10:23:34 vm107 kernel: [294480.853294] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 10:23:34 vm107 kernel: [294480.854277] apache2 D ffff880036c5cdb8 0 5693 1032 0x00000000
Nov 20 10:23:34 vm107 kernel: [294480.854943] ffff880036c5c960 0000000000000082 0000000000012f00 ffff880b58dd3fd8
Nov 20 10:23:34 vm107 kernel: [294480.855932] 0000000000012f00 ffff880036c5c960 ffff880c0a36d748 ffff880b58dd3f20
Nov 20 10:23:34 vm107 kernel: [294480.856937] ffff880c0a36d74c ffff880036c5c960 00000000ffffffff ffff880c0a36d750
Nov 20 10:23:34 vm107 kernel: [294480.857928] Call Trace:
Nov 20 10:23:34 vm107 kernel: [294480.858365] [<ffffffff8150e2a5>] ? schedule_preempt_disabled+0x25/0x70
Nov 20 10:23:34 vm107 kernel: [294480.858977] [<ffffffff8150fd53>] ? __mutex_lock_slowpath+0xd3/0x1c0
Nov 20 10:23:34 vm107 kernel: [294480.859591] [<ffffffff8150fe5b>] ? mutex_lock+0x1b/0x2a
Nov 20 10:23:34 vm107 kernel: [294480.860173] [<ffffffff811c43ed>] ? __fdget_pos+0x3d/0x50
Nov 20 10:23:34 vm107 kernel: [294480.860737] [<ffffffff811a8d2a>] ? SyS_write+0x1a/0xa0
Nov 20 10:23:34 vm107 kernel: [294480.861294] [<ffffffff8106b59e>] ? SyS_gettimeofday+0x2e/0x80
Nov 20 10:23:34 vm107 kernel: [294480.861871] [<ffffffff815116cd>] ? system_call_fast_compare_end+0x10/0x15
Nov 20 10:23:34 vm107 kernel: [294480.862525] INFO: task apache2:5707 blocked for more than 120 seconds.
Nov 20 10:23:34 vm107 kernel: [294480.863135] Not tainted 3.16.0-4-amd64 #1
Nov 20 10:23:34 vm107 kernel: [294480.863655] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 10:23:34 vm107 kernel: [294480.864689] apache2 D ffff880c0a1f5138 0 5707 1032 0x00000000
Nov 20 10:23:34 vm107 kernel: [294480.865360] ffff880c0a1f4ce0 0000000000000082 0000000000012f00 ffff880036e1ffd8
Nov 20 10:23:34 vm107 kernel: [294480.866350] 0000000000012f00 ffff880c0a1f4ce0 ffff880c0a36d748 ffff880036e1ff20
Nov 20 10:23:34 vm107 kernel: [294480.867340] ffff880c0a36d74c ffff880c0a1f4ce0 00000000ffffffff ffff880c0a36d750
Nov 20 10:23:34 vm107 kernel: [294480.868340] Call Trace:
Nov 20 10:23:34 vm107 kernel: [294480.868769] [<ffffffff8150e2a5>] ? schedule_preempt_disabled+0x25/0x70
Nov 20 10:23:34 vm107 kernel: [294480.869384] [<ffffffff8150fd53>] ? __mutex_lock_slowpath+0xd3/0x1c0
Nov 20 10:23:34 vm107 kernel: [294480.869990] [<ffffffff8150fe5b>] ? mutex_lock+0x1b/0x2a
Nov 20 10:23:34 vm107 kernel: [294480.870550] [<ffffffff811c43ed>] ? __fdget_pos+0x3d/0x50
Nov 20 10:23:34 vm107 kernel: [294480.871112] [<ffffffff811a8d2a>] ? SyS_write+0x1a/0xa0
Nov 20 10:23:34 vm107 kernel: [294480.871666] [<ffffffff8106b59e>] ? SyS_gettimeofday+0x2e/0x80
Nov 20 10:23:34 vm107 kernel: [294480.872291] [<ffffffff815116cd>] ? system_call_fast_compare_end+0x10/0x15
Nov 20 10:23:34 vm107 kernel: [294480.872916] INFO: task apache2:5711 blocked for more than 120 seconds.
Nov 20 10:23:34 vm107 kernel: [294480.873531] Not tainted 3.16.0-4-amd64 #1
Nov 20 10:23:34 vm107 kernel: [294480.874049] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 10:23:34 vm107 kernel: [294480.875055] apache2 D ffff880c0b15a768 0 5711 1032 0x00000000
Nov 20 10:23:34 vm107 kernel: [294480.875701] ffff880c0b15a310 0000000000000082 0000000000012f00 ffff880036cfffd8
Nov 20 10:23:34 vm107 kernel: [294480.876707] 0000000000012f00 ffff880c0b15a310 ffff880c0a36d748 ffff880036cfff20
Nov 20 10:23:34 vm107 kernel: [294480.877701] ffff880c0a36d74c ffff880c0b15a310 00000000ffffffff ffff880c0a36d750
Nov 20 10:23:34 vm107 kernel: [294480.878686] Call Trace:
Nov 20 10:23:34 vm107 kernel: [294480.879114] [<ffffffff8150e2a5>] ? schedule_preempt_disabled+0x25/0x70
Nov 20 10:23:34 vm107 kernel: [294480.879727] [<ffffffff8150fd53>] ? __mutex_lock_slowpath+0xd3/0x1c0
Nov 20 10:23:34 vm107 kernel: [294480.880344] [<ffffffff8150fe5b>] ? mutex_lock+0x1b/0x2a
Nov 20 10:23:34 vm107 kernel: [294480.880899] [<ffffffff811c43ed>] ? __fdget_pos+0x3d/0x50
Nov 20 10:23:34 vm107 kernel: [294480.881458] [<ffffffff811a8d2a>] ? SyS_write+0x1a/0xa0
Nov 20 10:23:34 vm107 kernel: [294480.882008] [<ffffffff8106b59e>] ? SyS_gettimeofday+0x2e/0x80
Nov 20 10:23:34 vm107 kernel: [294480.882587] [<ffffffff815116cd>] ? system_call_fast_compare_end+0x10/0x15
Nov 20 10:23:34 vm107 kernel: [294480.883214] INFO: task apache2:5723 blocked for more than 120 seconds.
Nov 20 10:23:34 vm107 kernel: [294480.883819] Not tainted 3.16.0-4-amd64 #1
Nov 20 10:23:34 vm107 kernel: [294480.884355] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 10:23:34 vm107 kernel: [294480.885354] apache2 D ffff880c0aa064a8 0 5723 1032 0x00000000
Nov 20 10:23:34 vm107 kernel: [294480.885998] ffff880c0aa06050 0000000000000082 0000000000012f00 ffff8800bba7ffd8
Nov 20 10:23:34 vm107 kernel: [294480.886985] 0000000000012f00 ffff880c0aa06050 ffff880c0a36d748 ffff8800bba7ff20
Nov 20 10:23:34 vm107 kernel: [294480.887971] ffff880c0a36d74c ffff880c0aa06050 00000000ffffffff ffff880c0a36d750
Nov 20 10:23:34 vm107 kernel: [294480.888975] Call Trace:
Nov 20 10:23:34 vm107 kernel: [294480.889404] [<ffffffff8150e2a5>] ? schedule_preempt_disabled+0x25/0x70
Nov 20 10:23:34 vm107 kernel: [294480.890015] [<ffffffff8150fd53>] ? __mutex_lock_slowpath+0xd3/0x1c0
Nov 20 10:23:34 vm107 kernel: [294480.890615] [<ffffffff8150fe5b>] ? mutex_lock+0x1b/0x2a
Nov 20 10:23:34 vm107 kernel: [294480.891171] [<ffffffff811c43ed>] ? __fdget_pos+0x3d/0x50
Nov 20 10:23:34 vm107 kernel: [294480.891727] [<ffffffff811a8d2a>] ? SyS_write+0x1a/0xa0
Nov 20 10:23:34 vm107 kernel: [294480.892296] [<ffffffff8106b59e>] ? SyS_gettimeofday+0x2e/0x80
Nov 20 10:23:34 vm107 kernel: [294480.892876] [<ffffffff815116cd>] ? system_call_fast_compare_end+0x10/0x15
Nov 20 10:23:34 vm107 kernel: [294480.893504] INFO: task apache2:5734 blocked for more than 120 seconds.
Nov 20 10:23:34 vm107 kernel: [294480.894327] Not tainted 3.16.0-4-amd64 #1
Nov 20 10:23:34 vm107 kernel: [294480.894903] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 10:23:34 vm107 kernel: [294480.895915] apache2 D ffff880c08007808 0 5734 1032 0x00000000
Nov 20 10:23:34 vm107 kernel: [294480.896593] ffff880c080073b0 0000000000000082 0000000000012f00 ffff8800bba8bfd8
Nov 20 10:23:34 vm107 kernel: [294480.897593] 0000000000012f00 ffff880c080073b0 ffff880c0a36d748 ffff8800bba8bf20
Nov 20 10:23:34 vm107 kernel: [294480.898593] ffff880c0a36d74c ffff880c080073b0 00000000ffffffff ffff880c0a36d750
Nov 20 10:23:34 vm107 kernel: [294480.899591] Call Trace:
Nov 20 10:23:34 vm107 kernel: [294480.900039] [<ffffffff8150e2a5>] ? schedule_preempt_disabled+0x25/0x70
Nov 20 10:23:34 vm107 kernel: [294480.900664] [<ffffffff8150fd53>] ? __mutex_lock_slowpath+0xd3/0x1c0
Nov 20 10:23:34 vm107 kernel: [294480.901281] [<ffffffff8150fe5b>] ? mutex_lock+0x1b/0x2a
Nov 20 10:23:34 vm107 kernel: [294480.901842] [<ffffffff811c43ed>] ? __fdget_pos+0x3d/0x50
Nov 20 10:23:34 vm107 kernel: [294480.902408] [<ffffffff811a8d2a>] ? SyS_write+0x1a/0xa0
Nov 20 10:23:34 vm107 kernel: [294480.902967] [<ffffffff8106b59e>] ? SyS_gettimeofday+0x2e/0x80
Nov 20 10:23:34 vm107 kernel: [294480.903551] [<ffffffff815116cd>] ? system_call_fast_compare_end+0x10/0x15
Nov 20 10:23:34 vm107 kernel: [294480.904203] INFO: task apache2:5704 blocked for more than 120 seconds.
Nov 20 10:23:34 vm107 kernel: [294480.904819] Not tainted 3.16.0-4-amd64 #1
Nov 20 10:23:34 vm107 kernel: [294480.905363] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 10:23:34 vm107 kernel: [294480.906356] apache2 D ffff880c0d77b078 0 5704 1032 0x00000000
Nov 20 10:23:34 vm107 kernel: [294480.907007] ffff880c0d77ac20 0000000000000086 0000000000012f00 ffff8800bb9fffd8
Nov 20 10:23:34 vm107 kernel: [294480.908022] 0000000000012f00 ffff880c0d77ac20 ffff880c0a36d748 ffff8800bb9fff20
Nov 20 10:23:34 vm107 kernel: [294480.909022] ffff880c0a36d74c ffff880c0d77ac20 00000000ffffffff ffff880c0a36d750
Nov 20 10:23:34 vm107 kernel: [294480.910022] Call Trace:
Nov 20 10:23:34 vm107 kernel: [294480.910457] [<ffffffff8150e2a5>] ? schedule_preempt_disabled+0x25/0x70
Nov 20 10:23:34 vm107 kernel: [294480.911076] [<ffffffff8150fd53>] ? __mutex_lock_slowpath+0xd3/0x1c0
Nov 20 10:23:34 vm107 kernel: [294480.911687] [<ffffffff8150fe5b>] ? mutex_lock+0x1b/0x2a
Nov 20 10:23:34 vm107 kernel: [294480.912267] [<ffffffff811c43ed>] ? __fdget_pos+0x3d/0x50
Nov 20 10:23:34 vm107 kernel: [294480.912833] [<ffffffff811a8d2a>] ? SyS_write+0x1a/0xa0
Nov 20 10:23:34 vm107 kernel: [294480.913393] [<ffffffff8106b59e>] ? SyS_gettimeofday+0x2e/0x80
Nov 20 10:23:34 vm107 kernel: [294480.913977] [<ffffffff815116cd>] ? system_call_fast_compare_end+0x10/0x15

Slow performance with ZFS?

Hello. I'm having some performance issues with ZFS 0.6.4 on Proxmox 3.4.
I'm running two WD Green 2 TB drives in a RAID 1 mirror. When I run hdparm inside a VM, I'm getting anywhere from 40 to 50 MB/s.
My zfs_arc_max is set to 4 GB.
Is this considered normal?
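(For reference, the ARC cap mentioned above is normally set as a ZFS module option, in bytes; assuming the 4 GB value:)
Code:

# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=4294967296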

Help starting with PVE-zsync

Hello, I'm testing PVE-zsync on two servers.

Src server (test1):
192.168.75.10

Dst server (test2):
192.168.75.11

There is only one KVM guest on test1.

I get this error:
------------------------------------------------------------
root@test1:~/.ssh# qm list
VMID  NAME        STATUS   MEM(MB)  BOOTDISK(GB)  PID
 600  testDebian  running     2048         20.00  2471

root@test1:~/.ssh# pve-zsync sync --source 600 --dest 192.168.75.11:rpool --verbose --maxsnap 2
disk is not on ZFS Storage

------------------------------------------------------------

On test2

root@test2:~/.ssh# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool            16.9G   882G    96K  /rpool
rpool/ROOT        942M   882G    96K  /rpool/ROOT
rpool/ROOT/pve-1  942M   882G   942M  /
rpool/STORAGE      96K   882G    96K  /rpool/STORAGE
rpool/swap       15.9G   898G    64K  -

------------------------------------------------------------

What does it mean: "disk is not on ZFS Storage"?


I have done this test:

pve-zsync sync --source rpool/ROOT/pve-1 /var/lib/vz/images/600/vm-600-disk-1.qcow2 --dest 192.168.75.11:rpool/STORAGE --verbose --maxsnap 2

but it copies the entire root dir /

------------------------------------------------
root@test2:/rpool/STORAGE# cd pve-1/
root@test2:/rpool/STORAGE/pve-1# ls -la
total 172
drwxr-xr-x 24 root root 24 Nov 20 18:21 .
drwxr-xr-x 3 root root 3 Nov 20 18:31 ..
drwxr-xr-x 2 root root 140 Nov 20 18:18 bin
drwxr-xr-x 4 root root 14 Nov 20 16:48 boot
drwxr-xr-x 11 root root 1695 Oct 6 09:44 dev
drwxr-xr-x 96 root root 188 Nov 20 18:18 etc
drwx------ 2 root root 7 Nov 20 18:20 .gnupg
drwxr-xr-x 2 root root 2 Aug 26 18:31 home
drwxr-xr-x 18 root root 46 Nov 20 16:45 lib
drwxr-xr-x 2 root root 3 Oct 6 09:43 lib64
drwxr-xr-x 2 root root 2 Oct 6 09:43 media
drwxr-xr-x 2 root root 2 Oct 6 09:43 mnt
drwxr-xr-x 2 root root 2 Oct 6 09:43 opt
drwxr-xr-x 2 root root 2 Aug 26 18:31 proc
drwx------ 5 root root 11 Nov 20 17:48 root
drwxr-xr-x 2 root root 2 Nov 20 18:25 rpool
drwxr-xr-x 6 root root 7 Nov 20 18:20 run
drwxr-xr-x 2 root root 216 Nov 20 16:48 sbin
drwxr-xr-x 2 root root 2 Oct 6 09:43 srv
drwxr-xr-x 3 root root 3 Nov 20 18:21 STORAGE
drwxr-xr-x 2 root root 2 Apr 6 2015 sys
drwxrwxrwt 9 root root 9 Nov 20 18:26 tmp
drwxr-xr-x 10 root root 10 Oct 6 09:43 usr
drwxr-xr-x 11 root root 13 Oct 6 09:43 var
-------------------------------------------------------------------------
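(My understanding so far: pve-zsync replicates ZFS snapshots, so with --source <vmid> the VM disk must live on ZFS storage; a qcow2 under /var/lib/vz is directory storage, hence the error. And rpool/ROOT/pve-1 is the root dataset, which is why the whole / was copied. A dataset-level sync of a dedicated dataset would look like this:)
Code:

# sync a dedicated dataset instead of the root dataset
pve-zsync sync --source rpool/STORAGE --dest 192.168.75.11:rpool --verbose --maxsnap 2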



Thanks!!

Proxmox 4 CEPH won't start after cold reboot

I have a 3-node PVE 4.0 cluster running Ceph with 3 mons and 18 OSDs (6 per host). I had a situation where all three hosts power-cycled at the same time. When they came back, none of the Ceph processes would start correctly. Can someone point me to a procedure to get the cluster started correctly, given that I don't think anything was actually corrupted?
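(For reference, a first diagnostic step would be checking whether the monitors reach quorum, then starting the services on each node; a sketch:)
Code:

# monitors must form a quorum before the OSDs can peer
ceph -s
# start the ceph services on each node
pveceph start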

Thanks
Dan

GUIDE: Proxmox VE and pfSense (dual NIC)

I wrote a guide on my blog on this subject and wanted to share it with the Proxmox community, but I'm not allowed to post links.
I could paste it all here, but it contains links (to other resources) and images I'm not allowed to share either.

I'm not sure how to share it, but it's on my webpage kaven.no

which raid

For a small budget.

This is for a single host.

I've been greatly disappointed in ZFS RAID 10; the FSYNCS/SECOND just doesn't cut it, at all!

An existing Proxmox VM host with mdadm RAID 1 = 999, which beats the crap out of it (tested while guests were running, one of them a mail server).

old perc5 raid card= 3964 on anther machine.

While I'm aware I could add 2 SSDs to the machine to improve performance, that just blows the whole idea of software RAID.

#####################################
So, while I have a strong preference for software RAID, I've never tried RAID 10 with mdadm, so I don't know what I'm getting into.

As for hardware RAID, I'll get a command line I'm not familiar with (which kinda sucks in a pinch), though I'm aware you can run a Windows VM and monitor the RAID from that... sigh..


I'm a bit sour at the moment as I just found out the machine I built has to be rebuilt.

I'd really like some suggestions, or just some patting on the back and telling me it'll all work out some day.. ha

Add ZFS log and cache SSDs: what do I have to buy and do?

Hello,

I would like to add cache and log SSDs to my existing pool: http://pve.proxmox.com/wiki/Storage:..._existing_pool

I don't use the root pool for data or VMs; I use an extra pool:

pool: v-machines
state: ONLINE
scan: resilvered 1.09T in 4h45m with 0 errors on Sat May 23 02:48:52 2015
config:


NAME                                            STATE   READ WRITE CKSUM
v-machines                                      ONLINE     0     0     0
  mirror-0                                      ONLINE     0     0     0
    ata-WDC_WD2001FFSX-68JNUN0_WD-WMC5C0D0KRWP  ONLINE     0     0     0
    ata-WDC_WD20EARX-00ZUDB0_WD-WCC1H0343538    ONLINE     0     0     0
  mirror-1                                      ONLINE     0     0     0
    ata-WDC_WD2001FFSX-68JNUN0_WD-WMC5C0D688XW  ONLINE     0     0     0
    ata-WDC_WD2001FFSX-68JNUN0_WD-WMC5C0D63WM0  ONLINE     0     0     0
  mirror-2                                      ONLINE     0     0     0
    ata-WDC_WD20EARX-00ZUDB0_WD-WCC1H0381420    ONLINE     0     0     0
    ata-WDC_WD20EURS-63S48Y0_WD-WMAZA9381012    ONLINE     0     0     0

So I have to make two partitions on each SSD, 50/50. But what kind of SSD should I buy? And is it a good idea to mirror the cache and log, in case an SSD fails?
Would it be OK to buy two of the 60 GB Silicon Power S60 (550 MB/s read, 500 MB/s write)?
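(A sketch of what adding them could look like, with hypothetical partition paths; note that a failed cache device is harmless to the pool, so only the log really benefits from a mirror:)
Code:

# mirrored log (SLOG) from the first partition of each SSD
zpool add v-machines log mirror /dev/disk/by-id/ata-SSD_A-part1 /dev/disk/by-id/ata-SSD_B-part1
# cache (L2ARC) devices are striped and cannot be mirrored
zpool add v-machines cache /dev/disk/by-id/ata-SSD_A-part2 /dev/disk/by-id/ata-SSD_B-part2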

pve-manager/4.0-57/cc7c2b53 (running kernel: 4.2.3-2-pve)

Thanks for the information.

Best Regards
Fireon

Manual Backup - strange for me

When I set a maximum number of backups per VM on my backup storage, I get a message that the maximum number of backups already exists on the storage, and I have to remove a backup before I can run a manual backup for my developers. I think that's a little strange behaviour.

First install - Welcome to emergency mode!

Hi all,

First-time install of Proxmox on a Lenovo ThinkServer TS140. After a successful install I rebooted, and now I'm presented with the screen below. I can't have a server that gets stuck like this after a reboot and needs a monitor/keyboard connected in order to continue, so I hope someone can help me overcome this.
All I can think of is that I installed it on an SSD that previously had Win7 on it, and I did not format it before installing Proxmox. As with any Linux distribution, I assumed it would format the SSD before installing, but I never saw anything during installation indicating that it was formatting.

TIA

Capture4.JPGCapture5.JPG