Channel: Proxmox Support Forum

hubic in proxmox environment

Hi,
I want to use hubiC to back up the Proxmox backup folder.
Right now, every time I reboot the server, I have to SSH into it and start the service manually, as follows:


  • Run dbus-launch --sh-syntax
  • From its output, set the bus address in the console: export DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-....
  • Start the service with the command hubic login my@email.forexample
  • Enter the password at the end


Is there a way to write an init.d script so the service starts automatically?
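Roughly what I have in mind is something like this (just a sketch; I assume the password/credentials would still have to be handled separately, e.g. interactively or from a stored session):

Code:

#!/bin/sh
### BEGIN INIT INFO
# Provides:          hubic
# Required-Start:    $remote_fs $network
# Required-Stop:     $remote_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start a D-Bus session and log in to hubiC
### END INIT INFO

# start a private D-Bus session and export its address for the hubic client
eval "$(dbus-launch --sh-syntax)"

# log in (my@email.forexample is the placeholder from above)
hubic login my@email.forexample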
Thank you for your reply.

Emanuele Bruno.

fdisk -l does not show second HDD

I installed Proxmox VE 4 on the first HDD; my system has two HDDs. When I run

fdisk -l

I only see the output shown in the attached screenshot.

The second HDD was partitioned with a FreeBSD filesystem. I want to use the second HDD for VMs, because there is not enough space on the first HDD.
What should I do?
Attached Files
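Would something along these lines be the right direction (just a sketch; it assumes the FreeBSD data can be thrown away, that the second disk is /dev/sdb, and "vmdata" is only an example name)?

Code:

# WARNING: this wipes the FreeBSD data on the second disk (assumed /dev/sdb)
wipefs -a /dev/sdb
pvcreate /dev/sdb
vgcreate vmdata /dev/sdb
# register the new volume group as LVM storage for VM images
pvesm add lvm vmdata --vgname vmdata --content images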

Help with PCI passthrough

The hardware is an HP ProLiant ML310e Gen8 v2 server with an Intel Xeon 1231v2 and one Intel NIC in a PCIe slot (I tried all ports).
I am running the new 4.0 version. I followed this guide: https://pve.proxmox.com/wiki/Pci_passthrough


Things I have done:


1. Enabled VT-d in the BIOS
2. Edited GRUB with GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
3. Ran update-grub and rebooted
4. Found the PCI ID with lspci: 0b:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection


5. Added these lines to /etc/pve/qemu-server/101.conf:


Code:

bootdisk: ide0
cores: 1
ide0: local:100/vm-100-disk-1.qcow2,cache=writethrough,size=32G
ide2: none,media=cdrom
memory: 4096
name: 110
numa: 0
ostype: l26
smbios1: uuid=e81d7cf1-8363-4571-b9cb-5266a61d99d8
sockets: 1
machine: q35
hostpci0: 0b:00.0,pcie=1,driver=vfio

The result is that the VM doesn't start and the GUI outputs errors.
When the server first boots I can see the NIC under the host configuration as eth2, but when I start the VM from the GUI the network device is detached from the host, so something definitely happens.
I have also tried a NIC from another vendor; that makes the server crash.

With the Intel NIC I get this output from the GUI:


Code:

unknown hostpci setting 'driver=vfio'
unknown hostpci setting 'driver=vfio'
Running as unit 100.scope.
kvm: -device vfio-pci,host=0b:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio: failed to set iommu for container: Operation not permitted
kvm: -device vfio-pci,host=0b:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio: failed to setup container for group 11
kvm: -device vfio-pci,host=0b:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio: failed to get group 11
kvm: -device vfio-pci,host=0b:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: Device initialization failed
TASK ERROR: start failed: command '/usr/bin/systemd-run --scope --slice qemu --unit 100 -p 'CPUShares=1000' /usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -smbios 'type=1,uuid=e81d7cf1-8363-4571-b9cb-5266a61d99d8' -name 110 -smp '1,sockets=1,cores=1,maxcpus=1' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000' -vga cirrus -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,-kvm_steal_time,enforce -m 4096 -k da -readconfig /usr/share/qemu-server/pve-q35.cfg -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' -device 'vfio-pci,host=0b:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:398bc71b738' -drive 'file=/var/lib/vz/images/100/vm-100-disk-1.qcow2,if=none,id=drive-ide0,cache=writethrough,format=qcow2,aio=threads,detect-zeroes=on' -device 'ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=100' -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -machine 'type=q35'' failed: exit code 1
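Side note: the two "unknown hostpci setting 'driver=vfio'" lines make me think that driver=vfio simply isn't valid hostpci syntax in 4.0, so perhaps the line should just be (guessing here):

Code:

hostpci0: 0b:00.0,pcie=1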



When running dmesg | grep -e DMAR -e IOMMU, the output is:


Code:

root@vmcluster:~# dmesg | grep -e DMAR -e IOMMU
[    0.000000] ACPI: DMAR 0x00000000B5DE4A80 00042C (v01 HP    ProLiant 00000001 \xffffffd2?  0000162E)
[    0.000000] DMAR: IOMMU enabled
[    0.024810] DMAR: Host address width 39
[    0.024811] DMAR: DRHD base: 0x000000fed90000 flags: 0x1
[    0.024815] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap c9008020660262 ecap f010da
[    0.024816] DMAR: RMRR base: 0x000000b5ffd000 end: 0x000000b5ffffff
[    0.024817] DMAR: RMRR base: 0x000000b5ff6000 end: 0x000000b5ffcfff
[    0.024818] DMAR: RMRR base: 0x000000b5f93000 end: 0x000000b5f94fff
[    0.024818] DMAR: RMRR base: 0x000000b5f8f000 end: 0x000000b5f92fff
[    0.024820] DMAR: RMRR base: 0x000000b5f7f000 end: 0x000000b5f8efff
[    0.024821] DMAR: RMRR base: 0x000000b5f7e000 end: 0x000000b5f7efff
[    0.024822] DMAR: RMRR base: 0x000000000f4000 end: 0x000000000f4fff
[    0.024823] DMAR: RMRR base: 0x000000000e8000 end: 0x000000000e8fff
[    0.024824] DMAR: RMRR base: 0x000000b5dee000 end: 0x000000b5deefff
[    0.024825] DMAR: RMRR base: 0x000000c0000000 end: 0x000000dfffffff
[    0.024826] DMAR-IR: IOAPIC id 8 under DRHD base  0xfed90000 IOMMU 0
[    0.024827] DMAR-IR: HPET id 0 under DRHD base 0xfed90000
[    0.024828] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit.
[    0.024829] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting.
[    0.024978] DMAR-IR: Enabled IRQ remapping in xapic mode
[    0.468166] DMAR: No ATSR found
[    0.468236] DMAR: dmar0: Using Queued invalidation
[    0.468243] DMAR: Setting RMRR:
[    0.468251] DMAR: Setting identity map for device 0000:08:00.0 [0xc0000000 - 0xdfffffff]
[    0.470328] DMAR: Setting identity map for device 0000:02:00.0 [0xc0000000 - 0xdfffffff]
[    0.472389] DMAR: Setting identity map for device 0000:0b:00.0 [0xc0000000 - 0xdfffffff]
[    0.474452] DMAR: Setting identity map for device 0000:03:00.0 [0xc0000000 - 0xdfffffff]
[    0.476517] DMAR: Setting identity map for device 0000:03:00.1 [0xc0000000 - 0xdfffffff]
[    0.478575] DMAR: Setting identity map for device 0000:01:00.0 [0xc0000000 - 0xdfffffff]
[    0.480638] DMAR: Setting identity map for device 0000:01:00.2 [0xc0000000 - 0xdfffffff]
[    0.482692] DMAR: Setting identity map for device 0000:01:00.0 [0xb5dee000 - 0xb5deefff]
[    0.482699] DMAR: Setting identity map for device 0000:01:00.2 [0xb5dee000 - 0xb5deefff]
[    0.482710] DMAR: Setting identity map for device 0000:01:00.4 [0xb5dee000 - 0xb5deefff]
[    0.482717] DMAR: Setting identity map for device 0000:08:00.0 [0xe8000 - 0xe8fff]
[    0.482725] DMAR: Setting identity map for device 0000:02:00.0 [0xe8000 - 0xe8fff]
[    0.482732] DMAR: Setting identity map for device 0000:0b:00.0 [0xe8000 - 0xe8fff]
[    0.482739] DMAR: Setting identity map for device 0000:03:00.0 [0xe8000 - 0xe8fff]
[    0.482746] DMAR: Setting identity map for device 0000:03:00.1 [0xe8000 - 0xe8fff]
[    0.482753] DMAR: Setting identity map for device 0000:01:00.0 [0xe8000 - 0xe8fff]
[    0.482760] DMAR: Setting identity map for device 0000:01:00.2 [0xe8000 - 0xe8fff]
[    0.482766] DMAR: Setting identity map for device 0000:08:00.0 [0xf4000 - 0xf4fff]
[    0.482768] DMAR: Setting identity map for device 0000:02:00.0 [0xf4000 - 0xf4fff]
[    0.482769] DMAR: Setting identity map for device 0000:0b:00.0 [0xf4000 - 0xf4fff]
[    0.482770] DMAR: Setting identity map for device 0000:03:00.0 [0xf4000 - 0xf4fff]
[    0.482771] DMAR: Setting identity map for device 0000:03:00.1 [0xf4000 - 0xf4fff]
[    0.482773] DMAR: Setting identity map for device 0000:01:00.0 [0xf4000 - 0xf4fff]
[    0.482774] DMAR: Setting identity map for device 0000:01:00.2 [0xf4000 - 0xf4fff]
[    0.482775] DMAR: Setting identity map for device 0000:08:00.0 [0xb5f7e000 - 0xb5f7efff]
[    0.482782] DMAR: Setting identity map for device 0000:02:00.0 [0xb5f7e000 - 0xb5f7efff]
[    0.482789] DMAR: Setting identity map for device 0000:0b:00.0 [0xb5f7e000 - 0xb5f7efff]
[    0.482796] DMAR: Setting identity map for device 0000:03:00.0 [0xb5f7e000 - 0xb5f7efff]
[    0.482802] DMAR: Setting identity map for device 0000:03:00.1 [0xb5f7e000 - 0xb5f7efff]
[    0.482809] DMAR: Setting identity map for device 0000:01:00.0 [0xb5f7e000 - 0xb5f7efff]
[    0.482813] DMAR: Setting identity map for device 0000:01:00.2 [0xb5f7e000 - 0xb5f7efff]
[    0.482817] DMAR: Setting identity map for device 0000:08:00.0 [0xb5f7f000 - 0xb5f8efff]
[    0.482819] DMAR: Setting identity map for device 0000:02:00.0 [0xb5f7f000 - 0xb5f8efff]
[    0.482820] DMAR: Setting identity map for device 0000:0b:00.0 [0xb5f7f000 - 0xb5f8efff]
[    0.482822] DMAR: Setting identity map for device 0000:03:00.0 [0xb5f7f000 - 0xb5f8efff]
[    0.482823] DMAR: Setting identity map for device 0000:03:00.1 [0xb5f7f000 - 0xb5f8efff]
[    0.482824] DMAR: Setting identity map for device 0000:01:00.0 [0xb5f7f000 - 0xb5f8efff]
[    0.482826] DMAR: Setting identity map for device 0000:01:00.2 [0xb5f7f000 - 0xb5f8efff]
[    0.482827] DMAR: Setting identity map for device 0000:08:00.0 [0xb5f8f000 - 0xb5f92fff]
[    0.482829] DMAR: Setting identity map for device 0000:02:00.0 [0xb5f8f000 - 0xb5f92fff]
[    0.482830] DMAR: Setting identity map for device 0000:0b:00.0 [0xb5f8f000 - 0xb5f92fff]
[    0.482832] DMAR: Setting identity map for device 0000:03:00.0 [0xb5f8f000 - 0xb5f92fff]
[    0.482833] DMAR: Setting identity map for device 0000:03:00.1 [0xb5f8f000 - 0xb5f92fff]
[    0.482834] DMAR: Setting identity map for device 0000:01:00.0 [0xb5f8f000 - 0xb5f92fff]
[    0.482836] DMAR: Setting identity map for device 0000:01:00.2 [0xb5f8f000 - 0xb5f92fff]
[    0.482837] DMAR: Setting identity map for device 0000:08:00.0 [0xb5f93000 - 0xb5f94fff]
[    0.482838] DMAR: Setting identity map for device 0000:02:00.0 [0xb5f93000 - 0xb5f94fff]
[    0.482840] DMAR: Setting identity map for device 0000:0b:00.0 [0xb5f93000 - 0xb5f94fff]
[    0.482841] DMAR: Setting identity map for device 0000:03:00.0 [0xb5f93000 - 0xb5f94fff]
[    0.482842] DMAR: Setting identity map for device 0000:03:00.1 [0xb5f93000 - 0xb5f94fff]
[    0.482843] DMAR: Setting identity map for device 0000:01:00.0 [0xb5f93000 - 0xb5f94fff]
[    0.482845] DMAR: Setting identity map for device 0000:01:00.2 [0xb5f93000 - 0xb5f94fff]
[    0.482846] DMAR: Setting identity map for device 0000:01:00.0 [0xb5ff6000 - 0xb5ffcfff]
[    0.482847] DMAR: Setting identity map for device 0000:01:00.2 [0xb5ff6000 - 0xb5ffcfff]
[    0.482849] DMAR: Setting identity map for device 0000:01:00.4 [0xb5ff6000 - 0xb5ffcfff]
[    0.482857] DMAR: Setting identity map for device 0000:00:1a.0 [0xb5ffd000 - 0xb5ffffff]
[    0.482873] DMAR: Setting identity map for device 0000:00:1d.0 [0xb5ffd000 - 0xb5ffffff]
[    0.482880] DMAR: Prepare 0-16MiB unity mapping for LPC
[    0.482885] DMAR: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]
[    0.482953] DMAR: Intel(R) Virtualization Technology for Directed I/O
[  93.900901] vfio-pci 0000:0b:00.0: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.
[  186.843082] vfio-pci 0000:0b:00.0: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.
[  229.103893] vfio-pci 0000:0b:00.0: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.
[  267.422386] vfio-pci 0000:0b:00.0: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.
[  915.854466] vfio-pci 0000:0b:00.0: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.
[ 1012.760069] vfio-pci 0000:0b:00.0: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.

Guidance LVM NFS

Hi all,

Just a slight issue I'm having; nothing serious, I just need some clarity.

Currently I have one backup test server with an LVM volume group called VG0, shared via NFS. Now I have another server called vz-cpt-2, which uses ZFS RAID 10.

I get this when doing a backup from vz-cpt-2 to the NFS share (served by OpenMediaVault):

INFO: mode failure - unable to detect lvm volume group

Here are the full details:

INFO: starting new backup job: vzdump 139 --remove 0 --mode snapshot --compress lzo --storage store-nfs --node vz-cpt-2
INFO: Starting Backup of VM 139 (openvz)
INFO: CTID 139 exist mounted running
INFO: status = running
INFO: mode failure - unable to detect lvm volume group
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: starting first sync /var/lib/vz/private/139/ to /mnt/pve/store-nfs/dump/vzdump-openvz-139-2015_10_17-16_16_04.tmp

Should I rather change vz-cpt-2 to ext4 with LVM instead of ZFS so that snapshot mode works? Note that I installed Proxmox from the Proxmox ISO; should I perhaps install Debian first, partition it myself, and then set up LVM?

How can I do a complete cluster shutdown?

I have a Proxmox 3.4 cluster with 3 nodes. During a power failure the nodes must shut down sequentially. It has now happened that the last node did not shut down completely because of a quorum error. After the reboot of the first node, the quorum service stops and aborts with an error message. Only after another node is started and the quorum service is restarted does everything work again. This is very time-consuming, and I'm sure it is not supposed to work this way.
The error message after boot is:
Code:

Waiting for quorum... Timed-out waiting for cluster
How can I shut down all servers cleanly?
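What I imagine is a scripted sequence roughly like this (node names are placeholders, and I'm not sure whether forcing the expected votes down is the proper approach):

Code:

# shut down two of the three nodes first (names are only examples)
ssh root@node3 'shutdown -h now'
ssh root@node2 'shutdown -h now'
# on the last remaining node, lower the expected votes so it keeps quorum,
# then shut it down as well
pvecm expected 1
shutdown -h now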

Separate Cluster Network wiki page

Hello - I just noticed the new https://pve.proxmox.com/wiki/Separate_Cluster_Network wiki page and have a question.

The /etc/hosts file does not match later parts like:

Code:

pvecm create <clustername> -bindnet0_addr 10.10.10.151 -ring0_addr one-corosync
as the hosts file shows a different address:
Code:

10.10.1.151 one-corosync.proxmox.com one-corosync
I assume the 10.10.10 example in the Final Command section should be changed to 10.10.1?
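In other words, I would expect the two snippets to use the same subnet, like this:

Code:

# /etc/hosts
10.10.1.151 one-corosync.proxmox.com one-corosync

# cluster creation bound to the same network
pvecm create <clustername> -bindnet0_addr 10.10.1.151 -ring0_addr one-corosync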

PS: Thank you to the authors of the page. It will help make stable clusters.

CPU Units - Proxmox version 4.0-48/0d8559d0

I created a VPS with 2 CPUs and CPU Units set to 1024. When I connect to the VPS and run cat /proc/cpuinfo, instead of showing me 2 CPUs it shows me 16 CPUs. The host is dedicated, with 24 GB RAM and 16 CPUs. What can I do to make Proxmox work properly?

Outdated kernel sources @ git.proxmox.com? (Proxmox VE 4.0)

Hi,

I'm not very much into compiling and sources, so if this post is fundamentally wrong for some reason, sorry in advance. For example, I sense that using the "Ubuntu-4.2.0-14.16" sources along with "config-4.2.2-1-pve" might give me exactly the results I want (no compatibility issues, no missing custom code, etc.)?

Straight to the point: the latest kernel source files (4.2.1, from 25 September) are outdated compared to the compiled version(s?) in both the pve-no-subscription and pvetest repos (4.2.2-1-pve, from 6 October).


What I need them for is to compile the kernel with PREEMPT and a higher HZ (it's just a standalone node with a single LXC container used for Counter-Strike: GO game servers, which could benefit from a low-latency kernel).
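Concretely, the only config changes I have in mind are these (standard kernel options, nothing Proxmox-specific):

Code:

# in the kernel .config before building
CONFIG_PREEMPT=y
CONFIG_HZ_1000=y
CONFIG_HZ=1000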


Thanks in advance!!

Here's a VMware Packer template for testing

ha-Questions in pve4

Hi,

I installed two nodes with Proxmox 4 to test the new features and prepare myself to migrate the older clusters to the new version. So I'm currently trying to break the cluster.

One thing I faced is that an LXC container was in an error state. I didn't know what I could do, so I deleted it. That was not good, I know; I should have been able to bring the VM back from the error state. (Log file: "service ct:100 is not running and in an error state")
My question: in Proxmox 2 and 3 we had rgmanager and its clusvcadm to disable a pvevm:xxx and re-enable it when the VM did not boot because of an error state. What is the equivalent tool in PVE 4?
What can cause a VM to be in an error state?
What can I do when a VM is in this state?

I tried to migrate the LXC container from one host to another (disk stored on NFS), and it failed. Then I rebooted the server where the VM was running to force the migration.


kind regards

converting from raw to qcow2

Hi Team,

I have a Windows 2008 VM with a 60 GB raw disk. I converted that disk to qcow2 and the physical size of the resulting file is 55 GB. When I create the new qcow2 disk, should I create it as 60 GB or 55 GB?

What I am doing is: convert the disk from raw to qcow2, remove the raw one, and create a new qcow2 disk in the GUI. Then I go to the disk's location, delete the file that was just created, and rename the converted qcow2 file to the new name.
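For reference, the conversion itself is just this (IDs and paths are examples); qemu-img info shows that the virtual size stays at 60G even though the qcow2 file only occupies about 55 GB on disk:

Code:

# convert the raw image to qcow2 (IDs/paths are examples)
qemu-img convert -f raw -O qcow2 /var/lib/vz/images/100/vm-100-disk-1.raw /var/lib/vz/images/100/vm-100-disk-2.qcow2
# "virtual size" stays 60G; "disk size" is the space actually used
qemu-img info /var/lib/vz/images/100/vm-100-disk-2.qcow2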

Please let me know whether the new disk needs to be created with the size of the raw file or the size of the converted file.

Regards,

Raj

Nexenta/Illumos-based slow

Has anyone managed to get this working? With or without VirtIO disks/NICs, it just takes ages to get into the web interface.
CPU: AMD FX-8320, 16 GB of RAM
pveversion -v
proxmox-ve: 4.0-16 (running kernel: 4.2.2-1-pve)
pve-manager: 4.0-50 (running version: 4.0-50/d3a6b7e5)
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-23
qemu-server: 4.0-31
pve-firmware: 1.1-7
libpve-common-perl: 4.0-32
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-27
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-10
pve-container: 1.0-10
pve-firewall: 2.0-12
pve-ha-manager: 1.0-10
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
zfsutils: 0.6.5-pve4~jessie

VM monitoring

My HP server has two LAN ports, but I can only connect to the UI via one of them

I installed Proxmox 4 on an HP server that has two LAN cards. After the first installation I could not connect to the UI and could not ping the system, so I plugged the cable into the other LAN card, and then I could access the UI.
I think Proxmox did not detect one of the LAN cards.
I want to set a public IP on the first LAN card and a private IP on the second LAN card, and connect to the server with both IPs: the public IP from outside my workplace and the private IP inside the workplace.
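Roughly, this is the /etc/network/interfaces layout I'm aiming for (addresses and interface names are only placeholders):

Code:

auto vmbr0
iface vmbr0 inet static
        address  203.0.113.10        # public IP (placeholder)
        netmask  255.255.255.0
        gateway  203.0.113.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet static
        address  192.168.10.10       # private IP (placeholder)
        netmask  255.255.255.0
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0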

LXC restore error in 4.0-48

Hi!

I did a container backup and then a restore.

I get the following error:


Code:

Formatting '/var/lib/vz/images/1002/vm-1002-disk-1.raw', fmt=raw size=10485760
mke2fs 1.42.12 (29-Aug-2014)
Discarding device blocks:  1024/10240          done                           
Creating filesystem with 10240 1k blocks and 2560 inodes
Filesystem UUID: 5678a725-035a-4a1e-9c16-23586c5a6f01
Superblock backups stored on blocks:
    8193


Allocating group tables: 0/2  done                           
Writing inode tables: 0/2  done                           
Creating journal (1024 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information: 0/2  done


extracting archive '/mnt/pve/h01-backup-01/dump/vzdump-lxc-10001-2015_10_18-12_38_32.tar.lzo'
tar: ./usr/share/man/fr/man8/dpkg-reconfigure.8.gz: Cannot write: No space left on device
tar: ./usr/share/man/fr/man8/update-passwd.8.gz: Cannot write: No space left on device
....
....
tar: ./opt/observium/LICENSE: Cannot open: No such file or directory
Total bytes read: 1772206080 (1.7GiB, 303MiB/s)
tar: Exiting with failure status due to previous errors
command 'tar xpf /mnt/pve/h01-backup-01/dump/vzdump-lxc-10001-2015_10_18-12_38_32.tar.lzo --numeric-owner --totals --sparse -C /var/lib/lxc/1002/rootfs --anchored --exclude './dev/*'' failed: exit code 2

Is there anything I can do here to make the restore work?

Best regards
Klaus

Problem with hostname on LXC

Hi,

I know this is an old topic, but some things seem to have changed from OpenVZ to LXC. I'm running a VM on CentOS 6.7 inside Proxmox 4 and I'm having a little problem with cPanel running inside that VM. By the way, I imported that VM from OpenVZ running on Proxmox 3.4.

My configuration is as follows:

root@ovh1:/usr/share/perl5/PVE/LXC/Setup# cat /etc/pve/lxc/100.conf
arch: amd64
cpulimit: 4
cpuunits: 1024
hostname: hosting.mydomain.com
memory: 4096
net0: bridge=vmbr0,hwaddr=xx:xx:xx:xx:xx:xx,name=eth0,type=veth
onboot: 1
ostype: centos
rootfs: cbrasovz:subvol-100-disk-1,size=150G
swap: 12288

When I start my VM, here's what I get:

root@hosting [/]# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost localhost4.localdomain4 localhost4
# Auto-generated hostname. Please do not remove this comment.
# 127.0.1.1 hosting
::1 localhost
127.0.1.1 hosting
x.x.x.x hosting.mydomain.com

root@hosting [/]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hosting
GATEWAY=x.x.x.x

As said above, I'm using cPanel, which unfortunately requires the hostname to be an FQDN. Is it possible to bypass the LXC settings in order to get it to write a hostname that is an FQDN?
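What I would like to end up with inside the container is simply the FQDN in the network file, e.g.:

Code:

# /etc/sysconfig/network inside the container
NETWORKING=yes
HOSTNAME=hosting.mydomain.com
GATEWAY=x.x.x.x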

Thanks in advance, and I'm very sorry if this is a repeated post, but I couldn't find one for the new Proxmox.

Best regards,

Eugenio Pacheco

Debian container unable to start after migration from OpenVZ to LXC on Proxmox 4

After migrating a Debian 7 OpenVZ CT from Proxmox 3 to LXC on Proxmox 4, it does not appear to start correctly. Firstly, the migration was done by stopping the container on Proxmox 3, backing it up to an external hard disk, then attaching that hard disk to the Proxmox 4 host, restoring the backup and setting the network config. After that, the container appears to start correctly according to Proxmox, but it cannot be connected to: the console (tty) gives a blank output, no login or anything, and SSH refuses the connection. Restarting both the CT and the host does not seem to have any effect, and starting the CT with `lxc-start -n 101` and a `-o` log file gives the following output:
Code:

lxc-start 1445190074.912 ERROR    lxc_commands - commands.c:lxc_cmd_rsp_send:237 - failed to send command response -1 Broken pipe
lxc-start 1445190074.913 ERROR    lxc_commands - commands.c:lxc_cmd_rsp_send:237 - failed to send command response -1 Broken pipe
Any ideas on how to diagnose where this problem may be coming from, or how to go about fixing it?
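(For completeness, the kind of invocation I mean for capturing the log is along these lines; the log path and level are just examples:)

Code:

lxc-start -n 101 -o /tmp/lxc-101.log -l DEBUG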

Proxmox 3.4 upgrade failure, stuck in busybox due to ZFS

Hi,

after upgrading Proxmox 3.4 and rebooting, the system got stuck in busybox, failing to boot with:
Code:

Error: Failed to mount root filesystem 'rpool/ROOT/pve-1/'
When checking with 'mount', rpool actually shows up as mounted on /root, but it fails to boot anyway.

Proxmox 3.4 was installed with ZFS RAIDZ1 on two hard disks using the default ISO installer. It is using the Linux 3.10.0-11-pve kernel, which worked fine up to and including the Oct 7 upgrades.

To recover, I had to boot into a previous kernel, Linux 2.6.32.40-pve (even Linux 2.6.32.42-pve failed to boot!). Then I rolled back the following ZFS-related packages (from zfsutils 0.6.5-1~wheezy, zfs-initramfs 0.6.5-1~wheezy, etc.):
Code:

apt-get install \
libuutil1:amd64=0.6.4-4~wheezy \
libnvpair1:amd64=0.6.4-4~wheezy \
libzpool2:amd64=0.6.4-4~wheezy \
libzfs2:amd64=0.6.4-4~wheezy \
spl:amd64=0.6.4-4~wheezy \
zfsutils:amd64=0.6.4-4~wheezy \
zfs-initramfs:amd64=0.6.4-4~wheezy

apt-mark hold libnvpair1 libuutil1 libzfs2 libzpool2 spl zfs-initramfs zfsutils

Possibly not all of the packages needed to be rolled back, but I suspect especially zfs-initramfs or zfsutils, as similar problems during previous upgrades have been reported on this forum. Also, I expect it is better to keep all ZFS-related packages at the same 0.6.4-4 version.

Has anyone else encountered this during a recent upgrade with ZFS and Linux 2.6.32.42-pve or Linux 3.10.0-11-pve? What was your solution?

TASK ERROR: missing 'arch' at 4.0 lxc container

Dear all,


I am testing an upgrade from 3.4 to 4.0 of a CentOS-based OpenVZ container.

I backed up the CT on 3.4 and restored the file on 4.0.

The restore was OK; however, an error appeared when starting the CT on 4.0:


TASK ERROR: missing 'arch' - internal error at /usr/share/perl5/PVE/LXC.pm line 1062.




Is this happening because the 'architecture' item of the CT is 'unknown'?


If so, where is the conf file in 4.0? Is it /etc/pve/nodes/proc40/lxc/xxx.conf?


Can I modify it?
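For example, would adding a line like this (copying the syntax I see in containers created directly on 4.0) be enough?

Code:

# added to the container's .conf file
arch: amd64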


yugawara

AMD and Intel in one Cluster

Hi,

is it possible to combine both of these processors inside one cluster and use all of the features (like HA)?

kind regards