Channel: Proxmox Support Forum

A question about Proxmox hybrid cloud to perform like VMware vCHS

I am trying to see if this is possible: to achieve something like VMware vCHS.
I have a 3-node cluster on my premises, and I want to add 2 or 3 nodes in our data center, connected through a VPN.
My question: if I incorporate Ceph storage with 2 x 2 TB drives (Proxmox will be on a separate drive) plus a 64 GB SSD for caching and journaling on each of my nodes (including the offsite data-center nodes that are part of the cluster), how effective will this hybrid cloud scenario be? Will it be efficient?
So here is the scenario. I create a few VMs and run them on the local nodes. Since the nodes use Ceph storage, the VM data should replicate across all nodes, including the remote ones. Although the initial synchronization to the remote nodes will take a long time (depending on how beefy the KVM VMs are), eventually all the data should be synchronized across all the nodes.
So in the event of a disaster, my VMs would automatically migrate to the data center and should run without an issue. Am I correct in my thinking?
What is the best possible way / best practice to achieve this?
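For reference, a minimal sketch of how per-pool replication is controlled in Ceph; the pool name, PG count and replica count here are placeholders, not from this setup:
Code:

# create a pool and require 3 replicas of every object
ceph osd pool create vmpool 128
ceph osd pool set vmpool size 3
# confirm the replica count
ceph osd pool get vmpool size

One thing to keep in mind: Ceph replication is synchronous, so every write waits for the remote replicas, and the VPN latency to the data-center nodes will be felt on every write, not just during the initial sync.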
Thanks in advance

ZFS backup tool

FYI.

Tobias Oetiker has just announced this cool backup tool for ZFS. Since it is written in Perl it should run on any platform with ZFS. The tool is open source.
http://www.znapzend.org/
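For context, a hand-rolled version of the snapshot-and-replicate cycle that such a tool automates might look like this (dataset and host names are made up for the example):
Code:

# snapshot the source dataset
zfs snapshot tank/data@2014-08-01
# send it incrementally (relative to the previous snapshot) to a backup host
zfs send -i tank/data@2014-07-31 tank/data@2014-08-01 | ssh backuphost zfs receive backup/data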

root password rejected by Web Control Panel

Hi.

My server "proxmox5" no longer accepts the root password when logging into the control panel on www.myhostname.com:8006
However, the server "proxmox3" which is in a cluster with it, lets me login and control proxmox5.

I've read other reports of similar things on here, but my situation is different because I CAN login normally through SSH. (My method is to login as a non-privileged SSH user using RSA Private Key Authentication, and then do a "su - " to become root. I have tried resetting the root passwords from the command line, but proxmox5 still won't accept it after that. I have not rebooted because it is a production box.

My version is pve-manager/3.2-1/1933730b (running kernel: 2.6.32-27-pve)
I have a support license for both boxes.
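A few generic checks that can be run from the working SSH session on a PVE 3.x box; nothing here is specific to this cluster:
Code:

# make sure the cluster still has quorum (without it /etc/pve goes read-only)
pvecm status
# restart the API daemon and the web proxy that handle the GUI login
service pvedaemon restart
service pveproxy restart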

Write cache policy for VMs running an RDBMS

Hello.

I have got several questions about cache levels.
What are the best practices for the VMs' cache policy, given that I prefer safety over performance?
Here is my configuration.

- There are 2 Proxmox servers and 2 NAS units.
- The Proxmox servers have a Gluster volume configured.

I would like your help identifying all the cache levels in order to determine the risks for RDBMSes (SQL Server / MySQL / Oracle).

I managed to identify 7 to 8 different cache levels between the RDBMS user process and my hard drive (the ones marked as disabled below are turned off, but I don't know how to effectively test my configuration).

- Guest page cache.
- Virtual disk drive write cache. (Disabled with the cache=directsync parameter passed to the KVM process)
- Host page cache. (Disabled with the cache=directsync parameter passed to the KVM process)
- GlusterFS cache. (Disabled with performance.write-behind and performance.flush-behind both turned off)
- NAS page cache (is this level used by Gluster?).
- XFS cache (filesystem). (I used the wsync flag but got poor performance, so I removed it)
- RAID controller write cache. (This is turned off because I don't have a backup battery)
- Physical hard drive write cache. (This is also turned off)

I just wonder if I am right about the identified cache levels. I also don't know whether, with those cache levels disabled, the risks are reduced enough.
Indeed, I had 3 databases corrupted during the last few months of testing (once because of a faulty UPS and later because of frozen I/O), but at that time the guest virtual disk drive write cache was enabled (cache=none configured in the VM configuration).
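For illustration, the per-disk cache mode can be changed from the command line with qm set; the VMID, storage and disk name below are placeholders:
Code:

# switch an existing virtio disk to directsync: the host page cache is bypassed
# and the emulated drive advertises no volatile write cache
qm set 101 --virtio0 glustervol:101/vm-101-disk-1.qcow2,cache=directsync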

I have mainly Windows VMs and a few Linux VMs.
Do you know how fsync() calls from guests are handled? Are FS barriers honored all the way from the guests to the physical hard drive?

Any help would be much appreciated.
Thank you.

Kernel 3.10.0-3-pve / IP-less vmbr1 / not working

Hi guys,

I'm testing the new kernel on some of my Dell servers, and an IP-less bridge on a separate bond is not working.

The KVMs boot up, but there is no ping from any KVM on this bridge.

Is this a known problem?
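For reference, an IP-less bridge over a bond is normally declared along these lines in /etc/network/interfaces; bond1/vmbr1 are example names, not necessarily the ones in use here:
Code:

auto vmbr1
iface vmbr1 inet manual
        bridge_ports bond1
        bridge_stp off
        bridge_fd 0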

Thnx
Mac

Problem with network between guests in a cluster

Hello
I installed a Proxmox cluster, following https://pve.proxmox.com/wiki/Two-Nod...bility_Cluster
All works great.

But I have a problem with network communication between guests on different nodes (on the same node everything is OK).

I am using a bridged interface:
Code:

auto vmbr0
iface vmbr0 inet static
        address x.x.x.x
        netmask 255.255.255.0
        gateway x.x.x.x
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

ping node1 -- node2 => OK
ping guest(node1) -- guest(node1) => OK
ping guest(node1) -- guest(node2) => failed
ping node1 -- guest(node2) => failed

I tried disabling the firewall and tuning sysctl settings like rp_filter, proxy_arp, etc.
Checking with tcpdump, it looks like the ARP reply is not received.

If I ping (firewall disabled)
on local:
21:26:13.398678 ARP, Request who-has .....
21:26:13.638989 ARP, Request who-has.....
.......

on remote:
22:26:01.373251 ARP, Request who-has ...
22:26:01.373354 ARP, Reply ....
22:26:02.373204 ARP, Request who-has...
22:26:02.373291 ARP, Reply .....

No ICMP packets are received.
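Two quick checks that may help narrow this down; the interface name is an example:
Code:

# reverse-path filtering can silently drop replies in bridged setups
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.vmbr0.rp_filter
# check whether the guest's MAC address is being learned on the bridge
brctl showmacs vmbr0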

kernel panic caused by using kernel.pid_ns_hide_child=1

When the `kernel.pid_ns_hide_child=1` sysctl flag is set, starting an OpenVZ container causes Proxmox v3.2-5a885216-5 (2.6.32-29-pve #1 SMP Thu Apr 24 10:03:02 CEST 2014 x86_64 GNU/Linux) to crash with a kernel panic.

It's presumably caused by OpenVZ itself, and they have fixed it recently: https://bugzilla.openvz.org/show_bug.cgi?id=2983 (+ see 2 duplicates)

So is there any known workaround to keep container children from being visible on the host machine, and if not, is there any schedule for reintegrating the fix into the PVE kernel?
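Until the fix is merged, the obvious workaround is simply not to set the flag; a sketch:
Code:

# turn the flag off at runtime
sysctl -w kernel.pid_ns_hide_child=0
# and comment out the corresponding line in /etc/sysctl.conf
sed -i 's/^kernel.pid_ns_hide_child/#&/' /etc/sysctl.conf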

Virtual machine will not start

Hello! I do not understand why Proxmox failed, but at the moment it does not want to start a virtual machine (image); it throws the error "Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
TASK ERROR: start failed: command '/usr/bin/kvm -id 105 -chardev 'socket,id=qmp,path=/var/run/qemu-server/105.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/105.vnc,x509,password -pidfile /var/run/qemu-server/105.pid -daemonize -name rezerv3 -smp 'sockets=1,cores=4' -nodefaults -boot 'menu=on' -vga cirrus -cpu 'kvm64,+x2apic,+sep' -k en-us -m 4000 -cpuunits 1000 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -drive 'file=/dev/cdrom,if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/var/lib/vz/images/105/vm-105-disk-1.raw,if=none,id=drive-ide0,format=raw,aio=native,cache=none' -device 'ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap105i0,script=/var/lib/qemu-server/pve-bridge' -device 'e1000,mac=C2:D2:B3:80:65:A5,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1". Please help, what can I do?
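The error suggests the KVM kernel modules are not loaded; some generic checks (not specific to this machine):
Code:

# is hardware virtualization visible to the kernel?
egrep -c '(vmx|svm)' /proc/cpuinfo
# are the kvm modules loaded?
lsmod | grep kvm
# try loading them manually (use kvm_amd on AMD CPUs)
modprobe kvm_intel

If the CPU flags are missing, virtualization support may have been disabled in the BIOS.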

ceph cluster - which kernel is best to use?

Hello,
Soon I'll be setting up a ceph cluster.

Is it best to use the 3.10 Kernel from pve testing?

We will only use KVM on the test.

OS Location on RAID 10

Greetings :D,

I am setting up a machine with RAID 10 and wondering if anyone out there knows whether there is a benefit to keeping Proxmox on a separate RAID array or putting it on the same RAID as the VMs.

My own logic, which is limited, is that the OS and the VMs are basically the same application, and therefore the best case is to have them all on the same RAID?

Thank you :confused:

Migration failed, Cleaned up but left with zombie VMs

I was migrating a number of OpenVZ VMs between two servers when there was an unexpected server crash. This resulted in the migrations not completing properly, and the VMs were left showing on the source server. I was eventually able to get them onto the target server by restoring from backup, but now I'm left with these zombie VMs on the original source server. If I try to remove them, I get an error.

If I look at the log from attempting to remove the VMs, I see this:

stat(/var/lib/vz/root/902): No such file or directory
Container is currently mounted (umount first)
TASK ERROR: command 'vzctl destroy 902' failed: exit code 41

What can I do to get rid of these zombie VMs from my web interface?
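For what it's worth, the usual sequence for a container stuck in the "mounted" state looks like this; CTID 902 is taken from the log above, so use with care:
Code:

# recreate the missing mount point, unmount the container, then destroy it
mkdir -p /var/lib/vz/root/902
vzctl umount 902
vzctl destroy 902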

Myles

GPU passthrough, possible with my hardware?

Hi All,
I've recently installed Proxmox, moved to pvetest, updated, and installed Windows 8.1 in a VM; config below.
My hardware supports VT-d, IOMMU, etc., and I have previously run Xen 4.4 passing through a number of devices: a sound card, USB chipsets, and the onboard Intel HD 4600 GPU.
Performance with Xen is great, but I was really attracted to KVM and Proxmox for other reasons, such as the management console, and to address a few minor problems I've had with Xen.

Config here:

bootdisk: virtio0
cores: 2
ide0: local:iso/virtio-win-0.1-81.iso,media=cdrom,size=72406K
ide2: local:iso/win8.iso,media=cdrom
memory: 1984
name: Windows8
net0: virtio=B6:37:0C:BC:1D:A0,bridge=vmbr0
ostype: win8
sockets: 1
virtio0: local:100/vm-100-disk-1.qcow2,format=qcow2,cache=writeback,size=70G
machine: q35
#hostpci0: 00:02.0,pcie=1,driver=vfio
hostpci0: 00:02.0,x-vga=on,pcie=1,driver=vfio
#hostpci0: 00:02.0


The above config won't start, with the following errors:

qm start 100
kvm: -device vfio-pci,host=00:02.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0,x-vga=on: vfio: Device does not support requested feature x-vga
kvm: -device vfio-pci,host=00:02.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0,x-vga=on: vfio: failed to get device 0000:00:02.0
kvm: -device vfio-pci,host=00:02.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0,x-vga=on: Device initialization failed.
kvm: -device vfio-pci,host=00:02.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0,x-vga=on: Device 'vfio-pci' could not be initialized
start failed: command '/usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -name Windows8 -smp 'sockets=1,cores=2' -nodefaults -boot 'menu=on' -vga std -no-hpet -cpu 'kvm64,kvm=off,hv_spinlocks=0xffff,hv_relaxed,+lahf_lm,+x2apic,+sep' -k en-gb -m 1984 -readconfig /usr/share/qemu-server/pve-q35.cfg -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' -device 'vfio-pci,host=00:02.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0,x-vga=on' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:c3ff81b4e81b' -drive 'file=/var/lib/vz/template/iso/win8.iso,if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/var/lib/vz/images/100/vm-100-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,cache=writeback,aio=native' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -drive 'file=/var/lib/vz/template/iso/virtio-win-0.1-81.iso,if=none,id=drive-ide0,media=cdrom,aio=native' -device 'ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=201' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=B6:37:0C:BC:1D:A0,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime' -machine 'type=q35' -global 'kvm-pit.lost_tick_policy=discard'' failed: exit code 1


If I remove the x-vga=on parameter, the VM does boot and makes it to Windows, which sees an additional display adapter, but it is unstable and crashes very quickly (too quickly to install the drivers).
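Two generic VFIO prerequisites worth confirming; the paths and flags below are standard, not taken from this box:
Code:

# IOMMU must be enabled on the kernel command line, e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# then run update-grub and reboot. Verify it is active:
dmesg | grep -e DMAR -e IOMMU
# list the IOMMU groups to see what shares a group with the GPU at 00:02.0
find /sys/kernel/iommu_groups/ -type l

Note that the integrated Intel GPU (IGD) is a special case for x-vga passthrough and has historically needed support beyond plain VFIO, so failure here does not necessarily mean a discrete card would fail too.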

versions:
pveversion -v
proxmox-ve-2.6.32: 3.2-132 (running kernel: 3.10.0-3-pve)
pve-manager: 3.2-18 (running version: 3.2-18/e157399a)
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-3.10.0-3-pve: 3.10.0-11
pve-kernel-2.6.32-31-pve: 2.6.32-132
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-14
qemu-server: 3.1-28
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-21
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-7
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-1
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1


I had assumed, rightly or wrongly, that since I had this working perfectly well in Xen it ought to be possible in KVM, but maybe not. Perhaps graphics card support is much more limited. That said, it feels close.

If anyone has any ideas on this, I'd really appreciate them!

Many thanks,
Matthew

Clustering between data centers

I have had this working before with PM 1.8 servers, and we are just finishing an update to PM 3.1. We have two data centers, each with about 3-4 PM servers in them. I want all servers to be part of one cluster. Right now we can do this within a data center with ease, but when attempting to join servers between data centers, we are running into trouble.

The data centers each have their own local subnet, and we are using an IPsec VPN to connect them together. Each subnet can freely see devices on the other subnet (and vice versa). There are no known issues with multicast, as ifconfig reports each interface as MULTICAST-capable. This is further supported by the fact that we have had this working before.

When we attach a node to the cluster, we get quorum issues: it basically times out waiting for quorum. I think I may have found the issue. It would seem that the problem is due to SSH keys not being correctly copied between hosts. I managed to verify this by attempting to SSH as root from one PM host to another in a different data center, and it requested a password. I would expect that if the keys were correctly copied it would not do this, so I'm assuming this may be the core issue. I can manually fix it by just copying the keys over and setting them as authorized.

But what I'm not sure about is whether this has to be done on ALL PM hosts in the cluster, or only on the host whose IP address you specify when you do a pvecm add <IPADDRESS>?

Or am I missing something fundamental here as well?
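Two things that are easy to verify directly, independent of the key question; hostnames are placeholders:
Code:

# test multicast between the two sites (run on both nodes at the same time)
omping -c 600 -i 1 -q node-dc1 node-dc2
# push the local root key to the other node so key-based SSH works
ssh-copy-id root@node-dc2

Corosync in PVE 3.x relies on multicast by default, so if omping fails across the IPsec tunnel, that alone would explain the quorum timeouts.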

Myles

Backup a bit weird

Hi team,

I have 1 server and 2 Synology DS412+ units used as NAS.

The first NAS is an iSCSI server where all the images are stored.

The second one has an NFS share where Proxmox sends the daily backups.

Both NAS units have NIC teaming, and the server has the same.

Everything runs gigabit through a Cisco 4006 switch.

The host server is an IBM 3650 with 32 GB RAM and 2 quad-core processors.

Backup compression is gzip.

I have about 18 VMs on that box, some Windows and others Linux.

Now what seems to happen is this: say the backup starts at 21:00. Sometimes the VM backups run smoothly and are done in a few hours, say 10. Other times the job runs for more than 15 hours. Looking at the backup logs, a VM that took 20 minutes on day one can take 1 hour on the second day, and it's random which VMs are affected. I have checked usage of the VMs, and at night they are not used at all. I do not want to use LZO for backups, as I had a few misfortunes with that. Does anyone have any idea why I am getting this issue?
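If contention during the backup window is the suspect, vzdump can be throttled globally; a sketch of /etc/vzdump.conf with arbitrary example values:
Code:

# limit backup bandwidth (KB/s) and lower the I/O priority of the backup job
bwlimit: 51200
ionice: 7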

Cheers,

Raj

pvesh vs. vzlist, and storing extra attributes on containers.

1) pvesh vs. vzlist

Because pvesh help doesn't work (http://forum.proxmox.com/threads/187...7847#post97847) and the documentation is lacking, I figured I'd use vzlist to get a list of objects (containers + KVM objects; sorry, I don't know the generic term).

Apart from (I assume) no support for KVM, are there any substantial differences between pvesh and vzlist for getting this list? Does PVE do anything strange to OpenVZ, or will all OpenVZ commands work?

I want a list of objects because I plan to generate an apache.conf entry for each object, so that incoming web requests hitting the PVE node can automatically be reverse proxied to the right object. (See http://forum.proxmox.com/threads/190...7642#post97642)
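For comparison, both tools can produce a machine-readable list; the fields and API path below are standard, though the output formats differ:
Code:

# OpenVZ only: one line per container, chosen fields, no header
vzlist -a -H -o ctid,hostname,ip
# PVE API: containers and KVM guests together, across the whole cluster
pvesh get /cluster/resources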

2) I'd like to store extra attributes on each container.
Is that possible? In this case I'd need to store which port takes the incoming web traffic, and possibly which DNS-like name to route web traffic from.

I'm new to proxmox, caveat emptor.

Thanks,
Martin.

Network setup

I'm trying to put together a setup where I have a pfSense firewall as a VM on a Proxmox node, and all other VMs on that node communicate with the outside world through that firewall VM. The firewall should bind to all the IP addresses available to the hypervisor (that part is mostly irrelevant to my problem).

I've got pfSense installed, and it's able to distribute local IPs to other VMs via DHCP. However, I'm unable to get the firewall to bind its WAN interface to any of the IP addresses provided by the data center. I've mostly ruled out a firewall misconfiguration, and I'm pretty sure I've followed all the data center's instructions for using an IP range, but I still can't ping ANYTHING from the pfSense VM, which leads me to believe I've configured something network-related in Proxmox incorrectly.

Here's the hypervisor's network tab: [screenshot]

And the firewall VM's hardware config: [screenshot]

According to the data center, in order to use an IP in the block I'm trying to use, the machine's MAC address needs to be 02:00:00:FF:4C:0E. I've got that set in Proxmox and in the firewall. Here's the firewall's WAN configuration: [screenshot]

And the relevant portion of ifconfig on the firewall: [screenshot]
According to the data center, these are the network settings I need to use:
Quote:

IP: Fail Over IP
Netmask: 255.255.255.255
Broadcast: Fail Over IP
Gateway: Main IP of the server ending in 254.
"Fail Over IP" is an address in the IP block I was allocated (192.99.198.148/30), and "Main IP of the server" is 192.99.10.135, meaning the gateway should be 192.99.10.254.

All of these settings look correct to me (although I'm certainly no expert). However, the firewall VM can't ping the gateway IP or any Internet IPs. The hypervisor can ping both. Is there something I'm misunderstanding about how network devices work in Proxmox?
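As a sanity check, the same failover-IP pattern on a plain Linux guest is normally brought up like this, using the addresses quoted above; if this works in a throwaway VM on the same bridge, the Proxmox side is fine and the problem is in the pfSense configuration:
Code:

# /32 address, an on-link host route to the gateway, then the default route through it
ip addr add 192.99.198.148/32 dev eth0
ip route add 192.99.10.254 dev eth0
ip route add default via 192.99.10.254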

No files visible on storage in the web interface after update

Hi!
After updating the system ("apt-get update"), files no longer appear in the web interface.

# pveversion -v
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LC_PAPER = "ru_RU.UTF-8",
LC_ADDRESS = "ru_RU.UTF-8",
LC_MONETARY = "ru_RU.UTF-8",
LC_NUMERIC = "ru_RU.UTF-8",
LC_TELEPHONE = "ru_RU.UTF-8",
LC_IDENTIFICATION = "ru_RU.UTF-8",
LC_MEASUREMENT = "ru_RU.UTF-8",
LC_TIME = "ru_RU.UTF-8",
LC_NAME = "ru_RU.UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
pve-manager: 3.0-20 (pve-manager/3.0/0428106c)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-19-pve: 2.6.32-96
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.0-15
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-6
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-12
ksm-control-daemon: 1.1-1


root@pve20-1:~# pvesh get /version
(same locale warnings as above)
200 OK
{
"release" : "20",
"repoid" : "0428106c",
"version" : "3.0"
}



root@pve20-1:~# pvesh get /nodes/pve20-1/storage/vmstorage2
(same locale warnings as above)
200 OK
[
{
"subdir" : "status"
},
{
"subdir" : "content"
},
{
"subdir" : "upload"
},
{
"subdir" : "rrd"
},
{
"subdir" : "rrddata"
}
]
root@pve20-1:~# pvesh get /nodes/pve20-1/storage/vmstorage1/content
(same locale warnings as above)
200 OK
[]


But the files do exist:
ls -lh /mnt/vmstorage2/template/cache/
total 2.8G
-rw------- 1 root root 174M Jun 13 2013 centos-5-x86.tar.gz
-rwxrwxrwx 1 root root 184M Jun 13 2013 centos-5-x86_64.tar.gz
-rw-r--r-- 1 root root 200M Oct 17 2012 centos-6-standard_6.3-1_i386.tar.gz
-rw------- 1 root root 201M Jun 13 2013 centos-6-x86.tar.gz
-rw------- 1 root root 213M Jun 13 2013 centos-6-x86_64.tar.gz
-rw------- 1 root root 148M Jun 13 2013 debian-6.0-x86.tar.gz
-rw------- 1 root root 150M Jun 13 2013 debian-6.0-x86_64.tar.gz
-rw------- 1 root root 200M Jun 13 2013 debian-7.0-x86.tar.gz
-rw------- 1 root root 266M Jun 13 2013 debian-7.0-x86_64.tar.gz
-rw------- 1 root root 195M Jun 13 2013 scientific-6-x86.tar.gz
-rw------- 1 root root 207M Jun 13 2013 scientific-6-x86_64.tar.gz
-rw------- 1 root root 124M Jun 13 2013 ubuntu-12.04-x86.tar.gz
-rw------- 1 root root 205M Jun 13 2013 ubuntu-12.04-x86_64.tar.gz
-rw------- 1 root root 138M Jun 13 2013 ubuntu-13.04-x86.tar.gz
-rw------- 1 root root 232M Jun 13 2013 ubuntu-13.04-x86_64.tar.gz


Screenshot from 2014-07-31 09:19:49.png

Before the update, the files were displayed normally in the web interface.

Any idea?
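One way to narrow it down is to ask the storage layer directly from the CLI; if these commands list the templates, the problem is in the web/API layer rather than the storage definition (storage name taken from the post):
Code:

pvesm status
pvesm list vmstorage2

The locale warnings are unrelated to the missing files, but they can be silenced with dpkg-reconfigure locales.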

VMware install - help please

Hey All,

Having issues installing a trial (a trial for me, that is) of Proxmox in VMware Workstation 10. I get:

"command 'chroot /target dpkg --force-confold --configure -a' failed with exit code 1 at /usr/bin/proxinstall line 177"

I have looked over the forums and I can see that people have this issue on physical hardware; however, I am not using RAID or anything groovy like that, and the other common fault seems to be RAM. I have mine set at 2 GB, which seems sufficient.

I am using the ESXi profile, which has nested virtualization enabled.

I have tried in VM10 and VM10 (VM8 profile).

I can see others have had luck in the past; however, I just can't get it to work.

Advice would be gratefully received.

Thanks for any help or guidance.

Reactivate node with new motherboard

Hello,

I have a 4-node cluster with Proxmox 3.1.
Yesterday, one of the nodes' motherboards failed, so I manually moved the config files of the virtual servers to the other nodes with the following command:
mv /etc/pve/nodes/proxmox11/qemu-server/124.conf /etc/pve/nodes/proxmox13/qemu-server/124.conf, and now everything works fine.

I have now replaced the faulty motherboard, so I have to bring the node back with the old hard drives, which are already configured. This means that in the folder /etc/pve/nodes/proxmox11/qemu-server I still have the files that I moved yesterday.

What shall I do to avoid problems?
Shall I log in as single user to delete the config files I copied yesterday?
Shall I run everything normally and let the system handle it automatically?
Shall I log in as single user to move the files back to the restored node and reboot it into the network?

Any other idea/help?
Thanks

Poor disk speed using KVM on ZFS

I am using ZFS 0.6.2 on RAID10 and Proxmox as the VE for my KVM machines, running with cache=writeback (cache=none does not start).

Deduplication is disabled, primarycache/secondarycache=all, checksum/compression are on, compressratio is 1.71x, sync=standard.

in KVM (qcow2)
++++++++++++++
$ dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 39.2076 s, 13.7 MB/s
++++++++++++++


Outside of KVM - directly on the ZFS partition
++++++++++++++
# dd bs=1M count=512 if=/dev/zero of=/var/lib/vz/test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 0.851028 s, 631 MB/s
++++++++++++++

Any tips?
No results with:
- changing the interface from IDE to virtio
- decreasing/increasing zfs_arc_max

The VMs just hang under simple disk loads.

++++++++++++++

I think the problem is in the interaction between KVM and ZFS.

Using an OpenVZ container:
++++++++++++++
$ dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 0.541697 s, 702 MB/s
++++++++++++++
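One way to check whether the qcow2-on-dataset path is the bottleneck is to benchmark a zvol directly on the host; the pool and volume names here are examples:
Code:

# create a test zvol with a small block size typical for VM images
zfs create -V 10G -o volblocksize=8k rpool/ztest
# run the same dd test against the raw zvol device
dd bs=1M count=512 if=/dev/zero of=/dev/zvol/rpool/ztest conv=fdatasync
# clean up
zfs destroy rpool/ztest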