Channel: Proxmox Support Forum

debian jessie kvm installation

Hi,

Has anyone tried to install a Debian 8 VM on Proxmox? I'm trying to install using a GlusterFS volume as storage and have had no luck:
1. If I use the qcow2 format, the installation either never finishes or (if it does finish) I get errors on boot (like "can't load module ext4") and the system won't boot.
2. If I use the raw format, the installation is extremely slow.

There are no problems with Debian Wheezy VMs on the same Proxmox setup... any ideas?
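Not a fix, but a way to narrow it down. A sketch, assuming VMID 101 and a storage named "glusterstore" (both placeholders): switching the disk's cache mode can rule out a caching/AIO interaction with GlusterFS.

```shell
# Hypothetical VMID and storage name -- substitute your own.
# Switch the qcow2 disk to writethrough caching, then retry the install:
qm set 101 --virtio0 glusterstore:101/vm-101-disk-1.qcow2,cache=writethrough

# Watch the gluster client logs on the host for errors during the install:
tail -f /var/log/glusterfs/*.log
```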

cannot eject cdrom in pve3.4

When I run the eject command in PVE 3.4, I get "eject: unable to eject, last error: Inappropriate ioctl for device".
I have to reboot the server to eject the CD-ROM, but once I close the tray and run eject again, I get the same error message.
How can I solve this problem?
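If the drive in question is a virtual CD-ROM attached to a VM, a workaround is to detach the ISO from the host side instead of ejecting from inside the guest. A sketch with a hypothetical VMID 100:

```shell
# Replace 100 with your VM's ID; ide2 is the usual CD-ROM slot.
# This leaves an empty virtual CD drive in place:
qm set 100 --ide2 none,media=cdrom
```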

Proxmox VE 3.4: VM using Ceph disk can't start

I use Ceph for the VM disk, and the VM can't start.
If I remove the VM's disk, it starts up fine.
root@pve1:~# qm start 100
kvm: -drive file=rbd:rbd/vm-100-disk-1:mon_host=pve1,pve2:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/mystorage.keyring,if=none,id=drive-virtio0,aio=native,cache=none,detect-zeroes=on: could not open disk image rbd:rbd/vm-100-disk-1:mon_host=pve1: Block format 'raw' used by device 'drive-virtio0' doesn't support the option 'pve2:id'
start failed: command '/usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -smbios 'type=1,uuid=5912954b-e5cb-48b5-8b23-5b9b12382466' -name test1 -smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000' -vga cirrus -cpu kvm64,+lahf_lm,+x2apic,+sep -m 2048 -k en-us -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:de183c34a2' -drive 'if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=rbd:rbd/vm-100-disk-1:mon_host=pve1,pve2:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/mystorage.keyring,if=none,id=drive-virtio0,aio=native,cache=none,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' -device 'e1000,mac=3E:AD:F7:A3:3F:CE,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1
root@pve1:~# /usr/bin/kvm --version
QEMU emulator version 2.1.3, Copyright (c) 2003-2008 Fabrice Bellard
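The error message suggests the comma between the two monitor hosts is being parsed as a drive-option separator, so everything after `pve1` lands in the wrong option (`'pve2:id'`). A sketch of what the storage definition could look like, with the storage name and host names taken from the post, and assuming the PVE 3.x rbd plugin expects the monitor list separated by semicolons rather than commas:

```
# /etc/pve/storage.cfg -- sketch, names taken from the error above
rbd: mystorage
        monhost pve1;pve2
        pool rbd
        username admin
        content images
```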

Proxmox networking problem

I have 2 Proxmox nodes in different datacenters. All the OpenVZ and KVM containers that were made before this week are fine and have internet access. But in the last few days, any time I make a new VPS, be it KVM or OpenVZ, the networking does not work. I have been trying to figure out what I could have changed on both nodes to make this happen, but I have come up with nothing. One node was making VPSs just fine last week, but this week every time I try to make a KVM VPS the networking simply does not work.

The other node is at a datacenter where I have IPMI, so I backed up all the VPSs and put the newest Proxmox 3.4 on the server. All the VPSs that I restored onto the node work fine, but new ones will not connect to the internet no matter what I do. I just use these nodes for my own personal use and do not have any paying customers. I'm at my wits' end trying to figure this out and have been googling for days. Thank you in advance for any light you may be able to shine on this subject.

Proxmox 3.4 IPv6 problems

Hello

I am trying to configure IPv6 on the host system, but I always receive "Destination unreachable: Address unreachable".
Please check my configs in case I missed something.

/etc/network/interfaces
Code:

auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual

#####################################################################################
# BOND0
#####################################################################################
auto bond0
iface bond0 inet manual
        slaves eth0 eth1 eth2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer3+4

#####################################################################################
# VLAN10 - INTERNET
#####################################################################################
auto vlan10
iface vlan10 inet manual
        vlan_raw_device bond0

auto vmbr10
iface vmbr10 inet static
        address xx.xx.xx.xx
        netmask 255.255.255.240
        network xx.xx.xx.xx
        gateway xx.xx.xx.xx
    bridge_ports vlan10
        bridge_stp off
        bridge_fd 0

        post-up ip route add table vlan10 default via xx.xx.xx.xx dev vmbr10
        post-up ip rule add from xx.xx.xx.xx/28 table vlan10
        post-down ip route del table vlan10 default via xx.xx.xx.xx dev vmbr10
        post-down ip rule del from xx.xx.xx.xx/28 table vlan10


iface vmbr10 inet6 static
        address XXXX:XXXX:69::5
        netmask 48
        gateway XXXX:XXXX:69::1

      post-up ip -6 route add table vlan10 default via XXXX:XXXX:69::1 dev vmbr10
      post-up ip -6 rule add from XXXX:XXXX:69::/48 table vlan10
      post-down ip -6 route del table vlan10 default via XXXX:XXXX:69::1 dev vmbr10
      post-down ip -6 rule del from XXXX:XXXX:69::/48 table vlan10

/etc/sysctl.conf
Code:

net.ipv6.conf.all.proxy_ndp=1
net.ipv6.conf.all.forwarding=1
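One possible pitfall with this setup (an assumption, not a confirmed diagnosis): the `from XXXX:XXXX:69::/48` rule diverts the host's own IPv6 traffic into table vlan10, so the default route must actually be present in that table, which depends on the post-up commands having run in the right order. A diagnostic sketch, using the placeholder addresses from the config above:

```shell
# 1. Is the address and the default route actually installed?
ip -6 addr show dev vmbr10
ip -6 route show table vlan10
ip -6 rule show

# 2. Can the gateway be reached on the local link at all?
ping6 -c 3 XXXX:XXXX:69::1

# 3. Were the sysctl settings applied after editing /etc/sysctl.conf?
sysctl net.ipv6.conf.all.forwarding
```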

Host snat rule not working with pve-firewalled container

I have set up a configuration that NATs containers behind my host's public IP so that I can download / update packages inside the containers (they do not need to be reachable from outside). It works well, but only if my container's network interface is not firewalled by pve-firewall. More precisely, if I switch on pve-firewall, the SNAT rule on the host no longer seems to apply to my container's outgoing traffic.

Here is my setup (tested on proxmox 3.3.5 and 3.4.1) :
- Host has public IP w.x.y.z on vmbr0 bridge (host's eth0 plugged into that bridge)
- Bridge vmbr2 is dedicated to outgoing traffic from my containers. The host has 10.0.0.1 on vmbr2; containers have virtual interfaces with addresses in 10.0.0.0/24 and 10.0.0.1 as their default gateway (for the example below, a container has eth0 with 10.0.0.10)
- host's SNAT rule : iptables -t nat -A POSTROUTING -s '10.0.0.0/24' -o vmbr0 -j SNAT --to-source w.x.y.z

It works very well with pve-firewall not activated on the container's eth0:
- for instance, running dig www.google.com @8.8.8.8 from inside the container
- gives this output with a tcpdump on the host:
Code:

tcpdump -n -i vmbr0 port 53
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on vmbr0, link-type EN10MB (Ethernet), capture size 65535 bytes
> 14:41:53.652325 IP w.x.y.z.41029 > 8.8.8.8.53: 35348+ A? www.google.com. (32)
> 14:41:53.662983 IP 8.8.8.8.53 > 10.0.0.10.41029: 35348 1/0/0 A 216.58.211.100 (48)

However, if I activate pve-firewall on the container's eth0, I get this tcpdump output:
Code:

tcpdump -n -i vmbr0 port 53
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vmbr0, link-type EN10MB (Ethernet), capture size 65535 bytes
14:45:44.242616 IP 10.0.0.10.47279 > 8.8.8.8.53: 64472+ A? www.google.com. (32)

In the first case the SNAT rule is applied; in the second, the request goes out with my container's non-routable internal IP.

I came to the conclusion that the iptables rules added when I switch on pve-firewalling for the container's eth0 somehow disable NAT processing for that container's packets, but I could not find exactly where or why.

Can someone help me understand what's happening here?
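One hedged hypothesis (not verified against the pve-firewall source): with the firewall enabled, container traffic is plumbed through extra firewall bridge devices, so by the time it reaches POSTROUTING it may no longer match `-o vmbr0`. Two checks worth trying, with w.x.y.z as the placeholder public IP from the post:

```shell
# Watch whether the SNAT rule's packet counter increases while the
# container performs a DNS lookup:
iptables -t nat -vnL POSTROUTING

# As an experiment, add an SNAT rule that matches on source/destination
# instead of the outgoing interface:
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j SNAT --to-source w.x.y.z
```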

Kind regards,
Nicolas

SMBIOS settings

The UI will not allow spaces in these settings, even though it appears you can edit the .conf directly and add options with spaces as long as you put quotes around them. However, as soon as the web UI notices this, it simply wipes all of the settings in the SMBIOS option. Please fix this.

Proxmox DHCP relay between network

Hi all,

I have a few different subnets on my Proxmox server. The goal is to simulate a real infrastructure.
To sum up, I have:
- a LAN server subnet with DHCP, NTP, LDAP, mail, and intranet web servers (vmbr1)
- a DMZ subnet with DNS and VPN servers (vmbr2)
- a LAN client subnet with client stations (vmbr3)

Critical servers (DNS, DHCP, NTP, LDAP) are configured with static IPs; other servers and client stations should obtain an IP by DHCP.
The DHCP server runs well on the LAN server subnet, but DHCP requests from the other subnets don't reach it.

I think the problem is that there is no DHCP relay. I haven't found any information to do this directly with Proxmox.

Has anyone ever encountered this problem, or does anyone have a solution for me?
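Proxmox itself does not ship a DHCP relay, but a standard ISC relay running on a small VM (or the host) with an interface in each subnet should work. A sketch; the server address 192.168.1.2 and the interface names are placeholders:

```shell
# Install the standard ISC DHCP relay:
apt-get install isc-dhcp-relay

# Configure it in /etc/default/isc-dhcp-relay:
#   SERVERS="192.168.1.2"     # the DHCP server on the LAN server subnet
#   INTERFACES="eth1 eth2"    # interfaces facing the DMZ and client subnets
service isc-dhcp-relay restart
```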

Thank you in advance.
Bye

How to do an installation when I have two .iso images?

Hello, I have two .iso images; disk2 should be inserted when the installer asks for it. How can I use Proxmox to install from two discs? Is there any image or text tutorial that can guide me through the process?
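The ISO in the virtual CD drive can be swapped while the VM is running, from the GUI's Hardware tab or from the host shell. A sketch with a hypothetical VMID 100, storage "local", and placeholder ISO names:

```shell
# Before starting the VM, attach the first disc:
qm set 100 --ide2 local:iso/disk1.iso,media=cdrom

# ... run the installer until it prompts for disc 2, then, without
# stopping the VM, swap the ISO:
qm set 100 --ide2 local:iso/disk2.iso,media=cdrom
```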

Is Proxmox an .iso-only install?

Hello, isn't there any tutorial on how to install KVM and Proxmox via the SSH command line only, if that's possible? I mean, I'm renting a server and only have SSH access at the moment.
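Proxmox VE can be installed on top of an existing Debian system over SSH; the wiki documents this for Wheezy ("Install Proxmox VE on Debian Wheezy"). A compressed sketch of that procedure; see the wiki page for the full steps (hostname entry in /etc/hosts, kernel selection, reboot):

```shell
# Add the Proxmox VE repository and its key (Wheezy / PVE 3.x era):
echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" \
    > /etc/apt/sources.list.d/pve.list
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -

apt-get update && apt-get dist-upgrade

# Install the PVE packages (package set as listed on the wiki page):
apt-get install proxmox-ve-2.6.32 ntp ssh lvm2 postfix ksm-control-daemon vzprocps open-iscsi
```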

Feature request: make it clear in GUI that host is running out of disk space

I'm a newbie to KVM, but have been running Proxmox for a few months without any problems - in fact the more I realise how awesome Proxmox is, the more I'm in awe of the developers!

However, I've just had what appeared to me to be some kind of hardware failure on the host. My VMs were reporting i/o errors and mounting their drives read-only and things. I couldn't create a new VM either.

After drawing up elaborate rescue plans while my phone rang constantly, I suddenly realised that a df -h on the host showed I was 100% on /dev/mapper/pve-data and almost full on /dev/mapper/pve-root

So now I've removed some disk files and all is OK.

It would have been nice to have had some obvious warning in the GUI - maybe just a red color on the local storage icon, or a message?

PS: I thought KVM .raw disk files would only use the physical space actually written in the virtual disk (that is, thin provisioning)? But I see that if I create a disk that's 100G, it creates a 100G .raw file, even though the VM is only using 20G. Is that right?
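On the thin-provisioning question: a raw image can be sparse, meaning its apparent size is the full 100G but it only occupies blocks that have actually been written, so `ls -l` shows the full size while `du` shows the real usage (whether your storage preallocates is a separate question). A quick demonstration with a plain sparse file, no hypervisor needed:

```shell
# Create a file with an apparent size of 100M but no allocated blocks:
truncate -s 100M sparse.img

ls -l sparse.img   # apparent size: 104857600 bytes
du -k sparse.img   # allocated size: typically 0 on sparse-capable filesystems
```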

webGUI suddenly stops on 2 servers (not in a cluster)

Code:

proxmox-ve-2.6.32: 3.4-156 (running kernel: 2.6.32-39-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-39-pve: 2.6.32-156
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-38-pve: 2.6.32-155
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-17
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1




Restarting the services (pvedaemon/pveproxy/pvestatd) had no result, so I stopped all machines manually (via ssh) and rebooted the server.

After the reboot the machines don't start; the GUI only shows "wait..." and none of my machines have a status (er1).

If I switch between tabs I can see (er2)... the web GUI seems broken.

And after 10-15 minutes I can no longer log on to the web GUI: "error: Login failed. Please try again"
Attached Images

How can I expand a ZFS RAID10 correctly?

Hello :)

I have a server here with 6 disks: 2 SSDs in RAID1 and 4 HDDs in RAID10, everything on ZFS. Here is my config:

Code:

pool: rpool
 state: ONLINE
  scan: resilvered 4.68M in 0h0m with 0 errors on Fri Apr 24 17:37:52 2015
config:

    NAME        STATE    READ WRITE CKSUM
    rpool      ONLINE      0    0    0
      mirror-0  ONLINE      0    0    0
        sdf3    ONLINE      0    0    0
        sde3    ONLINE      0    0    0

errors: No known data errors

  pool: v-machines
 state: ONLINE
  scan: none requested
config:

    NAME                                            STATE    READ WRITE CKSUM
    v-machines                                      ONLINE      0    0    0
      mirror-0                                      ONLINE      0    0    0
        ata-WDC_WD2001FFSX-68JNUN0_WD-WMC5C0D877YE  ONLINE      0    0    0
        ata-WDC_WD2001FFSX-68JNUN0_WD-WMC5C0D0KRWP  ONLINE      0    0    0
      mirror-1                                      ONLINE      0    0    0
        ata-WDC_WD2001FFSX-68JNUN0_WD-WMC5C0D688XW  ONLINE      0    0    0
        ata-WDC_WD2001FFSX-68JNUN0_WD-WMC5C0D63WM0  ONLINE      0    0    0

errors: No known data errors

So I would like to add two disks now, and later another two, to the pool. I have read a lot in forums and documentation, but I think I don't understand it. In a normal Linux software RAID10 I always have one mirror; for example, 4 disks mirrored to the other 4 disks. So in my configuration I have the same, 2 disks mirrored with 2 disks, right?

So I would like to add the next two disks to the ZFS RAID10. My understanding would be: one disk to mirror-0 and one disk to mirror-1... not right? I don't know. I always find something like this: http://docs.oracle.com/cd/E19253-01/...zgw/index.html
But sorry, I don't understand it. I tested it in a virtual machine, but I was only able to add another mirror, like this:

Code:

NAME        STATE    READ WRITE CKSUM
        tank        ONLINE      0    0    0
          mirror-0  ONLINE      0    0    0
            c0t1d0  ONLINE      0    0    0
            c1t1d0  ONLINE      0    0    0
          mirror-1  ONLINE      0    0    0
            c0t2d0  ONLINE      0    0    0
            c1t2d0  ONLINE      0    0    0
          mirror-2  ONLINE      0    0    0
            c0t3d0  ONLINE      0    0    0
            c1t3d0  ONLINE      0    0    0

So what is right - is RAID10 with ZFS not the same as normal software RAID under Linux?
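For what it's worth, a ZFS pool of striped mirror vdevs is the ZFS equivalent of RAID10, and the result from the VM test above (a new mirror-2) is the expected way to grow it: you add whole mirror vdevs rather than adding single disks to existing mirrors. A sketch using the pool name from the post; the disk IDs are placeholders:

```shell
# Add a third mirror vdev of two new disks to the existing pool:
zpool add v-machines mirror \
    /dev/disk/by-id/ata-NEWDISK-1 \
    /dev/disk/by-id/ata-NEWDISK-2

# The pool should now show mirror-0, mirror-1 and mirror-2:
zpool status v-machines
```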

Thanks and best regards

Proxmox host with IPv6 only

Hi, can I set a Proxmox host to an IPv6-only address while the VMs use IPv4 and IPv6, or does IPv6-only on the host not work?

regards

GRE broken after upgrading from 3.3-5 to 3.4-6

After upgrading from 3.3-5 to 3.4-6 I can't get GRE traffic to/from a pfSense VM. Other traffic (ICMP, TCP, UDP) is working fine.
The firewall is disabled at the VM level, so all traffic should pass.

As soon as I disable the firewall at the datacenter level, then GRE traffic to/from the VM is restored.

I want to keep the firewall enabled at the datacenter level, to limit access to the Proxmox nodes from outside, but I need GRE tunnels to pfSense as well. How can I solve this?
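One approach worth testing (a sketch, assuming the rule syntax of the PVE 3.x firewall; GRE is IP protocol 47 and appears in /etc/protocols): explicitly accept the GRE protocol at the datacenter level instead of disabling the whole firewall.

```
# /etc/pve/firewall/cluster.fw -- sketch, adjust to your existing rules
[RULES]

IN  ACCEPT -p gre
OUT ACCEPT -p gre
```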

Any Network UPS Tool users?

Hi all,

I'm trying to figure out how to setup NUT to do the following:
At time (or charge) X, shut down all VM servers.
At time (or charge) Y, after all VM servers are shut down, stop the Ceph cluster software.
At time Z (~1 minute after Y), shut down the Ceph hosts.

Does anyone have any tips on how to do a staggered shutdown like this? Or tips on blogs or similar where I might find ideas on how to make this work?
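A sketch of what a staggered-shutdown script could look like; NUT's SHUTDOWNCMD (or an upssched timer) could call something like this. The timeouts and the sysvinit-style `service ceph stop` are assumptions for a PVE 3.x / Ceph setup; adapt to your environment:

```shell
#!/bin/sh
# Staggered shutdown sketch: VMs first, then Ceph, then the host.

# 1. Shut down all VMs on this node and give each up to 120s:
for vmid in $(qm list | awk 'NR>1 {print $1}'); do
    qm shutdown "$vmid" --timeout 120
done

# 2. With the VMs down, stop the Ceph services:
service ceph stop

# 3. Give Ceph a minute to settle, then power off the host:
sleep 60
shutdown -h now
```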

How to Import Citrix NetScaler Virtual Appliance to Proxmox

Hi guys,

we have downloaded the Citrix NetScaler virtual appliance for KVM. How can I import it into Proxmox? I'm looking for the correct steps. The download comes with a .raw disk and a .xml file which contains:

----------------------------------------------------------------------

<?xml version="1.0" encoding="UTF-8"?>
<domain type='kvm'>
<name>NetScaler-VPX</name>
<memory>2097152</memory>
<vcpu>2</vcpu>
<os>
<type arch='x86_64'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='NSVPX-KVM-10.5-56.15_nc.raw'/>
<target dev='vda' bus='virtio'/>
</disk>
<controller type='ide' index='0'>
</controller>
<interface type='direct'>
<mac address='52:54:00:29:74:b3'/>
<source dev='eth0' mode='bridge'/>
<model type='virtio'/>
</interface>
<serial type='pty'/>
<console type='pty'/>
<graphics type='vnc' autoport='yes'/>
</devices>
</domain>


-------------------------------------------------------

I have tried to create an instance with the same disk, memory, and CPU requirements and to replace its disk with the .raw file that comes inside the folder, but it failed to boot. I'm looking for the correct guide to do this. Please advise.
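The libvirt XML above maps to 2 vCPUs, 2 GB RAM (2097152 KiB), a virtio disk, and a virtio NIC. A sketch of recreating that as a Proxmox VM from the shell; VMID 100, the "local" storage, and the bridge vmbr0 are assumptions:

```shell
# Create an empty VM matching the XML's resources:
qm create 100 --name NetScaler-VPX --memory 2048 --sockets 1 --cores 2 \
    --net0 virtio,bridge=vmbr0

# Copy the raw image into the VM's image directory on "local" storage:
mkdir -p /var/lib/vz/images/100
cp NSVPX-KVM-10.5-56.15_nc.raw /var/lib/vz/images/100/vm-100-disk-1.raw

# Attach it as a virtio disk (as in the XML) and boot from it:
qm set 100 --virtio0 local:100/vm-100-disk-1.raw
qm set 100 --boot c --bootdisk virtio0
qm start 100
```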

Thanks.

unable to aquire pmxcfs lock - trying again

Hi,

Currently having a problem with my Proxmox host:

I'm trying to upgrade the package pve-cluster, but it fails with the following error:

Code:

$ sudo apt-get install -f
Reading package lists... Done
Building dependency tree.
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
4 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up pve-cluster (3.0-17) ...
Restarting pve cluster filesystem: pve-cluster[main] notice: unable to aquire pmxcfs lock - trying again [main] crit: unable to aquire pmxcfs lock: Resource temporarily unavailable [main] notice: exit proxmox configuration filesystem (-1)  (warning). invoke-rc.d: initscript pve-cluster, action "restart" failed.
dpkg: error processing pve-cluster (--configure):
 subprocess installed post-installation script returned error exit status 255
dpkg: dependency problems prevent configuration of qemu-server:
 qemu-server depends on pve-cluster; however:
  Package pve-cluster is not configured yet.

dpkg: error processing qemu-server (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of pve-manager:
 pve-manager depends on qemu-server (>= 1.1-1); however:
  Package qemu-server is not configured yet.
 pve-manager depends on pve-cluster (>= 1.0-29); however:
  Package pve-cluster is not configured yet.

dpkg: error processing pve-manager (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of proxmox-ve-2.6.32:
 proxmox-ve-2.6.32 depends on pve-manager; however:
  Package pve-manager is not configured yet.
 proxmox-ve-2.6.32 depends on qemu-server; however:
  Package qemu-server is not configured yet.

dpkg: error processing proxmox-ve-2.6.32 (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 pve-cluster
 qemu-server
 pve-manager
 proxmox-ve-2.6.32

E: Sub-process /usr/bin/dpkg returned an error code (1)

At the moment, there is no active cluster configuration. I created a cluster in the past for testing purposes but removed it later.

Maybe the following output will help:
Code:

$ /etc/init.d/pve-cluster stop
Stopping pve cluster filesystem: pve-cluster apparently not running.

Code:

$ /etc/init.d/pve-cluster start
Starting pve cluster filesystem : pve-cluster[main] notice: unable to aquire pmxcfs lock - trying again
[main] crit: unable to aquire pmxcfs lock: Resource temporarily unavailable
[main] notice: exit proxmox configuration filesystem (-1)
 (warning).

I read an old thread about this problem (http://forum.proxmox.com/threads/884...k-trying-again) but nothing helped.
I've no shared filesystem for /etc/pve or similar.

Code:

$ pveversion -v
proxmox-ve-2.6.32: not correctly installed (running kernel: 2.6.32-37-pve)
pve-manager: not correctly installed (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-39-pve: 2.6.32-156
pve-kernel-2.6.32-33-pve: 2.6.32-138
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-37-pve: 2.6.32-150
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-34-pve: 2.6.32-140
pve-kernel-2.6.32-31-pve: 2.6.32-132
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-38-pve: 2.6.32-155
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: not correctly installed
qemu-server: not correctly installed
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
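A diagnostic sketch for the lock error: "unable to acquire pmxcfs lock" usually means another pmxcfs instance, or a stale FUSE mount of /etc/pve, is still holding it. The forced unmount is a last resort and only appropriate if the checks show a stale mount with no running process:

```shell
# Is a pmxcfs process still running?
ps aux | grep '[p]mxcfs'

# Is the /etc/pve FUSE filesystem still mounted?
mount | grep /etc/pve

# If a stale mount is present with no running pmxcfs, unmount and retry:
umount -f /etc/pve
service pve-cluster start
```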



Thanks for your help.

Differences between Proxmox 3.4 and clean installation

Hi,

I found some problems in the Proxmox 3.4 installation process, which constantly required rebooting the computer (CTRL+D). Because of that, I used a clean Debian 7.8 and this manual: https://pve.proxmox.com/wiki/Install..._Debian_Wheezy.

The problem arises when you find that you don't have the firewall feature. That is not so big a problem, because I can configure iptables manually, but I wonder what else is missing that is not immediately possible to determine. Nowhere does the manual state that if you install the Proxmox packages on clean Debian, you will not have the firewall option, or anything else that might be missing.

What is the difference between the Proxmox 3.4 installer and an installation on a clean Debian distro?
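On the firewall specifically: in PVE 3.x the firewall ships as its own package, so on a plain Debian install it may simply not have been pulled in. A sketch, assuming the pve repositories are already configured as in the wiki manual:

```shell
apt-get update
apt-get install pve-firewall
```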

SSD ata-trim for guest in virtio configuration not working

$
0
0
Dear all,

I've been playing around for a while to test ATA TRIM: PVE 3.4, kernel 3.10.0-8, on Samsung SSD 840 and 850, both EVO.

Host on ext4, noop scheduler, noatime, barrier=0, without discard, running fstrim periodically via cron.

Windows guest: Win7, virtio drivers 0.1.103, qcow2 with the discard option; validated via trimcheck.exe and the output of fstrim in the host shell.

** Working: IDE/SATA: TRIM is issued by the guest itself after a few seconds (without a manual fstrim in the host shell).
** Not working: virtio: TRIM does not work, and an fstrim run on the host makes no difference to the result either.

I tried various options in the configuration: combinations of virtio and SCSI disks, and combinations of virtio and SCSI controllers.
It looks like the discard option in the disk-attach dialog is mandatory (in contrast to the host config), but I am lacking the working virtio config combination.

Has anybody found a working solution using virtio-drivers and ssd-trim (ata-trim) in windows guests?

Same behaviour on Win8.1 and Ubuntu 14: IDE/SATA-based TRIM works on both (with the discard option in the guest disk dialog), but not in a virtio/SCSI combination.

Any hints on this topic? I had found some discussions concerning virtio-blk and virtio-scsi, but I am not sure whether they are current and related to this problem.
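One hedged suggestion along the virtio-blk vs. virtio-scsi line: the classic virtio(-blk) disk type has historically not passed discard/TRIM through to the backing store, while the virtio-scsi controller can pass UNMAP. A config sketch worth testing, with VMID and storage name as placeholders:

```
# /etc/pve/qemu-server/<vmid>.conf -- sketch, not a verified fix:
scsihw: virtio-scsi-pci
scsi0: local:100/vm-100-disk-1.qcow2,discard=on
```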

Thanks and best wishes

Alex

Edit: kernel version