Channel: Proxmox Support Forum

OpenGL on Windows guest

Dear users,

We are trying to use OpenGL >= 2.1 in a Windows 7 guest (VGA: qxl), but the virtual hardware does not seem to support it.

The question is: how do we get OpenGL >= 2.1 working in a Windows guest?

Thanks !

zfs / why "/dev/sdX" and not "/dev/disk/by-id/.." ?

I did a test installation of Proxmox 4 on two SSDs (RAID-1/mirror).

"zpool status" shows:
Code:

root@proxmox-test:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE    READ WRITE CKSUM
        rpool      ONLINE      0    0    0
          mirror-0  ONLINE      0    0    0
            sdb2    ONLINE      0    0    0
            sdc2    ONLINE      0    0    0

errors: No known data errors

Why does Proxmox use the "/dev/sdX" identifiers and not "/dev/disk/by-id/..", as recommended here: http://zfsonlinux.org/faq.html ?
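For reference, the approach usually suggested for switching an existing pool to by-id names is an export followed by an import with -d; note that a root pool like rpool cannot be exported while the system is booted from it, so this would have to be done from a rescue/live environment. A minimal sketch, using the pool name from the output above:

Code:

# not possible on the pool you are currently booted from; run from a rescue/live system
zpool export rpool
zpool import -d /dev/disk/by-id rpool
# afterwards, zpool status should list the vdevs by their /dev/disk/by-id names
zpool status rpool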

[SOLVED] Route network with LXC

Hello,
I am testing the new version of Proxmox, and I want to migrate all my KVM guests to LXC containers.

I have two networks on the Proxmox server:
vmbr0, bridged on eth0
vmbr2, bridged, with IP 192.168.2.1


Currently, with Proxmox 3, I have iptables on the Proxmox host and routing for all my internal VMs:
On the Proxmox host: "/sbin/iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o vmbr0 -j MASQUERADE"
Network config in the KVM guests:
Code:

auto eth0
iface eth0 inet static
        address 192.168.2.2
        broadcast 192.168.2.2
        post-up route add PROXMOX_IP dev eth0
        post-up route add default gw PROXMOX_IP
        post-down route del PROXMOX_IP dev eth0
        post-down route del default gw PROXMOX_IP

So my KVM guests have an internal IP and can access the internet.

I am looking for the same setup, but with LXC containers. If I create a container on the vmbr2 network, I can't route its traffic to the internet. The container can ping the Proxmox internal IP (192.168.2.1) but not public IPs.

Can anyone help me?


EDIT:
I had just forgotten to uncomment "net.ipv4.ip_forward=1" in /etc/sysctl.conf ...
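For reference, the setting in question can be applied like this (standard sysctl usage, shown as a minimal sketch):

Code:

# enable IPv4 forwarding immediately
sysctl -w net.ipv4.ip_forward=1
# make it persistent: uncomment (or add) this line in /etc/sysctl.conf
#   net.ipv4.ip_forward=1
# then reload the file
sysctl -p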

Thanks,
Stephane

linux VMs using virtio_net lose network connectivity

Hi everybody,

I have several occurrences of Linux VMs losing connectivity on the virtio_net device.

When it happens, no traffic goes over the virtio_net network interface at all. Unloading and reloading the virtio_net kernel module inside the VM fixes the problem, as does rebooting the VM.
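For reference, the module-reload workaround described above amounts to something like the following inside the affected VM (a sketch; the interface name eth0 and the availability of ifdown/ifup are assumptions):

Code:

# run inside the guest
ifdown eth0
modprobe -r virtio_net
modprobe virtio_net
ifup eth0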

Proxmox 3.4.11 and several types of VMs: suse 12.2, 13.1, centos 6.7

Regards
....Volker

LVM and virtio-scsi - detects volume as complete drive

Hi,

I have successfully created a few VMs on PVE 4.0. The hardware RAID (/dev/sda, a Dell H730 in RAID10) is partitioned like this:

Code:

NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                        8:0    0  2.7T  0 disk
├─sda1                    8:1    0    1M  0 part
└─sda2                    8:2    0  2.7T  0 part
  ├─vg0-root            252:0    0  14.9G  0 lvm  /
  ├─vg0-vm--100--disk--1 252:1    0  100G  0 lvm
  ├─vg0-vm--101--disk--1 252:2    0  512G  0 lvm
  ├─vg0-vm--102--disk--1 252:3    0    16G  0 lvm
  ├─vg0-vm--102--disk--2 252:4    0    64G  0 lvm
  ├─vg0-vm--103--disk--1 252:5    0  512G  0 lvm
  ├─vg0-vm--104--disk--1 252:6    0    16G  0 lvm
  ├─vg0-vm--104--disk--2 252:7    0    64G  0 lvm
  └─vg0-vm--105--disk--1 252:8    0  100G  0 lvm
sdb                        8:16  0 446.6G  0 disk
sdc                        8:32  0 446.6G  0 disk
sdd                        8:48  0 446.6G  0 disk
sde                        8:64  0 446.6G  0 disk

Note: /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde are single-disk RAID0 SSD volumes, also configured through the H730 controller.

VM 102 was created using the virtio-blk driver for all disks:
Code:

virtio0: lvm:vm-102-disk-1,size=16G
virtio1: lvm:vm-102-disk-2,size=64G
virtio2: /dev/sdb,size=457344M
virtio3: /dev/sdc,size=457344M

virtio2 and virtio3 were added manually, as the PVE UI doesn't allow attaching physical devices. The VM (Debian 8.2) works fine, although the disks show up as /dev/vdX (due to the virtio-blk driver).

For VM 104 I used the virtio-scsi driver for all disks:
Code:

scsihw: virtio-scsi-pci
scsi0: lvm:vm-104-disk-1,size=16G
scsi1: lvm:vm-104-disk-2,size=64G
scsi2: /dev/sdd,size=457344M
scsi3: /dev/sde,size=457344M

Note: I also tried removing the "lvm:" prefix and the "size=" option - nothing changed.

So problem 1 is: inside the VM, the installer detects both /dev/sda and /dev/sdb as 3 TB disks (the same size as /dev/sda on the PVE host). How can I fix that?
Problem 2 is: I'm not sure whether /dev/sdc inside VM 104 really maps to /dev/sdd on the host (a worry inspired by problem 1). How can I check whether it does? hdparm/smartctl can't read inquiry data from the disks.
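One way to check the mapping without inquiry data is to compare a signature of the raw device contents on the host and in the guest, for example (a read-only sketch; device names are the ones from the question):

Code:

# on the PVE host
dd if=/dev/sdd bs=1M count=1 2>/dev/null | md5sum
# inside VM 104
dd if=/dev/sdc bs=1M count=1 2>/dev/null | md5sum
# matching checksums suggest the guest disk is backed by that host device;
# note that two completely blank (all-zero) disks would also match, so write
# a marker to one of them first if they are still empty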

Any ideas are welcome.

Regards,

CT instant death after apt-get upgrade (testing/unstable source)

Hi,

I use Proxmox VE 4 and a Debian 7 template (downloaded here: https://openvz.org/Download/template/precreated). Under Proxmox VE 3, I used to use the testing or unstable sources with no problems with packages or updates.

Under Proxmox 4, when I use the testing source and run an upgrade with "apt-get upgrade", the CT gets an error after rebooting and no longer boots.

Code:

deb http://ftp.fr.debian.org/debian testing main contrib non-free
deb-src http://ftp.fr.debian.org/debian testing main contrib non-free

This is the error log from the pct start command:

Code:

lxc-start 1445965506.963 INFO    lxc_start_ui - lxc_start.c:main:264 - using rcfile /var/lib/lxc/103/config
      lxc-start 1445965506.963 WARN    lxc_confile - confile.c:config_pivotdir:1825 - lxc.pivotdir is ignored.  It will soon become an error.
      lxc-start 1445965506.964 WARN    lxc_cgmanager - cgmanager.c:cgm_get:993 - do_cgm_get exited with error
      lxc-start 1445965506.964 INFO    lxc_lsm - lsm/lsm.c:lsm_init:48 - LSM security driver AppArmor
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:318 - processing: .reject_force_umount  # comment this to allow umount -f;  not recommended.
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:410 - Adding native rule for reject_force_umount action 0
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:do_resolve_add_rule:210 - Setting seccomp rule to reject force umounts


      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:413 - Adding compat rule for reject_force_umount action 0
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:do_resolve_add_rule:210 - Setting seccomp rule to reject force umounts


      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:318 - processing: .[all].
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:318 - processing: .kexec_load errno 1.
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:410 - Adding native rule for kexec_load action 327681
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:413 - Adding compat rule for kexec_load action 327681
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:318 - processing: .open_by_handle_at errno 1.
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:410 - Adding native rule for open_by_handle_at action 327681
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:413 - Adding compat rule for open_by_handle_at action 327681
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:318 - processing: .init_module errno 1.
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:410 - Adding native rule for init_module action 327681
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:413 - Adding compat rule for init_module action 327681
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:318 - processing: .finit_module errno 1.
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:410 - Adding native rule for finit_module action 327681
      lxc-start 1445965506.965 WARN    lxc_seccomp - seccomp.c:do_resolve_add_rule:227 - Seccomp: got negative # for syscall: finit_module
      lxc-start 1445965506.965 WARN    lxc_seccomp - seccomp.c:do_resolve_add_rule:228 - This syscall will NOT be blacklisted
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:413 - Adding compat rule for finit_module action 327681
      lxc-start 1445965506.965 WARN    lxc_seccomp - seccomp.c:do_resolve_add_rule:227 - Seccomp: got negative # for syscall: finit_module
      lxc-start 1445965506.965 WARN    lxc_seccomp - seccomp.c:do_resolve_add_rule:228 - This syscall will NOT be blacklisted
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:318 - processing: .delete_module errno 1.
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:410 - Adding native rule for delete_module action 327681
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:413 - Adding compat rule for delete_module action 327681
      lxc-start 1445965506.965 INFO    lxc_seccomp - seccomp.c:parse_config_v2:420 - Merging in the compat seccomp ctx into the main one
      lxc-start 1445965506.965 INFO    lxc_conf - conf.c:run_script_argv:356 - Executing script '/usr/share/lxc/hooks/lxc-pve-prestart-hook' for container '103', config section 'lxc'
      lxc-start 1445965507.273 DEBUG    lxc_start - start.c:setup_signal_fd:262 - sigchild handler set
      lxc-start 1445965507.274 DEBUG    lxc_console - console.c:lxc_console_peer_default:500 - opening /dev/tty for console peer
      lxc-start 1445965507.274 DEBUG    lxc_console - console.c:lxc_console_peer_default:506 - using '/dev/tty' as console
      lxc-start 1445965507.274 DEBUG    lxc_console - console.c:lxc_console_sigwinch_init:179 - 24196 got SIGWINCH fd 9
      lxc-start 1445965507.274 DEBUG    lxc_console - console.c:lxc_console_winsz:88 - set winsz dstfd:6 cols:146 rows:47
      lxc-start 1445965507.274 INFO    lxc_start - start.c:lxc_init:454 - '103' is initialized
      lxc-start 1445965507.274 DEBUG    lxc_start - start.c:__lxc_start:1145 - Not dropping cap_sys_boot or watching utmp
      lxc-start 1445965507.275 INFO    lxc_conf - conf.c:run_script:406 - Executing script '/usr/share/lxc/lxcnetaddbr' for container '103', config section 'net'
      lxc-start 1445965507.578 DEBUG    lxc_conf - conf.c:instantiate_veth:2702 - instantiated veth 'veth103i0/vethMM06XP', index is '88'
      lxc-start 1445965507.578 INFO    lxc_cgroup - cgroup.c:cgroup_init:65 - cgroup driver cgmanager initing for 103
      lxc-start 1445965507.581 DEBUG    lxc_cgmanager - cgmanager.c:cgm_setup_limits:1393 - cgroup 'memory.limit_in_bytes' set to '2147483648'
      lxc-start 1445965507.582 DEBUG    lxc_cgmanager - cgmanager.c:cgm_setup_limits:1393 - cgroup 'memory.memsw.limit_in_bytes' set to '2684354560'
      lxc-start 1445965507.582 DEBUG    lxc_cgmanager - cgmanager.c:cgm_setup_limits:1393 - cgroup 'cpu.cfs_period_us' set to '100000'
      lxc-start 1445965507.582 DEBUG    lxc_cgmanager - cgmanager.c:cgm_setup_limits:1393 - cgroup 'cpu.cfs_quota_us' set to '200000'
      lxc-start 1445965507.582 DEBUG    lxc_cgmanager - cgmanager.c:cgm_setup_limits:1393 - cgroup 'cpu.shares' set to '1024'
      lxc-start 1445965507.582 INFO    lxc_cgmanager - cgmanager.c:cgm_setup_limits:1397 - cgroup limits have been setup
      lxc-start 1445965507.597 DEBUG    lxc_conf - conf.c:lxc_assign_network:3119 - move 'eth0' to '24222'
      lxc-start 1445965507.601 DEBUG    bdev - bdev.c:find_fstype_cb:151 - trying to mount '/dev/loop0'->'/usr/lib/x86_64-linux-gnu/lxc/rootfs' with fstype 'ext3'
      lxc-start 1445965507.603 DEBUG    bdev - bdev.c:find_fstype_cb:159 - mount failed with error: Invalid argument
      lxc-start 1445965507.603 DEBUG    bdev - bdev.c:find_fstype_cb:151 - trying to mount '/dev/loop0'->'/usr/lib/x86_64-linux-gnu/lxc/rootfs' with fstype 'ext2'
      lxc-start 1445965507.603 DEBUG    bdev - bdev.c:find_fstype_cb:159 - mount failed with error: Invalid argument
      lxc-start 1445965507.603 DEBUG    bdev - bdev.c:find_fstype_cb:151 - trying to mount '/dev/loop0'->'/usr/lib/x86_64-linux-gnu/lxc/rootfs' with fstype 'ext4'
      lxc-start 1445965507.606 INFO    bdev - bdev.c:find_fstype_cb:167 - mounted '/dev/loop0' on '/usr/lib/x86_64-linux-gnu/lxc/rootfs', with fstype 'ext4'
      lxc-start 1445965507.606 DEBUG    lxc_conf - conf.c:setup_rootfs:1284 - mounted 'loop:/media/clients/images/103/vm-103-disk-2.raw' on '/usr/lib/x86_64-linux-gnu/lxc/rootfs'
      lxc-start 1445965507.606 INFO    lxc_conf - conf.c:setup_utsname:919 - 'pmloikju.com' hostname has been setup
      lxc-start 1445965507.621 DEBUG    lxc_conf - conf.c:setup_hw_addr:2244 - mac address '02:00:00:21:55:c0' on 'eth0' has been setup
      lxc-start 1445965507.621 DEBUG    lxc_conf - conf.c:setup_netdev:2471 - 'eth0' has been setup
      lxc-start 1445965507.621 INFO    lxc_conf - conf.c:setup_network:2492 - network has been setup
      lxc-start 1445965507.621 INFO    lxc_conf - conf.c:mount_autodev:1148 - Mounting /dev under /usr/lib/x86_64-linux-gnu/lxc/rootfs
      lxc-start 1445965507.621 INFO    lxc_conf - conf.c:mount_autodev:1169 - Mounted tmpfs onto /usr/lib/x86_64-linux-gnu/lxc/rootfs/dev
      lxc-start 1445965507.621 INFO    lxc_conf - conf.c:mount_autodev:1187 - Mounted /dev under /usr/lib/x86_64-linux-gnu/lxc/rootfs
      lxc-start 1445965507.622 DEBUG    lxc_conf - conf.c:mount_entry:1727 - remounting /sys/fs/fuse/connections on /usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/fs/fuse/connections to respect bind or remount options
      lxc-start 1445965507.622 DEBUG    lxc_conf - conf.c:mount_entry:1742 - (at remount) flags for /sys/fs/fuse/connections was 4096, required extra flags are 0
      lxc-start 1445965507.622 DEBUG    lxc_conf - conf.c:mount_entry:1751 - mountflags already was 4096, skipping remount
      lxc-start 1445965507.622 DEBUG    lxc_conf - conf.c:mount_entry:1777 - mounted '/sys/fs/fuse/connections' on '/usr/lib/x86_64-linux-gnu/lxc/rootfs/sys/fs/fuse/connections', type 'none'
      lxc-start 1445965507.622 INFO    lxc_conf - conf.c:mount_file_entries:2026 - mount points have been setup
      lxc-start 1445965507.622 INFO    lxc_conf - conf.c:run_script_argv:356 - Executing script '/usr/share/lxc/hooks/lxc-pve-mount-hook' for container '103', config section 'lxc'
      lxc-start 1445965507.930 ERROR    lxc_conf - conf.c:run_buffer:336 - Script exited with status 25
      lxc-start 1445965507.930 ERROR    lxc_conf - conf.c:lxc_setup:3827 - failed to run mount hooks for container '103'.
      lxc-start 1445965507.930 ERROR    lxc_start - start.c:do_start:702 - failed to setup the container
      lxc-start 1445965507.930 ERROR    lxc_sync - sync.c:__sync_wait:51 - invalid sequence number 1. expected 2
      lxc-start 1445965507.930 WARN    lxc_conf - conf.c:lxc_delete_network:2995 - failed to remove interface 'eth0'
      lxc-start 1445965507.950 ERROR    lxc_start - start.c:__lxc_start:1172 - failed to spawn '103'
      lxc-start 1445965507.950 INFO    lxc_conf - conf.c:run_script_argv:356 - Executing script '/usr/share/lxc/hooks/lxc-pve-poststop-hook' for container '103', config section 'lxc'
      lxc-start 1445965508.264 ERROR    lxc_start_ui - lxc_start.c:main:344 - The container failed to start.
      lxc-start 1445965508.264 ERROR    lxc_start_ui - lxc_start.c:main:348 - Additional information can be obtained by setting the --logfile and --logpriority options.

Am I the only one seeing this kind of bug?
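For anyone debugging the same failure: the last two log lines above already point at the next step, i.e. running the container in the foreground with full debug logging (a sketch, using container 103 from the log and an arbitrary log path):

Code:

lxc-start -n 103 -F --logfile /tmp/lxc-103-debug.log --logpriority DEBUG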

virtio for disks (viostor) hangs with Windows 2003

I have an up-to-date Proxmox server.
Several Windows 2012 R2 guests are working perfectly. They were installed directly on Proxmox.

I did a P2V migration of two Windows 2003 servers, but the virtio storage drivers don't work, so I currently use IDE virtual drives, which are clearly too slow.
- Windows 2003 R2 SP2 Standard
- Windows 2003 SP2 Enterprise

The balloon and network drivers install without problems (latest stable version 0.1.102; also tested 0.1.110 without problems).

When I install the virtio storage driver (viostor), the process hangs forever at the stage of copying the viostor.sys file. I have to kill Device Manager to exit.
The driver then seems to be installed correctly.

If I hot-add a virtio drive, Device Manager correctly shows "Red Hat VirtIO SCSI controller" but displays an exclamation mark on the drive.
If I boot Windows 2003 with a virtio drive attached (system drive or secondary drive), I get a completely grey screen forever (which is not the case if the driver is not installed).
Same problem on both Windows 2003 servers.

I tested with an old viostor version: 1.0.30 -> same problem
I uninstalled/reinstalled/rebooted/etc.
I asked Google.
So now I'm posting here :-)


Code:

$ pveversion -v
proxmox-ve: 4.0-16 (running kernel: 3.16.0-4-amd64)
pve-manager: 4.0-48 (running version: 4.0-48/0d8559d0)
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-22
qemu-server: 4.0-30
pve-firmware: 1.1-7
libpve-common-perl: 4.0-29
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-25
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-9
pve-container: 1.0-6
pve-firewall: 2.0-12
pve-ha-manager: 1.0-9
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1

LXC postqueue BUG: warning: close: Permission denied

Hello,
I think the LXC Debian 8 container has a bug:

# mailq
postqueue: warning: close: Permission denied

Proxmox Kernel:
Linux prx15 4.2.2-1-pve #1 SMP Mon Oct 5 18:23:31 CEST 2015 x86_64 GNU/Linux

Any solutions?

Thanks

Resized VM HDD to 800GB but my real HDD has only 80GB

I installed Proxmox 4 on my HP server. The server has two HDDs: one is 80 GB and the other is 1000 GB. I run CentOS on the first HDD and made a 32 GB virtual HDD for it. 32 GB is not enough for my project, so I went to the Proxmox UI and resized the HDD, and by mistake made it 800 GB, while the real HDD is only 80 GB. My CentOS still boots normally.
What should I do?
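To see how much space the oversized virtual disk actually occupies, the image itself can be inspected; a sketch, where the storage type, VMID and path are assumptions that need to be adapted:

Code:

# for a qcow2/raw file on directory storage (path and VMID are placeholders)
qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2
# for an LVM-backed disk, list the logical volumes and their sizes
lvs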

Proxmox 4.0 - VM locked cluster problem

I'm getting some odd behavior. I have a six-node cluster; when I log into the Proxmox GUI, the datacenter view only shows one of the six nodes as online (whichever one I'm logged into). The others show red/offline.

I have two VMs that are offline and that I'm unable to start because they are locked from a backup.

When I issue qm unlock <VMID>, the command just hangs and I'm unable to escape out of it. I have to start a new SSH session to get back to the prompt.

I'm assuming there's something wrong with the cluster, but Proxmox reports all nodes as online while the GUI only shows the node you're logged into as active.

I've tried rebooting the individual nodes, restarting the pve-cluster service, and ensuring clocks are synced on all the servers... I'm not sure what else to do, as Proxmox believes the cluster is active...
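For context, the cluster-side checks behind "all nodes online" are along these lines (a sketch of the usual commands on a PVE 4 node):

Code:

pvecm status                              # corosync/quorum view of the cluster
systemctl status pve-cluster corosync     # state of the cluster services
journalctl -u pve-cluster -u corosync --since today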

I'd post relevant logs and command output, but since I'm new to this forum it blocks the content because it thinks it contains URLs... Maybe a moderator can lift this restriction for my account?

Setting a QEMU note fails when the string includes non-ASCII characters

After upgrading to PVE 4.0, I can't set KVM notes that include Chinese characters, but in PVE 3.x it worked fine.

It seems like an encoding or decoding bug.

[Attached screenshot: 1.png]

Proxmox 4 quorum with a two-node cluster

Hello,

I have a couple of issues:

1. After I installed Proxmox 4 and logged into the web interface, I can choose between my two nodes.
When I click on the second node, the "login" popup appears again and again every 2-3 seconds; this happens continuously until I restart the browser session.


2. I'm missing mkqdisk to create a quorum disk. I've read (in the Proxmox wiki) that there is no longer a way to run a two-node HA cluster in Proxmox 4 (we run two Proxmox 3.4 clusters with two nodes each and created a quorum disk on a network block device to do so).

Is there any chance of building a Proxmox 4.0 HA cluster with two nodes?

Problem when trying to restore from a backup

Hi,

I have a strange problem when trying to restore a CT from a backup.

I made backups of my containers using Proxmox 3.4 ("suspend" mode, "gz" compression) and stored them on a NAS.

I made a test with a very simple, minimal CT (a fresh installation of Debian 8 with Apache). Everything works fine with this CT. I made a backup, destroyed the CT, and restored it from the backup (with the same CT ID).

Right after that, I tried to access the default Apache page and got a "Forbidden" error. A quick look into the Apache error log shows this message:

"[Wed Oct 28 16:42:53.180464 2015] [core:error] [pid 241] (13)Permission denied: [client aaa.bbb.ccc.ddd:56441] AH00035: access to / denied (filesystem path '/var') because search permissions are missing on a component of the path"

It is as if the restore process had damaged the Linux permissions ...
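A quick way to confirm whether the restore really changed the directory modes is to compare them against the defaults; a sketch (on Debian, /var is normally mode 755, and the document root path is an assumption):

Code:

# inside the restored container
ls -ld / /var /var/www
# if /var has lost its execute (search) bit, that matches the Apache error above
chmod 755 /var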

This is not the first time I have seen this kind of problem when restoring a container.

Any ideas?

Thanks for your help !

/Xavier

cannot ping from one vm to another vm

I have the following NAT setup on my host:

Code:

auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.240.14
        netmask 255.255.255.0
        gateway 192.168.240.253
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet static
        address  10.10.10.1
        netmask  255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up  iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE

On my host there are two Windows 7 guests.

Network config of the guests:
IP: 10.10.10.11
netmask: 255.255.255.0
gateway: 10.10.10.1
DNS: 8.8.8.8

The second machine is the same, with IP 10.10.10.12.

On each guest machine I can ping google.com, so internet access works, but I cannot ping the other machine.

What is the problem here?

In the GUI I have set up both Windows 7 machines with the virtio network card on "vmbr1".
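One way to narrow this down is to watch the internal bridge on the host while pinging from one guest to the other; a diagnostic sketch (a common culprit in this situation is the Windows firewall on the target guest):

Code:

# on the Proxmox host
tcpdump -ni vmbr1 icmp
# if echo requests appear but no replies, the target guest is dropping them;
# if nothing appears at all, the traffic never reaches the bridge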

Thanks in advance.

Please tell me if you need additional information.

Moving from HYPER-V to Proxmox

Hey there,

Currently I'm moving my virtual machines from Hyper-V to Proxmox. We only have 3 machines running, but they are quite important for us.

So I have an "old" server which I can use as a transition zone; I installed Proxmox 4 on it and have already migrated the first VM to it. It runs quite well right now.

What I don't know yet is: once I have migrated all the VMs to the transition server and reinstalled the Hyper-V host with Proxmox, how do I get them from the transition server to the new one?

I haven't found anything in the wiki about migrating from Proxmox to Proxmox. I don't want to build a cluster, because I will need the transition server somewhere else afterwards.

So how do I do this?
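One commonly used path that avoids clustering is backup and restore: dump the VM on the transition host, copy the archive over, and restore it on the new host. A sketch, where the VMID, storage name and dump path are assumptions:

Code:

# on the transition server
vzdump 100 --mode stop --compress lzo --storage local
# copy the resulting archive to the new host (default dump path shown)
scp /var/lib/vz/dump/vzdump-qemu-100-*.vma.lzo newhost:/var/lib/vz/dump/
# on the new Proxmox server
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma.lzo 100 --storage local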

Thanks!

PVE4 & SRP storage from another ZFS pool via SRPT

Hi All

I am new to PVE4 (less than 24 hours), but I am very familiar with Ubuntu, InfiniBand and ZFS.

My setup:

PVE4 node: Mellanox ConnectX-3 VPI dual-port card (SR-IOV can be initialized, but there is currently an issue with IOMMU grouping)
SAN node: Mellanox ConnectX-3 VPI dual-port card, running a ZFS pool that serves virtual block devices over RDMA (Ubuntu 14.04.3, but it could be another PVE4 node as well)

in /etc/modules
Code:

mlx4_en
mlx4_ib
ib_ipoib
ib_umad
ib_srp

and I have a startup script that executes the following:
Code:

echo "id_ext=xxxxxx,ioc_guid=xxxxxx,dgid=xxxxx,pkey=ffff,service_id=xxxxx" > /sys/class/infiniband_srp/srp-mlx4_0-1/add_target
The disks are found (controller SCST_BIO):
Code:

: ~# lsscsi
[0:0:1:0]    cd/dvd  MATSHITA DVD-ROM SR-8178  PZ16  /dev/sr0
[4:2:0:0]    disk    LSI      MegaRAID SAS RMB 1.40  /dev/sda
[5:0:0:0]    disk    SCST_BIO kvm-node0        310  /dev/sdb
[5:0:0:1]    disk    SCST_BIO kvm-node1        310  /dev/sdc
[5:0:0:2]    disk    SCST_BIO kvm-node2        310  /dev/sdd
[5:0:0:3]    disk    SCST_BIO kvm-node3        310  /dev/sde
[5:0:0:4]    disk    SCST_BIO kvm-node4        310  /dev/sdf
[5:0:0:5]    disk    SCST_BIO kvm-node5        310  /dev/sdg

I am assuming that adding them to a VM should follow
Quote:

pve wiki Physical_disk_to_kvm
(I can't even include a URL in my post yet)
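As far as I understand, that wiki page boils down to attaching the block device to the VM directly; a sketch of what that looks like, where the VMID, bus slot and device path are placeholders:

Code:

# stable /dev/disk/by-id/... paths are preferable to /dev/sdX here
qm set 100 -virtio1 /dev/disk/by-id/<id-of-the-SRP-disk>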

Is there a more native way for PVE to support virtual block devices other than ZFS over iSCSI? These disks are not actually local (they are multipath), so when I set up PVE clusters, live migration should work without needing to move data (since the data is actually handled by the SAN).

Or do I need to use the Ceph storage cluster method described in
Quote:

pve wiki Storage:_Ceph
(I can run Ceph on top of ZFS) and also
Quote:

mellanox docs/DOC-2141
. The downside is that this is not RDMA over an InfiniBand network but RDMA over Ethernet (a huge performance drop).

I am trying to move away from VMware to PVE4 because it supports LXC and ZFS natively (ZFS on root! However, many people use a zpool in stripe-mirror mode, and the installer is currently limited to RAID1 or RAIDZx). Eventually I plan to run 4 PVE4 nodes, with 2 of them acting as SANs (backups can be done using PVE's zsync, which is basically a zfs send | zfs recv script), all interconnected with Mellanox ConnectX-3 VPI cards.

PVE4 supports both InfiniBand and ZFS, but somehow, when both are used together, some of the HPC features are missing.

Mounting CIFS storage on PVE4.0

[Attached screenshot: Capture.PNG]

I have been having some issues with mounting storage for use in containers. In the image above you will see the same command issued twice consecutively: the first attempt does not work and the second does. I changed nothing between issuing these commands (honestly, I just issued the command over and over until it eventually worked, since everything appeared to be configured properly), and I was able to access the share from other Windows devices before issuing either of the commands. I am trying to identify what exactly causes this error, considering it just seems to resolve itself a short while after the PVE system boots.

Are there any places I should check or look at, since it really isn't clear why the issue suddenly stopped occurring?
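If the failure only occurs shortly after boot, one common mitigation is to make sure the mount waits for the network to be up, e.g. by marking it as a network filesystem in /etc/fstab; a sketch, where the server, share, mount point and credentials file are placeholders:

Code:

# /etc/fstab
//nas.example.com/share  /mnt/share  cifs  _netdev,credentials=/root/.smbcredentials  0  0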

Issue with 80003ES2LAN NIC, e1000e driver and Proxmox 4

Most of this data was reconstructed from phone photos.
Initial install of Proxmox 4 not working with the e1000e driver.
Edit: I can't update the hypervisor as it has no working Ethernet ports; it is a vanilla install.

NIC: 80003ES2LAN
MOBO: Intel S5400SF
Kernel: 4.2.2-1-pve

Confirmed working with Fedora 22 live, Debian 8.2 live and Windows 7.
Fedora 22 and Debian 8.2 are both using the e1000e driver.
Tried copying the driver over from several different distros, without luck.
Tried using a different Proxmox 4 hypervisor as a build environment.
Built several different versions of the e1000e driver, including the latest on the Intel and Sourceforge websites (The latest Sourceforge one supposedly works with kernel 4.x).
We tried manually loading the module, as well as adding it to /etc/modules.
No mention of the driver, or the interface, in any log file.
During boot the lights on both networking ports turn off as soon as the initial install of Proxmox 4 starts to load.
After replacing the e1000e.ko in Proxmox 4 with the one from Fedora 22, the lights would stay on but nothing else changed.
The motherboard EFI was updated to the latest version, and reset after the issue initially occurred.
After force loading e1000e, lsmod shows it, and "ptp" shows it is using it.
I do not have an ifconfig photo to reconstruct the output from, but it looked good compared to a working hypervisor with a different NIC chipset.

/etc/network/interfaces
Code:

auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.192
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

route -n
Code:

Destination  Gateway      Genmask        Flags  Metric  Ref  Use  Iface
default      192.168.1.1  0.0.0.0        UG    0      0    0    vmbr0
192.168.1.0  *            255.255.255.0  U      0      0    0    vmbr0

lspci | grep -i ethernet
Code:

06:00.0 Ethernet controller: Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper) (rev 01)
06:00.1 Ethernet controller: Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper) (rev 01)

Let me know if you need any more info; I will see if I have photos for anything else.

ZFS Health

I installed Proxmox 4 on two 4TB drives with ZFS RAID1. How can I tell the health status of the RAID array? Will it email me etc if one drive fails?
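A short sketch of how the health is usually checked by hand, and where email notification is typically configured (the ZED part assumes the zfs-zed event daemon shipped with ZFS on Linux; the exact variable name differs between versions):

Code:

# prints "all pools are healthy" or details of any degraded/faulted pool
zpool status -x
zpool status rpool

# email alerts are handled by the ZFS Event Daemon (zed);
# in /etc/zfs/zed.d/zed.rc set the notification address, e.g.
#   ZED_EMAIL="root"          # older releases
#   ZED_EMAIL_ADDR="root"     # newer releases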

Prox4 - PCIe passthrough - losing PCIe devices on boot

I followed this guide:

https://pve.proxmox.com/wiki/Pci_passthrough

Once I apply the following part of the guide:

Quote:

edit: # vi /etc/default/grub

change: GRUB_CMDLINE_LINUX_DEFAULT="quiet"

to: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

then:
# update-grub
# reboot


I get the following error on reboot:

[Attached screenshot: Screen Shot 2015-10-28 at 22.32.45.jpg]

When I run

Code:

dmesg | grep -e DMAR -e IOMMU
[    0.000000] ACPI: DMAR 0x00000000B9DC9310 0000B4 (v01 A M I  OEMDMAR  00000001 INTL 00000001)
[    0.000000] DMAR: IOMMU enabled
[    0.076102] DMAR: Host address width 46
[    0.076104] DMAR: DRHD base: 0x000000fbffc000 flags: 0x1
[    0.076112] DMAR: dmar0: reg_base_addr fbffc000 ver 1:0 cap d2078c106f0462 ecap f020ff
[    0.076113] DMAR: RMRR base: 0x000000b9a77000 end: 0x000000b9a83fff
[    0.076114] DMAR: ATSR flags: 0x0
[    0.076115] DMAR: RHSA base: 0x000000fbffc000 proximity domain: 0x0
[    0.076117] DMAR-IR: IOAPIC id 0 under DRHD base  0xfbffc000 IOMMU 0
[    0.076118] DMAR-IR: HPET id 0 under DRHD base 0xfbffc000
[    0.076119] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.076324] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.646476] DMAR: dmar0: Using Queued invalidation
[    0.646484] DMAR: Setting RMRR:
[    0.646493] DMAR: Setting identity map for device 0000:00:1a.0 [0xb9a77000 - 0xb9a83fff]
[    0.646504] DMAR: Setting identity map for device 0000:00:1d.0 [0xb9a77000 - 0xb9a83fff]
[    0.646508] DMAR: Prepare 0-16MiB unity mapping for LPC
[    0.646514] DMAR: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]
[    0.646518] DMAR: Intel(R) Virtualization Technology for Directed I/O
[    1.491414] DMAR: DRHD: handling fault status reg 2
[    1.491430] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f0000
DMAR:[fault reason 01] Present bit in root entry is clear
[    1.706450] DMAR: DRHD: handling fault status reg 102
[    1.706465] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f1000
DMAR:[fault reason 01] Present bit in root entry is clear
[    1.882715] DMAR: DRHD: handling fault status reg 202
[    1.882729] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f2000
DMAR:[fault reason 01] Present bit in root entry is clear
[    2.085657] DMAR: DRHD: handling fault status reg 302
[    2.085671] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f3000
DMAR:[fault reason 01] Present bit in root entry is clear
[    5.052440] DMAR: DRHD: handling fault status reg 402
[    5.052456] DMAR: DMAR:[DMA Read] Request device [03:00.0] fault addr b9221000
DMAR:[fault reason 01] Present bit in root entry is clear
[    7.040400] DMAR: DRHD: handling fault status reg 502
[    7.040417] DMAR: DMAR:[DMA Read] Request device [03:00.0] fault addr b9221000
DMAR:[fault reason 01] Present bit in root entry is clear
[  12.611242] DMAR: DRHD: handling fault status reg 602
[  12.611258] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f0000
DMAR:[fault reason 01] Present bit in root entry is clear
[  12.831820] DMAR: DRHD: handling fault status reg 702
[  12.831837] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f1000
DMAR:[fault reason 01] Present bit in root entry is clear
[  13.003815] DMAR: DRHD: handling fault status reg 2
[  13.003831] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f2000
DMAR:[fault reason 01] Present bit in root entry is clear
[  13.205852] DMAR: DRHD: handling fault status reg 102
[  13.205869] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f3000
DMAR:[fault reason 01] Present bit in root entry is clear
[  23.787045] DMAR: DRHD: handling fault status reg 202
[  23.787062] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f0000
DMAR:[fault reason 01] Present bit in root entry is clear
[  24.007131] DMAR: DRHD: handling fault status reg 302
[  24.007148] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f1000
DMAR:[fault reason 01] Present bit in root entry is clear
[  24.179927] DMAR: DRHD: handling fault status reg 402
[  24.179944] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f2000
DMAR:[fault reason 01] Present bit in root entry is clear
[  24.395113] DMAR: DRHD: handling fault status reg 502
[  24.395130] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f3000
DMAR:[fault reason 01] Present bit in root entry is clear
[  35.020072] DMAR: DRHD: handling fault status reg 602
[  35.020088] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f0000
DMAR:[fault reason 01] Present bit in root entry is clear
[  35.240777] DMAR: DRHD: handling fault status reg 702
[  35.241425] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f1000
DMAR:[fault reason 01] Present bit in root entry is clear
[  35.408119] DMAR: DRHD: handling fault status reg 2
[  35.408776] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f2000
DMAR:[fault reason 01] Present bit in root entry is clear
[  35.621399] DMAR: DRHD: handling fault status reg 102
[  35.622056] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f3000
DMAR:[fault reason 01] Present bit in root entry is clear
[  46.380216] DMAR: DRHD: handling fault status reg 202
[  46.380873] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f0000
DMAR:[fault reason 01] Present bit in root entry is clear
[  46.599426] DMAR: DRHD: handling fault status reg 302
[  46.600079] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f1000
DMAR:[fault reason 01] Present bit in root entry is clear
[  46.772392] DMAR: DRHD: handling fault status reg 402
[  46.773048] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f2000
DMAR:[fault reason 01] Present bit in root entry is clear
[  47.034842] DMAR: DRHD: handling fault status reg 502
[  47.035498] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f3000
DMAR:[fault reason 01] Present bit in root entry is clear
[  57.584096] DMAR: DRHD: handling fault status reg 602
[  57.584755] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f0000
DMAR:[fault reason 01] Present bit in root entry is clear
[  57.808051] DMAR: DRHD: handling fault status reg 702
[  57.808708] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f1000
DMAR:[fault reason 01] Present bit in root entry is clear
[  57.976532] DMAR: DRHD: handling fault status reg 2
[  57.977186] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f2000
DMAR:[fault reason 01] Present bit in root entry is clear
[  58.196034] DMAR: DRHD: handling fault status reg 102
[  58.196687] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f3000
DMAR:[fault reason 01] Present bit in root entry is clear
[  68.776021] DMAR: DRHD: handling fault status reg 202
[  68.776674] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f0000
DMAR:[fault reason 01] Present bit in root entry is clear
[  69.000018] DMAR: DRHD: handling fault status reg 302
[  69.000669] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f1000
DMAR:[fault reason 01] Present bit in root entry is clear
[  69.168642] DMAR: DRHD: handling fault status reg 402
[  69.169294] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f2000
DMAR:[fault reason 01] Present bit in root entry is clear
[  69.401406] DMAR: DRHD: handling fault status reg 502
[  69.402058] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f3000
DMAR:[fault reason 01] Present bit in root entry is clear
[  80.024008] DMAR: DRHD: handling fault status reg 602
[  80.024661] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f0000
DMAR:[fault reason 01] Present bit in root entry is clear
[  80.250337] DMAR: DRHD: handling fault status reg 702
[  80.250987] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f1000
DMAR:[fault reason 01] Present bit in root entry is clear
[  80.416839] DMAR: DRHD: handling fault status reg 2
[  80.417488] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f2000
DMAR:[fault reason 01] Present bit in root entry is clear
[  80.631588] DMAR: DRHD: handling fault status reg 102
[  80.632237] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f3000
DMAR:[fault reason 01] Present bit in root entry is clear
[  91.283990] DMAR: DRHD: handling fault status reg 202
[  91.284641] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f0000
DMAR:[fault reason 01] Present bit in root entry is clear
[  91.508981] DMAR: DRHD: handling fault status reg 302
[  91.509632] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f1000
DMAR:[fault reason 01] Present bit in root entry is clear
[  91.672052] DMAR: DRHD: handling fault status reg 402
[  91.672701] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f2000
DMAR:[fault reason 01] Present bit in root entry is clear
[  91.871926] DMAR: DRHD: handling fault status reg 502
[  91.872573] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f3000
DMAR:[fault reason 01] Present bit in root entry is clear
[  102.575976] DMAR: DRHD: handling fault status reg 602
[  102.576626] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f0000
DMAR:[fault reason 01] Present bit in root entry is clear
[  102.800978] DMAR: DRHD: handling fault status reg 702
[  102.801625] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f1000
DMAR:[fault reason 01] Present bit in root entry is clear
[  102.964295] DMAR: DRHD: handling fault status reg 2
[  102.964943] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f2000
DMAR:[fault reason 01] Present bit in root entry is clear
[  103.205305] DMAR: DRHD: handling fault status reg 102
[  103.205952] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f3000
DMAR:[fault reason 01] Present bit in root entry is clear
[  113.772903] DMAR: DRHD: handling fault status reg 202
[  113.773551] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f0000
DMAR:[fault reason 01] Present bit in root entry is clear
[  113.992965] DMAR: DRHD: handling fault status reg 302
[  113.993611] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f1000
DMAR:[fault reason 01] Present bit in root entry is clear
[  114.160404] DMAR: DRHD: handling fault status reg 402
[  114.161052] DMAR: DMAR:[DMA Write] Request device [03:00.1] fault addr ad8f2000
DMAR:[fault reason 01] Present bit in root entry is clear
[  114.415543] DMAR: DRHD: handling fault status reg 502
[  114.416191] DMAR: DMAR:[DMA Write] Request device [03:00.0] fault addr ad8f3000
DMAR:[fault reason 01] Present bit in root entry is clear
[  173.319533] AMD IOMMUv2 driver by Joerg Roedel <jroedel@suse.de>
[  173.319535] AMD IOMMUv2 functionality not available on this system


lspci lists the following:
lspci -vv
Quote:

03:00.0 RAID bus controller: HighPoint Technologies, Inc. RocketRAID 640L 4 Port SATA-III Controller (rev 01)
Subsystem: HighPoint Technologies, Inc. RocketRAID 640L 4 Port SATA-III Controller
Flags: bus master, fast devsel, latency 0, IRQ 16
I/O ports at d050 [size=8]
I/O ports at d040 [size=4]
I/O ports at d030 [size=8]
I/O ports at d020 [size=4]
I/O ports at d000 [size=32]
Memory at fbd10000 (32-bit, non-prefetchable) [size=2K]
Expansion ROM at fbd00000 [disabled] [size=64K]
Capabilities: [40] Power Management version 3
Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit-
Capabilities: [70] Express Legacy Endpoint, MSI 00
Capabilities: [e0] SATA HBA v0.0
Capabilities: [100] Advanced Error Reporting
The problem is that the RAID controller drops its /dev/sd* devices.

If I edit /etc/default/grub and change GRUB_CMDLINE_LINUX_DEFAULT back to "quiet" again, there is no more issue with the RAID controller.
It should be said that I'm not even trying to pass through the RAID controller at 03:00.0.



Does anyone know how I can fix this?