Channel: Proxmox Support Forum

noVNC console is killing direct VNC access port

Hello,

I'm running Qemu with the following VNC args
Code:

args: -vnc 0.0.0.0:101,password
Then I'm setting the password with
Code:

pvesh create /nodes/localhost/qemu/101/monitor -command 'set_password vnc mypasswd'
This lets me connect a regular VNC client to the VM on port 6001 (5900 + display 101), and it works very well :D
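
For reference, the two steps above can be wrapped into a small helper so the password is re-applied after each VM (re)start, since a password set through the QEMU monitor does not survive a restart of the VM. This is only a sketch built from the commands above; VM ID and password are placeholders:
Code:

#!/bin/bash
# set-vnc-pw.sh <vmid> <password> -- re-apply the VNC password via the QEMU monitor
VMID="$1"
PASSWD="$2"
pvesh create /nodes/localhost/qemu/"$VMID"/monitor -command "set_password vnc $PASSWD"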

My problem: since Proxmox 3.3, the (otherwise great) noVNC console shuts down this VNC port when it is launched.
When I start a noVNC console, port 6001 no longer responds (which would be fine for the duration of the noVNC session).
The problem is that when I close the noVNC popup, Proxmox doesn't give the direct VNC port back to me. :(

Any idea how to restore the VNC server on port 6001 once noVNC is closed?

Thanks !
RCK

DAB Built Wheezy Templates broken in PVE 3.3-2?

Containers based on DAB-compiled Wheezy templates do not recognise the root console password that was set while creating them, if the template was built in the last few days under PVE 3.3-2. Anyone can reproduce it: build a DAB Wheezy Minimal template under PVE 3.3 (kernel build 136), create a container based on it, start the container and try to SSH into it. Has some change in Debian Wheezy broken it?
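
A minimal reproduction sketch along those lines from the CLI (CT ID, template name and password are placeholders; the template is assumed to already sit in /var/lib/vz/template/cache, and vzctl set --userpasswd is used here as the command-line way of setting the root password):
Code:

# create a container from the freshly built DAB template (names are examples)
vzctl create 200 --ostemplate debian-7.0-minimal_7.0-1_amd64 --hostname dabtest
vzctl set 200 --userpasswd root:MySecret
vzctl start 200
# if the bug reproduces, the password set above is rejected when logging in:
ssh root@<container-ip>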

VM-to-VM communication

I'm on a MicroServer with one NIC, with everything hooked into the same bridge. I transfer a lot of files from VM to VM, for various reasons. However, judging by the speeds I'm getting (29 MB/s for large ISOs), I'm assuming this all goes out through the bridge and then comes back in through the bridge to the other VM.

Is there a better way to transfer files VM to VM? I'm thinking of something like the VMware vSwitch.

I have not tried Open vSwitch, as I am unfamiliar with configuring it, but I would love it if it turned out to be that easy.
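
One way to check whether the bridge is really the bottleneck (rather than disk speed or the transfer protocol) is to measure raw VM-to-VM network throughput with iperf; a quick sketch, with the receiver's IP as a placeholder:
Code:

# on the receiving VM
iperf -s
# on the sending VM (10.0.0.20 stands for the receiver's IP)
iperf -c 10.0.0.20 -t 30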

vzdump fails on a single qemu VM

Hi, I have a single proxmox node with 1 CT and 3 VMs
Code:

# pveversion -v
proxmox-ve-2.6.32: 3.2-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.2-1 (running version: 3.2-1/1933730b)
pve-kernel-2.6.32-27-pve: 2.6.32-121
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-14
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

Everything is working as expected, really well in fact, except the vzdump cron job. It fails on just one VM.
Code:

# cat /var/lib/vz/dump/vzdump-qemu-101-2014_09_28-14_07_05.log
Sep 28 14:07:05 INFO: Starting Backup of VM 101 (qemu)
Sep 28 14:07:05 INFO: status = running
Sep 28 14:07:07 INFO: update VM 101: -lock backup
Sep 28 14:07:07 INFO: backup mode: stop
Sep 28 14:07:07 INFO: ionice priority: 7
Sep 28 14:07:07 INFO: stopping vm
Sep 28 14:17:08 INFO: VM quit/powerdown failed - got timeout
Sep 28 14:17:08 ERROR: Backup of VM 101 failed - command 'qm shutdown 101 --skiplock --keepActive --timeout 600' failed: exit code 255

My vzdump cron job is in the /etc/cron.d/ folder, as follows:
Code:

PATH="/usr/sbin:/usr/bin:/sbin:/bin"
0 14 * * 7          root qm unlock 101 && vzdump 100 101 102 103 --quiet 1 --mode stop --mailto xxxxx --node i7virt --compress gzip --storage local

After the cron job is done, I receive an e-mail with the vzdump status: CT100 OK, VM101 failed, VM102 OK, VM103 OK.
I don't think the virtualized OS should matter, but VM101 is a Windows 7 Professional 32-bit and it's the only one failing.
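
Since the log shows the ACPI shutdown timing out ('qm shutdown 101 --skiplock --keepActive --timeout 600' exits with code 255), it may be worth reproducing that step by hand and, if the guest never reacts to ACPI, backing this one VM up without stopping it. A sketch, reusing the options from the cron line:
Code:

# does the guest react to an ACPI shutdown at all?
qm shutdown 101 --timeout 600

# if not, try backing up this VM in snapshot mode instead of stop mode
vzdump 101 --mode snapshot --compress gzip --storage local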

Prerouting and Postrouting with new Proxmox Firewall

My host has one public IP address bound to vmbr0. To allow traffic between my CTs and the internet I used to use PREROUTING and POSTROUTING rules. Now I'm trying to use the new built-in Proxmox firewall to get the same result, but I don't know how, since NAT isn't present (or is it? Where???) and the wiki doesn't mention it.

How can I do this?
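
For reference, the kind of PREROUTING/POSTROUTING rules meant above look roughly like this (the subnet, port and container address are placeholders, not taken from the actual setup):
Code:

# masquerade outgoing container traffic behind the host's public IP on vmbr0
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o vmbr0 -j MASQUERADE
# forward an incoming port on the public IP to a container (example: HTTP to 192.168.100.10)
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 80 -j DNAT --to-destination 192.168.100.10:80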

Proxmox through Tomato Firmware

Hello,

I've got a fibre connection in Australia, am setting up Proxmox, and have a /28 IPv4 subnet to use. In between the Proxmox server and the fibre connection is an Asus router that has been flashed with Tomato. Here's what I've done, and the problems I am having. Any help or insight would be super appreciated!

I've set the startup scripts for Tomato to assign all of the public IPs to it, and they're all working great.
Code:

ifconfig vlan2:0 220.***.***.1 broadcast 220.***.***.15 netmask 255.255.255.240
ifconfig vlan2:1 220.***.***.2 broadcast 220.***.***.15 netmask 255.255.255.240
etc etc

I've set the firewall scripts in Tomato to be the following:
Code:

/usr/sbin/iptables -t nat -I PREROUTING -d 220.***.***.1 -j DNAT --to-destination 10.0.0.253
/usr/sbin/iptables -t nat -I POSTROUTING 1 -p all -s 10.0.0.253 -j SNAT --to 220.***.***.1
/usr/sbin/iptables -I FORWARD -d 10.0.0.253 -j ACCEPT

/usr/sbin/iptables -t nat -I PREROUTING -d 220.***.***.2 -j DNAT --to-destination 10.0.0.2
/usr/sbin/iptables -t nat -I POSTROUTING 1 -p all -s 10.0.0.2 -j SNAT --to 220.***.***.2
/usr/sbin/iptables -I FORWARD -d 10.0.0.2 -j ACCEPT

/usr/sbin/iptables -t nat -I PREROUTING -d 220.***.***.3 -j DNAT --to-destination 10.0.0.3
/usr/sbin/iptables -t nat -I POSTROUTING 1 -p all -s 10.0.0.3 -j SNAT --to 220.***.***.3
/usr/sbin/iptables -I FORWARD -d 10.0.0.3 -j ACCEPT

etc etc

For each VM set up in Proxmox, I allocate it a local IP (10.0.0.x) and leave Tomato's iptables to do its thing, which has been working well.

Problems:
If I add one of the external IPs 220.***.***.x to a VM, that VM can't access the internet using that IP. I tried having the iptables rules in Tomato forward everything to the node IP (10.0.0.253), but it wouldn't pass it on to the VMs.
I'm trying to install DirectAdmin on one VM, but it won't let me install unless the VM has an external IP added and can reach directadmin.com through that external IP. As mentioned before, though, when using an external IP (for example by binding wget to it), the VM can't access the web. According to tcpdump, the traffic seems to go out, but when the packets come back in they don't get forwarded to the correct place.
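
To narrow down where the return traffic gets lost, it can help to watch both sides of the router at once; a sketch (vlan2 is the WAN interface from the scripts above, br0 is assumed to be Tomato's usual LAN bridge, and the masked IP is just an example):
Code:

# on the Tomato router: do the replies arrive on the WAN side for the right public IP?
tcpdump -ni vlan2 host 220.xxx.xxx.2
# and do they leave again on the LAN side towards the VM?
tcpdump -ni br0 host 10.0.0.2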

I'm at a bit of a loss here. Can someone please help me?

Thanks,
Jarrod.

CMAN fails after update to 3.3

Aijaijai! I did an update to the new version on 28 Sep, and now both nodes report a failure when booting:

Sun Sep 28 10:42:11 2014: Starting cluster:
Sun Sep 28 10:42:11 2014: Checking if cluster has been disabled at boot... [ OK ]
Sun Sep 28 10:42:11 2014: Checking Network Manager... [ OK ]
Sun Sep 28 10:42:11 2014: Global setup... [ OK ]
Sun Sep 28 10:42:11 2014: Loading kernel modules... [ OK ]
Sun Sep 28 10:42:11 2014: Mounting configfs... [ OK ]
Sun Sep 28 10:42:11 2014: Starting cman... tempfile:13: element device: Relax-NG validity error : Invalid attribute nodename for element device
Sun Sep 28 10:42:16 2014: Relax-NG validity error : Extra element fence in interleave
Sun Sep 28 10:42:16 2014: tempfile:6: element clusternodes: Relax-NG validity error : Element clusternode failed to validate content
Sun Sep 28 10:42:16 2014: tempfile:7: element clusternode: Relax-NG validity error : Element clusternodes has extra content: clusternode
Sun Sep 28 10:42:16 2014: Configuration fails to validate
Sun Sep 28 10:42:16 2014: [ OK ]
Sun Sep 28 10:42:16 2014: Starting qdiskd... [ OK ]
Sun Sep 28 10:42:26 2014: Waiting for quorum... [ OK ]
Sun Sep 28 10:42:26 2014: Starting fenced... [ OK ]
Sun Sep 28 10:42:26 2014: Starting dlm_controld... [ OK ]
Sun Sep 28 10:42:27 2014: Tuning DLM kernel config... [ OK ]
Sun Sep 28 10:42:27 2014: Unfencing self... [ OK ]
Sun Sep 28 10:42:27 2014: Joining fence domain... [ OK ]
Sun Sep 28 10:42:28 2014: Starting PVE firewall logger: pvefw-logger.
Sun Sep 28 10:42:29 2014: Starting OpenVZ: ..done
Sun Sep 28 10:42:29 2014: Bringing up interface venet0: ..done
Sun Sep 28 10:42:29 2014: Starting Cluster Service Manager: [ OK ]


root@node1:~# pveversion --verbose
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-31-pve: 2.6.32-132
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

Before the update everything was fine, and I think everything is still running now, but what about the failure when starting the nodes?

root@node1:~# /etc/init.d/cman status
cluster is running.

My cluster.conf

root@node1:~# cat /etc/pve/cluster.conf
<?xml version="1.0"?>
<cluster config_version="67" name="BlDMZ">
<cman expected_votes="3" keyfile="/var/lib/pve-cluster/corosync.authkey"/>
<quorumd allow_kill="0" interval="1" label="proxmoxquorum" tko="10" votes="1"/>
<totem token="54000"/>
<clusternodes>
<clusternode name="node2" nodeid="1" votes="1">
<fence>
<method name="1">
<device name="fenceNode2"/>
</method>
<method name="2">
<device name="human" nodename="node2"/>
</method>
</fence>
</clusternode>
<clusternode name="node1" nodeid="2" votes="1">
<fence>
<method name="1">
<device name="fenceNode1"/>
</method>
<method name="2">
<device name="human" nodename="node1"/>
</method>
</fence>
</clusternode>
</clusternodes>
<fencedevices>
<fencedevice agent="fence_ipmilan" auth="md5" ipaddr="192.168.1.11" login="clusterp" name="fenceNode1" passwd="-----" power_wait="5"/>
<fencedevice agent="fence_ipmilan" auth="md5" ipaddr="192.168.1.12" login="clusterp" name="fenceNode2" passwd="-----" power_wait="5"/>
<fencedevice agent="fence_manual" name="human"/>
</fencedevices>
<rm>
<pvevm autostart="1" vmid="101"/>
</rm>
</cluster>
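
The validation errors point at the nodename attribute on the manual fence <device> entries; the updated cman apparently validates cluster.conf against a stricter Relax-NG schema. One way to test a candidate config before touching the live one is to validate a copy directly against that schema; a sketch (assuming the schema sits at /usr/share/cluster/cluster.rng, its usual location in the redhat-cluster packages):
Code:

# work on a copy, e.g. drop the nodename attribute from the <device name="human"/> entries
cp /etc/pve/cluster.conf /root/cluster.conf.new
vi /root/cluster.conf.new
# validate the copy against the cluster schema before bumping config_version and activating it
xmllint --noout --relaxng /usr/share/cluster/cluster.rng /root/cluster.conf.new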

MineOS

There is a TurnKey ISO for MineOS!

I don't know why, but I can't find it in my appliance list :)

Increasing disk image size TO a specific amount

I'm still assimilating a disparate and slightly messy system from my predecessor, as well as keeping abreast of new acquisitions... but one task I've been asked to do is standardize the sizes of our clients' disk images.

That being the case, I'd dearly love to be able to increase disk images not BY a certain amount, but TO a certain size. Rather than working out the increment in GB needed to reach, for instance, 64GB, or, more annoyingly, FRACTIONS THEREOF (to get from 59728MB to 64GB, for instance), I'd like a function that simply resizes the disk image directly TO the specified size. Is this something that's already in there? Or will I need to write a script for it myself?

If it's NOT available, I guess this also constitutes a feature request.
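
If qm resize on this version only takes relative '+' increments, a small wrapper can compute the increment from the current size itself. A rough sketch, assuming the disk line printed by 'qm config' carries a size=...G or size=...M field (the helper name and argument handling are made up for illustration):
Code:

#!/bin/bash
# resize-to.sh <vmid> <disk> <target-size-in-MiB>  (hypothetical helper)
VMID="$1"; DISK="$2"; TARGET_MB="$3"

# read the current size from the VM config, e.g. "virtio0: local:101/vm-101-disk-1.qcow2,size=59728M"
CUR=$(qm config "$VMID" | sed -n "s/^${DISK}:.*size=\([0-9]\+[MG]\).*/\1/p")
case "$CUR" in
  *G) CUR_MB=$(( ${CUR%G} * 1024 )) ;;
  *M) CUR_MB=${CUR%M} ;;
  *)  echo "could not determine the current size of $DISK on VM $VMID" >&2; exit 1 ;;
esac

DELTA_MB=$(( TARGET_MB - CUR_MB )) 
if [ "$DELTA_MB" -le 0 ]; then
    echo "disk is already at or above the target size"
    exit 0
fi
qm resize "$VMID" "$DISK" "+${DELTA_MB}M"

For example, './resize-to.sh 101 virtio0 65536' would grow virtio0 on VM 101 to 64GB regardless of its current size.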

"system" full?

Code:

root@myhost:~# pvdisplay
  --- Physical volume ---
  PV Name              /dev/sda5
  VG Name              system
  PV Size              71.59 GiB / not usable 0
  Allocatable          yes (but full)
  PE Size              4.00 MiB
  Total PE              18328
  Free PE              0
  Allocated PE          18328
  PV UUID              qOaCR9-FAXK-pjKv-pr70-6Vas-btOr-6R5jqS


  --- Physical volume ---
  PV Name              /dev/sda3
  VG Name              system
  PV Size              840.33 GiB / not usable 2.00 MiB
  Allocatable          yes
  PE Size              4.00 MiB
  Total PE              215124
  Free PE              23124
  Allocated PE          192000
  PV UUID              y8Ekfn-2c6i-2mXK-DoV1-Cee3-s7ir-yDEp6U


root@myhost:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/system/root
  LV Name                root
  VG Name                system
  LV UUID                PSW62Z-WMcK-nEMd-zxL4-1jNV-mRyS-htw9HP
  LV Write Access        read/write
  LV Creation host, time mail, 2014-09-30 05:58:46 +0200
  LV Status              available
  # open                1
  LV Size                71.59 GiB
  Current LE            18328
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:0


  --- Logical volume ---
  LV Path                /dev/system/pve-vz
  LV Name                pve-vz
  VG Name                system
  LV UUID                ItxDOY-e0CH-pwO1-BzHv-kVfc-mq1C-SwFzOv
  LV Write Access        read/write
  LV Creation host, time srvlive, 2014-09-30 13:24:00 +0200
  LV Status              available
  # open                1
  LV Size                500.00 GiB
  Current LE            128000
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:1


  --- Logical volume ---
  LV Path                /dev/system/pve-backup
  LV Name                pve-backup
  VG Name                system
  LV UUID                rXo3vN-YMQa-zoLG-bPcz-BASp-DNb2-2CPDe2
  LV Write Access        read/write
  LV Creation host, time srvlive, 2014-09-30 13:25:04 +0200
  LV Status              available
  # open                1
  LV Size                250.00 GiB
  Current LE            64000
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:2

root@myhost:~# vgdisplay
  --- Volume group ---
  VG Name              system
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  5
  VG Access            read/write
  VG Status            resizable
  MAX LV                0
  Cur LV                3
  Open LV              3
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size              911.92 GiB
  PE Size              4.00 MiB
  Total PE              233452
  Alloc PE / Size      210328 / 821.59 GiB
  Free  PE / Size      23124 / 90.33 GiB
  VG UUID              LD2b34-KMrf-uSWb-AWwr-hLZS-LDFc-vJ7TUa

Is sda5 in the 'system' VG really full? I've just done a brand-new installation.
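
From the output above, the PV /dev/sda5 itself has no free extents (the 71.59 GiB root LV occupies all of it), while the volume group 'system' as a whole still has 23124 free PEs (about 90 GiB) on /dev/sda3. A quick way to see that at a glance with the standard LVM reporting commands:
Code:

# free space per physical volume and for the whole volume group
pvs -o pv_name,vg_name,pv_size,pv_free
vgs -o vg_name,vg_size,vg_free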

CEPH - osd.xxx [WRN] map yyyy wrongly marked me down

Hello forum,

we have been running a subscribed 5-node Proxmox and Ceph cluster for a while; in the last few hours our mon log has included entries like the following:

Code:

2014-09-30 16:24:20.599818 mon.0 10.10.0.75:6789/0 2198 : [INF] pgmap v5025678: 640 pgs: 640 active+clean; 1401 GB data, 4209 GB used, 11596 GB / 15806 GB avail; 2154 B/s wr, 0 op/s
2014-09-30 16:24:35.083070 osd.9 10.10.0.79:6803/8950 459 : [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.131652 secs
2014-09-30 16:24:35.083077 osd.9 10.10.0.79:6803/8950 460 : [WRN] slow request 30.131652 seconds old, received at 2014-09-30 16:24:04.951384: osd_op(client.3337472.0:47152 rbd_data.4fbcc2ae8944a.0000000000000324 [set-alloc-hint object_size 4194304 write_size 4194304,write 3255808~16384] 2.6f4005e7 ack+ondisk+write e8761) v4 currently waiting for subops from 5,14
2014-09-30 16:24:39.674463 mon.0 10.10.0.75:6789/0 2343 : [INF] osd.12 10.10.0.77:6800/4418 failed (3 reports from 2 peers after 34.001207 >= grace 20.878662)
2014-09-30 16:24:39.674922 mon.0 10.10.0.75:6789/0 2345 : [INF] osd.13 10.10.0.77:6803/4625 failed (3 reports from 2 peers after 34.001424 >= grace 20.878662)
2014-09-30 16:24:39.675364 mon.0 10.10.0.75:6789/0 2347 : [INF] osd.14 10.10.0.77:6806/4869 failed (3 reports from 2 peers after 34.001730 >= grace 20.878662)
2014-09-30 16:24:39.675750 mon.0 10.10.0.75:6789/0 2349 : [INF] osd.15 10.10.0.77:6809/5207 failed (3 reports from 2 peers after 34.001986 >= grace 20.878662)
2014-09-30 16:24:40.426452 mon.0 10.10.0.75:6789/0 2402 : [INF] osdmap e8762: 15 osds: 11 up, 15 in
2014-09-30 16:24:40.432713 mon.0 10.10.0.75:6789/0 2403 : [INF] pgmap v5025679: 640 pgs: 640 active+clean; 1401 GB data, 4209 GB used, 11596 GB / 15806 GB avail; 3962 B/s wr, 0 op/s
2014-09-30 16:24:40.451743 mon.0 10.10.0.75:6789/0 2404 : [INF] pgmap v5025680: 640 pgs: 151 stale+active+clean, 489 active+clean; 1401 GB data, 4209 GB used, 11596 GB / 15806 GB avail; 5781 B/s wr, 0 op/s
2014-09-30 16:24:40.805781 osd.1 10.10.0.75:6800/4528 3 : [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.698548 secs
2014-09-30 16:24:40.805798 osd.1 10.10.0.75:6800/4528 4 : [WRN] slow request 30.698548 seconds old, received at 2014-09-30 16:24:10.107177: osd_op(client.3336927.0:4636 rbd_data.1975eb2ae8944a.0000000000000a14 [set-alloc-hint object_size 4194304 write_size 4194304,write 2609152~8192] 2.d61f0b59 ack+ondisk+write e8761) v4 currently reached pg
2014-09-30 16:24:41.440616 mon.0 10.10.0.75:6789/0 2409 : [INF] osd.12 10.10.0.77:6800/4418 boot
2014-09-30 16:24:41.440780 mon.0 10.10.0.75:6789/0 2410 : [INF] osd.14 10.10.0.77:6806/4869 boot
2014-09-30 16:24:41.440953 mon.0 10.10.0.75:6789/0 2411 : [INF] osdmap e8763: 15 osds: 13 up, 15 in
2014-09-30 16:24:39.085681 osd.0 10.10.0.75:6803/4729 5 : [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.105718 secs
2014-09-30 16:24:39.085693 osd.0 10.10.0.75:6803/4729 6 : [WRN] slow request 30.105718 seconds old, received at 2014-09-30 16:24:08.979886: osd_op(client.3147237.0:1968458 rbd_data.1985d22ae8944a.00000000000002f5 [set-alloc-hint object_size 4194304 write_size 4194304,write 2178560~12288] 2.220c9e08 ack+ondisk+write e8761) v4 currently waiting for subops from 7,8
2014-09-30 16:24:40.777186 osd.12 10.10.0.77:6800/4418 33 : [WRN] map e8762 wrongly marked me down
2014-09-30 16:24:41.446618 mon.0 10.10.0.75:6789/0 2412 : [INF] pgmap v5025681: 640 pgs: 151 stale+active+clean, 489 active+clean; 1401 GB data, 4209 GB used, 11596 GB / 15806 GB avail
2014-09-30 16:24:42.448706 mon.0 10.10.0.75:6789/0 2413 : [INF] osd.15 10.10.0.77:6809/5207 boot
2014-09-30 16:24:42.448999 mon.0 10.10.0.75:6789/0 2414 : [INF] osd.13 10.10.0.77:6803/4625 boot
2014-09-30 16:24:42.449243 mon.0 10.10.0.75:6789/0 2415 : [INF] osdmap e8764: 15 osds: 15 up, 15 in
2014-09-30 16:24:42.453775 mon.0 10.10.0.75:6789/0 2416 : [INF] pgmap v5025682: 640 pgs: 151 stale+active+clean, 489 active+clean; 1401 GB data, 4209 GB used, 11596 GB / 15806 GB avail
2014-09-30 16:24:43.458588 mon.0 10.10.0.75:6789/0 2417 : [INF] osdmap e8765: 15 osds: 15 up, 15 in
2014-09-30 16:24:43.463012 mon.0 10.10.0.75:6789/0 2418 : [INF] pgmap v5025683: 640 pgs: 151 stale+active+clean, 489 active+clean; 1401 GB data, 4209 GB used, 11596 GB / 15806 GB avail
2014-09-30 16:24:40.573054 osd.15 10.10.0.77:6809/5207 44 : [WRN] map e8762 wrongly marked me down
2014-09-30 16:24:41.377720 osd.13 10.10.0.77:6803/4625 38 : [WRN] map e8762 wrongly marked me down
2014-09-30 16:24:40.993859 osd.14 10.10.0.77:6806/4869 78 : [WRN] map e8762 wrongly marked me down
2014-09-30 16:24:47.483629 mon.0 10.10.0.75:6789/0 2429 : [INF] pgmap v5025684: 640 pgs: 75 stale+active+clean, 565 active+clean; 1401 GB data, 4209 GB used, 11596 GB / 15806 GB avail; 15716 B/s wr, 2 op/s

Meanwhile, everything seems OK in the Proxmox and Ceph clusters:

Code:

root@c00:~# ceph status
    cluster 59127979-f5d9-4eae-ace2-e29dda08525f
    health HEALTH_OK
    monmap e3: 3 mons at {0=10.10.0.77:6789/0,1=10.10.0.75:6789/0,2=10.10.0.76:6789/0}, election epoch 120, quorum 0,1,2 1,2,0
    osdmap e8761: 15 osds: 15 up, 15 in
      pgmap v5025655: 640 pgs, 3 pools, 1401 GB data, 350 kobjects
            4209 GB used, 11596 GB / 15806 GB avail
                639 active+clean
                  1 active+clean+scrubbing
  client io 3050 B/s rd, 82351 B/s wr, 28 op/s
root@c00:~# pvecm nodes
Node  Sts  Inc  Joined              Name
  1  M    888  2014-09-30 15:53:59  c00
  2  M    892  2014-09-30 15:54:02  c01
  3  M    892  2014-09-30 15:54:02  c02
  4  M    892  2014-09-30 15:54:02  c03
  5  M    908  2014-09-30 16:23:02  c04
root@c00:~# pvecm status
Version: 6.2.0
Config Version: 9
Cluster Name: cctest
Cluster Id: 6430
Cluster Member: Yes
Cluster Generation: 908
Membership state: Cluster-Member
Nodes: 5
Expected votes: 5
Total votes: 5
Node votes: 1
Quorum: 3 
Active subsystems: 5
Flags:
Ports Bound: 0 
Node name: c00
Node ID: 1
Multicast addresses: 239.192.25.55
Node addresses: 10.10.0.75

and

Code:

root@c00:~# pveversion -v
proxmox-ve-2.6.32: 3.2-132 (running kernel: 2.6.32-31-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-31-pve: 2.6.32-132
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-15
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1
root@c00:~#

We are experiencing a laggy GUI too.

Any idea what is causing those [WRN] entries and how to make them disappear?
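
OSDs being "wrongly marked me down" and then booting again a second later usually points at missed heartbeats between OSDs (for example a saturated or briefly flapping cluster network) rather than at real OSD failures. A few standard ceph commands that can help narrow it down (nothing here is specific to this cluster):
Code:

# per-OSD health and the reasons behind recent failure reports
ceph health detail
# commit/apply latency per OSD - a single slow disk stands out here
ceph osd perf
# which host the affected OSDs (12-15, all reported at 10.10.0.77) belong to
ceph osd tree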

Thanks for your attention

Pasquale

Proxmox HA - my concept, is it possible

Hello guys,
There is so much info on Google, but I can't find a suitable solution.
Tell me, is this possible with Proxmox 3.3?

I have two identical hardware servers (Dell PowerEdge R620).

Now I want to use High Availability like this:
- there is no shared storage, so there is only the local disk space on node1 and node2.
Normally the VMs run on node1 and sync to node2 all the time, over a 1 Gbit (or maybe faster) interface.
When node2 sees that node1 is no longer answering, it immediately starts the VMs that were running on node1 (with disk and RAM already synced?).

Is this possible?

HDD Speed on KVM and Host (Node)

Hi all,

on our host (node) we measure an HDD speed of more than 400 MB/s, but inside a KVM guest only 40 MB/s at most. With or without cache on the KVM disk it's the same issue. What can we do to get the same speed (or at least more speed) inside the KVM guest? 40 MB/s is very low.
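
It also helps to state how the 400 MB/s and 40 MB/s were measured; one simple way to compare host and guest with the same test (the file path is just an example, and conv=fdatasync makes dd wait until the data has actually reached the disk):
Code:

# run the identical write test on the host and inside the guest
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
rm /tmp/ddtest
# on the Proxmox host, pveperf additionally reports buffered reads and fsyncs/second
pveperf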

regards

vzmigrate Error: FPU state size unsupported

It seems live migration to older Xeons is not working with OpenVZ... does anyone know a workaround?


http://forum.openvz.org/index.php?t=...df28#msg_51667


root@euler:~# vzctl --version
vzctl version 4.0-4.git.162dded
root@euler:~#
root@euler:~# uname -a
Linux euler 2.6.32-29-pve #1 SMP Thu Apr 24 10:03:02 CEST 2014 x86_64 GNU/Linux
root@euler:~#


Reason for version bumps

Is there any reason for the pve-manager deb version bump from 3.3-1 to 3.3-2 without any file / content change?

It would be nice to include dpkg-diffs in /usr/bin in the standard Proxmox VE packaging.
http://code.google.com/p/dpkg-diffs/

This requires the original deb to be present in the /var/cache/apt/archives folder and the newer one to be available (or downloaded) somewhere in the file system, so that dpkg-diffs can compare their contents and output a unified diff.
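
For comparison, Debian's devscripts package already ships debdiff, which does essentially the same thing for two .deb files; a sketch (the exact filenames in the apt cache will differ):
Code:

apt-get install devscripts
# compare the cached old package with the newly downloaded one
debdiff /var/cache/apt/archives/pve-manager_3.3-1_amd64.deb /var/cache/apt/archives/pve-manager_3.3-2_amd64.deb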


Ceph: Device usage LVM

Hello everyone,

I'm trying to set up Ceph storage on my 4-node cluster.
The problem is that I cannot use the disks.
In the GUI they are marked as in use by LVM.
On the CLI it just outputs this:
Code:

root@pm02:~# pveceph createosd /dev/sdk -journal_dev /dev/sdj
device '/dev/sdk' is in use
root@pm02:~#

I've tried both MBR and GPT partition tables.
I also tried zapping the disk, and the dd solution described in the wiki.

At first I thought this was because of multipath (I also have MPIO iSCSI), so I also tried plugging the disks in after a reboot with the multipath tools turned off.
The disks are not mounted in any way:

Code:

root@pm02:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=6179463,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=4945336k,mode=755)
/dev/mapper/pve-root on / type ext3 (rw,relatime,errors=remount-ro,data=ordered)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=9890660k)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw,relatime,data=ordered)
/dev/sda2 on /boot type ext3 (rw,relatime,data=ordered)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
none on /sys/kernel/config type configfs (rw,relatime)
10.100.1.252:/mnt/vol00/nfspm on /mnt/pve/nfspm type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.100.1.252,mountvers=3,mountport=695,mountproto=udp,local_lock=none,addr=10.100.1.252)

Code:

root@pm02:~# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg_sata2-01" using metadata type lvm2
  Found volume group "vg_sata2-03" using metadata type lvm2
  Found volume group "vg_sata2-02" using metadata type lvm2
  Found volume group "vg_sata2-00" using metadata type lvm2
  Found volume group "pve" using metadata type lvm2

Code:

root@pm02:~# lvscan
  inactive          '/dev/vg_sata2-01/vm-101-disk-1' [927.00 GiB] inherit
  inactive          '/dev/vg_sata2-02/vm-102-disk-1' [927.00 GiB] inherit
  inactive          '/dev/vg_sata2-00/vm-100-disk-1' [10.00 GiB] inherit
  ACTIVE            '/dev/pve/swap' [13.88 GiB] inherit
  ACTIVE            '/dev/pve/root' [27.75 GiB] inherit
  ACTIVE            '/dev/pve/data' [55.79 GiB] inherit
root@pm02:~#

Any ideas?
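
"device ... is in use" from pveceph usually means something in the device-mapper / LVM stack is still holding the disk, even when nothing is mounted. A few checks that can show who owns /dev/sdk (standard tools, nothing specific to this setup):
Code:

# is the disk (or a partition on it) claimed by a multipath map or another dm device?
multipath -ll
dmsetup ls --tree
# does LVM see a PV signature on it? (the vg_sata2-* groups above must live somewhere)
pvs -o pv_name,vg_name
# is any process holding the block device open?
fuser -v /dev/sdk /dev/sdk?* 2>/dev/null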


Attachment: pm.PNG

Proxmox VE on Wheezy script

Has anyone tried to install Proxmox VE on an existing Wheezy system using:
https://github.com/lichti/install-proxmox
Code:

#!/bin/bash
cat << EOL > /etc/apt/sources.list
# Debian Wheezy
deb http://ftp.at.debian.org/debian wheezy main contrib
# PVE packages provided by proxmox.com
deb http://download.proxmox.com/debian wheezy pve
# Debian Wheezy security updates
deb http://security.debian.org/ wheezy/updates main contrib
EOL
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -
apt-get update && apt-get dist-upgrade -y
apt-get install pve-firmware pve-kernel-2.6.32-23-pve pve-headers-2.6.32-23-pve build-essential git vim acpi -y
apt-get remove linux-image-amd64 linux-image-3.2.0-4-amd64 -y
update-grub
apt-get install proxmox-ve-2.6.32 ntp ssh lvm2 postfix ksm-control-daemon vzprocps open-iscsi bootlogd -y
apt-get purge network-manager -y
