Channel: Proxmox Support Forum

Unable to activate storage

I'm running Proxmox 2.2-32/3089a616 with three VMs on an 8-core machine. I back up to a NAS each night, and I haven't made any changes to the system in a few months.

This morning I found an error in the backup report: two of the VMs backed up normally, but the third did not.
************
102: May 01 23:47:34 INFO: received signal - terminate process
102: May 01 23:47:36 ERROR: Backup of VM 102 failed - command '/usr/lib/qemu-server/vmtar '/mnt/pve/Backups-IOMEGA/dump/vzdump-qemu-102-2013_05_01-23_42_49.tmp/qemu-server.conf' 'qemu-server.conf' '/mnt/vzsnap0/images/102/vm-102-disk-1.qcow2' 'vm-disk-ide0.qcow2'|lzop >/mnt/pve/Backups-IOMEGA/dump/vzdump-qemu-102-2013_05_01-23_42_49.tar.dat' failed: exit code 1
***********
I tried to access the backup NAS from inside the Proxmox GUI. I can access the summary page and the permissions page. The summary shows the storage is not active, with size 0, used 0, avail 0. I can't access the content page at all; it comes up with "unable to activate storage 'Backups-IOMEGA' - directory '/mnt/pve/Backups-IOMEGA' does not exist (500)". This is the same target the first two backups went to.

If I click on the top-level server view and select the Backups-IOMEGA device, it shows as enabled.
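
(For reference: this error usually means the network mount under /mnt/pve went stale after the NAS dropped the connection. A minimal shell check, assuming Backups-IOMEGA is an NFS/CIFS mount; not a definitive fix:)

Code:

# Does the directory exist, and is anything mounted on it?
ls -ld /mnt/pve/Backups-IOMEGA
mount | grep Backups-IOMEGA

# If the mount is stale, a lazy force-unmount should let Proxmox
# re-activate (remount) the storage on the next access:
umount -f -l /mnt/pve/Backups-IOMEGA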

Thank you in advance for any help offered.

Restarting PVE Status Daemon: pvestatdERROR: can't aquire lock '/var/run/pvestatd.pid

My GUI was corrupted again because a mounted mapping got disconnected.

Normally, service pvestatd restart fixes this, but now I see this error:

Restarting PVE Status Daemon: pvestatdERROR: can't aquire lock '/var/run/pvestatd.pid

Is there a way to solve this without rebooting the server? :confused:
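
(A hedged sketch of what usually clears this, assuming pvestatd died and left a stale pid file behind; confirm the daemon really is dead before removing the lock:)

Code:

# Is any pvestatd process still running?
pgrep -l pvestatd

# If nothing is listed, remove the stale pid/lock file and start fresh:
rm -f /var/run/pvestatd.pid
service pvestatd start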

OpenVZ CT and disk usage

Hello to all,

1) Thanks to Proxmox, it's a great piece of software.
2) I have a problem with disk usage in my OpenVZ containers:

Via the web GUI, I create a CT with, let's say, 4 GB of disk.
Once the CT has booted, if I run "df", here is the output:

root@ct1:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/simfs 4.0G 0 4.0G 0% /
tmpfs 256M 0 256M 0% /lib/init/rw
tmpfs 256M 0 256M 0% /dev/shm

Hmm... not good! 4.0G is right for the size, but 0 for "used" can't be correct.

I need to monitor disk usage via Zabbix, so I need a solution.
Do you have any idea?
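
(One guess, not a confirmed diagnosis: on simfs, df inside the container reports the numbers kept by OpenVZ's per-CT quota accounting, so a stale or broken quota file can show 0 used. Checks on the host, with CTID 101 as a placeholder:)

Code:

# Inspect the quota accounting the container's df is based on:
vzquota stat 101

# If the figures look wrong, drop the quota file so it is
# recalculated on the next container start:
vzctl stop 101
vzquota drop 101
vzctl start 101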

Thanks.

PS: Proxmox 2.3, latest ISO + latest upgrades.
No special configuration.

PVE host pauses when network connection has a problem.

I wouldn't have thought this was possible. We're in the process of determining the exact problem with the external network device, but I thought I'd go ahead and see what the experts have to say about this.

I have a PVE 2.0.3 host with an Intel (82571EB) PCIe dual-NIC card. One of the ports is attached to an Adtran network device (I don't know which model), which has a T1 coming into it. The Adtran device appears to have an intermittent problem, stalling for 3-5 minutes at a time, once per day or so at random times. When the Adtran device stalls (phone lines are unresponsive as well), the PVE host pauses as if someone pressed a pause button on a keyboard (there is no KVM on this host). When the Adtran begins working again, the PVE host and all VMs resume without missing a step. After the pause event, the performance summary charts in PVE contain a gap (no graph lines at all) that corresponds to the outage period.

What I don't understand is how a network device connection can cause the PVE host to freeze like this. I would expect (and desire) a network failure event to occur instead of the entire server seizing up.
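
(Not an answer, but a way to gather evidence: the 82571EB uses the e1000e driver, and a driver-level reset or link flap during the Adtran stall should show up in the kernel log. eth1 below is a placeholder for the affected port:)

Code:

# Look for link flaps or adapter resets around the time of a stall:
dmesg | grep -i -E 'e1000e|eth1'

# Per-NIC error and drop counters:
ethtool -S eth1 | grep -i -E 'err|drop'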

Can someone explain this?

My apologies if this is construed to be off topic in some way. I realize that the problem may not be specifically PVE related.
Thanks.

Adding disk space with a hardware RAID controller

Hello, I'm a newbie and I have a problem.

I installed Proxmox VE 2.2 on my home server. My hardware is:

  • CPU: Intel Xeon E5335 2.0GHz
  • RAM: 4 GB (4 x 1GB)
  • Mainboard: Intel S5000PSL
  • RAID controller: Intel(R) SROMBSAS18E
  • 3 WD Red 2TB HDs in RAID 5 (bays 0, 1, and 2)

So I have 3.5 TB of space on the sda disk.


I added a new disk (WD Red 2TB) and expanded my RAID via the hardware RAID controller. Now I have:

command: dmesg | grep "[h,s]d[a,b]" | grep " logical blocks: "
output: sd 2:2:0:0: [sda] 11712884736 512-byte logical blocks: (5.99 TB/5.45 TiB)

Now I want to expand my space in Proxmox from 3.5TB to 5.99TB (5.45TiB), but I don't understand how to do it.


This is the information that I have:


command: df -a -h -T
output:
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/pve-root
ext3 95G 777M 89G 1% /
tmpfs tmpfs 2.0G 0 2.0G 0% /lib/init/rw
proc proc 0 0 0 - /proc
sysfs sysfs 0 0 0 - /sys
udev tmpfs 2.0G 252K 2.0G 1% /dev
tmpfs tmpfs 2.0G 16M 2.0G 1% /dev/shm
devpts devpts 0 0 0 - /dev/pts
/dev/mapper/pve-data
ext3 3.5T 12G 3.5T 1% /var/lib/vz
/dev/sda2 ext3 494M 35M 435M 8% /boot
fusectl fusectl 0 0 0 - /sys/fs/fuse/connections
/dev/fuse fuse 30M 16K 30M 1% /etc/pve
beancounter cgroup 0 0 0 - /proc/vz/beancounter
container cgroup 0 0 0 - /proc/vz/container
fairsched cgroup 0 0 0 - /proc/vz/fairsched


I want to expand:
/dev/mapper/pve-data ext3 3.5T 12G 3.5T 1% /var/lib/vz


Command: fdisk -l
output:
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sda: 5997.0 GB, 5996996984832 bytes
255 heads, 63 sectors/track, 729093 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sda1 1 267350 2147483647+ ee GPT


Disk /dev/dm-0: 103.1 GB, 103079215104 bytes
255 heads, 63 sectors/track, 12532 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/dm-0 doesn't contain a valid partition table


Disk /dev/dm-1: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/dm-1 doesn't contain a valid partition table


Disk /dev/dm-2: 3872.9 GB, 3872907067392 bytes
255 heads, 63 sectors/track, 470854 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/dm-2 doesn't contain a valid partition table




command: cat /etc/mtab
output:
root@proxmox:~# cat /etc/mtab
/dev/mapper/pve-root / ext3 rw,errors=remount-ro 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
udev /dev tmpfs rw,mode=0755 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=620 0 0
/dev/mapper/pve-data /var/lib/vz ext3 rw 0 0
/dev/sda2 /boot ext3 rw 0 0
fusectl /sys/fs/fuse/connections fusectl rw 0 0
/dev/fuse /etc/pve fuse rw,nosuid,nodev,default_permissions,allow_other 0 0
beancounter /proc/vz/beancounter cgroup rw,name=beancounter 0 0
container /proc/vz/container cgroup rw,name=container 0 0
fairsched /proc/vz/fairsched cgroup rw,name=fairsched 0 0




command: cat /etc/fstab
output:
root@proxmox:~# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
UUID=04ddbc67-82c5-4ab5-8d91-cee72256e39a /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0




command: mount
output:
root@proxmox:~# mount
/dev/mapper/pve-root on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw)
/dev/sda2 on /boot type ext3 (rw)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,default_permissions,allow_other)
beancounter on /proc/vz/beancounter type cgroup (rw,name=beancounter)
container on /proc/vz/container type cgroup (rw,name=container)
fairsched on /proc/vz/fairsched type cgroup (rw,name=fairsched)




command: df -k
output:
root@proxmox:~# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/pve-root 99083868 803352 93247352 1% /
tmpfs 2019024 0 2019024 0% /lib/init/rw
udev 2009216 252 2008964 1% /dev
tmpfs 2019024 15664 2003360 1% /dev/shm
/dev/mapper/pve-data 3722787896 11789728 3710998168 1% /var/lib/vz
/dev/sda2 505764 34953 444699 8% /boot
/dev/fuse 30720 16 30704 1% /etc/pve




command: pvdisplay
output:
root@proxmox:~# pvdisplay
--- Physical volume ---
PV Name /dev/sda3
VG Name pve
PV Size 3.64 TiB / not usable 3.98 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 953068
Free PE 4095
Allocated PE 948973
PV UUID 6nrOUr-fYyj-6k9i-jmMf-iOFQ-0C9Y-WHPBIs




command: vgdisplay
output:
root@proxmox:~# vgdisplay
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 3.64 TiB
PE Size 4.00 MiB
Total PE 953068
Alloc PE / Size 948973 / 3.62 TiB
Free PE / Size 4095 / 16.00 GiB
VG UUID nd3Cr2-0vQ8-Qfl6-yLPc-18bz-G3eg-1LzhAF




command: lvdisplay
output:
root@proxmox:~# lvdisplay
--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID J3x31n-zxnb-5865-XGUn-uvkM-qkT9-zODYBl
LV Write Access read/write
LV Creation host, time proxmox, 2013-02-09 22:38:03 +0100
LV Status available
# open 1
LV Size 4.00 GiB
Current LE 1024
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1


--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID O6Gx69-loEu-msip-1Xql-wwpZ-7zha-kGLq89
LV Write Access read/write
LV Creation host, time proxmox, 2013-02-09 22:38:04 +0100
LV Status available
# open 1
LV Size 96.00 GiB
Current LE 24576
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0


--- Logical volume ---
LV Path /dev/pve/data
LV Name data
VG Name pve
LV UUID yqgJ1c-VsBf-k0rb-Ke9d-Z6Ht-qSZ2-vb2oG3
LV Write Access read/write
LV Creation host, time proxmox, 2013-02-09 22:38:04 +0100
LV Status available
# open 1
LV Size 3.52 TiB
Current LE 923373
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2




If I follow the usual procedure to extend the PVE storage, I need a new disk, but my new space is inside my only disk, which is presented by the hardware RAID controller.


I try this command: lvextend /dev/mapper/pve-data /dev/sda
output: Physical Volume "/dev/sda" not found in Volume Group "pve"


I read http://pve.proxmox.com/wiki/Extendin...tainer_Storage but it works with a new disk that shows up as sdb1. That isn't my situation, and I don't understand how to adapt it.

I could reinstall Proxmox, but I want to learn how to increase disk space without reinstalling. My server has 16 bays, and when I run out of space I want to add disks without reinstalling and erasing my files.
I've searched for a solution but haven't found one...
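
(The lvextend attempt fails because the PV inside the 'pve' VG is the partition /dev/sda3, not the whole disk /dev/sda, and the vgdisplay output above shows only ~16 GiB free. The grown RAID capacity has to be brought into LVM first. A sketch, not a verified recipe, assuming the new capacity sits as unallocated GPT space at the end of /dev/sda; make a tested backup before touching partitions:)

Code:

# 1) Inspect the GPT layout; fdisk cannot handle GPT, so use parted:
parted /dev/sda print

# 2) Inside parted, create a new partition (sda4 is hypothetical)
#    spanning the free space, e.g.:
#    (parted) mkpart primary 4001GB 6000GB

# 3) Make it a PV and add it to the existing 'pve' volume group:
pvcreate /dev/sda4
vgextend pve /dev/sda4

# 4) Grow the data LV over the new extents and resize ext3 online:
lvextend -l +100%FREE /dev/pve/data
resize2fs /dev/pve/data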


Can somebody help me?
Thank you.

public IP to guest kvm instance

Is there any way to assign my single public IP to a guest KVM instance (say, pfSense) and give the host a local IP? I know this is not ideal in production, but at times it is helpful when resources (IPs and host hardware) are scarce! Thanks!
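
(This is doable with plain bridging. A hedged /etc/network/interfaces sketch, assuming eth0 is the uplink: the public IP is configured only inside the pfSense guest, whose WAN NIC sits on vmbr0, while the host takes a private address on vmbr1 and uses the pfSense guest as its gateway. All addresses are placeholders:)

Code:

auto vmbr0
iface vmbr0 inet manual         # no host IP; the public IP lives in pfSense
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet static         # host management IP on the LAN side
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.1     # the pfSense guest's LAN address
        bridge_ports none
        bridge_stp off
        bridge_fd 0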

Proxmox 2.3 on DELL R720

Hi,

I want to install Proxmox 2.3 (in order to use a High Availability cluster) on a Dell R720.
Has anyone installed it on this hardware?

I will have the following configuration and want to know whether it is fully compatible:

  • CPU: Intel Xeon E5-2630 -> I saw that it works on Proxmox 2.1
  • RAID: PERC H310 -> couldn't find anything on this one
  • SAN: QLogic 2662 -> http://pve.proxmox.com/wiki/Roadmap suggests it has been supported since v1, but I don't know in which version
  • Network: Broadcom 5720 (4-port 1GbE) -> seems to work fine on the R520
  • Network: Broadcom 57810 (2-port 10GbE) -> the roadmap mentions "include latest Broadcom bnx2/bnx2x drivers"

Do you know if this is all good, or am I missing something?

thank you for your reply and have a nice day!

Backup successful with errors??????????????

Hello,
I back up a VM on a schedule, and although the report says that the backup was successful, the email body shows thousands of errors such as the ones below:


-----
VMID STATUS TIME SIZE FILENAME
100 ok 01:37:21 70.17GB /backup/dump/vzdump-openvz-100-2013_05_03-01_00_03.tar.gz


Detailed backup logs:


vzdump 100 --quiet 1 --mailto my-email-address@gmail.com --mode snapshot --compress gzip --storage LVM-Backup --node server000


100: May 03 01:00:03 INFO: Starting Backup of VM 100 (openvz)
100: May 03 01:00:03 INFO: Warning: Unknown iptable module: xt_connlimit, skipped
100: May 03 01:00:03 INFO: Warning: Unknown iptable module: xt_owner, skipped
100: May 03 01:00:03 INFO: CTID 100 exist mounted running
100: May 03 01:00:03 INFO: status = running
100: May 03 01:00:04 INFO: backup mode: snapshot
100: May 03 01:00:04 INFO: ionice priority: 7
100: May 03 01:00:04 INFO: creating lvm snapshot of /dev/mapper/vg0-data ('/dev/vg0/vzsnap-server000-0')
100: May 03 01:00:07 INFO: Logical volume "vzsnap-server000-0" created
100: May 03 01:00:08 INFO: creating archive '/backup/dump/vzdump-openvz-100-2013_05_03-01_00_03.tar.gz'
100: May 03 02:34:03 INFO: tar: ./home/username/Maildir/.INBOX.2012/cur/1284966879.8791_0.some-hostname.com\:2,: File shrank by 96591 bytes; padding with zeros
100: May 03 02:34:03 INFO: tar: ./home/username/Maildir/.INBOX.2012/cur/1289422495.19868_0.some-hostname.com.com\:2,: Read error at byte 0, while reading 2462 bytes: Input/output error
100: May 03 02:34:03 INFO: tar: ./home/username/Maildir/.INBOX.2012/cur/1289387710.15869_0.some-hostname.com.com\:2,: Read error at byte 0, while reading 7168 bytes: Input/output error
100: May 03 02:34:03 INFO: tar: ./home/username/Maildir/.INBOX.2012/cur/1291114962.17947_0.some-hostname.com.com\:2,: Read error at byte 0, while reading 2048 bytes: Input/output error
100: May 03 02:34:03 INFO: tar: ./home/username/Maildir/.INBOX.2012/cur/1282987822.20256_0.some-hostname.com\:2,: Read error at byte 0, while reading 6144 bytes: Input/output error
.
.
.
------




What are these errors? I have not tried to restore from this backup, and I am not certain what to do :o
Any ideas? :(
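
(Two common causes of this pattern, offered as guesses: the LVM snapshot filled up mid-backup, which makes reads from the snapshot fail with I/O errors, or the underlying filesystem has real damage. Quick checks on the host:)

Code:

# vzdump's snapshot size is configurable; a snapshot that is too small
# can overflow during a 90-minute backup of a busy CT:
grep -i size /etc/vzdump.conf    # e.g. 'size: 4096' (in MB)

# Look for I/O or ext3 errors on the host around the backup window:
dmesg | grep -i -E 'i/o error|ext3|vzsnap'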

V2.1-14/f32f3f46

Backup failed after upgrade

I did an upgrade from 2.2 to 2.3 and now get the following error from the backup/dump:

Code:

Undefined subroutine &PVE::Storage::volume_is_base called at /usr/share/perl5/PVE/QemuServer.pm line 4487
Backup:

Code:

vzdump --quiet --mode snapshot --compress gzip --storage drive2_backup --maxfiles 1 --mailto scott@blah.blah --all
Version info:

Code:

# pveversion --verbose
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-20
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.0-36
vncterm: 1.0-4
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-10
ksm-control-daemon: 1.1-1

Any ideas what could be the issue?
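
(Worth noting: the version list shows libpve-storage-perl still at 2.0-36 next to qemu-server 2.3-20, and volume_is_base appears to come from the 2.3-series storage library, so this looks like a partial upgrade; that is an inference, not a confirmed diagnosis. A possible fix:)

Code:

# Bring the remaining PVE packages up to the 2.3 series:
apt-get update
apt-get dist-upgrade

# Afterwards, verify the storage library caught up:
pveversion -v | grep libpve-storage-perl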

Content of storage drives is not showing up

I have Proxmox installed on a server at work. Everything was working great for quite a while. I have automatic backups set to run every day, spaced a few hours apart in the middle of the night. The backups are successful, except that when I go to view the contents of either my local storage or my NFS storage, the backups are not listed. I don't know what could have caused this, but I need some help fixing it.
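
(A hedged first step: check from the shell whether the dump files physically exist and whether the storage layer can enumerate them. The path and storage name below are the defaults; adjust to yours:)

Code:

# Are the dumps actually on disk?
ls -lh /var/lib/vz/dump | head

# Can the Proxmox storage layer see them?
pvesm list local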

This is my pveversion -v:


pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-17-pve
pve-kernel-2.6.32-16-pve: 2.6.32-82
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-72
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1

All of your help is greatly appreciated!!

own group rules

Hi all,

how can we create our own role combining the pveuser rights + pvedatastoreuser rights + the backup/restore right? Please help. Many thanks!
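
(A sketch using pveum: define a custom role carrying the combined privileges, then grant it to your group. The role name and the exact privilege set below are assumptions; compare them against the built-in PVEVMUser and PVEDatastoreUser roles and adjust:)

Code:

# Create a custom role (privilege selection here is a guess):
pveum roleadd CustomBackupUser -privs "VM.Audit VM.Console VM.Backup Datastore.Audit Datastore.AllocateSpace"

# Grant the role to a group on the whole tree:
pveum aclmod / -group mygroup -role CustomBackupUser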

regards

Backup Scheduler Flexibility Question

I'd prefer to have my VMs backed up one by one, node by node, in a serial fashion. I'd prefer to manage one big backup window rather than a job for each node. Is that possible?
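
(For what it's worth: a single vzdump job already backs up its VMs one after another; the node-by-node serialization is the hard part. A hedged workaround is one cron-driven vzdump per node with staggered start times, so the nodes effectively run in sequence inside one window:)

Code:

# /etc/cron.d/pve-backup (hypothetical), on node1, starting at 01:00:
0 1 * * * root vzdump --all --mode snapshot --compress lzo --storage backup-nas --quiet 1
# On node2, install the same line with a later start time (e.g. 03:00),
# leaving enough headroom for node1 to finish.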

Love Proxmox, forever grateful.

Cheers!

-Xminer

What is the best practice for busy web application?

I have a web application that is about to see some traffic. Here are a few approaches I've thought of, but I don't know which one is best:

  • Multiple VMs each with own files, with a Varnish front-end load balancing them
  • Multiple VMs with a shared NFS on separate VM, with a Varnish front-end load balancing them
  • Multiple VMs with a shared NFS on host node, with a Varnish front-end load balancing them
  • One powerful VM, with Varnish acting as reverse proxy


Can you please advise which is the correct best practice approach?
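
(Whichever layout you pick, the Varnish side looks much the same. A minimal Varnish 3 VCL sketch for balancing two web VMs, with placeholder IPs:)

Code:

backend web1 { .host = "192.168.0.11"; .port = "80"; }
backend web2 { .host = "192.168.0.12"; .port = "80"; }

director webpool round-robin {
    { .backend = web1; }
    { .backend = web2; }
}

sub vcl_recv {
    set req.backend = webpool;
}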

Changing permissions on bind mount

I have an SMB share bind-mounted into an OpenVZ container (CT 104):


Code:

#!/bin/bash
# Bind the host's SMB mount into CT 104's filesystem tree
mount -o gid=107,uid=107 --bind /mnt/media /var/lib/vz/root/104/mnt/media

I am trying to set the owner of the bind mount to a user on the CLIENT. I'm not sure how to do that, since I get "permission denied" when I attempt to change it on the client itself. The user's ID is 107 on the client, which made me try mounting with that uid, though I didn't really think that would work. I'm at a loss as to how to set the owner to a user on the client.
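
(A hedged observation: uid= and gid= are CIFS mount options, and a plain --bind generally ignores them, since a bind mount only re-exposes an existing mount. Applying the ownership where the SMB share is first mounted on the host, then bind-mounting the result, may achieve what you want. The share path is a placeholder, and OpenVZ containers share the host's uid numbering, so uid 107 on the host is uid 107 in the CT:)

Code:

# Mount the SMB share on the host with the container user's ids
# (add your credentials options as needed):
mount -t cifs -o uid=107,gid=107 //nas/media /mnt/media

# Then bind the already-owned tree into the container:
mount --bind /mnt/media /var/lib/vz/root/104/mnt/media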

HDD2 Problem

Hello everyone,
I followed these instructions:

  • Create a big single partition (sdb1):

fdisk /dev/sdb

  • Create the physical volume:

pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created

  • Create the volume group 'vmdisks' (just choose a unique name):

vgcreate vmdisks /dev/sdb1
Volume group "vmdisks" successfully created

  • And finally: add the LVM group to the storage list via the web interface.

What have I done wrong here?
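
(The steps themselves look correct as far as they go. Two hedged checks before digging further: confirm LVM actually sees the new VG, and confirm the storage entry reached the Proxmox storage layer:)

Code:

# Both of these should list the new volume group:
vgs vmdisks
pvesm status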

Thanks

How to properly update the system when in THIS state????????????

I am thinking of updating my system, so I ran pveversion -v to check the versions; however, I just realized that two entries are incorrectly installed.


What is the best strategy here in order to fully update the system?

A) Try to reinstall the missing parts and then try to fully update the system OR

B) Try to fully update the system?

++++++++++++++++++++++++++++++++++
pveversion -v

pve-manager: 2.1-14 (pve-manager/2.1/f32f3f46)
running kernel: 2.6.32-14-pve
proxmox-ve-2.6.32: 2.1-74
pve-kernel-2.6.32-14-pve: 2.6.32-74
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.92-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.8-1
pve-cluster: 1.0-27
qemu-server: 2.0-49
pve-firmware: 1.0-18
libpve-common-perl: 1.0-30
libpve-access-control: 1.0-24
libpve-storage-perl: 2.0-31
vncterm: 1.0-3
vzctl: 3.0.30-2pve5
vzprocps: not correctly installed
vzquota: 3.0.12-3
pve-qemu-kvm: 1.1-8
ksm-control-daemon: not correctly installed
++++++++++++++++++++++++++++++++++


Also, the system reports the following as requiring an update:

An update to apache2 from 2.2.16-6+squeeze8 to 2.2.16-6+squeeze11 is available.
An update to apache2-mpm-prefork from 2.2.16-6+squeeze8 to 2.2.16-6+squeeze11 is available.
An update to apache2-utils from 2.2.16-6+squeeze8 to 2.2.16-6+squeeze11 is available.
An update to apache2.2-bin from 2.2.16-6+squeeze8 to 2.2.16-6+squeeze11 is available.
An update to apache2.2-common from 2.2.16-6+squeeze8 to 2.2.16-6+squeeze11 is available.
An update to apt-show-versions from 0.16 to 0.16+squeeze1 is available.
An update to base-files from 6.0squeeze6 to 6.0squeeze7 is available.
An update to ceph-common from 0.48argonaut-1~bpo60+1 to 0.56.3-1~bpo60+1 is available.
An update to corosync-pve from 1.4.3-1 to 1.4.4-4 is available.
An update to fence-agents-pve from 3.1.8-1 to 3.1.9-1 is available.
An update to gnupg from 1.4.10-4 to 1.4.10-4+squeeze1 is available.
An update to gpgv from 1.4.10-4 to 1.4.10-4+squeeze1 is available.
An update to gzip from 1.3.12-9 to 1.3.12-9+squeeze1 is available.
An update to libapache2-mod-perl2 from 2.0.4-7 to 2.0.4-7+squeeze1 is available.
An update to libcorosync4-pve from 1.4.3-1 to 1.4.4-4 is available.
An update to libcurl3 from 7.21.0-2.1+squeeze2 to 7.21.0-2.1+squeeze3 is available.
An update to libcurl3-gnutls from 7.21.0-2.1+squeeze2 to 7.21.0-2.1+squeeze3 is available.
An update to libcurl4-openssl-dev from 7.21.0-2.1+squeeze2 to 7.21.0-2.1+squeeze3 is available.
An update to libiscsi1 from 1.5.0-1 to 1.8.0-1 is available.
An update to libldap-2.4-2 from 2.4.23-7.2 to 2.4.23-7.3 is available.
An update to libldap2-dev from 2.4.23-7.2 to 2.4.23-7.3 is available.
An update to libnss3-1d from 3.12.8-1+squeeze5 to 3.12.8-1+squeeze6 is available.
An update to libperl5.10 from 5.10.1-17squeeze3 to 5.10.1-17squeeze6 is available.
An update to libpve-access-control from 1.0-24 to 1.0-26 is available.
An update to libpve-common-perl from 1.0-30 to 1.0-49 is available.
An update to libpve-storage-perl from 2.0-31 to 2.3-7 is available.
An update to librados2 from 0.48argonaut-1~bpo60+1 to 0.56.3-1~bpo60+1 is available.
An update to librbd1 from 0.48argonaut-1~bpo60+1 to 0.56.3-1~bpo60+1 is available.
An update to libssl-dev from 0.9.8o-4squeeze13 to 0.9.8o-4squeeze14 is available.
An update to libssl0.9.8 from 0.9.8o-4squeeze13 to 0.9.8o-4squeeze14 is available.
An update to libxml2 from 2.7.8.dfsg-2+squeeze5 to 2.7.8.dfsg-2+squeeze7 is available.
An update to libxml2-utils from 2.7.8.dfsg-2+squeeze5 to 2.7.8.dfsg-2+squeeze7 is available.
An update to libxslt1.1 from 1.1.26-6+squeeze2 to 1.1.26-6+squeeze3 is available.
An update to linux-base from 2.6.32-46 to 2.6.32-48squeeze1 is available.
An update to linux-libc-dev from 2.6.32-46 to 2.6.32-48squeeze1 is available.
An update to openssh-client from 5.5p1-6+squeeze2 to 5.5p1-6+squeeze3 is available.
An update to openssh-server from 5.5p1-6+squeeze2 to 5.5p1-6+squeeze3 is available.
An update to openssl from 0.9.8o-4squeeze13 to 0.9.8o-4squeeze14 is available.
An update to perl from 5.10.1-17squeeze3 to 5.10.1-17squeeze6 is available.
An update to perl-base from 5.10.1-17squeeze3 to 5.10.1-17squeeze6 is available.
An update to perl-modules from 5.10.1-17squeeze3 to 5.10.1-17squeeze6 is available.
An update to perl-suid from 5.10.1-17squeeze3 to 5.10.1-17squeeze6 is available.
An update to proxmox-ve-2.6.32 from 2.1-74 to 2.3-95 is available.
An update to pve-cluster from 1.0-27 to 1.0-36 is available.
An update to pve-firmware from 1.0-18 to 1.0-21 is available.
An update to pve-manager from 2.1-14 to 2.3-13 is available.
An update to pve-qemu-kvm from 1.1-8 to 1.4-10 is available.
An update to qemu-server from 2.0-49 to 2.3-20 is available.
An update to redhat-cluster-pve from 3.1.92-3 to 3.1.93-2 is available.
An update to tzdata from 2012c-0squeeze1 to 2012g-0squeeze1 is available.
An update to vncterm from 1.0-3 to 1.0-4 is available.
An update to vzctl from 3.0.30-2pve5 to 4.0-1pve2 is available.
An update to vzquota from 3.0.12-3 to 3.1-1 is available.
An update to xsltproc from 1.1.26-6+squeeze2 to 1.1.26-6+squeeze3 is available.




I am basically too scared to run a full system update because this is a production server; although backups are available, it would be a disaster to face hours of downtime trying to rebuild a non-working server after the reboot.


What do you recommend?
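
(A hedged sequence for this situation, assuming the two "not correctly installed" entries come from an interrupted dpkg run: repair those first, then upgrade, and only within a maintenance window with tested backups at hand:)

Code:

# Finish any interrupted package configuration and repair breakage:
dpkg --configure -a
apt-get update
apt-get -f install

# Reinstall the two flagged packages explicitly:
apt-get install --reinstall vzprocps ksm-control-daemon

# Then the full upgrade (expect a reboot into the new kernel):
apt-get dist-upgrade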


Thank you

LVM setup - backup/restore

Hello,

I'm looking for some guidance please.

I have 3 physical disks, and I created 3 LVM VGs:

1 - System (100GB)
2 - SSD (60GB)
3 - Storage (1TB)

I have a KVM with two disks, one on SSD (50GB) and one on Storage (200GB).

I used the Proxmox web interface to do a backup and this all worked fine.

I then deleted the KVM completely and clicked restore. It asked me where to restore to; I selected SSD, but I got an error saying I do not have enough free space.

I assume this is because both the 200GB disk (which should be restored to the Storage volume) and the 50GB disk (which rightly belongs on the SSD) are stored in one backup file, and the restore extracts both disks to the same storage.

How can I do the restore correctly, i.e. place each disk on the storage medium it was originally backed up from, rather than extracting both to the SSD?

Or, if I were to just restore both to Storage, is it possible to move the one which belongs on the SSD over to it afterwards? I am using LVM, so I'm unsure how to go about this.
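
(On the second question: with LVM-backed storages this can be done by hand. A sketch with VMID 100, a 50G size, and a virtio1 slot as placeholders, the VM stopped, and the VG names taken from your list:)

Code:

# Create the target LV on the SSD VG and copy the data across:
lvcreate -L 50G -n vm-100-disk-2 SSD
dd if=/dev/Storage/vm-100-disk-2 of=/dev/SSD/vm-100-disk-2 bs=1M

# Point the disk at the new storage in /etc/pve/qemu-server/100.conf,
# e.g. change 'virtio1: Storage:vm-100-disk-2' to
# 'virtio1: SSD:vm-100-disk-2', boot-test the VM, then remove the old LV:
lvremove /dev/Storage/vm-100-disk-2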

Any help or tips would be fantastic,

Thanks

How can I have this type of redundancy failover ???

I currently have one Proxmox server with 1 CT running Webmin/Virtualmin for hosting my websites & email, and 1 VM with my VoIP PBX system.
I would like to ensure that, whatever happens, those two continue to work even if the physical host fails.

Obviously I need another physical host with Proxmox, but how can I ensure that the CT and the VM are replicated on a daily basis onto the second Proxmox host, and that if the primary goes down the secondary takes over automatically and transparently, without my intervention? A long shot, I know, but at least knowing what's needed will take me somewhere!
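
(Fully automatic failover needs shared or replicated storage plus the Proxmox HA stack, which is a much bigger project. A hedged, low-tech alternative that covers the daily-replication half, with the storage name and VMIDs as placeholders; failover stays manual:)

Code:

# On the primary, nightly dumps of CT 100 and VM 101 to a storage both
# hosts can reach (e.g. an NFS share added as storage 'shared').
# /etc/cron.d/replicate-dumps (hypothetical):
0 2 * * * root vzdump 100 101 --mode snapshot --compress lzo --storage shared --quiet 1

# After a primary failure, restore the newest dumps on the secondary:
# vzrestore /mnt/pve/shared/dump/vzdump-openvz-100-<date>.tar.lzo 100
# qmrestore /mnt/pve/shared/dump/vzdump-qemu-101-<date>.tar.lzo 101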

Your thoughts are welcome!

How to Re-Add a drive

Hello

I have a home server running under Proxmox. The VM has 3 HDDs: an OS drive and 2 data drives.
I wanted to remove a drive from the server, and in Proxmox I removed the wrong one.

How do I re-add the drive so I can access the data on it again?
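
(If the drive was only detached from the VM's configuration and not wiped, the image file or LV usually still exists, and re-attaching it may be enough. A sketch, with VMID 100 and the default 'local' storage as placeholders:)

Code:

# See which disk images still exist for the VM:
ls -lh /var/lib/vz/images/100/

# Re-attach by adding a line to /etc/pve/qemu-server/100.conf, e.g.:
# virtio1: local:100/vm-100-disk-2.raw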

How to make Proxmox masquerade to secondary IP?

I have 2 IP addresses, and I want to masquerade using the secondary IP; the guests (KVM WinXP) use LAN IPs.
My primary IP is 192.95.31.41 (eth0) and my secondary IP is 198.50.153.144.

Here is my /etc/network/interfaces:

Code:

# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr1
iface vmbr1 inet static
        address  192.168.0.1
        netmask  255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up /etc/pve/kvm-networking.sh

auto vmbr0
iface vmbr0 inet static
        address  192.95.31.41
        netmask  255.255.255.0
        gateway  192.95.31.254
        broadcast  192.95.31.255
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        network 192.95.31.0

auto vmbr2
iface vmbr2 inet static
        address  198.50.153.144
        netmask  255.255.255.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '192.168.0.0/24' -o vmbr2 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '192.168.0.0/24' -o vmbr2 -j MASQUERADE

The settings above won't masquerade via vmbr2 (masquerading via vmbr0 was fine). Should I use eth0:0 for the secondary IP?

Any advice on how to set this up the Proxmox way?
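
(One thing stands out, as an observation rather than a confirmed diagnosis: eth0 is listed as bridge_ports for both vmbr0 and vmbr2, and a physical interface can only be enslaved to one bridge, so vmbr2 probably never passes traffic. A sketch that drops vmbr2, hangs the secondary IP on vmbr0, and uses SNAT, which, unlike MASQUERADE, lets you choose the source address:)

Code:

auto vmbr0
iface vmbr0 inet static
        address  192.95.31.41
        netmask  255.255.255.0
        gateway  192.95.31.254
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        post-up ip addr add 198.50.153.144/32 dev vmbr0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '192.168.0.0/24' -o vmbr0 -j SNAT --to-source 198.50.153.144
        post-down iptables -t nat -D POSTROUTING -s '192.168.0.0/24' -o vmbr0 -j SNAT --to-source 198.50.153.144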