Channel: Proxmox Support Forum

BIND DNS server required

Can anyone confirm whether the BIND server that comes preinstalled and running with PVE 2 (on OVH servers at least) can be safely disabled and removed?
I used to run a small dnsmasq instance on the host merely to act as a DNS cache and central point for all VMs on that host, and BIND just seems to be too much for that.
Thanks.
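For reference, a minimal sketch of making that swap on a Debian-based PVE 2 host (standard Debian package names; worth testing on a non-production box first):
Code:

# stop and remove the preinstalled BIND
/etc/init.d/bind9 stop
apt-get remove --purge bind9
# install dnsmasq as a lightweight DNS cache for the VMs
apt-get install dnsmasq

By default dnsmasq forwards to the upstream servers in the host's /etc/resolv.conf, so the VMs only need the host's IP as their nameserver.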

pvectl list: missing vm

Hi all,

Running pvectl list displays a list with a missing VM:
pvectl list
Use of uninitialized value in printf at /usr/bin/pvectl line 46.
VMID NAME STATUS MEM(MB) DISK(GB)
101 repo running 512 50.00
102 nfs1 stopped 1024 100.00
103 postgres stopped 512 10.00
105 www stopped 1024 15.00
106 wheezy stopped 512 4.00
112 owncloud running 512 30.00
123 ubuntu-1204 stopped 1024 10.00
125 fw1-pub stopped 256 2.00

As shown above, the command also prints an error.

The missing VM:
cat /etc/pve/qemu-server/109.conf
#eth0%3A 192.168.2.201
#eth1%3A 172.16.1.201
boot: cn
bootdisk: virtio0
cores: 2
cpu: qemu32
ide2: none,media=cdrom
memory: 384
name: ns1.datanom.net
net0: virtio=76:18:DB:B7:EB:C9,bridge=vmbr0
net1: virtio=7E:D7:33:6D:FB:4A,bridge=vmbr10
ostype: l26
sockets: 1
startup: order=1
virtio0: qnap_nfs:109/vm-109-disk-1.raw,size=4303355904

pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-72
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1

- - - Updated - - -

Actually, several VMs are missing.
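One thing worth checking: in PVE 2.x, pvectl only lists OpenVZ containers, while KVM guests - like 109 above, whose config lives in /etc/pve/qemu-server/ - are managed by qm. A quick comparison, run on the node in question:
Code:

qm list        # KVM guests
pvectl list    # OpenVZ containers only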


Proxmox clustered and SAN problems

Hi All,

Bit of a long shot here. I have inherited a 2-node cluster connected to a SAN, all fairly new kit. The VMs are all running and I am able to start and stop them, etc.

However, when I try to migrate from one node to another, I get the following:

ERROR: migration aborted (duration 00:02:05): Failed to sync data - mount error: mount.nfs: Connection timed out
TASK ERROR: migration aborted

Now, if I go into the storage view, the two nodes are visible but marked red.

Also, if I try to create a new VM, I have no storage pools to select from; I can't even select an ISO from local storage.
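Since the failure is an NFS mount timeout, a sensible first step is to check from each node whether the NFS export is still reachable at all. A hedged sketch (the server IP and export path below are placeholders for your SAN's):
Code:

pvesm status                                  # which storages PVE considers active
showmount -e 192.168.0.10                     # list exports offered by the NFS server
mount -t nfs 192.168.0.10:/export /mnt/test   # manual test mount

If the manual mount also times out, the problem is on the network/SAN side rather than in Proxmox itself.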

Any help would be great.

Regards


D.

kernel:Kernel panic - not syncing: Fatal exception

Hi there,

I've got a new server with an i7-3930K and 64 GB RAM. Storage is on an Adaptec 6405E.

pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-17-pve: 2.6.32-83
pve-kernel-2.6.32-16-pve: 2.6.32-82
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-72
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1


dmidecode:
System Information
Manufacturer: MSI
Product Name: MS-7760
Version: 2.0
Serial Number: To be filled by O.E.M.
UUID: AAAAAAAA-AAAA-AAAA-AAAA-D43D7E553DD4
Wake-up Type: Power Switch
SKU Number: To be filled by O.E.M.
Family: To be filled by O.E.M.


Is this a known bug or something else?
What can I do to make this server stable?
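Since the panic text itself isn't included above, it may help to capture it first: netconsole streams kernel messages to another machine over UDP, so the panic survives the crash. A hedged sketch (IPs, MAC, and interface are placeholders):
Code:

# on the crashing host: source-port@source-ip/dev,target-port@target-ip/target-mac
modprobe netconsole netconsole=6665@192.168.1.10/eth0,6666@192.168.1.20/00:11:22:33:44:55
# on the receiving machine (netcat-traditional syntax):
nc -l -u -p 6666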


Big thanks for looking into my problem


Greetz
MasterTH

Open-VZ, Samba and additional HDD(s)...

After reading a lot, here is my first post...

The system has the following hardware:
HP ML350 with 2 Xeons / 8GB of RAM
6 SCSi-HDDs on a SmartArray-Controller
2 of them build a Raid 1 ==> /dev/cciss/c0d0
4 of them build a Raid 5 ==> /dev/cciss/c0d1
ProxMox is installed with default values on c0d0 (lvm etc.)
The first goal is to have an SMB server. Since I'm quite familiar with Samba, I downloaded the turnkey-domain-controller template and ran it in an OpenVZ container.
Up to here everything works fine: Samba is up and accessible via the web frontend, the PDC is visible to clients, shares can be created, etc.
The problem now is that I want to share files which are on an existing NTFS partition, /dev/cciss/c0d1p1, via Samba, but the partition doesn't show up, neither in the Proxmox frontend nor in the container frontend.

So far I have tried the following:
- logged in via SSH on the host system
- installed NTFS support with "apt-get install ntfs-3g"
- mounted the partition manually to a directory /storage as a test. That worked fine; files are visible and accessible in the console, but nothing happened or showed up in the Samba container
- added the corresponding entries to fstab and rebooted the machine: same result. The filesystem is mounted correctly, but nothing "shareable" appears in the host or guest frontend.
Hmmm... :confused:
I probably have some gaps in my understanding of the storage concept - could someone point me in the right direction? Thanks...
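For what it's worth, the usual OpenVZ answer here is a bind mount from the host into the container, placed in a per-container mount script so it is re-applied on every container start. A minimal sketch, assuming the container's VEID is 101, the NTFS partition is mounted at /storage on the host, and /srv/share is the target inside the container:
Code:

#!/bin/bash
# /etc/vz/conf/101.mount - run on the host each time CT 101 starts
. /etc/vz/vz.conf        # global OpenVZ defaults
. ${VE_CONFFILE}         # per-container config; provides VE_ROOT
mkdir -p ${VE_ROOT}/srv/share
mount -n --bind /storage ${VE_ROOT}/srv/share

Inside the container, /srv/share then shows the host's files and can be exported through Samba like any local directory.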

KVM Live migration finishes with errors

Just started testing our Proxmox cluster and we have this problem with KVM live migrations. It always happens the first time we migrate a guest from one node to another; when we migrate back, we don't get the error.

Running Proxmox 2.2-32
Nodes are identical hardware
Infiniband storage network
shared storage (iscsi)

Feb 21 20:00:59 migration status: active (transferred 189637177, remaining 353685504), total 545652736) , expected downtime 0
Feb 21 20:01:01 migration status: active (transferred 225608317, remaining 317435904), total 545652736) , expected downtime 0
Feb 21 20:01:03 migration status: active (transferred 245588682, remaining 297140224), total 545652736) , expected downtime 0
Feb 21 20:01:05 migration status: active (transferred 294085561, remaining 247664640), total 545652736) , expected downtime 0
Feb 21 20:01:07 migration status: active (transferred 320607226, remaining 220876800), total 545652736) , expected downtime 0
Feb 21 20:01:09 migration status: active (transferred 333685754, remaining 207798272), total 545652736) , expected downtime 0
Feb 21 20:01:11 migration status: active (transferred 347751418, remaining 193732608), total 545652736) , expected downtime 0
Feb 21 20:01:26 migration speed: 10.89 MB/s - downtime 14485 ms
Feb 21 20:01:26 migration status: completed
Feb 21 20:01:27 ERROR: command '/usr/bin/ssh -o 'BatchMode=yes' root@10.100.1.101 qm resume 103 --skiplock' failed: exit code 2
Feb 21 20:01:29 ERROR: migration finished with problems (duration 00:00:52)
TASK ERROR: migration problems
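The migration itself completes; it is the resume command run over SSH afterwards that fails with exit code 2. Running that exact command by hand on the source node should surface the actual error text:
Code:

ssh -o BatchMode=yes root@10.100.1.101 'qm resume 103 --skiplock'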

Two Node cluster, each node only sees itself "online" in GUI

Everything appears to be functioning properly, except that each node only sees itself as online (green). I can access the other node in the GUI, but it does not display the VM names, only IDs, and its icon appears red.

I have updated to the newest Proxmox release and rebooted the nodes several times. The nodes have 2x bonded 1 Gbps crossover links for node communication and DRBD. Multicast seems to be working fine, tested with asmping.

Some information:

Code:

root@pm1:~# pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-14-pve: 2.6.32-74
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-72
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1

Code:

root@pm2:~# pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-14-pve: 2.6.32-74
pve-kernel-2.6.32-17-pve: 2.6.32-83
pve-kernel-2.6.32-11-pve: 2.6.32-66
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-72
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1

Code:

root@pm1:~# cat /etc/hostname
pm1

(Domain name replaced with x's)
Code:

root@pm1:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
172.16.1.1 pm1.xxxxxxxxxxxxxxx pm1 pvelocalhost


# The following lines are desirable for IPv6 capable hosts
172.16.1.2 pm2.xxxxxxxxxxxxxxx pm2


::1    ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Code:

root@pm2:~# cat /etc/hostname
pm2

Code:

root@pm2:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
172.16.1.2 pm2.xxxxxxxxxxxxxxx pm2 pvelocalhost
172.16.1.1 pm1.xxxxxxxxxxxxxxx pm1
# The following lines are desirable for IPv6 capable hosts


::1    ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Code:

root@pm1:~# pvecm nodes
Node  Sts  Inc  Joined              Name
  1  M    12  2013-02-21 17:23:40  pm1
  2  M    20  2013-02-21 17:24:15  pm2

Code:

root@pm2:~# pvecm nodes
Node  Sts  Inc  Joined              Name
  1  M    20  2013-02-21 17:24:16  pm1
  2  M    20  2013-02-21 17:24:16  pm2

This is the only weird thing I notice: each node only shows itself in the .members file (I have restarted pvestatd):

Code:

root@pm1:~# cat /etc/pve/.members
{
"nodename": "pm1",
"version": 0
}

Code:

root@pm2:~# cat /etc/pve/.members
{
"nodename": "pm2",
"version": 0
}
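When .members stays local like this, pmxcfs is usually not receiving cluster membership even though pvecm nodes looks healthy. A few hedged things to try on each node (PVE 2.x init scripts):
Code:

pvecm status                        # check expected/total votes and quorum
/etc/init.d/cman restart            # restart the corosync/cluster stack
/etc/init.d/pve-cluster restart     # restart pmxcfs; regenerates /etc/pve/.members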

Any help would be greatly appreciated!

Proxmox 2.2 hangs at boot

I'm running Proxmox 2.2 on one of my servers. We had a UPS fail, which caused a power failure to the system. Upon reboot the server outputs the following:

Setting parameters of disc: (none).
Activating swap...done.
Checking root file system... fsck from util-linux-ng 2.17.2
/dev/mapper/pve-root: clean, 69494/6291456 files, 2235656/25165824 blocks
done.
Cleaning up ifupdown....
Loading kernel modules...done.
Setting up networking....
Setting up LVM Volume Groups Reading all physical volumes. This may take a while...
Found volume group "pve" using metadata type lvm2
3 logical volumes in volume group "pve" now active
Activating lvm and md swap...done.
Checking file systems... fsck from util-linux-ng 2.17.2
/dev/mapper/pve-data has gone 201 days without being checked, check forced.
/dev/mapper/pve-data: |= / 3.0%

The server then hangs here and will not go any further. Any ideas on where to start? I am pretty new to Proxmox and didn't want to fool with it too much until I got some ideas from you guys. Thanks in advance.
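Note the two lines just before the "hang": pve-data went 201 days without a check, so a full fsck was forced, and on a large volume that can sit at low percentages for a long time while looking stuck. If it genuinely never progresses, one hedged approach is to run the check by hand from a rescue shell, and optionally relax the periodic forced checks afterwards (ext3/ext4 assumed):
Code:

fsck -f /dev/mapper/pve-data              # manual forced check from rescue mode
tune2fs -c 0 -i 0 /dev/mapper/pve-data    # disable mount-count/interval based forced checks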

how to switch vm console to/from read only mode?

hi,
is there any way to put the browser VNC console in read-only mode?
Sometimes I need to be able to watch what a user is doing without interfering with their activity (mouse pointer).

Before you ask: no, I'm not spying on users. It's external tech support for a commercial app that often needs to work on a Windows server (via the "Supremo" remote control app, btw). I just want to be able to check what he's doing in case something goes wrong. Sometimes he asks me to act locally (because it's faster), so I need to be able to switch back to interactive mode too. I think it's possible with a regular VNC client.

Any ideas? Is this controlled by a "monitor" option or something else?
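One workaround, if the built-in console has no such toggle: expose an extra VNC display for the guest and attach an external viewer in view-only mode. A sketch, assuming display :77 (TCP port 5977) and a client with a view-only option such as TigerVNC; note this listener bypasses PVE's console authentication, so restrict it by firewall:
Code:

# /etc/pve/qemu-server/<vmid>.conf - extra plain VNC display
args: -vnc 0.0.0.0:77

# from the admin box: watch without sending input
vncviewer -ViewOnly pve2:77

Dropping the -ViewOnly flag gives back interactive access, which matches the switch-back requirement above.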

Thanks, Marco

root@pve2:~# pveversion -v
pve-manager: 2.1-1 (pve-manager/2.1/f9b0f63a)
running kernel: 2.6.32-11-pve
proxmox-ve-2.6.32: 2.0-66
pve-kernel-2.6.32-11-pve: 2.6.32-66
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-2
pve-cluster: 1.0-26
qemu-server: 2.0-39
pve-firmware: 1.0-16
libpve-common-perl: 1.0-27
libpve-access-control: 1.0-21
libpve-storage-perl: 2.0-18
vncterm: 1.0-2
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-9
ksm-control-daemon: 1.1-1

Certificate Error on console

Hi, we get an SSL certificate error when opening a console:

Code:

Error: TLS handshake failed javax.net.ssl.SSLHandshakeException: java.security.CertificateException: certificate does not match
What can we do here?
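The Java console applet checks that the hostname in the browser URL matches the host certificate, so a mismatch there is the usual cause. A hedged way to inspect the certificate the node is serving (PVE 2.x default path):
Code:

openssl x509 -in /etc/pve/local/pve-ssl.pem -noout -subject -dates

If the CN does not match the name you browse to, either use the matching name or regenerate the certificate for the right hostname.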

regards

Proxmox Install on USB-Device

Hi,

I have just installed Proxmox 2.2 on an 8 GB USB stick. The installation ran without problems, but the system does not boot:

Quote:

Loading, please wait
Volume group "pve" not found
Skipping Volume group pve
Unable to find LVM volume pve/root
What is the problem?
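A common cause is that the initramfs scans for LVM before the USB stick has been detected, so the "pve" volume group really isn't there yet at that moment. A hedged fix is to give the kernel a root delay (the value is a guess; tune as needed), applied from a rescue boot:
Code:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"

# then regenerate the grub config and reboot
update-grub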


Alex

Can I delete template for running container?

I have tried googling and searching the forum with little success. What is the template used for after a container is created? For example, can I delete the template of a running container?

Thanks

shell link with error message

Hi,

if I click the "Shell" button on a host server, I get the message:
Code:

Fehler Permission check failed (realm != pam)  (403)
How can I fix it? Please help, and thanks very much.


regards

OmniOS: ZFS and iSCSI

Hi all,

FYI: OmniOS (latest stable) seems to have fixed the ZFS and iSCSI problems found in FreeNAS, NAS4Free and OpenIndiana. Even with the ZIL in sync mode, I have not been able to provoke any iSCSI errors yet. And since the latest napp-it has official support for OmniOS (the developer is actively migrating all his current OpenIndiana systems to OmniOS), we seem to have found a winner for NAS/SAN in the production server room :p

Official guide can be found here: http://www.napp-it.org/downloads/omnios_en.html

Storage migration of a 4 GB disk, on the following test hardware (via prtdiag; memory via prtconf):
Code:

Memory size: 2038 Megabytes

System Configuration:
BIOS Configuration: Intel Corp. LF94510J.86A.0278.2010.0414.2000 04/14/2010

==== Processor Sockets ====================================
Version                            Location Tag
-------------------------------- --------------------------
Intel(R) Atom(TM) CPU  330  @ 1.60GHz U1PR

==== Memory Device Sockets ================================

Type      Status Set Device Locator  Bank Locator
----------- ------    ---  ------------------- ----------------
DDR2      in use  0    J1MY                CHAN A DIMM 0

==== On-Board Devices =====================================
Intel(R) Extreme Graphics 3 Controller
Realtek RTL8102E Ethernet Device
Intel(R) High Definition Audio Device

Disks: SATA 2 but MB only supports SATA 1
nic: gigabit

time ./migrate.pl -d omv_lvm:vm-124-disk-2 -s omnios_lvm
Storage migrate type: lvm2lvm
Create target...
Logical volume "vm-124-disk-2" changed
Logical volume "vm-124-disk-2" created
Migrate source...
(100.00/100%)
Update vm config...
Remove source...
Logical volume "vm-124-disk-2" changed
Logical volume "vm-124-disk-2" successfully removed


real 1m31.568s
user 0m0.432s
sys 0m11.324s

Proxmox 1.9 -> Proxmox 2.2/2.3 test issues

Hi,

I've migrated over 40 of our VMs to Proxmox 2.2 and/or the Proxmox 2.3 test release, and I have two that I can't seem to get to work.

The first is an OLD BSD/OS machine that worked fine in Proxmox 1.9 (and also worked fine in Proxmox 2.1). Now when I try to boot it, it gets partway through the boot (up to the point where it changes the root device to wd0a) and starts complaining about "wd0: lost interrupt". FYI: even on Proxmox 1.9, this VM only runs when I set the CPU to an old type (currently 486).

My second problem machine is a 32-bit Solaris 10 machine. Again, this works fine in Proxmox 1.9 (using "args: -no-kvm"), but in Proxmox 2.2/2.3test I get "disk read error, sector xxxxxxxxx", where xxxxxxxxx is usually a large value (e.g. 31780350), followed by "Short read. 0xffffffff chars read".

I'm putting these both in the same thread because I suspect they have the same root cause. Is there something about either the IDE controller or disks in general that changed with 2.2? I never tried the Solaris VM on my 2.1 machines before upgrading, so I don't know if it would have worked with 2.1 like the BSD/OS machine did.
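For reference, both workarounds mentioned above translate directly into the 2.x VM config; a sketch (the config path is per-VMID):
Code:

# /etc/pve/qemu-server/<vmid>.conf
cpu: 486           # old CPU model, as used for the BSD/OS guest
args: -no-kvm      # raw qemu argument; disables KVM acceleration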

Thanks,
Jeff

PCIe SSD experience with Proxmox?

Hi all,

I am planning a new Proxmox infrastructure which requires high I/O performance and was thinking about including PCIe SSDs. Due to their high cost, we cannot buy them just for testing, only to find they do not behave well under Debian/Proxmox.

I would be delighted if any of you could share experience with such an environment, and perhaps give some recommendations on hardware models that work flawlessly under Proxmox.

thanks in advance,

Jose

Information about Proxmox with KVM

Hey everybody :)

I am making an overview of all the big hypervisors, including KVM, and need some information. Can anyone answer my questions regarding the use of Proxmox with KVM?

About the host system:
how many logical cores can the host use?
how much physical memory?
what is the maximum number of virtual CPUs per host?

About the VM system:
what is the maximum number of virtual CPUs per VM?
what is the maximum physical memory per VM?
how many active VMs can exist per host?

About the cluster:
how many nodes can be in a cluster?
how many VMs can be in a cluster?

best regards tobi

restore with qmrestore failed

Hi, how can I restore a vzdump file named
Code:

vzdump-qemu-5531-2013_02_23-11_44_14.vma.gz
?

The command
Code:

qmrestore vzdump-qemu-5531-2013_02_23-11_44_14.vma.gz 5531
doesn't work. Can anyone help me, please?
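For reference, qmrestore takes the archive path, the new VMID, and optionally a target storage; .vma.gz archives require a PVE version with VMA support (2.3 and later). A sketch, with the dump path and storage name assumed:
Code:

qmrestore /var/lib/vz/dump/vzdump-qemu-5531-2013_02_23-11_44_14.vma.gz 5531 --storage local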

regards

PCI Passthrough already on Boot?

Hi,

I want to pass a SATA controller through to a VM; all works well so far.
But is it possible to unbind the PCI/PCIe device already at boot, so that the host never sees the controller at any time (as it is in ESXi)?
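One hedged option: have the pci-stub module claim the device at boot via a kernel parameter, before the host's AHCI driver can bind it. A sketch (the vendor:device ID below is a placeholder; find yours with "lspci -nn"); if pci-stub is built as a module rather than into the kernel, it has to be loaded from the initramfs instead:
Code:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet pci-stub.ids=1b21:0612"

# then regenerate the grub config and reboot;
# the controller stays unbound until the VM claims it
update-grub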


Alex

MTU for Dummies ?

Is there by chance a "how to set MTU for dummies" guide?
I did some searching and found some Perl code in the forums, some of it referencing 1.x, etc.
Sadly, searching the forums since 2.x came out has been a bit more difficult.

That being said, here is what I need to figure out.

We have a cluster with 8 systems in it.
We want to have that cluster use our SAN.
Our SAN is set to an MTU of 9000 on its own ip range (not vlan just ip range)

What I would like on each system is for vmbr0 to operate at its normal MTU of 1500,
and for each system's eth1 network port to use an MTU of 9000.

Truth be told, some systems have more than one card, so some bonding might help later - but for now, just setting eth1 to an MTU of 9000 and being able to force the SAN traffic over eth1 instead of vmbr0 is really what we need.

So - any ideas?

If it helps: the SAN network is 192.168.13.0/24, and the SAN itself is at .254.
:confused:
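A hedged sketch of what that could look like in /etc/network/interfaces (the host's eth1 address is assumed); the connected route for 192.168.13.0/24 then keeps SAN traffic on eth1 automatically, while vmbr0 stays at its default MTU of 1500:
Code:

# SAN-facing NIC with jumbo frames
auto eth1
iface eth1 inet static
    address 192.168.13.10
    netmask 255.255.255.0
    mtu 9000

# verify jumbo frames end to end (8972 = 9000 minus 28 bytes of IP/ICMP headers):
#   ping -M do -s 8972 192.168.13.254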