Channel: Proxmox Support Forum

Unable to login

I just installed a fresh copy of Proxmox and I am able to log in via SSH and on the server's local terminal.

But when I try to log in through the web UI it returns a 401 error (Connection error 401: No ticket).

Can anyone explain what I might be doing wrong?
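
For reference, a few checks that may narrow this down (a minimal sketch; the log file and grep pattern are only suggestions):

Code:

# are the daemons behind the web UI running?
service pvedaemon status
service pveproxy status

# a 401 "no ticket" can also come from a large clock skew between the
# browser machine and the server, so compare the clocks
date

# recent authentication messages, if any
grep -i 'auth\|ticket' /var/log/daemon.log | tail -n 20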

pve-no-subscription AND pve-enterprise, together?

I'm 99% sure I know the answer, but want to double-check:

Is there any problem with having both the enterprise repo AND the pve-no-subscription repo enabled simultaneously?
I will have a separate, non-mission-critical host that I can test new packages on, and this *seems* to be the way to do so.
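
For reference, this is roughly what the two repository entries look like on a PVE 3.x / wheezy system (the file names are my assumption; double-check against your own sources lists):

Code:

# /etc/apt/sources.list.d/pve-enterprise.list
deb https://enterprise.proxmox.com/debian wheezy pve-enterprise

# /etc/apt/sources.list (or a separate .list file)
deb http://download.proxmox.com/debian wheezy pve-no-subscription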

Thanks,
-Adam

nvidia driver installation and loss of gui

Dear Proxmox Community,

I am trying to install an NVIDIA graphics card in my Proxmox server. I use the server as a desktop and high-end workstation to perform statistical processing, as well as for serving OpenVZ and KVM machines. It has a GeForce GT 640 Graphics Processing Unit (GPU), which is capable of parallel processing using NVIDIA's CUDA software and R or other languages.

To utilize these capabilities, I believe it is necessary to install the proprietary NVIDIA video driver. Unfortunately, in attempting to do this, I lost the GUI entirely and am left with only the command-line terminal.

Details

I followed the directions on https://wiki.debian.org/NvidiaGraphicsDrivers and when I rebooted, I lost graphics.

The kernel according to "uname -a" is 2.6.32-23-pve.

pve-headers-2.6.32-23-pve and nvidia-glx were successfully installed. I verified this by re-running apt-get install, which confirms that both are already installed at the latest versions.

"lspci -nn | grep -i nvidia" reports:

====================================================
03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107 [GeForce GT 640] [10de:0fc1] (rev a1)
03:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0e1b] (rev a1)
====================================================

"lspci -nn | grep -i nouveau" reports no instance of nouveau

startx reports:

====================================================
X.Org X Server 1.12.4
Release Date: 2012-08-27
X Protocol Version 11, Revision 0
Build Operating System: Linux 3.2.0-4-amd64 x86_64 Debian
Current Operating System: Linux coventure 2.6.32-23-pve #1 SMP Tue Aug 6 07:04:06 CEST 2013 x86_64
Kernel command line: BOOT_IMAGE=/vmlinuz-2.6.32-23-pve root=/dev/mapper/root-root ro quiet
Build Date: 17 April 2013 10:22:47AM
xorg-server 2:1.12.4-6 (Julien Cristau <jcristau@debian.org>)
Current version of pixman: 0.26.0
    Before reporting problems, check http://wiki.x.org
    to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting,
    (++) from command line, (!!) notice, (II) informational,
    (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.0.log", Time: Sun Sep 15 11:20:00 2013
(==) Using config directory: "/etc/X11/xorg.conf.d"
(==) Using system config directory "/usr/share/X11/xorg.conf.d"
FATAL: Module nvidia not found.
Fatal server error:
no screens found
Please consult the The X.Org Foundation support
    at http://wiki.x.org
for help.
Please also check the log file at "/var/log/Xorg.0.log" for additional information.
Server terminated with error (1). Closing log file.
xinit: giving up
xinit: unable to connect to X server: Connection refused
xinit: server error
====================================================

I do not understand why lspci shows the NVIDIA card, yet startx cannot find the nvidia module. I suspect therein lies the problem, so if you understand why this is happening, it may be the key to the solution.
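
The X log below says "Failed to load the NVIDIA kernel module", which suggests the X driver (nvidia_drv.so) is installed but the matching kernel module was never built or loaded for the running pve kernel. A minimal diagnostic sketch, assuming the driver came from Debian's nvidia-kernel-dkms packaging (the package names are my assumption):

Code:

# which kernel is running, and did DKMS build the nvidia module for it?
uname -r
dkms status

# try loading the module by hand; version mismatches show up in the kernel log
modprobe nvidia
dmesg | tail -n 20

# if DKMS never built against the pve kernel, rebuilding against the
# matching headers may help
apt-get install --reinstall nvidia-kernel-dkms pve-headers-$(uname -r)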

Here is what “sudo X -configure” reports:

====================================================
X.Org X Server 1.12.4
Release Date: 2012-08-27
X Protocol Version 11, Revision 0
Build Operating System: Linux 3.2.0-4-amd64 x86_64 Debian
Current Operating System: Linux coventure 2.6.32-23-pve #1 SMP Tue Aug 6 07:04:06 CEST 2013 x86_64
Kernel command line: BOOT_IMAGE=/vmlinuz-2.6.32-23-pve root=/dev/mapper/root-root ro quiet
Build Date: 17 April 2013 10:22:47AM
xorg-server 2:1.12.4-6 (Julien Cristau <jcristau@debian.org>)
Current version of pixman: 0.26.0
    Before reporting problems, check http://wiki.x.org
    to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting,
    (++) from command line, (!!) notice, (II) informational,
    (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.0.log", Time: Sun Sep 15 11:35:03 2013
List of video drivers:
rendition
intel
s3
sisusb
trident
mach64
cirrus
vmware
r128
tseng
chips
ark
savage
nouveau
openchrome
radeon
neomagic
voodoo
sis
tdfx
apm
s3virge
ati
i128
siliconmotion
nvidia
mga
fbdev
vesa
FATAL: Module nvidia not found.
(++) Using config file: "/root/xorg.conf.new"
(==) Using config directory: "/etc/X11/xorg.conf.d"
(==) Using system config directory "/usr/share/X11/xorg.conf.d"
Number of created screens does not match number of detected devices.
Configuration failed.
Server terminated with error (2). Closing log file.
====================================================

I think the key message here is "Number of created screens does not match number of detected devices."

Could the solution be to simply edit the xorg configuration files? And if so, in what way?

Lastly, I list below the last 200 lines of /var/log/Xorg.0.log. This confuses me because it seems to load many of the video drivers listed in the "X -configure" report successfully, yet the nvidia driver fails to load. My confusion is why this contradicts the lspci report. Also, why is it loading all of these unnecessary drivers, and which ones can I get rid of?

Could anyone help me resolve this problem and get me back to my KDE GUI?

Thanks for your help,

Cheers,

Joe

/var/log/Xorg.0.log
====================================================
[ 5279.403] compiled for 1.12.4, module version = 0.2.906
[ 5279.403] Module class: X.Org Video Driver
[ 5279.403] ABI class: X.Org Video Driver, version 12.1
[ 5279.403] (II) LoadModule: "radeon"
[ 5279.403] (II) Loading /usr/lib/xorg/modules/drivers/radeon_drv.so
[ 5279.404] (II) Module radeon: vendor="X.Org Foundation"
[ 5279.404] compiled for 1.12.4, module version = 6.14.99
[ 5279.404] Module class: X.Org Video Driver
[ 5279.404] ABI class: X.Org Video Driver, version 12.1
[ 5279.404] (II) LoadModule: "neomagic"
[ 5279.404] (II) Loading /usr/lib/xorg/modules/drivers/neomagic_drv.so
[ 5279.404] (II) Module neomagic: vendor="X.Org Foundation"
[ 5279.404] compiled for 1.12.1, module version = 1.2.6
[ 5279.404] Module class: X.Org Video Driver
[ 5279.404] ABI class: X.Org Video Driver, version 12.0
[ 5279.404] (II) LoadModule: "voodoo"
[ 5279.404] (II) Loading /usr/lib/xorg/modules/drivers/voodoo_drv.so
[ 5279.404] (II) Module voodoo: vendor="X.Org Foundation"
[ 5279.404] compiled for 1.12.1, module version = 1.1.0
[ 5279.404] Module class: X.Org Video Driver
[ 5279.404] ABI class: X.Org Video Driver, version 12.0
[ 5279.404] (II) LoadModule: "sis"
[ 5279.404] (II) Loading /usr/lib/xorg/modules/drivers/sis_drv.so
[ 5279.404] (II) Module sis: vendor="X.Org Foundation"
[ 5279.404] compiled for 1.12.1, module version = 0.10.4
[ 5279.404] Module class: X.Org Video Driver
[ 5279.404] ABI class: X.Org Video Driver, version 12.0
[ 5279.404] (II) LoadModule: "tdfx"
[ 5279.405] (II) Loading /usr/lib/xorg/modules/drivers/tdfx_drv.so
[ 5279.405] (II) Module tdfx: vendor="X.Org Foundation"
[ 5279.405] compiled for 1.12.1, module version = 1.4.4
[ 5279.405] Module class: X.Org Video Driver
[ 5279.405] ABI class: X.Org Video Driver, version 12.0
[ 5279.405] (II) LoadModule: "apm"
[ 5279.405] (II) Loading /usr/lib/xorg/modules/drivers/apm_drv.so
[ 5279.405] (II) Module apm: vendor="X.Org Foundation"
[ 5279.405] compiled for 1.12.1, module version = 1.2.3
[ 5279.405] Module class: X.Org Video Driver
[ 5279.405] ABI class: X.Org Video Driver, version 12.0
[ 5279.405] (II) LoadModule: "s3virge"
[ 5279.405] (II) Loading /usr/lib/xorg/modules/drivers/s3virge_drv.so
[ 5279.405] (II) Module s3virge: vendor="X.Org Foundation"
[ 5279.405] compiled for 1.12.1, module version = 1.10.4
[ 5279.405] Module class: X.Org Video Driver
[ 5279.405] ABI class: X.Org Video Driver, version 12.0
[ 5279.405] (II) LoadModule: "ati"
[ 5279.405] (II) Loading /usr/lib/xorg/modules/drivers/ati_drv.so
[ 5279.405] (II) Module ati: vendor="X.Org Foundation"
[ 5279.405] compiled for 1.12.4, module version = 6.14.99
[ 5279.405] Module class: X.Org Video Driver
[ 5279.405] ABI class: X.Org Video Driver, version 12.1
[ 5279.405] (II) LoadModule: "i128"
[ 5279.405] (II) Loading /usr/lib/xorg/modules/drivers/i128_drv.so
[ 5279.406] (II) Module i128: vendor="X.Org Foundation"
[ 5279.406] compiled for 1.12.1, module version = 1.3.5
[ 5279.406] Module class: X.Org Video Driver
[ 5279.406] ABI class: X.Org Video Driver, version 12.0
[ 5279.406] (II) LoadModule: "siliconmotion"
[ 5279.406] (II) Loading /usr/lib/xorg/modules/drivers/siliconmotion_drv.so
[ 5279.406] (II) Module siliconmotion: vendor="X.Org Foundation"
[ 5279.406] compiled for 1.12.1, module version = 1.7.6
[ 5279.406] Module class: X.Org Video Driver
[ 5279.406] ABI class: X.Org Video Driver, version 12.0
[ 5279.406] (II) LoadModule: "nvidia"
[ 5279.406] (II) Loading /usr/lib/xorg/modules/drivers/nvidia_drv.so
[ 5279.406] (II) Module nvidia: vendor="NVIDIA Corporation"
[ 5279.406] compiled for 4.0.2, module version = 1.0.0
[ 5279.406] Module class: X.Org Video Driver
[ 5279.408] (EE) NVIDIA: Failed to load the NVIDIA kernel module. Please check your
[ 5279.408] (EE) NVIDIA: system's kernel log for additional error messages.
[ 5279.408] (II) UnloadModule: "nvidia"
[ 5279.408] (II) Unloading nvidia
[ 5279.408] (EE) Failed to load module "nvidia" (module-specific error, 0)
[ 5279.408] (II) LoadModule: "mga"
[ 5279.408] (II) Loading /usr/lib/xorg/modules/drivers/mga_drv.so
[ 5279.408] (II) Module mga: vendor="X.Org Foundation"
[ 5279.408] compiled for 1.12.4, module version = 1.5.0
[ 5279.408] Module class: X.Org Video Driver
[ 5279.408] ABI class: X.Org Video Driver, version 12.1
[ 5279.408] (II) LoadModule: "fbdev"
[ 5279.408] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so
[ 5279.408] (II) Module fbdev: vendor="X.Org Foundation"
[ 5279.408] compiled for 1.12.1, module version = 0.4.2
[ 5279.408] ABI class: X.Org Video Driver, version 12.0
[ 5279.408] (II) LoadModule: "vesa"
[ 5279.409] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so
[ 5279.409] (II) Module vesa: vendor="X.Org Foundation"
[ 5279.409] compiled for 1.12.1, module version = 2.3.1
[ 5279.409] Module class: X.Org Video Driver
[ 5279.409] ABI class: X.Org Video Driver, version 12.0
[ 5279.409] (WW) Falling back to old probe method for s3
[ 5279.409] (WW) Falling back to old probe method for sisusb
[ 5279.409] (WW) Falling back to old probe method for trident
[ 5279.409] (WW) Falling back to old probe method for cirrus
[ 5279.409] (II) Loading sub module "cirrus_laguna"
[ 5279.409] (II) LoadModule: "cirrus_laguna"
[ 5279.409] (II) Loading /usr/lib/xorg/modules/drivers/cirrus_laguna.so
[ 5279.409] (II) Module cirrus_laguna: vendor="X.Org Foundation"
[ 5279.409] compiled for 1.12.1.902, module version = 1.0.0
[ 5279.409] ABI class: X.Org Video Driver, version 12.0
[ 5279.409] (II) Loading sub module "cirrus_alpine"
[ 5279.409] (II) LoadModule: "cirrus_alpine"
[ 5279.409] (II) Loading /usr/lib/xorg/modules/drivers/cirrus_alpine.so
[ 5279.409] (II) Module cirrus_alpine: vendor="X.Org Foundation"
[ 5279.409] compiled for 1.12.1.902, module version = 1.0.0
[ 5279.409] ABI class: X.Org Video Driver, version 12.0
[ 5279.409] (WW) Falling back to old probe method for tseng
[ 5279.409] (WW) Falling back to old probe method for ark
[ 5279.409] (II) NOUVEAU driver Date: Fri Jul 6 16:23:50 2012 +1000
[ 5279.409] (II) NOUVEAU driver for NVIDIA chipset families :
[ 5279.409] RIVA TNT (NV04)
[ 5279.409] RIVA TNT2 (NV05)
[ 5279.409] GeForce 256 (NV10)
[ 5279.409] GeForce 2 (NV11, NV15)
[ 5279.409] GeForce 4MX (NV17, NV18)
[ 5279.409] GeForce 3 (NV20)
[ 5279.409] GeForce 4Ti (NV25, NV28)
[ 5279.409] GeForce FX (NV3x)
[ 5279.409] GeForce 6 (NV4x)
[ 5279.409] GeForce 7 (G7x)
[ 5279.409] GeForce 8 (G8x)
[ 5279.409] GeForce GTX 200 (NVA0)
[ 5279.409] GeForce GTX 400 (NVC0)
[ 5279.410] (WW) Falling back to old probe method for neomagic
[ 5279.410] (WW) Falling back to old probe method for voodoo
[ 5279.410] (WW) Falling back to old probe method for sis
[ 5279.410] (WW) Falling back to old probe method for apm
[ 5279.410] (WW) Falling back to old probe method for s3virge
[ 5279.410] (WW) Falling back to old probe method for i128
[ 5279.410] (WW) Falling back to old probe method for siliconmotion
[ 5279.410] (II) FBDEV: driver for framebuffer: fbdev
[ 5279.410] (II) VESA: driver for VESA chipsets: vesa
[ 5279.454] (++) Using config file: "/root/xorg.conf.new"
[ 5279.454] (==) Using config directory: "/etc/X11/xorg.conf.d"
[ 5279.454] (==) Using system config directory "/usr/share/X11/xorg.conf.d"
[ 5279.454] (==) ServerLayout "X.org Configured"
[ 5279.454] (**) |-->Screen "Screen0" (0)
[ 5279.454] (**) | |-->Monitor "Monitor0"
[ 5279.454] (**) | |-->Device "Card0"
[ 5279.454] (**) |-->Screen "Screen1" (1)
[ 5279.454] (**) | |-->Monitor "Monitor1"
[ 5279.454] (**) | |-->Device "Card1"
[ 5279.454] (**) |-->Screen "Screen2" (2)
[ 5279.454] (**) | |-->Monitor "Monitor2"
[ 5279.454] (**) | |-->Device "Card2"
[ 5279.454] (**) |-->Screen "Screen3" (3)
[ 5279.454] (**) | |-->Monitor "Monitor3"
[ 5279.454] (**) | |-->Device "Card3"
[ 5279.454] (**) |-->Screen "Screen4" (4)
[ 5279.454] (**) | |-->Monitor "Monitor4"
[ 5279.454] (**) | |-->Device "Card4"
[ 5279.454] (**) |-->Input Device "Mouse0"
[ 5279.454] (**) |-->Input Device "Keyboard0"
[ 5279.454] (==) Automatically adding devices
[ 5279.454] (==) Automatically enabling devices
[ 5279.454] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist.
[ 5279.454] Entry deleted from font path.
[ 5279.454] (WW) The directory "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType" does not exist.
[ 5279.454] Entry deleted from font path.
[ 5279.454] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist.
[ 5279.454] Entry deleted from font path.
[ 5279.454] (WW) The directory "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType" does not exist.
[ 5279.454] Entry deleted from font path.
[ 5279.454] (**) FontPath set to:
/usr/share/fonts/X11/misc,
/usr/share/fonts/X11/100dpi/:unscaled,
/usr/share/fonts/X11/75dpi/:unscaled,
/usr/share/fonts/X11/Type1,
/usr/share/fonts/X11/100dpi,
/usr/share/fonts/X11/75dpi,
built-ins,
/usr/share/fonts/X11/misc,
/usr/share/fonts/X11/100dpi/:unscaled,
/usr/share/fonts/X11/75dpi/:unscaled,
/usr/share/fonts/X11/Type1,
/usr/share/fonts/X11/100dpi,
/usr/share/fonts/X11/75dpi,
built-ins
[ 5279.454] (**) ModulePath set to "/usr/lib/xorg/modules"
[ 5279.454] (WW) Hotplugging is on, devices using drivers 'kbd', 'mouse' or 'vmmouse' will be disabled.
[ 5279.454] (WW) Disabling Mouse0
[ 5279.454] (WW) Disabling Keyboard0
[ 5279.454] (EE) [drm] No DRICreatePCIBusID symbol
[ 5279.454] (II) Loading sub module "fbdevhw"
[ 5279.454] (II) LoadModule: "fbdevhw"
[ 5279.454] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so
[ 5279.454] (II) Module fbdevhw: vendor="X.Org Foundation"
[ 5279.454] compiled for 1.12.4, module version = 0.0.2
[ 5279.454] ABI class: X.Org Video Driver, version 12.1
[ 5279.454] (EE) open /dev/fb0: No such file or directory
[ 5279.454] (II) Loading sub module "fbdevhw"
[ 5279.454] (II) LoadModule: "fbdevhw"
[ 5279.454] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so
[ 5279.454] (II) Module fbdevhw: vendor="X.Org Foundation"
[ 5279.454] compiled for 1.12.4, module version = 0.0.2
[ 5279.454] ABI class: X.Org Video Driver, version 12.1
[ 5279.454] (EE) open /dev/fb0: No such file or directory
[ 5279.454] (WW) Falling back to old probe method for fbdev
[ 5279.454] Number of created screens does not match number of detected devices.
Configuration failed.

Snapshots with pci-passthrough possible (getting an error)?

I'm attempting to snapshot a pfSense VM running on Proxmox version 3.1-4/f6816604. This VM has a gigabit network card attached to it with PCI passthrough and is using a qcow2 container on an IDE interface.

VM config:

Code:

bootdisk: virtio0
cores: 1
hostpci0: 07:00.0
ide0: ssd1:110/vm-110-disk-1.qcow2,format=qcow2,size=5G
ide2: none,media=cdrom
memory: 512
name: pfsense386
net0: e1000=2E:51:34:ED:E5:27,bridge=vmbr0
ostype: other
sockets: 1


Proxmox throws this error when I try a snapshot:

Code:

VM 110 qmp command 'savevm-start' failed - State blocked by non-migratable device '0000:00:10.0/pci-assign'
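
For what it's worth, my understanding (an assumption, not verified) is that the passed-through PCI device only blocks snapshots that save the running VM state, so an offline snapshot of the qcow2 disk may still work; the snapshot name below is just an example:

Code:

qm shutdown 110
qm snapshot 110 before-change
qm start 110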

Thanks for any help!

"pve configuration filesystem not mounted" after creating a cluster

I tried to create a Proxmox cluster this afternoon following this tutorial. When I ran the pvecm create command I got the following error:

pve configuration filesystem not mounted

All of the virtual machines are still up and running perfectly, but the whole web interface is no longer accessible.

Does anyone have any ideas on how I can fix this? I would prefer to not have to reboot the whole system if that's possible.

Edit: I've been doing a little more research, and I now have another question: did the pvecm create command clear out /dev/fuse or the /etc/pve directory?
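
For reference, a minimal sketch of how to check and restart the cluster filesystem without rebooting (my assumption is that restarting these services leaves running VMs untouched, but treat it carefully anyway):

Code:

# is the cluster filesystem mounted and its daemon running?
mount | grep /etc/pve
pgrep -l pmxcfs

# restarting pve-cluster re-mounts /etc/pve; the web UI daemons can then follow
service pve-cluster restart
service pvedaemon restart
service pveproxy restart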

What is the best storage solution for our needs?

Hi,

It's all but written in stone that we've decided to use Proxmox for our new business venture. I've spent quite some time researching Proxmox, testing within VMware Fusion 5, and have researched storage solutions to an extent but haven't tested them. With all of the information available on this forum and the Proxmox wiki, I'm still not sure what's best for our particular deployment: local, distributed, shared, etc. I will try to give you an idea of our requirements; please read it and make some suggestions based on your experience. This will be a legitimate business with paying customers, so it's crucial we do this right. We will have Proxmox paid support. We want to keep startup costs to a minimum, allowing us to grow as we go, so to speak. This could mean starting out with local or distributed storage and then migrating to shared storage later if that makes more sense for a larger deployment. The business will be cloud-based storage with streaming capabilities, audio and video with on-the-fly transcoding, so we have high storage and CPU needs.

We will start out with one server using local storage. All guests will be containers (OpenVZ) running Debian 7.

The first server will have (12) 4TB SATA drives in a hardware RAID 5 configuration. It has 4 nodes, each of which will have 12 cores (2 x 6-core L5639 2.13GHz) and 24GB DDR3 RAM.

This gives 48 cores, 96GB RAM, and 44TB of storage.

Long term (multiple servers) the requirements are:


  • Container failover to another server node if a node fails (High Availability)
  • Full data redundancy for containers and customer data (RAID 5 and Duplication of customer data)
  • Live container migration to other nodes for manually load balancing nodes (Can Proxmox do auto load-based migrations?)
  • Container backup off-site with automation (this may not be needed with HA)
  • Customer data mirror (backup) with automation
  • Hardware RAID 5 or 6
  • I'm sure there's more, that's all I can think of off the top of my head.


We need to consider our long-term goals before we begin, so we understand the migration path from one server to the requirements above. For example, we may start out with local storage for the 4 nodes in one server, but what about when we buy another server? Would it be best to do some type of distributed/shared solution (Ceph?) on local storage, or a 10Gbit NAS? If we go with the 10Gbit NAS, we must consider the migration path from local to NAS. Really, I'd like to keep it simple if that makes sense.

So based on the info above, do you guys have any suggestions on which path to take short term and long term? Thanks!

ipcc_send_rec failed and unable to access gui

Hi,

I installed Proxmox successfully and have a 4-node cluster with about 15 VMs between them, all working great until a couple of weeks ago. When I try to connect to the GUI at https://172.17.x.x:8006 it fails to connect. I have updated all the nodes to the latest version.

root@nice:/var/log# pvecm status
Version: 6.2.0
Config Version: 6
Cluster Name: vm-cluster
Cluster Id: 41908
Cluster Member: Yes
Cluster Generation: 49924
Membership state: Cluster-Member
Nodes: 4
Expected votes: 4
Total votes: 4
Node votes: 1
Quorum: 3
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: nice
Node ID: 1
Multicast addresses: 239.192.163.88
Node addresses: 172.17.xx.xx

------

root@nice:/var/log# /etc/init.d/cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Tuning DLM kernel config... [ OK ]
Unfencing self... [ OK ]

----


I am seeing the following error in /var/log/syslog


Sep 16 12:26:01 nice /usr/sbin/cron[2320]: (*system*vzdump) CAN'T OPEN SYMLINK (/etc/cron.d/vzdump)
Sep 16 12:26:09 nice pvestatd[2754]: WARNING: ipcc_send_rec failed: Transport endpoint is not connected
Sep 16 12:26:09 nice pvestatd[2754]: WARNING: ipcc_send_rec failed: Connection refused
Sep 16 12:26:09 nice pvestatd[2754]: WARNING: ipcc_send_rec failed: Connection refused
Sep 16 12:26:09 nice pvestatd[2754]: WARNING: ipcc_send_rec failed: Connection refused
Sep 16 12:26:09 nice pvestatd[2754]: WARNING: ipcc_send_rec failed: Connection refused
Sep 16 12:26:09 nice pvestatd[2754]: WARNING: ipcc_send_rec failed: Connection refused
Sep 16 12:26:12 nice pmxcfs[3073]: [status] notice: update cluster info (cluster name vm-cluster, versio$
Sep 16 12:26:12 nice pmxcfs[3073]: [status] notice: node has quorum
Sep 16 12:26:12 nice pmxcfs[3073]: [dcdb] notice: members: 1/3073, 4/2556
Sep 16 12:26:12 nice pmxcfs[3073]: [dcdb] notice: starting data syncronisation
Sep 16 12:26:12 nice pmxcfs[3073]: [dcdb] notice: members: 1/3073, 4/2556
Sep 16 12:26:12 nice pmxcfs[3073]: [dcdb] notice: starting data syncronisation
Sep 16 12:26:12 nice pmxcfs[3073]: [dcdb] notice: received sync request (epoch 1/3073/00000001)
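
For reference, a couple of checks that may help narrow this down (a sketch only; my assumption is that the ipcc_send_rec warnings mean pvestatd and the other daemons lost their connection to pmxcfs and need a restart once pmxcfs is up):

Code:

# is anything listening on the web UI port, and is pmxcfs running?
netstat -tlnp | grep 8006
pgrep -l pmxcfs

# restart the daemons that talk to pmxcfs
service pvestatd restart
service pvedaemon restart
service pveproxy restart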


Please can someone assist with this?

Thanks

How to "update" a node IP address ?

I had set up a VPN between 2 servers to make a cluster. pvecm status says:

Code:


Version: 6.2.0
Config Version: 8
Cluster Name: ponytech
Cluster Id: 28530
Cluster Member: Yes
Cluster Generation: 4784
Membership state: Cluster-Member
Nodes: 2
Expected votes: 2
Total votes: 2
Node votes: 1
Quorum: 2 
Active subsystems: 5
Flags:
Ports Bound: 
Node name: server2
Node ID: 1
Multicast addresses: 239.192.111.225
Node addresses: 10.0.0.11

The IP address 10.0.0.11 is not correct. My server has the address 10.0.0.2, but I could not find a way to "update" it.
Is there a solution that does not require a server reboot?
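
As far as I understand it (an assumption on my part), cman picks the node address by resolving the node name at cluster creation time, so a first check is what the name resolves to on each node; if /etc/hosts still points at the old address, correcting it and restarting the cluster stack is reportedly what updates it:

Code:

# what does the node name resolve to right now?
grep server2 /etc/hosts
getent hosts server2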

Thanks.

Performance problem on one vmhost proxmox 3.1

Hello Forum,

we have a strange performance problem running a VM on our 3-node cluster.
The VM is a Windows 2003 Standard Edition without the virtio drivers.
Once a day a performance-heavy database job runs on this VM.
When we run this VM on our different Proxmox hosts, we find that the newest host delivers the worst performance.

We see the following job runtimes on our different Proxmox hosts:
Host A (6 years old): 44 minutes
Host B (3 years old): 38 minutes
Host C (brand new): 68 minutes

All hosts are running a current Proxmox 3.1. They are all connected via dedicated bonded NICs to NFS storage.

The hosts consist of the following hardware:

Host A:
Quote:

Intel(R) Xeon(R) CPU E5420 @ 2.50GHz
32 GB RAM
Supermicro X7DB8 (Bios Phoenix Vers. 2.1c)
3ware 9650SE SATA-II RAID
One Gigabit ET Dual Port Server Adapter 82576 Gigabit Network Connection (igb / bonded)
Two Onboard Intel 80003ES2LAN Gigabit Ethernet Controller (Copper) (igb / bonded)
Proxmox is installed on hardware raid1 with two SATA Disks

~# pveperf
CPU BOGOMIPS: 39996.92
REGEX/SECOND: 988985
HD SIZE: 73.33 GB (/dev/mapper/pve-root)
BUFFERED READS: 112.24 MB/sec
AVERAGE SEEK TIME: 9.84 ms
FSYNCS/SECOND: 1723.32
DNS EXT: 103.80 ms
DNS INT: 2949.81 ms
Host B:
Quote:

Intel "Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
64 GB RAM
Adaptec ASR5805 (aacraid driver)
82576 Gigabit ET Dual Port Server Adapter / 82576 Gigabit Network Connection (igb driver / bonded)
Onboard: Two Intel 82576 Gigabit Network Connection (igb driver / bonded)
Proxmox is installed on hardware raid1 with two SAS disks

pveperf
CPU BOGOMIPS: 127995.72
REGEX/SECOND: 1093625
HD SIZE: 33.47 GB (/dev/mapper/pve-root)
BUFFERED READS: 184.50 MB/sec
AVERAGE SEEK TIME: 4.19 ms
FSYNCS/SECOND: 3302.67
DNS EXT: 57.02 ms
DNS INT: 2858.06 ms
Host C:
Quote:

Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz
64 GB RAM
Intel S2600CP (Bios: Intel SE5C600.86B.01.08.0003.022620131521)
Adaptec 7805Q MaxC 8Port
2 x Intel Dualport Server Adapter (bonded)
2 x Intel Onboard NIC (bonded)
Proxmox is installed on hardware raid1 with two SSD disks

# pveperf
CPU BOGOMIPS: 119705.52
REGEX/SECOND: 1176046
HD SIZE: 41.09 GB (/dev/mapper/pve-root)
BUFFERED READS: 729.30 MB/sec
AVERAGE SEEK TIME: 0.23 ms
FSYNCS/SECOND: 2738.21
DNS EXT: 51.01 ms
DNS INT: 3162.55 ms

What is the best method to find the bottleneck that clearly exists on Host C?
Any help is appreciated.
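
Since all three hosts use the same NFS storage over bonded NICs, one comparison worth making is raw throughput and latency to that storage from each host (a sketch only; the storage name "nfsstore", the test size and the NFS server address are made-up examples):

Code:

# sequential write to the shared NFS storage, flushed at the end
dd if=/dev/zero of=/mnt/pve/nfsstore/ddtest bs=1M count=2048 conv=fdatasync
rm /mnt/pve/nfsstore/ddtest

# latency to the NFS server over the bonded link
ping -c 20 192.168.10.50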

Error in console????

Good afternoon,
can someone explain to me whether I should be getting these errors on the screen? (On this host I have a VM running Ubuntu Linux 12.04 Precise Pangolin.)

Sep 14 10:29:51 meduza-I kernel: kvm: 3375: cpu0 unhandled rdmsr: 0x345
Sep 14 10:29:51 meduza-I kernel: kvm: 3375: cpu0 unhandled wrmsr: 0x40 data 0
Sep 14 10:29:51 meduza-I kernel: kvm: 3375: cpu0 unhandled wrmsr: 0x60 data 0
Sep 14 10:29:51 meduza-I kernel: kvm: 3375: cpu0 unhandled wrmsr: 0x41 data 0
Sep 14 10:29:51 meduza-I kernel: kvm: 3375: cpu0 unhandled wrmsr: 0x61 data 0
Sep 14 10:29:51 meduza-I kernel: kvm: 3375: cpu0 unhandled wrmsr: 0x42 data 0
Sep 14 10:29:51 meduza-I kernel: kvm: 3375: cpu0 unhandled wrmsr: 0x62 data 0
Sep 14 10:29:51 meduza-I kernel: kvm: 3375: cpu0 unhandled wrmsr: 0x43 data 0
Sep 14 10:29:51 meduza-I kernel: kvm: 3375: cpu0 unhandled wrmsr: 0x63 data 0
Sep 14 10:29:51 meduza-I kernel: kvm: 3375: cpu1 unhandled wrmsr: 0x40 data 0

Volunteers Wanted to test Virtual Machine Cloud Platform!

Hello Proxmox Community,
I am excited to offer this opportunity to the Proxmox community, which helped me shape my knowledge of Proxmox-based clusters and brought me to where I am today.

I am part of a start-up IT company, Symmcom, based out of Calgary, Canada, and an avid supporter of all things open source. We are especially fond of Proxmox and all that it can do. We have taken up a project to see how far Proxmox can be pushed and have set up a test platform for virtual machines to be tested remotely by many users. Each volunteer will be given one or more virtual machines and access to ISO storage to install the OS of their choice. What better way to test it than by the people who already know Proxmox, our community. :)

Please keep in mind, we are NOT affiliated with the Proxmox company in any way and there is NO money involved. Your effort will be purely voluntary. NO money is being generated through this project. It is for testing purposes only.

Our goal is to see at which point our test platform maxes out and how Proxmox performs under the immense pressure of live users. The test platform is set up using Proxmox as the server/VM cluster and Ceph as the storage cluster. You will be using the VMs just like you would use your desktop. All VMs have to be accessed using SPICE. There is NO restriction on how you use the machines.

Following are some expectations from all volunteer testers:
1. Use the VM like your own.
2. Provide regular feedback on any issues.
3. Users are responsible for securing their VM with a password.
4. None of the test VMs will be backed up, so do not save anything you do not want to lose. Users will be given a minimum of 1GB of separate storage to save important files.
5. Leave the VM turned on at all times so we can monitor the Cluster performance.


What you should expect from us:
1. Appreciate your effort regularly and say thank you!
2. Provide as stable and secure a platform as possible to create a real-world scenario where long downtime is not acceptable.
3. Help you get started with your assigned virtual machine.
4. Provide a minimum of 1GB of separate storage for safekeeping of data, which will be backed up regularly.

Upon receiving your request, we will send you an email with login info and all the details you will need to start using your assigned virtual machine. There is no restriction on who can and cannot apply; all are welcome. Our goal is to fill 100 seats. We still have about 70 seats. If our cluster maxes out before we get that far, the number will be cut short.

Known issues of the Cloud Platform at this moment:
1. Video performance is not up to par. If you are playing any video, try to test it in a small window without maximizing.
2. Audio seems to be 99% OK, but sometimes audio and video get out of sync.
3. Slow upload speed due to restrictions on the test platform.

An example of Typical Virtual Machine provided:
CPU : Single
RAM : 1.5GB
HDD : 50GB
NIC : 1
CD-Rom : 1 (Access to ISO Storage for OS Installer ISOs)
OS : Windows XP Pro 32 Bit (pre-loaded). Valid License Provided.

If you are interested in participating in this never-before-tried project, send your request to beta@symmcom.com
with your first/last name, the email address you would like to use, and your country of origin.

[[[DISCLAIMER: THIS PROJECT IS PURELY FOR TESTING PURPOSES ONLY. THE PROVIDER, SYMMCOM, IS NOT RESPONSIBLE FOR ANY DAMAGE CAUSED OR TIME LOST BY ANY USERS. THE USER UNDERSTANDS THAT ANY EFFORT FROM THEM IS ON A PURELY VOLUNTARY BASIS.]]]

-Wasim Ahmed
Senior IT Consultant, Symmcom

Disable SSL/TLS compression in 3.1

After upgrading a Proxmox cluster from 2.3 to 3.1, the new Apache replacement is triggering a finding in our Nessus scanner for the TLS CRIME vulnerability. The suggested mitigation is to disable SSL/TLS compression, but for the life of me I can't find a way to do that.

Has anyone else run into this issue / found a workaround?
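
One way to check whether compression is actually negotiated on port 8006, from another machine (a sketch; replace the host name with your own):

Code:

openssl s_client -connect pve-host.example.com:8006 < /dev/null 2>/dev/null | grep -i compression
# "Compression: NONE" would suggest the scanner finding is a false positive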

Traffic Shaping

I really like the ability to set rate limits on the VMs, but I have two questions.

1) How can I have the rate limit apply to all addresses except a specific real IP subnet?

2) How can I set a rate limit for the Proxmox server itself, and have that limit apply to all addresses except a specific real IP subnet?
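
For the host itself, something along these lines may work with plain tc on the bridge (a sketch only; the 50mbit limit and the exempt subnet 203.0.113.0/24 are made-up examples, and this shapes egress traffic only):

Code:

# default class is rate-limited, one class is left effectively unshaped
tc qdisc add dev vmbr0 root handle 1: htb default 20
tc class add dev vmbr0 parent 1: classid 1:10 htb rate 1000mbit
tc class add dev vmbr0 parent 1: classid 1:20 htb rate 50mbit ceil 50mbit

# traffic to the exempt subnet goes to the unshaped class
tc filter add dev vmbr0 parent 1: protocol ip prio 1 u32 match ip dst 203.0.113.0/24 flowid 1:10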

Vlan tagging for management interface

Hi.

I have tried searching and applying various network configurations, but none of my configurations are working for my needs.
My network consists of multiple VLANs trunked to all my hosts, including the management VLAN.

The hosts have 2x 1Gb NICs that I want bonded.
This is my setup: VLAN(85, 202, 300, 301, 302) -> HOST (eth0, eth1->bond0)

What I want is to be able to have:
  • One IP for the host on the management VLAN 85
  • One IP for the host on the private VLAN 300
  • A VLAN (300, 301, etc.) tagged to a VM


This is my current config:

Code:

auto lo
iface lo inet loopback


# Bond0 slave 0
iface eth0 inet manual


# Bond0 slave 1
iface eth1 inet manual


auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_miimon 100
    bond_mode 802.3ad
    bond-lacp-rate 0
    pre-up ip link set eth0 mtu 9000
    pre-up ip link set eth1 mtu 9000
    up ip link set bond0 mtu 9000


# Management IP in VLAN 885
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.79
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
    bridge_maxage 0
    bridge_ageing 0
    bridge_maxwait 0


# iSCSI/NFS vlan 202
auto bond0.202
iface bond0.202 inet manual
    vlan-raw-device bond0


auto vlbr202
iface vlbr202 inet static
    address 10.0.2.13
    netmask 255.255.255.240
    bridge_ports bond0.202
    bridge_stp off
    bridge_fd 0
    bridge_maxage 0
    bridge_ageing 0
    bridge_maxwait 0


# Public vlan 300 (internal server)
auto bond0.300
iface bond0.300 inet manual
    vlan-raw-device bond0

My question is: how can I assign an IP to the host on VLAN 300 and also be able to tag that VLAN to a VM?
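
To make the question concrete, this is the kind of stanza I have in mind (a sketch only; the 10.0.3.13/24 address is a made-up example). My understanding is that giving bond0.300 its own bridge lets the host carry an IP on VLAN 300, VMs attached to that bridge then need no tag of their own, and VMs attached to vmbr0 can still be tagged individually:

Code:

# Public vlan 300: host IP plus a bridge that VMs can attach to untagged
auto vmbr300
iface vmbr300 inet static
    address 10.0.3.13
    netmask 255.255.255.0
    bridge_ports bond0.300
    bridge_stp off
    bridge_fd 0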

IPv6 ERROR: Unable to add route ip route add xxxx:xxxx:xxxx:xxxx:xxxx dev venet0

Hi!

I got IPv6 working on containers with Proxmox 3; however, before adding IPv6 support, I was able to do ifdown vmbr0; ifup vmbr0 (to change something in the network configuration).

Now this is no longer possible, because I get the following error:
Quote:

# ifdown vmbr0; ifup vmbr0
Cannot find device "vmbr0"


Waiting for vmbr0 to get ready (MAXWAIT is 2 seconds).
RTNETLINK answers: File exists
vzifup-post ERROR: Unable to add route ip route add xxxx:xxxx:3:1:3:0:0:1 dev venet0
run-parts: /etc/network/if-up.d/vzifup-post exited with return code 34
RTNETLINK answers: File exists
vzifup-post ERROR: Unable to add route ip route add xxxx:xxxx:3:1:3:0:0:1 dev venet0
run-parts: /etc/network/if-up.d/vzifup-post exited with return code 34

And I lose connectivity with the container. The only solution I have found is to restart the container.

Quote:

proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2
This is my /etc/network/interfaces
Quote:

auto lo
iface lo inet loopback


auto vmbr0
iface vmbr0 inet static
address xxx.xxx.98.9
netmask 255.255.255.0
gateway xxx.xxx.98.1
bridge_ports eth0
bridge_stp off
bridge_fd 0


iface vmbr0 inet6 static
address xxxx:xxxx:3:1:4:0:0:1
netmask 64
up ip -6 route add default via xxxx:xxxx:3:1:1:0:0:1 dev vmbr0
up sysctl -p # <------ ADDED THIS LINE TO FIX IPv6 CONNECTIVITY ISSUES
down ip -6 route del default via xxxx:xxxx:3:1:1:0:0:1 dev vmbr0
And I have added this to sysctl.conf
Quote:

net.ipv6.conf.all.proxy_ndp=1
net.ipv6.conf.default.autoconf=0
net.ipv6.conf.default.accept_ra=0
net.ipv6.conf.default.accept_ra_defrtr=0
net.ipv6.conf.default.accept_ra_rtr_pref=0
net.ipv6.conf.default.accept_ra_pinfo=0
net.ipv6.conf.default.accept_source_route=0
net.ipv6.conf.default.accept_redirects=0
net.ipv6.conf.default.forwarding=1
net.ipv6.conf.all.autoconf=0
net.ipv6.conf.all.accept_ra=0
net.ipv6.conf.all.accept_ra_defrtr=0
net.ipv6.conf.all.accept_ra_rtr_pref=0
net.ipv6.conf.all.accept_ra_pinfo=0
net.ipv6.conf.all.accept_source_route=0
net.ipv6.conf.all.accept_redirects=0
net.ipv6.conf.all.forwarding=1
If I stop the containers with IPv6 addresses before ifdown vmbr0; ifup vmbr0, everything works fine (it shows warnings but works as expected):
Quote:

Cannot find device "vmbr0"

Waiting for vmbr0 to get ready (MAXWAIT is 2 seconds).
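
A workaround that might avoid restarting the container (only a guess on my part: the "File exists" errors look like vzifup-post trying to re-add a venet route that is still present, so deleting the stale route before running ifup again may let the hook re-create it cleanly; the masked address is the one from the error message):

Code:

ifdown vmbr0
ip -6 route show dev venet0
ip -6 route del xxxx:xxxx:3:1:3:0:0:1 dev venet0
ifup vmbr0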


Any help would be appreciated.


Regards

Move KVM image from one storage to another

Hello!

On a Proxmox server (version 2.2-31/e94e95e9) I have two storages:

local with path target /var/lib/vz
sata with path target /data

I have a KVM VM (id 101) that I want to move from the "local" storage to the "sata" storage.

Can I simply do that by moving the 101 directory from /var/lib/vz/images to /data/images/?

Or will something break?
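
For reference, a minimal sketch of the usual manual approach (assumptions: the VM is stopped first, and the disk file name is only an example; yours will differ):

Code:

qm stop 101
mkdir -p /data/images/101
mv /var/lib/vz/images/101/vm-101-disk-1.qcow2 /data/images/101/

# then change the storage prefix on the disk line in the VM config,
# e.g. "local:101/vm-101-disk-1.qcow2" -> "sata:101/vm-101-disk-1.qcow2"
nano /etc/pve/qemu-server/101.conf
qm start 101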

Thank you

PVE 3.1 - Directory - samba - not mounted after reboot

$
0
0
Hello,

I am using Proxmox VE 3.1 with the latest patches, and I tried to mount an external Buffalo LinkStation to store my backups on.
I followed this guide:
http://pve.proxmox.com/wiki/Storage_....2Fetc.2Ffstab

On this Buffalo I created two folders and shared them.
In my /etc/fstab I added these two lines:
Code:

//172.17.240.150/VM300-BACKUP /VM300-BACKUP cifs username=admin,password=mySecretPublicPass.,domain=hpa 0 0
//172.17.240.150/VM301-BACKUP /VM301-BACKUP cifs username=admin,password=mySecretPublicPass.,domain=hpa 0 0

After that I mounted these folders with:
Code:

mount -a
In the GUI I added the directories and everything was fine. The GUI displays the correct size of ~3000GB with ~250GB used.
After rebooting Proxmox and going back to the GUI, I found that the directories only show ~16.9GB with a few hundred MB used.
When I do:
Code:

mount -a
it displays the correct size in the GUI again.

It does not seem to be a GUI problem, because when starting a backup of a VM that is larger than those ~16.9GB, it stops and tells me "no free disk space".

This is my fstab:
Code:

# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
UUID=9e294e32-aa02-4290-bb4a-e6429d92163c /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
/dev/sdb /HDD500GB ext3 defaults,errors=remount-ro 0 1
//172.17.240.150/VM300-BACKUP /VM300-BACKUP cifs username=admin,password=mySecretPublicPass.,domain=hpa 0 0
//172.17.240.150/VM301-BACKUP /VM301-BACKUP cifs username=admin,password=mySecretPublicPass.,domain=hpa 0 0
proc /proc proc defaults 0 0
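
One guess worth trying (an assumption on my part: at boot the CIFS shares are mounted before the network bridge is up, so the GUI only sees the empty mount point directories on the root filesystem; adding _netdev defers those mounts until the network is available):

Code:

//172.17.240.150/VM300-BACKUP /VM300-BACKUP cifs _netdev,username=admin,password=mySecretPublicPass.,domain=hpa 0 0
//172.17.240.150/VM301-BACKUP /VM301-BACKUP cifs _netdev,username=admin,password=mySecretPublicPass.,domain=hpa 0 0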

I would really appreciate any help from you, as I am no Linux expert.

Thank you very much in advance!

No snapshots in backups ?

Hello everyone

We are looking at our overall backup strategy with Proxmox. Although the backup feature seems to work very well, it looks like vzdump, which the Proxmox backup system uses, has no option to include snapshots, which we might like to keep in our case (KVM with qcow2 images, of course). I know it's not bulletproof (we have other strategies to back up data from inside our VMs, or to prepare for disaster recovery), but keeping the snapshots is needed in some cases.

And this seems quite "official"...
http://pve.proxmox.com/wiki/Live_Snapshots#Backup
"If you want to (live) backup a VM containing snapshots you need at least Proxmox VE 2.3. Please note, snapshots are not included in the backup - see backup logs."

I searched quite a bit on how we could include snapshots in our backups if needed. Does anybody have an idea of the best strategy right now to do this within (or outside) Proxmox?
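
One approach we are considering (a sketch only; the VM id and paths are made-up examples): internal qcow2 snapshots live inside the image file itself, so a plain file copy of a stopped VM's disk keeps them, while vzdump (and qemu-img convert) drop them.

Code:

# list the internal snapshots of the image
qemu-img snapshot -l /var/lib/vz/images/100/vm-100-disk-1.qcow2

# an offline copy preserves the internal snapshots
qm stop 100
cp /var/lib/vz/images/100/vm-100-disk-1.qcow2 /backup/vm-100-with-snapshots.qcow2
qm start 100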

Thank you !

Firefox on debian: no way to spice

Hi to all,

I think I did all the steps documented in your SPICE tutorial, but remote-viewer doesn't want to work. If I push the SPICE button on the VM entry in the Proxmox admin interface, I get the message "Verbindungstyp konnte nicht von URI ermittelt werden" ("the connection type could not be determined from the URI") and nothing else ... ?!?

In Firefox, I associated ...

application/x-virt-viewer with "remote-viewer verwenden" (i.e. open with remote-viewer)
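
One way to see whether the problem is on the Firefox side or the remote-viewer side (a sketch; the file name is just whatever Firefox saves when you choose "save file" instead of opening it directly):

Code:

remote-viewer --debug ~/Downloads/pve-spice.vv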

Best regards

telekomiker
(yes, I have 3 subscriptions ... ;) )

High availability with nodes in different subnets

Hello,

I have 3 Proxmox nodes that use iSCSI on FreeNAS as their storage backend. We have three sites, all connected via a VPLS. I would like to know if it would be possible to put one of the nodes at a separate site from the other two. This would leave me with two nodes and the FreeNAS server in my server room (10.12.4.0/22) and one node in my colo data center (10.4.4.0/24). What I would like to happen is that if the site where the server room is located were to go offline for some reason, the VMs that were there would be brought up at the data center. This would allow our third administrative site (10.32.4.0/22) access to all of our VMs.

The questions that I have not resolved are:


  • Can a node live in a separate subnet?
  • Can machines on a node in one subnet, with an IP of say 10.12.4.103, be brought up in another subnet without changing their IP?
  • If yes to the above, how would this be configured?


Any help would be greatly appreciated.

Thanks.