Channel: Proxmox Support Forum

New Kernel for Proxmox VE 3.1 and bug fixes

We have released a new kernel for Proxmox VE 3.1, along with a lot of bug fixes. Everything can be updated via the GUI with just a few clicks, or as always via "apt-get update && apt-get dist-upgrade".
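
A minimal sketch of the command-line route (the version check at the end is just a suggested verification step, not part of the announcement):

apt-get update
apt-get dist-upgrade
pveversion -v    # verify the installed package versions afterwards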

A big Thank-you to our active community for all feedback, testing, bug reporting and patch submissions.

Release notes

- pve-kernel-2.6.32 (2.6.32-111)

  • KVM: x86: Fix invalid secondary exec controls in vmx_cpuid_update()
  • update to vzkernel-2.6.32-042stab079.6.src.rpm
  • update aacraid to aacraid-1.2.1-30300.src.rpm
  • update e1000e to 2.5.4
  • update igb to 5.0.5
  • update ixgbe to 3.17.3

- pve-manager (3.1-14)

  • avoid warning in daily cron script
  • disable SSL compression (avoid TLS CRIME vulnerability)
  • fix bug #456: allow glusterfs as backup storage
  • vzdump: fix hook script environment for job-* phase
  • vzdump: pass storage ID to hook scripts
  • depend on libcrypt-ssleay-perl and liblwp-protocol-https-perl (for correct LWP https support)
  • fix https proxy calls (always use CONNECT)
  • use correct changelog URLs for pve repositories
  • vzdump: correctly handle maxfiles parameter (0 is unlimited)
  • improve API2Client.pm code
  • update German translation
  • update Chinese translation
  • apt: proxy changelog API call to correct node

- qemu-server (3.1-4)

  • qemu migrate : only wait for spice server online + eval
  • speedup restore on glusterfs (do not write zero bytes)
  • Allow VMAdmin/DatastoreUser to delete disk

- apt (0.9.7.10)

  • fix Bug#669620: apt-transport-https: proxy authentication credentials are ignored

- ceph-common (0.67.3-1~bpo70+1), librados2 (0.67.3-1~bpo70+1), librbd1 (0.67.3-1~bpo70+1), python-ceph (0.67.3-1~bpo70+1)
  • New upstream release

- libpve-storage-perl (3.0-13)

  • bug fix: use filesystem_path for LVM/iSCSI storage
  • glusterfs: really delete volumes when requested
  • API fix: auto-detect format for files with vmdk extension
  • API fix: return error if volume does not exist

- pve-libspice-server (0.12.4-2)
  • conflict with debian package 'libspice-server1'

- pve-sheepdog (0.6.2-1)
  • Bump to sheepdog 0.6.2

Package Repositories
http://pve.proxmox.com/wiki/Package_repositories

Source Code
http://git.proxmox.com
__________________
Best regards,

Martin Maurer
Proxmox VE project leader

Three nodes cluster Questions

Hi,
I have set up a three-node cluster + DRBD for testing.
All works well; I have tested HA with manual fencing, great!

I have 2 nodes that can store virtual machines on a shared DRBD volume
and a third small Proxmox server only for quorum.

I have some questions:
With 3 nodes I don't need a quorum disk.. is that true?
On my third small Proxmox server without DRBD, can I stop rgmanager?
(update-rc.d rgmanager remove ?? see the sketch below)

How can I be notified if a node fails?
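
Regarding the rgmanager question, a minimal sketch of the commands being asked about (assuming Debian sysvinit; whether this is advisable on a quorum-only node is exactly the open question):

/etc/init.d/rgmanager stop        # stop the resource group manager on this node
update-rc.d -f rgmanager remove   # keep it from starting at boot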



Some info:
--------------------------------------------------------------------------
Member Status: Quorate


Member Name ID Status
------ ---- ---- ------
proxmox-1 1 Online, rgmanager
proxmox-2 2 Online, Local, rgmanager
proxmox-3 3 Online, rgmanager


Service Name Owner (Last) State
------- ---- ----- ------ -----
pvevm:100
--------------------------------------------------------------------------

proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2
--------------------------------------------------------------------------

pvecm s
Version: 6.2.0
Config Version: 3
Cluster Name: testC
Cluster Id: 3423
Cluster Member: Yes
Cluster Generation: 936
Membership state: Cluster-Member
Nodes: 3
Expected votes: 2
Total votes: 3
Node votes: 1
Quorum: 2
Active subsystems: 6
Flags:
Ports Bound: 0 177
Node name: proxmox-2
Node ID: 2
Multicast addresses: 239.192.13.108
Node addresses: 192.168.0.2
--------------------------------------------------------------------------
cat /etc/pve/cluster.conf
<?xml version="1.0"?>
<cluster config_version="3" name="testC">
  <cman expected_votes="2" keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="0"/>
  <fencedevices>
    <fencedevice agent="fence_manual" name="human"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="proxmox-1" nodeid="1" votes="1">
      <fence>
        <method name="single">
          <device name="human" nodename="proxmox-1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="proxmox-2" nodeid="2" votes="1">
      <fence>
        <method name="single">
          <device name="human" nodename="proxmox-2"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="proxmox-3" nodeid="3" votes="1">
      <fence>
        <method name="single">
          <device name="human" nodename="proxmox-3"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <pvevm autostart="1" vmid="100"/>
  </rm>
</cluster>
------------------------------------------------------------

Thanks!
Enrico

DAB Build Debian 7.1 Wheezy Template runtime errors in log of PVE 1.9

Using DAB, I built a Debian 7.1 template on PVE 1.9.

dab.conf:
Code:

Suite: wheezy
CacheDir: ../cache
#Source: http://ftp.debian.org/debian SUITE main contrib
#Source: http://ftp.debian.org/debian SUITE-updates main contrib
#Source: http://security.debian.org SUITE/updates main contrib
Architecture: i386
Name: debian-7.0-standard
Version: 7.1-1
Section: system
Maintainer: Proxmox Support Team <support@proxmox.com>
Infopage: http://pve.proxmox.com/wiki/Debian_7.0_Standard
Description: Debian 7.0 (standard)
 A small Debian Wheezy system including all standard packages.

Makefile:
Code:



BASEDIR:=$(shell dab basedir)

all: info/init_ok
    dab bootstrap
    sed -e 's/^\s*1:2345:respawn/# 1:2345:respawn/' -i ${BASEDIR}/etc/inittab
    dab finalize

info/init_ok: dab.conf
    dab init
    touch $@

.PHONY: clean
clean:
    dab clean
    rm -f *~

.PHONY: dist-clean
dist-clean:
    dab dist-clean
    rm -f *~

When running an OpenVZ container based on this template, the init boot log shows:
Code:

INIT LOG    Online
starting init logger
INIT: version 2.88 booting
stty: standard input: Invalid argument
Using makefile-style concurrent boot in runlevel S.
tcgetattr: Invalid argument
[info] Not setting System Clock.
Activating swap...done.
[ ok ] Cleaning up temporary files... /tmp.
[warn] Filesystem type 'simfs' is not supported. Skipping mount. ... (warning).
Fast boot enabled, so skipping file system check. ... (warning).
[ ok ] Mounting local filesystems...done.
/etc/init.d/mountall.sh: 59: kill: Illegal number: 42 1
[ ok ] Activating swapfile swap...done.
[ ok ] Cleaning up temporary files....
[ ok ] Setting kernel variables ...done.
[ ok ] Configuring network interfaces...done.
[ ok ] Starting rpcbind daemon....
[ ok ] Cleaning up temporary files....
[ ok ] Setting up X socket directories... /tmp/.X11-unix /tmp/.ICE-unix.
INIT: Entering runlevel: 2
stty: standard input: Invalid argument
Using makefile-style concurrent boot in runlevel 2.
tcgetattr: Invalid argument
[ ok ] Starting rpcbind daemon... Already running.
[ ok ] Starting enhanced syslogd: rsyslogd.
[ ok ] Starting deferred execution scheduler: atd.
[ ok ] Starting periodic command scheduler: cron.
[ ok ] Starting Postfix Mail Transport Agent: postfix.
generating ssh host keys
[ ok ] Starting OpenBSD Secure Shell server: sshd.
INIT: no more processes left in this runlevel

The container runs normally.
In the Makefile, if the sed statement is not there, a respawning error occurs.

The errors in the log do not appear when running a container built from a similar DAB template for Debian 6.0.7, with or without the sed statement in the Makefile.

PVE Enterprise Mirrors

If bandwidth is the issue, why not have the community provide a mirror of the packages, widely distributed like Debian's?

I can donate some disk space and bandwidth to a PVE enterprise mirror.

Anyway, on the same subject, does anyone know how to remove the pop-up at Proxmox login? Thanks!!! :)

How to count sockets for pve subscriptions?

Hi,

I tried to count my CPU sockets to plan a budget for a subscription,

my system has:
Code:

root@pve2:~# lshw | grep -i cpu
    *-cpu:0
          description: CPU
          product: Intel(R) Xeon(R) CPU          E5520  @ 2.27GHz
          bus info: cpu@0
          version: Intel(R) Xeon(R) CPU          E5520  @ 2.27GHz
    *-cpu:1
          description: CPU
          product: Intel(R) Xeon(R) CPU          E5520  @ 2.27GHz
          bus info: cpu@1
          version: Intel(R) Xeon(R) CPU          E5520  @ 2.27GHz

so I guess the CPU socket count is 2.

I've searched on the web and also found this page, which ends up with:


  • to count cpu-sockets: "#cat /proc/cpuinfo | grep "physical id" | sort | uniq | wc -l"
    • this on my system gives 2

  • to count cpu-cores: "#cat /proc/cpuinfo | egrep "core id|physical id" | tr -d "\n" | sed s/physical/\\nphysical/g | grep -v ^$ | sort | uniq | wc -l"
    • this on my system gives 8

  • to count cpu-threads: "#cat /proc/cpuinfo | grep processor | wc -l"
    • this on my system gives 16
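
A shorter cross-check (a sketch; lscpu from util-linux is assumed to be installed, it is not mentioned on that page):

lscpu | egrep 'Socket|Core|Thread'
# "Socket(s):" is the number that matters for per-socket subscriptions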



Is that right, so my CPU socket count is 2?

Thanks, Marco

Backup question

As it is not currently possible to do a snapshot backup of a VE to the same storage, for LVM reasons:
would it be possible to do the backup on the dedicated disk and then have the backup process move it to the LVM disk by using the --dumpdir option?

For example it is now doing

vzdump 101 --quiet 1 --mailto xxxx@xxxx.com --mode snapshot --compress gzip --maxfiles 3 --storage Backup

But I want to store the backup on local, which is where the VM is (OpenVZ).
Could I add --dumpdir /var/lib/vz/dump to this to have the end result stored on the LVM disk?
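
A sketch of the command being asked about (note: --dumpdir is normally used instead of --storage, not together with it; whether vzdump then behaves as hoped with snapshot mode is the open question):

vzdump 101 --quiet 1 --mailto xxxx@xxxx.com --mode snapshot --compress gzip --maxfiles 3 --dumpdir /var/lib/vz/dump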


3 node HA Cluster, who wins?

Hi,

I have a 3-node HA cluster. Which node will get the KVM guest if one node crashes?
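
For context, rgmanager chooses the recovery node itself unless a failover domain says otherwise; a minimal cluster.conf sketch of such a domain (node and domain names are made up, and it is an assumption here that <pvevm> takes the domain attribute the same way a <service> does):

<rm>
  <failoverdomains>
    <!-- ordered domain: the lowest priority number is preferred -->
    <failoverdomain name="prefer1" ordered="1" restricted="0" nofailback="1">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="2"/>
      <failoverdomainnode name="node3" priority="3"/>
    </failoverdomain>
  </failoverdomains>
  <pvevm autostart="1" vmid="100" domain="prefer1"/>
</rm>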


thx!

Erik

Proxmox 2.3 Cluster to 3.1 Upgrade

So I hope you all bear with me... I am trying to upgrade a datacenter of 3 Proxmox 2.3 nodes to the newest 3.1. I haven't been keeping up too well with everything, but I wanted to get some advice on how I should go about the upgrade.

I have read the wiki info on upgrading from 2.3 to 3.0 (http://pve.proxmox.com/wiki/Upgrade_from_2.3_to_3.0) and it all sounds great, but since I have a cluster of three, what would be the best way to upgrade with "minimal" to "no" downtime? And if I update, say, node 3 to 3.0, will that node disassociate from the cluster?
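
For orientation, a sketch of the usual per-node rolling pattern (the actual upgrade steps are the ones in the wiki page above; the VM ID and node name here are placeholders):

# move guests off the node that is about to be upgraded (online migration needs shared storage)
qm migrate 101 node2 --online
# then upgrade the now-empty node per the wiki, reboot it, and repeat for the next node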

Any help is very appreciated!

fence_node reports old agent

I had quite a few problems with the DRAC6 setup... so I tried IPMI and it worked like a charm. I've changed my cluster.conf to use IPMI per the wiki. I ran fence_ipmilan -l root -p calvin -a 192.168.254.13 -o reboot by hand and it worked; I even ran it against prox02, just to be sure. The problem comes when I try to run fence_node prox03 -vv. I get this back:

fence prox03 dev 0.0 agent fence_drac5 result: error from agent
agent args: nodename=prox03 agent=fence_drac5 cmd_prompt=admin1-> ipaddr=192.168.254.14 login=root passwd=calvin secure=1
fence prox03 failed.

From what I'm seeing, it's still calling the fence_drac5 agent... which was my old config. I restarted cman, to no avail. Below is my config; any help would be greatly appreciated!



<?xml version="1.0"?>
<cluster name="geekitsystems1" config_version="28">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey">
  </cman>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" name="prox01" lanplus="1" ipaddr="192.168.254.12" login="root" passwd="calvin" power_wait="5"/>
    <fencedevice agent="fence_ipmilan" name="prox02" lanplus="1" ipaddr="192.168.254.13" login="root" passwd="calvin" power_wait="5"/>
    <fencedevice agent="fence_ipmilan" name="prox03" lanplus="1" ipaddr="192.168.254.14" login="root" passwd="calvin" power_wait="5"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="prox01" votes="1" nodeid="1">
      <fence>
        <method name="1">
          <device name="prox01"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="prox02" votes="1" nodeid="2">
      <fence>
        <method name="1">
          <device name="prox02"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="prox03" votes="1" nodeid="3">
      <fence>
        <method name="1">
          <device name="prox03"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <service autostart="1" exclusive="0" name="ha_test_ip" recovery="relocate">
      <ip address="192.168.254.254"/>
    </service>
  </rm>
</cluster>
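
One quick check worth doing (offered as an assumption about a likely cause, not a confirmed diagnosis): fenced acts on the configuration version cman currently has loaded, so compare that with the file on disk:

cman_tool version                          # config version cman is running with
grep config_version /etc/pve/cluster.conf  # version in the file
# if they differ, the updated cluster.conf has not been activated yet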

cannot access a VM from vnc

Hello everyone,

I have just installed Proxmox for a class demonstration and I was able to set up two different VMs with different OSes (Windows and Ubuntu). My knowledge is very limited in this field; however, I want to remotely access the VMs internally in my home network only (I am not planning to use NAT or any other protocols for WAN access).

What I have done so far:

1. I added the line below to /etc/inetd.conf (using nano):
Code:

59004 stream tcp nowait root /user/sbin/qm qm vncporxy 104 password 


2. I restarted inetd: /etc/init.d/openbsd-inetd restart

3. I installed a VNC viewer, entered IP:59004, and it said:
"the connection closed unexpectedly"




I used telnet (telnet 192.168.0.133 59004) and it shows:
Code:

Trying 192.168.0.133...
connected to 192.168.0.133.
Escape character is '^]'.
Connection closed by foreign host 
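
For comparison, here is how such an inetd entry is usually written (note the /usr/sbin path and the vncproxy spelling, which differ from the line pasted above; whether the trailing password argument works this way on the installed version is an assumption carried over from that line):

59004 stream tcp nowait root /usr/sbin/qm qm vncproxy 104 password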


Hope someone can help me
thanks so much in advance

bug in migration function with clvm + iscsi

Hi,

we've probably hit a bug.

In a cluster setup with shared storage (CLVM over iSCSI), the newly created LV isn't deactivated after an automatic migration.

You can reproduce this the following way:
say you have a KVM template on node1 and do a "clone to node2". The new VM is first cloned on node1 and after that it gets moved to node2. The problem is that the logical volume (LV) of the new VM (for example vm-101-disk-1) isn't deactivated on node1 after it is automatically moved to node2 (after the cloning process). If you now delete the VM on node2, node1 does not notice this, and you will run into an error if you try to create a new VM with the same ID after you have deleted the VM on node2.

Steps to reproduce:
- create a cluster with at least 2 nodes and iSCSI storage with CLVM on top
- create a KVM VM on node1 on the CLVM storage and define it as a template (right click -> convert to template)
- now create a clone of the template, choosing node2 as the destination
- after cloning is complete and the new VM is located on node2, you will see on node1 (via ssh and 'lvdisplay' or 'dmsetup info') that vm-101-disk-1 is still active
- delete the new VM on node2
- try to create a new VM with the ID of the just-removed VM
- you will now get an LVM error
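
A manual workaround sketch for the stale LV (the volume group name is a placeholder; adjust it to the actual CLVM storage):

# on node1, check which LVs are still active
lvs -o lv_name,vg_name,lv_attr
# deactivate the leftover volume so the ID can be reused
lvchange -an /dev/<vgname>/vm-101-disk-1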



greets,
patrick

Migrating lab from vmware workstation to proxmox

Hello

I have an i7 box with 16 GB RAM and 2x 160 GB SSDs, currently running Ubuntu x64 desktop + VMware Workstation as a test lab for different kinds of VMs.
I want to migrate this box to Debian + PVE.
My questions are:
1. Can I use the existing vmdk disks with Proxmox, or should I convert them to qcow2 format? (see the conversion sketch after this list)
2. On Ubuntu I have used Linux software RAID for the 2 SSD disks. Can I do the same with Debian + PVE?
3. Can I run nested VMs inside Proxmox so I can test hypervisors like ESXi, Hyper-V etc.?
4. Will I have any noticeable performance gain by doing this change?
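
Regarding question 1, a minimal conversion sketch (file names are placeholders; qemu-img ships with the qemu packages Proxmox uses):

qemu-img convert -f vmdk -O qcow2 disk0.vmdk vm-100-disk-1.qcow2
qemu-img info vm-100-disk-1.qcow2   # verify the result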

Sent from my GT-I8160P using Tapatalk 2

Weird issue with Windows XP guest

Hello.

I have this really weird issue and can't stop thinking about it.
I installed Windows XP SP3 as a guest with KVM on a Proxmox VE host. When it booted up, I changed the system theme (desktop theme, or whatever it is called) to Classic instead of the XP default. Everything works fine until a reboot. When the system loads after a restart, the theme is back to the XP default. The freaking thing forgets my settings. It's really weird, because it forgets only the theme setting. If, for example, I change the Start menu to classic or make the Quick Launch bar appear, those settings are saved after a restart.

What's wrong here?

vzdump completion time

I have two servers, both Lenovo TS130s with 12 GB of RAM and Xeon processors. One has an Adaptec RAID controller with mirrored 500 GB drives; the other just has a single hard drive for the OS and VMs. Both machines have a hard drive dedicated to backups only.

Server A, with the RAID controller, takes around 1 hour to do a vzdump backup. It is also running PVE 3.1.
Server B, without a controller, takes around 10 hours for the vzdump. It is running PVE 2.3.

Besides the RAID controller, the only other difference in the configs is that the hard drive on Server B does not have the drive's write cache enabled. They're both backing up a similar amount of data as well.

Why would Server B take so long to do its backups? Also, does the hard drive cache even come into play here? Is there a benefit to using it without a RAID controller?
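
To check the second question empirically, a small sketch (the device name is a placeholder; run it against the backup drive and the VM drive):

hdparm -W /dev/sdb    # show whether the drive's write cache is currently enabled
hdparm -W1 /dev/sdb   # enable it (note: cached data can be lost on power failure)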

Thanks.

Routed mode and IP tables FORWARD rules

Hi,

I'm a devops engineer, not so familiar with iptables, but I'm trying to learn how it works.
I know this is not the place to learn networking basics, but since my Proxmox installation is involved, maybe some of you have already run into the same thing.

I have a Proxmox node in routed mode with some CTs.
I also added a private network.

So, the network interfaces on the node look like the following:

Code:

# network interface settings
auto lo
iface lo inet loopback


auto eth0
iface eth0 inet static
    address  XX.XX.XX.XX # public IP
    netmask  255.255.255.0
    gateway  XX.XX.XX.1
    broadcast  XX.XX.XX.255
    network XX.XX.XX.0


auto eth1
iface eth1 inet dhcp


auto vmbr0
iface vmbr0 inet static
        address 192.168.1.1
        netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0


        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up  iptables -t nat -A POSTROUTING -s '192.168.1.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '192.168.1.0/24' -o eth0 -j MASQUERADE

Then, from the proxmox GUI, I assigned two IPs to my CT, one public (FAILOVER) and one private, so its network interfaces look like:

Code:

# Auto generated lo interface
auto lo
iface lo inet loopback


# Auto generated venet0 interface
auto venet0
iface venet0 inet manual
    up ifconfig venet0 up
    up ifconfig venet0 127.0.0.2
    up route add default dev venet0
    down route del default dev venet0
    down ifconfig venet0 down


iface venet0 inet6 manual
    up route -A inet6 add default dev venet0
    down route -A inet6 del default dev venet0


auto venet0:0
iface venet0:0 inet static
    address 192.168.1.103
    netmask 255.255.255.255


auto venet0:1
iface venet0:1 inet static
    address IP_FAILOVER
    netmask 255.255.255.255

I also enabled ip forwarding (sysctl).

With this configuration, everything works well, so far so good.

Now I want to secure my node and CTs a bit by configuring iptables on the node.

How do I set a default FORWARD DROP policy and still allow some traffic to be forwarded to the CTs?

For instance, one of my CTs is an Apache webserver, so I need to allow forwarding to it on port 80.
But whatever FORWARD rule I try to achieve this, it doesn't work.

My current iptables rules (which don't forward traffic on port 80):

Code:

*filter


-P INPUT DROP
-P FORWARD DROP
-P OUTPUT DROP


# SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
-A OUTPUT -p tcp --dport 22 -j ACCEPT


# Proxmox
-A INPUT -p tcp --dport 8006 -j ACCEPT
-A OUTPUT -p tcp --dport 8006 -j ACCEPT

-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp --dport 587 -j ACCEPT


# DNS, FTP, HTTP, NTP
-A OUTPUT -p tcp --dport 21 -j ACCEPT
-A OUTPUT -p tcp --dport 80 -j ACCEPT
-A OUTPUT -p tcp --dport 443 -j ACCEPT
-A OUTPUT -p tcp --dport 53 -j ACCEPT
-A OUTPUT -p udp --dport 53 -j ACCEPT
-A OUTPUT -p udp --dport 123 -j ACCEPT


# Loopback
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT


# Ping
-A INPUT -p icmp -j ACCEPT
-A OUTPUT -p icmp -j ACCEPT

# CTs


## Websever
-A FORWARD -p tcp --dport 80 -d 192.168.1.103 -j ACCEPT
-A FORWARD -p tcp --dport 443 -d 192.168.1.103 -j ACCEPT
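
Two things are commonly missing in this kind of setup (stated as an assumption about the likely cause, not a confirmed diagnosis): with a default FORWARD DROP policy the reply packets also traverse FORWARD, so a conntrack rule is needed there; and traffic arriving on the CT's public/failover address has to be matched by FORWARD rules for that address too, not only for the private one. A sketch (IP_FAILOVER kept as the placeholder used above):

-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -p tcp --dport 80  -d 192.168.1.103 -j ACCEPT
-A FORWARD -p tcp --dport 443 -d 192.168.1.103 -j ACCEPT
-A FORWARD -p tcp --dport 80  -d IP_FAILOVER   -j ACCEPT
-A FORWARD -p tcp --dport 443 -d IP_FAILOVER   -j ACCEPT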

I need some advice please ;-)

Thanks.

Bye.

Cluster wide cron?

Hi,

I'd like to run a cron entry on all nodes in my cluster. Is there a cron file that gets replicated across all nodes in the cluster like vzdump.cron?

Gerald


edit: I might as well tell you what I'm doing. I generate offsite backups every weekend. The VMs get dumped to an NFS mount. So, on Friday, the NFS server gets a USB drive hooked up at an NFS export point. Proxmox already has the NFS point mounted, so it doesn't see the USB drive. If I umount the NFS mount on Proxmox, it gets remounted automagically, and Proxmox then sees the USB drive.

On Monday, the USB drive is unmounted on the NFS server, so Proxmox needs to umount it as well.

Right now, my crontab contains:

# unmount the offsite disk, forcing a remount with the actual mounted one on backuppc.norscan.com
45 12 * * 6 /bin/umount /mnt/pve/offsite
50 11 * * 1 /bin/umount /mnt/pve/offsite

But I need to maintain this on all nodes in the cluster. It would be easier if I only had to do it once.
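
One pattern sometimes used for this (a sketch, not a built-in feature): keep the script itself in /etc/pve, which pmxcfs replicates to every node, and install a one-line cron entry on each node that calls it. The entry still has to be created once per node, but the logic then lives in a single place. Files under /etc/pve are not executable, hence the explicit sh; the file names here are made up:

# /etc/cron.d/offsite-umount  (created once on each node)
45 12 * * 6  root  /bin/sh /etc/pve/offsite-umount.sh
50 11 * * 1  root  /bin/sh /etc/pve/offsite-umount.sh

# /etc/pve/offsite-umount.sh
/bin/umount /mnt/pve/offsite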

Hard Disk free space - Something wrong!

Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ssd120 111G 41G 65G 39% /ssd120

But inside this filesystem I have a .raw file of 80 GB.
How can it show only 41 GB in use?
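
A quick way to check whether the .raw file is sparse, i.e. its apparent size is larger than what is actually allocated (the path below is only a guess at a typical location, adjust it):

ls -lh /ssd120/images/100/vm-100-disk-1.raw          # apparent size (e.g. 80G)
du -h  /ssd120/images/100/vm-100-disk-1.raw          # blocks actually allocated on disk
qemu-img info /ssd120/images/100/vm-100-disk-1.raw   # reports virtual size vs. disk size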

Thanks.

What's the difference between Linux operating systems?

Linux has a lot of operating systems; I'm not even going to bother to list them. I was wondering, is there a difference between them? Does each type of Linux operating system serve a specific purpose? For example, does Red Hat provide different functionality than Ubuntu, or something like that?

Neither Console nor Spice works

1st try console

IcedTea-Web Plugin version: 1.4.1 (fedora-0.fc19-x86_64)
16.11.13 12:01
The exception was:
net.sourceforge.jnlp.LaunchException: Fatal: Initialization error: Could not initialize applet. For more information please click the "More information" button.
at net.sourceforge.jnlp.Launcher.createApplet(Launcher.java:734)
at net.sourceforge.jnlp.Launcher.getApplet(Launcher.java:662)
at net.sourceforge.jnlp.Launcher$TgThread.run(Launcher.java:914)
Caused by: net.sourceforge.jnlp.LaunchException: Fatal: Application error: Unknown main class. Could not determine the main class for this application.
at net.sourceforge.jnlp.runtime.JNLPClassLoader.initializeResources(JNLPClassLoader.java:708)
at net.sourceforge.jnlp.runtime.JNLPClassLoader.<init>(JNLPClassLoader.java:249)
at net.sourceforge.jnlp.runtime.JNLPClassLoader.createInstance(JNLPClassLoader.java:382)
at net.sourceforge.jnlp.runtime.JNLPClassLoader.getInstance(JNLPClassLoader.java:444)
at net.sourceforge.jnlp.runtime.JNLPClassLoader.getInstance(JNLPClassLoader.java:420)
at net.sourceforge.jnlp.Launcher.createApplet(Launcher.java:700)
... 2 more
This is the list of exceptions that occurred during the start of the applet. Note: these exceptions can come from multiple applets. To produce a useful error report, make sure that only one applet is running.
1) at 16.11.13 12:01
net.sourceforge.jnlp.LaunchException: Fatal: Application error: Unknown main class. Could not determine the main class for this application.
at net.sourceforge.jnlp.runtime.JNLPClassLoader.initializeResources(JNLPClassLoader.java:708)
at net.sourceforge.jnlp.runtime.JNLPClassLoader.<init>(JNLPClassLoader.java:249)
at net.sourceforge.jnlp.runtime.JNLPClassLoader.createInstance(JNLPClassLoader.java:382)
at net.sourceforge.jnlp.runtime.JNLPClassLoader.getInstance(JNLPClassLoader.java:444)
at net.sourceforge.jnlp.runtime.JNLPClassLoader.getInstance(JNLPClassLoader.java:420)
at net.sourceforge.jnlp.Launcher.createApplet(Launcher.java:700)
at net.sourceforge.jnlp.Launcher.getApplet(Launcher.java:662)
at net.sourceforge.jnlp.Launcher$TgThread.run(Launcher.java:914)
2) at 16.11.13 12:01
net.sourceforge.jnlp.LaunchException: Fatal: Initialization error: Could not initialize applet. For more information please click the "More information" button.
at net.sourceforge.jnlp.Launcher.createApplet(Launcher.java:734)
at net.sourceforge.jnlp.Launcher.getApplet(Launcher.java:662)
at net.sourceforge.jnlp.Launcher$TgThread.run(Launcher.java:914)
Caused by: net.sourceforge.jnlp.LaunchException: Fatal: Application error: Unknown main class. Could not determine the main class for this application.
at net.sourceforge.jnlp.runtime.JNLPClassLoader.initializeResources(JNLPClassLoader.java:708)
at net.sourceforge.jnlp.runtime.JNLPClassLoader.<init>(JNLPClassLoader.java:249)
at net.sourceforge.jnlp.runtime.JNLPClassLoader.createInstance(JNLPClassLoader.java:382)
at net.sourceforge.jnlp.runtime.JNLPClassLoader.getInstance(JNLPClassLoader.java:444)
at net.sourceforge.jnlp.runtime.JNLPClassLoader.getInstance(JNLPClassLoader.java:420)
at net.sourceforge.jnlp.Launcher.createApplet(Launcher.java:700)
... 2 more

It seems there is no start class defined in the Java applet.

---edit---
It seems to be a problem caused by a security appliance which acts as a reverse proxy.
Is there a way to set up an unencrypted version for use in secured networks?
The reverse proxy would then take care of encryption.
---edit---

2nd try spice

I downloaded the spiceproxy file via the Spice button; the delay before starting the following command was less than 5 s (pre-typed):

$ LANG=C virt-viewer spiceproxy --debug
(virt-viewer:3826): virt-viewer-DEBUG: Couldn't load configuration: No such file or directory
(virt-viewer:3826): virt-viewer-DEBUG: Insert window 0 0xadc860
(virt-viewer:3826): virt-viewer-DEBUG: fullscreen display 0: 0
(virt-viewer:3826): virt-viewer-DEBUG: fullscreen display 0: 0
(virt-viewer:3826): virt-viewer-DEBUG: connecting ...
(virt-viewer:3826): virt-viewer-DEBUG: Opening connection to libvirt with URI <null>
(virt-viewer:3826): virt-viewer-DEBUG: Add handle 12 1 0xbc8230
(virt-viewer:3826): virt-viewer-DEBUG: initial connect
(virt-viewer:3826): virt-viewer-DEBUG: notebook show status 0x9a5000
(virt-viewer:3826): virt-viewer-DEBUG: Cannot find guest spiceproxy
(virt-viewer:3826): virt-viewer-DEBUG: Remove handle 1 12
(virt-viewer:3826): virt-viewer-DEBUG: Disposing window 0xadc860

(virt-viewer:3826): virt-viewer-DEBUG: Set connect info: (null),(null),(null),-1,(null),(null),(null),0
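
For what it's worth, the downloaded file is a connection description rather than a libvirt guest name, which is why virt-viewer reports "Cannot find guest spiceproxy"; it is normally opened with remote-viewer from the same virt-viewer package (a sketch, assuming the file was saved as "spiceproxy"):

$ remote-viewer --debug spiceproxy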

To get a KVM guest display working I installed virt-manager -> no more problems (X11 tunneled through ssh).

One question remains: is it possible to get at least SPICE or the console working on Proxmox?

ZFS on Proxmox and rsync

I know it's not strictly a Proxmox question, but many folks here seem to use ZFS on Proxmox. What I have noticed is that, compared to running ZFS on the stock Wheezy kernel (3.2), running it on the latest Proxmox/RHEL 2.6.32 kernel more than doubles the running times. This holds even when using rsync with the -n option, where no actual file operations or transfers are performed but everything else is. We're talking about several million files (around 7M files, spanning several runs of server backup processes), and a runtime of about 2 hours increasing to more than 4 hours.

Does anyone have an idea why? Tunable kernel parameters, maybe? Intrinsic differences affecting performance on the RHEL kernel? No VMs are running; I just installed the default setup over an existing Wheezy installation.