Channel: Proxmox Support Forum

High CPU % from Windows guest on first start up after cloning from template

I created a template on our Proxmox server (3.1-21) and have been using it to clone new Windows 7 32-bit VMs as needed. Today I went to clone a new VM from my template, and its CPU went up to 100-105% and the VM just kind of froze. I wasn't even able to use the SPICE viewer to see what was going on. Since the CPU would not drop under 100%, I stopped the newly created VM and destroyed it. I did this several times with the same results. I had downtime scheduled this afternoon to replace the CD drive in the server, so I shut it down, changed out the drive, started it back up, and tried again. This time I was able to clone a new VM from my template, start it up, run through startup, and everything was fine.

My question is: can anyone see how this might be happening, and why did a reboot solve it?

OVS (Open vSwitch) howto for the GUI

Version 3.2. Hi, is there any starting-point documentation from Proxmox for the new OVS? I can't get my head around it, and I'm surely missing something. I've been through the general Open vSwitch documentation, and from the command line I was able to replace my old vmbr0 with an OVS bridge "vmbr1" and add internal ports. However, those changes do not get reflected in the web GUI, and performing any action in the GUI results in: "Parameter verification failed. (400) gateway: Default gateway already exists on interface 'vmbr1'." What am I missing?
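The GUI builds its network view from /etc/network/interfaces, so changes made live with ovs-vsctl will never show up there, and PVE refuses a second default gateway, hence the 400 error if another stanza still declares one. A hedged sketch of the ifupdown syntax the Debian OVS integration expects (addresses are placeholders; clear the gateway from the old vmbr0 stanza first):

Code:

# /etc/network/interfaces -- enslave eth0 to an OVS bridge
allow-vmbr1 eth0
iface eth0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1

allow-ovs vmbr1
iface vmbr1 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1    # only one stanza in the whole file may carry a gateway
    ovs_type OVSBridge
    ovs_ports eth0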

Proxmox VE 3.2 Installation Failure

I have attempted to install Proxmox VE 3.2 multiple times onto a Kingston V series SSD. In each case the installation would reach the point of "making system bootable", at which point it would indicate that the install had failed. I examined the verbose messages and saw the following errors.

1. cannot touch '/target/etc/bacula/do_not_run': No such file or directory
2. failed to run command 'a2ensite': No such file or directory
3. /usr/sbin/grub-setup: error: unable to identify a filesystem in hd0; safety check can't b
unable to install the boot loader
4. umount: /target: device is busy


I also have two RAID 5 file systems hanging off an Adaptec 6805 controller that are meant for data storage. The Proxmox system sits solely on the Kingston SSD.

Does anyone have any ideas as to the cause or a possible solution?

Thank you!

CRITICAL - Move Disk Issues

Hi,

I have NFS and iSCSI storage. I've slowly been moving off of iSCSI onto the NFS (better bandwidth to storage, better speed).

Last night, something weird happened. Basically, all my storage temporarily disconnected, both iSCSI and NFS. I'm suspecting a switch issue.

The problem is, I can no longer use the GUI to move a disk. The move-disk dialog comes up, but target storage and format remain greyed out. I believe it's an issue with the iSCSI, and I would really like to move data off of it. The reason I think it's an iSCSI issue is that migrating a VM takes forever on iSCSI (waiting to actually start, and then waiting to start the VM on the new server before migrating), while on NFS it's immediate.

I'm running 3.2. I'd like to try a command line move, but I'm not sure of the format of the command.

So, two questions:

1. Where can I look to see why the GUI is greyed out?
2. What's the format of the command-line move (qm move?) See the sketch below.
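A minimal sketch of the command-line equivalent, assuming the qm move_disk subcommand shipped with qemu-server in PVE 3.x (VMID, disk, and storage names are placeholders):

Code:

# move virtio0 of VM 101 to the NFS storage, converting to qcow2,
# and delete the source volume on the iSCSI storage once the copy succeeds
qm move_disk 101 virtio0 nfs-storage --format qcow2 --delete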

Gerald


PS: interestingly, syslog is showing me a lot of "kernel: sd 6:0:0:0: [sde] Spinning up disk..." on all three of my Proxmox servers. Is that an iSCSI issue? This machine natively has only sda and sdb.

Updated 5-node cluster to 3.2 and lost one of the nodes

Hi,

I just upgraded our 5-node cluster to 3.2, and now one of the nodes can't join the cluster. It says "waiting for quorum" and times out every time. The nodes are kvm44, kvm45, kvm46, kvm47 and kvm48. I shut down all the VMs, then updated and rebooted the nodes one at a time. I started with kvm48, then kvm47, with success. But after I updated and rebooted kvm46, it could not join the cluster. I continued to update the others, also with success.

How can I rejoin kvm46 to the cluster?
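A quorum timeout after an upgrade is very often a multicast or cluster-stack problem rather than a package one. A hedged sketch of first checks on kvm46, using the cman-based stack of PVE 3.x:

Code:

# restart the cluster stack on the failing node
/etc/init.d/cman restart
/etc/init.d/pve-cluster restart
pvecm status        # does kvm46 see the other members now?

# verify multicast works between kvm46 and a healthy node
# (run omping on both hosts at the same time)
omping kvm44 kvm46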

Here are the pveversion -v outputs:

kvm44
Code:

root@kvm44:~# pveversion -v
proxmox-ve-2.6.32: 3.2-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.2-1 (running version: 3.2-1/1933730b)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-25-pve: 2.6.32-113
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-14
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

kvm45
Code:

root@kvm45:~# pveversion -v
proxmox-ve-2.6.32: 3.2-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.2-1 (running version: 3.2-1/1933730b)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-25-pve: 2.6.32-113
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-14
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

kvm46:
Code:

root@kvm46:~# pveversion -v
proxmox-ve-2.6.32: 3.2-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.2-1 (running version: 3.2-1/1933730b)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-25-pve: 2.6.32-113
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-14-pve: 2.6.32-74
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-14
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

kvm47:
Code:

root@kvm47:~# pveversion -v
proxmox-ve-2.6.32: 3.2-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.2-1 (running version: 3.2-1/1933730b)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-25-pve: 2.6.32-113
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-14-pve: 2.6.32-74
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-14
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

kvm48:
Code:

root@kvm48:~# pveversion -v
proxmox-ve-2.6.32: 3.2-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.2-1 (running version: 3.2-1/1933730b)
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-24-pve: 2.6.32-111
pve-kernel-2.6.32-25-pve: 2.6.32-113
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-14
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

3.2 update breaks guest IPv6 networking when using VLANs on the same bridge

Hi,

We use our KVM cluster for both DMZ servers and local servers. The local servers use tagged VLANs on the bridge. Our topology is like the attached image.
On every Proxmox host the VMs use the vmbr1 bridge. The DMZ VMs are not configured with any VLANs, so their traffic goes through the DMZ switch -> firewall. The LAN server VMs are configured with VLANs on vmbr1, so their traffic goes through the DMZ switch -> LAN backbone. On the DMZ switch, all local VLANs are tagged on the ports connected to the Proxmox hosts' vmbr1 physical interface (eth0) and on the port connected to the backbone switch.

It was working without any issues until I upgraded to 3.2. After the upgrade, IPv4 traffic runs without issue, but IPv6 traffic is screwed up.

Code:

root@webserver-new:~# route -6
Kernel IPv6 routing table
Destination                    Next Hop                  Flag Met Ref Use If
xxx:101::/64          ::                        UAe  256 0    0 eth0
xxx:10a::/64          ::                        UAe  256 0    47 eth0
xxx:10b::/64          ::                        UAe  256 0    63 eth0
xxx:10c::/64          ::                        UAe  256 0  132 eth0
xxx:10d::/64          ::                        UAe  256 0    58 eth0
xxx:10e::/64          ::                        UAe  256 0    47 eth0
xxx:10f::/64          ::                        UAe  256 0    59 eth0
xxx:110::/64          ::                        UAe  256 0    48 eth0
xxx:111::/64          ::                        UAe  256 0    83 eth0
xxx:112::/64          ::                        UAe  256 0    48 eth0
xxx:113::/64          ::                        UAe  256 0    69 eth0
xxx:114::/64          ::                        UAe  256 0  840 eth0
xxx:115::/64          ::                        UAe  256 0    69 eth0
xxx:11a::/64          ::                        UAe  256 0    0 eth0
xxx:121::/64          ::                        UAe  256 0    46 eth0
xxx:252::/64          ::                        U    256 0    1 eth0
fe80::/64                      ::                        U    256 0    0 eth0
::/0                          xxx:252::1        UG  1  0  1659 eth0

This is the output from one of the DMZ KVM guests. As you can see, it treats all the LAN IPv6 blocks as neighbours. So when I try to connect to our web server via its IPv6 address from a LAN PC (xxx:10a::/64), the traffic goes through our firewall (xxx:252::1) as expected, but the KVM guest doesn't send the reply via its default gateway, as it thinks xxx:10a::/64 is its neighbour. So the IPv6 traffic from LAN to DMZ and from DMZ to LAN is screwed up.

I don't understand why the Linux bridge vmbr1 forwards tagged local VLAN traffic to guest VMs that have no VLAN config.

Any advice?
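One hedged stopgap while the root cause is investigated: since the DMZ guests should never see 802.1Q frames at all, tagged traffic can be dropped before it reaches their tap interfaces. This assumes ebtables is usable on the 2.6.32 PVE kernel and that the usual tapVMIDiN naming applies; a per-VM rule with the exact tap name would be safer than the wildcard:

Code:

# drop VLAN-tagged frames being forwarded out to VM tap interfaces
ebtables -A FORWARD -p 802_1Q -o tap+ -j DROP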
[Attachment: network topology image]

Ceph on FibreChannel Storage

Hello!

Firstly, I'd like to say "thank you" for your awesome product. I'm new to Proxmox and don't use it in production yet (but I plan to). As you mentioned in the 3.2 release notes, there is now Ceph in Proxmox. I watched the video tutorial and read the manual on how to easily create it. But both the video and the text manual describe how to create Ceph storage from separate drives: create a monitor on every node, then create an OSD on every node, and so on, so that the result is one big storage pool consisting of the nodes' drives. But is it possible to use Ceph the way LVM is used with SAN storage: one shared block device for all nodes? Let me explain what I mean: I have three nodes and one HP storage array, attached to each node through a FibreChannel switch. Currently I use LVM to store the VMs' raw disk images. Is it possible to use Ceph in this case, and not create an OSD on every node, but have one on the FC-shared storage? Thanks.

TUN/TAP Switch for OpenVZ

Proxmox has no button to enable/disable TUN/TAP for OpenVZ containers. Other OpenVZ panels have this important feature for VPS selling.
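Until such a button exists, a sketch of the usual manual route on the host (CTID 101 is a placeholder):

Code:

# grant the container the TUN device and the net_admin capability
vzctl set 101 --devices c:10:200:rw --save
vzctl set 101 --capability net_admin:on --save

# create the device node inside the container
vzctl exec 101 mkdir -p /dev/net
vzctl exec 101 mknod /dev/net/tun c 10 200
vzctl exec 101 chmod 600 /dev/net/tun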

Support for software raid?

Hello, I'm doing a virtualization project and I'm using this OS for it. When I tried to use a software RAID, the installer didn't recognize it and destroyed the whole RAID. Do you know of anything I can do here?
Thank you!
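The bare-metal installer does not support mdadm software RAID, so it overwrites what it finds. The usual workaround is the "Install Proxmox VE on Debian Wheezy" wiki route: install plain Debian on the md RAID first, then add PVE on top. A sketch (package names as of PVE 3.x; double-check against the wiki):

Code:

# after installing Debian Wheezy on your md RAID:
echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" \
    > /etc/apt/sources.list.d/pve.list
wget -O- http://download.proxmox.com/debian/key.asc | apt-key add -
apt-get update && apt-get dist-upgrade

# install the PVE kernel and stack
apt-get install pve-firmware pve-kernel-2.6.32-27-pve
apt-get install proxmox-ve-2.6.32 ntp ssh lvm2 postfix \
    ksm-control-daemon vzprocps open-iscsi bootlogd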

Unable to connect with SPICE (SSL error ?)

Hi

I'm running the latest version of Proxmox (3.2).
For the first time, I tried to use SPICE to access my VMs (one KVM VM with Win7 64-bit and some OpenVZ containers with Debian 7 32/64-bit).

I installed virt-viewer on my computer and the SPICE guest tools in Win7, and attached the .vv file to remote-viewer, but it fails every time I try to use it.
I get a message like "Connecting to remote graphic..." in remote-viewer, and then it says "timeout".

I get this message in proxmox:
Quote:

((null):93727): Spice-Warning **: reds.c:2799:reds_handle_ssl_accept: SSL_accept failed, error=5
listening on '127.0.0.1:61003' (TLS)
connection timeout - stopping server
TASK OK
- For the Proxmox web interface I use a self-signed certificate (but I think SPICE uses its own certificate).
- I don't have any drop rules in iptables.

=> Any idea ?
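Two hedged first checks, assuming the default spiceproxy port 3128 that PVE 3.x uses (hostname is a placeholder):

Code:

# on the node: confirm spiceproxy is listening
netstat -tlnp | grep 3128

# from the client: does a TLS handshake against the proxy complete?
# a certificate error here would match the SSL_accept failure in the task log
openssl s_client -connect pve-node.example.com:3128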

Quote:

pveversion :
proxmox-ve-2.6.32: 3.2-121 (running kernel: 2.6.32-23-pve)
pve-manager: 3.2-1 (running version: 3.2-1/1933730b)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-24-pve: 2.6.32-111
pve-kernel-2.6.32-25-pve: 2.6.32-113
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-18-pve: 2.6.32-88
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-14
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

very newbie question - slave / master system?

Hello all,
I am considering installing Proxmox on a new server that I am setting up, which would be used to sell VPSes to my clients. So far I've been using SolusVM, and I'm considering Proxmox because of its promising features and WHMCS addons. Here's my question:

Does Proxmox have to be installed on the same server that will host the OpenVZ VPSes, or is it better to install it on a separate server?

Highly appreciated.
Thanks.

[Feature Request] Node notes/description field

This is something that has been bugging us a bit:

VMs and containers have this nice "Notes" free-text field in the Summary tab that lets us organize and describe details about VMs. This has worked well. However, since we've got a cluster with a lot of nodes of various kinds (usage, hardware configuration, software configuration), it would be extremely useful to have the same "Notes" feature for PVE nodes on their Summary page, to record information about them that may not be obvious or exposed.

I'm not sure how other PVE users manage their node information, but having everything in the PVE interface would certainly be convenient and easy to maintain.

Correct Nic Configuration for WAN and LAN with Firewall

Hi,
I want to use pfSense as a firewall and for my WAN PPPoE/NAT configuration. Currently I have two NICs configured in Proxmox: vmbr0 for the LAN (192.x.x.x) and vmbr1 (10.x.x.x) for the WAN, both as bridged networks. In pfSense I have two NICs as bridges too: net0 -> vmbr0 and net1 -> vmbr1. My question is about security and the correct configuration of the NICs: which NIC should I configure for NAT, and when, if I want to activate it in pfSense?
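A common layout, sketched under the assumption that eth0 faces the LAN and eth1 the WAN: give the Proxmox host an address only on the LAN bridge and leave the WAN bridge address-less, so only pfSense (which does the PPPoE/NAT) touches the untrusted side:

Code:

# /etc/network/interfaces on the host (addresses are placeholders)
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

# WAN bridge: no host IP, so the hypervisor itself is not exposed
auto vmbr1
iface vmbr1 inet manual
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0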

Resizing root partition and giving proxmox more space

Hello Proxmox experts. I have an issue with my dedicated server running Proxmox. The provider gave us a 128 GB SSD, but only 58 GB is assigned as storage in Proxmox. How can I assign more storage, since about 50% of my SSD is currently unused?
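Assuming the stock PVE layout (LVM volume group pve, data LV mounted at /var/lib/vz on ext3/4), a hedged sketch of growing local storage when the volume group still has free extents:

Code:

# check for free space in the volume group first
vgs
lvs

# grow the data LV by 50G and resize the filesystem online
lvextend -L +50G /dev/pve/data
resize2fs /dev/pve/data

# if vgs shows no free extents, the underlying partition must be grown
# first (parted/fdisk), then: pvresize /dev/sdaX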

Configure interfaces file with 3 public IPs

Hi
I have a Proxmox server running. I have installed a light Ubuntu on my main public IP; this IP 1 I only use for accessing Ubuntu and editing files like interfaces etc.
Then I have a second IP: I have made a virtual machine running a web server on IP 2.
I have just bought a third public IP on which I want to set up a new web server (virtual machine), IP 3.
I edited the interfaces file on IP 1 and added some extra lines for a vmbr1 stanza that I thought would give IP 3 access to the internet, or vice versa.
Can someone tell me why IP 3 does not work and what basic thing I have misunderstood?

auto lo
iface lo inet loopback


# device: eth0
auto eth0
iface eth0 inet manual
    address MAIN IP
    netmask 255.255.255.192
    broadcast 188.40.71.127
    gateway 188.40.71.65
    pointopoint 188.40.71.65
    post-up mii-tool -F 100baseTx-FD eth0

auto vmbr0
iface vmbr0 inet static
    address IP2
    netmask 255.255.255.192
    broadcast 188.40.71.127
    gateway 188.40.71.65
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    up ip route add IP2/32 dev vmbr0

auto vmbr1
iface vmbr1 inet static
    address IP3
    netmask 255.255.255.192
    broadcast 188.40.71.127
    gateway 188.40.71.65
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    up ip route add IP3/32 dev vmbr1

vmbr0 = IP2 works as before, and I can ping IP3, but when I run the install and setup on a new virtual machine using vmbr1 (IP3), I am not able to do so.


error
TASK ERROR: command '/bin/nc -l -p 5900 -w 10 -c '/usr/sbin/qm vncproxy 101 2>/dev/null'' failed: exit code 1


Thx

Ole
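One thing stands out in the config above: eth0 is listed under bridge_ports of both vmbr0 and vmbr1, but a physical NIC can be enslaved to only one Linux bridge, and only one stanza may carry the default gateway. A hedged sketch of a single-bridge routed layout instead (this assumes a Hetzner-style pointopoint setup; IP2/IP3 stay placeholders):

Code:

auto eth0
iface eth0 inet static
    address MAIN IP
    netmask 255.255.255.255
    pointopoint 188.40.71.65
    gateway 188.40.71.65

auto vmbr0
iface vmbr0 inet static
    address MAIN IP
    netmask 255.255.255.255
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    # route every extra IP to its VM over the same bridge
    up ip route add IP2/32 dev vmbr0
    up ip route add IP3/32 dev vmbr0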

Configure vmbr for the first time, help!

Hi, I installed Proxmox 3 for the first time. The installation worked and I can view the web console correctly, but now I have to install two machines, both of which must be reachable from the outside, on which to install the web server and the DB server. I read in the guides that I have to configure the network interfaces (vmbr), but I don't know how. The Proxmox server already has a fixed IP address; to have two more virtual machines, do I need two more fixed IP addresses? Otherwise, how can I connect to the virtual machines via SSH or the web?

Thanks
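With the default bridged model no extra bridges are needed: attach both VMs to vmbr0 and configure each extra public IP inside its guest. This assumes the provider routes the additional IPs to the same physical port and allows multiple MAC addresses; otherwise a routed setup is required. A sketch of a guest's config (placeholder addresses):

Code:

# inside a Debian guest: /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 203.0.113.11    # the VM's own public IP
    netmask 255.255.255.0
    gateway 203.0.113.1     # the provider's gateway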

Need help on Proxmox 3.0-23: VM accidentally destroyed

Hi Proxmox Community,

I'm having a problem restoring a destroyed VM on my Proxmox 3.0. The VM was stored on a separate server (NAS), which is mounted as NFS on the Proxmox host.

I have two Proxmox servers, version 1.9 and version 3.0, and a NAS.

The NAS is mounted as NFS on Proxmox 1.9 and also on 3.0.

I have lots of VMs on both Proxmox 1.9 and Proxmox 3.0 that are stored on the NAS (mounted as NFS) on these two Proxmox servers.

Now, the problem: three VMs that were running on Proxmox 3.0 were accidentally destroyed from Proxmox 1.9. I think Proxmox 1.9 had the same VMIDs as 3.0, but these two Proxmox servers are not clustered.

Those VMs still exist on Proxmox 3.0 but do not run, because it cannot locate the vmdk files.

When I look at the NAS directory, the VMID folder is empty.

Is there any possible way to restore the VMs destroyed from Proxmox 1.9?


Please help.

Thank you very much.
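If vzdump backups of those VMIDs exist on the NAS, a hedged restore sketch (paths, VMIDs, and storage names are placeholders):

Code:

# look for backups of the destroyed VMs
ls /mnt/pve/<nas-storage>/dump/ | grep qemu

# restore one, recreating its disks on the chosen storage
qmrestore /mnt/pve/<nas-storage>/dump/vzdump-qemu-101-<timestamp>.vma.lzo 101 --storage <target>

Without backups, files deleted over NFS are generally unrecoverable short of filesystem-level forensics on the NAS itself.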

QEMU Template questions

I have /var/lib/vz/template/qemu/, but when I try putting a .qcow2 file in there, I don't see it in the Proxmox web interface. What is that directory for? Do I need a different file type? Can I use Stacklet templates with KVM, and if so, how? I tried googling a lot of this but can't seem to locate any answers; any help is appreciated.
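A .qcow2 is a disk image rather than a template, which is why that directory ignores it. One hedged way to use such an image on PVE 3.x is to create an empty VM and attach the image as its disk (VMID 105 and file names are placeholders):

Code:

# create a shell VM, then put the image where local storage expects it
qm create 105 --name imported-vm --memory 1024 --net0 virtio,bridge=vmbr0
mkdir -p /var/lib/vz/images/105
cp stacklet-image.qcow2 /var/lib/vz/images/105/vm-105-disk-1.qcow2

# attach it and make it the boot disk
qm set 105 --virtio0 local:105/vm-105-disk-1.qcow2
qm set 105 --boot c --bootdisk virtio0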

PVE 3.2 ZFS plugin with Zfs-on-Linux

Hi,

I have a Debian server with ZFS on Linux and IET iSCSI. I followed the wiki; here is storage.cfg:

Code:

zfs: linux
        blocksize 4k
        target iqn.2001-04.tr.xxx.xxx:elastics
        pool elastics
        iscsiprovider iet
        portal xxx.xxx.xxx.74
        content images

And zfs list on the storage server:

Code:

NAME                     USED  AVAIL  REFER  MOUNTPOINT
elastics                 782G  8.16T   318G  /elastics
elastics/backup          286G  8.16T  286G  /backup
elastics/logs          7.71G  8.16T  7.71G  /logs
elastics/mrtg          10.3M  8.16T  10.3M  /mrtg
elastics/vm-114-disk-1  34.0G  8.19T    72K  -
elastics/vm-114-disk-2  34.0G  8.19T    72K  -
elastics/vm-114-disk-3  34.0G  8.19T    72K  -
elastics/vm-114-disk-4  34.0G  8.19T    72K  -
elastics/vm-114-disk-5  34.0G  8.19T    72K  -

As you can see, when I try to add a disk on this ZFS storage it creates the zvol but gives an error about the iSCSI target: "No such file or directory. at /usr/share/perl5/PVE/Storage/LunCmd/Iet.pm line 376. (500)"
Code:

Mar 13 09:38:15 kvm47 pvedaemon[4411]: <root@pam> update VM 114: -virtio1 linux:32
Mar 13 09:38:16 kvm47 pvedaemon[4411]: WARNING: Use of uninitialized value $tid in concatenation (.) or string at /usr/share/perl5/PVE/Storage/LunCmd/Iet.pm line 371.

And on storage server here is the error logs:
Code:

Mar 13 09:39:18 graylog2 kernel: [2504456.932896]  zd80: unknown partition table
Mar 13 09:39:19 graylog2 ietd: unable to create logical unit 0 in target 0: 2

So how can I solve this issue? As I understand it, the plugin can't create the iSCSI target on the storage server.
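The uninitialized $tid warning suggests the plugin could not read or update the IET configuration on the storage box. Two hedged checks (the plugin drives IET over ssh as root, with the key path the PVE 3.x wiki prescribes; adjust to your setup):

Code:

# from the PVE node: password-less root ssh to the portal must work
ssh -i /etc/pve/priv/zfs/xxx.xxx.xxx.74_id_rsa root@xxx.xxx.xxx.74 \
    cat /proc/net/iet/volume

# on the storage server: is the target from storage.cfg actually defined?
grep elastics /etc/iet/ietd.conf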

Problems with HA

Good afternoon.
Our company has a cluster based on Proxmox, with purchased licenses for six nodes, but we have always had problems with HA: periodically, virtual machines stop migrating between nodes. When we try to migrate from the console, we get this:



Quote:

root@cluster-1-1:/var/log# qm migrate 185 cluster-1-3 --online
Executing HA migrate for VM 185 to node cluster-1-3
Trying to migrate pvevm:185 to cluster-1-3...Target node dead / nonexistent
command 'clusvcadm -M pvevm:185 -m cluster-1-3' failed: exit code 244
root@cluster-1-1:/var/log#
Quote:

root@cluster-1-1:/var/log# clustat |more
Cluster Status for freecluster01 @ Thu Mar 13 11:34:30 2014
Member Status: Quorate

Member Name ID Status
------ ---- ---- ------
cluster-1-1 1 Online, Local, rgmanager
cluster-1-2 2 Online, rgmanager
cluster-1-3 3 Online
cluster-1-4 4 Online, rgmanager
cluster-1-5 5 Online, rgmanager
cluster-1-6 6 Online, rgmanager

Quote:

root@cluster-1-2:~# clustat |more
Cluster Status for freecluster01 @ Thu Mar 13 11:34:45 2014
Member Status: Quorate

Member Name ID Status
------ ---- ---- ------
cluster-1-1 1 Online, rgmanager
cluster-1-2 2 Online, Local, rgmanager
cluster-1-3 3 Online, rgmanager
cluster-1-4 4 Online, rgmanager
cluster-1-5 5 Online, rgmanager
cluster-1-6 6 Online, rgmanager

Service Name Owner (Last) State
Quote:

root@cluster-1-3:~# clustat |more
Cluster Status for freecluster01 @ Thu Mar 13 11:34:03 2014
Member Status: Quorate

Member Name ID Status
------ ---- ---- ------
cluster-1-1 1 Online, rgmanager
cluster-1-2 2 Online, rgmanager
cluster-1-3 3 Online, Local, rgmanager
cluster-1-4 4 Online, rgmanager
cluster-1-5 5 Online, rgmanager
cluster-1-6 6 Online, rgmanager

Service Name Owner (Last) State
Quote:

As you can see here, for some reason node 1-1 does not see rgmanager on node 1-3, although all the other nodes see it. Please advise how we can diagnose this; we would very much like our HA problems to end and the cluster to come alive, as we plan to purchase six more licenses.

Unfortunately, we could not find any error reports in the log files.




Quote:

root@cluster-1-1:/var/log# group_tool ls
fence domain
member count 6
victim count 0
victim now 0
master nodeid 1
wait state none
members 1 2 3 4 5 6

dlm lockspaces
name rgmanager
id 0x5231f3eb
flags 0x00000000
change member 6 joined 1 remove 0 failed 0 seq 13,13
members 1 2 3 4 5 6

name storage02
id 0x72bc4bef
flags 0x00000008 fs_reg
change member 6 joined 1 remove 0 failed 0 seq 5,5
members 1 2 3 4 5 6

name storage01
id 0x5991182c
flags 0x00000008 fs_reg
change member 6 joined 1 remove 0 failed 0 seq 5,5
members 1 2 3 4 5 6

name clvmd
id 0x4104eefa
flags 0x00000000
change member 6 joined 1 remove 0 failed 0 seq 9,9
members 1 2 3 4 5 6

gfs mountgroups
name storage02
id 0xa14c9488
flags 0x00000008 mounted
change member 6 joined 1 remove 0 failed 0 seq 5,5
members 1 2 3 4 5 6

name storage01
id 0x8a61c74b
flags 0x00000008 mounted
change member 6 joined 1 remove 0 failed 0 seq 5,5
members 1 2 3 4 5 6

root@cluster-1-1:/var/log#
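Since clustat on cluster-1-1 shows cluster-1-3 as Online but without the rgmanager flag, a hedged first step is to restart the resource-group manager on that node and re-check membership (redhat-cluster commands as shipped with PVE 3.x):

Code:

# on cluster-1-3
/etc/init.d/rgmanager restart
clustat             # the node should now list rgmanager next to itself

# confirm it has joined the fence domain and the DLM lockspaces
fence_tool ls
group_tool ls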
