Channel: Proxmox Support Forum

Proxmox VE is based on Debian, but vendor software for monitoring RAID is not installable

Hi all,

I am researching whether Proxmox VE could replace my HA farm of Xen hypervisors running on SLES10.
The SLES10 servers run on HP server hardware, and HP offers RPMs for SLES (and Red Hat Enterprise Linux) that monitor, for instance, RAID health.
I depend on this software because my Zabbix monitoring checks it for the RAID health status. As soon as a RAID configuration suffers a broken disk and gets degraded, Zabbix informs me of that.
Obviously, when turning to Proxmox and thus to a Debian-based setup, this software is not going to run.

How do you guys handle this? Do you buy third-party RAID controllers that offer Debian software? Or do you simply not monitor your VE hosts?

Looking forward to your replies!

Cheers,

BC
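A common pattern for this kind of check, sketched below on the assumption that some controller CLI (hpacucli is used here purely as an example name) can be made to run on the Debian host, is to wrap it in a Zabbix user parameter and alert on the result:

Code:

# hypothetical snippet for /etc/zabbix/zabbix_agentd.conf; hpacucli is only an example CLI
# and will usually need sudo rights for the zabbix user
UserParameter=raid.health,sudo /usr/sbin/hpacucli ctrl all show status | grep -c 'Status: OK'

Zabbix can then trigger an alert whenever the count of "Status: OK" lines drops below the expected number.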

Proxmox 3.1 - Can no longer start a VM

Hi all,

I'm having quite an emergency here, as no VMs will start in a production environment. I joined a new node to a cluster; after doing so, the web interface would not start. I shut down all VMs on the original node and rebooted it, and now no VMs will start at all. The error when doing "qm start VMID" is

Code:

root@node0:~# qm start 103
Failed to start VNC server on `unix:/var/run/qemu-server/103.vnc,x509,password': No certificate path provided
start failed: command '/usr/bin/kvm -id 103 -chardev 'socket,id=qmp,path=/var/run/qemu-server/103.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/103.vnc,x509,password -pidfile /var/run/qemu-server/103.pid -daemonize -name SBS-2011 -smp 'sockets=2,cores=2' -nodefaults -boot 'menu=on' -vga std -no-hpet -cpu 'kvm64,hv_spinlocks=0xffff,hv_relaxed,+x2apic,+sep' -k en-us -m 6144 -cpuunits 1000 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -drive 'if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/dev/vg_0/vm-103-disk-1,if=none,id=drive-ide0,aio=native,cache=none' -device 'ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap103i0,script=/var/lib/qemu-server/pve-bridge' -device 'e1000,mac=6A:B2:06:B7:CB:D8,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -netdev 'type=tap,id=net1,ifname=tap103i1,script=/var/lib/qemu-server/pve-bridge' -device 'e1000,mac=E6:FD:08:BD:34:BA,netdev=net1,bus=pci.0,addr=0x13,id=net1,bootindex=301' -rtc 'driftfix=slew,base=localtime' -global 'kvm-pit.lost_tick_policy=discard'' failed: exit code 1

root@node0:~# pveversion -v
proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2



Can anyone lend a hand?



Regards,
Terry
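Proxmox keeps the node certificate under /etc/pve/local, and this error is commonly seen when that certificate is missing or when /etc/pve has gone read-only after a cluster change. A first diagnostic step (a sketch, not a guaranteed fix) is to confirm that the cluster is still quorate and that the certificate files are actually present:

Code:

pvecm status                                                 # /etc/pve goes read-only without quorum
ls -l /etc/pve/local/pve-ssl.pem /etc/pve/local/pve-ssl.key  # node certificate used by the web interface/VNC
ls -l /etc/pve/pve-root-ca.pem                               # cluster CA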

Not getting internet access within VM

So, I finally got my box up and running; I had just forgotten to use the secure protocol (https).

I have managed to install a new VM on vmbr0; however, the VM is not getting internet access. How can I fix this?

Connection refused when connecting to itself

I have installed Proxmox 3.1 and an OpenVZ container with Apache. I have redirected traffic on port 80 to that instance with
Code:

iptables -t nat -A PREROUTING -p TCP -i eth0 --dport 80 -j DNAT --to 10.0.0.3:80
When I am logged in on the host or in the container and execute telnet 1.1.1.1 80, where 1.1.1.1 is the public IP assigned to the host, I get "Connection refused".

If I am logged in from another server and execute the same command, the connection is established, so the problem only occurs when the connection is from the host to itself. The same thing happens with other ports too, but an SSH connection tested with
telnet 1.1.1.1 22 is OK.

Are there some Proxmox settings that could cause this? Is there some software (a firewall, for example) in the default Proxmox installation that could cause this?
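One detail worth noting about the rule above (standard iptables behaviour, not anything Proxmox-specific): packets generated on the host itself never pass through the PREROUTING chain, so that DNAT rule simply does not match them. The usual workaround is a matching rule in the nat OUTPUT chain, sketched here with the same example addresses:

Code:

# DNAT for locally generated traffic is done in the OUTPUT chain, not PREROUTING
iptables -t nat -A OUTPUT -p tcp -d 1.1.1.1 --dport 80 -j DNAT --to 10.0.0.3:80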


Advice after node hardware failure - how to re-add server in cluster after reinstall

Hi all,

We had two disks in a server fail in a row, and we lost the RAID (RAID 10). I replaced the disks and reinstalled Proxmox on the server, with the same IP and hostname (srv-virt2) as before; this is perhaps not the best option...

There are three nodes in the cluster (srv-virt1, srv-virt2 and srv-virt3), so the quorum is 2, and we have it (still two nodes). The failed node still appears in the cluster:
Code:

# pvecm nodes
Node  Sts  Inc  Joined              Name
  1  M    308  2013-10-20 17:29:19  srv-virt1
  2  X    320                        srv-virt2
  3  M  14628  2013-12-07 17:35:29  srv-virt3

I can no longer delete the VMs that were on the failed node. I have VM backups. I read the wiki on the PVE 2.0 cluster; it seems I have to be careful. See:
https://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster

There seem to be two possible options: remove the node, which permanently removes it, and re-add the server as a new node (so with a new name?). But I cannot delete the ghost VMs. What will happen to these VM IDs? Is it possible to force-delete the node?

The second option would be "Re-installing a cluster node", but even though I have a backup of /etc, I don't have a backup of /var/lib/pve-cluster or /root/.ssh, so it does not seem a viable option.

What would be the best procedure to be sure I end up with a sane cluster?

Proxmox version is 3.1:
Code:

# pveversion
pve-manager/3.1-20/c3aa0f1a (running kernel: 2.6.32-26-pve)

Thanks for any advice.

Alain
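For reference, the first option described above boils down to removing the dead member and then joining the reinstalled machine again; a rough sketch (hostnames as in the post, the VMID is a placeholder, and the leftover configs of the ghost VMs live under the old node's directory in /etc/pve):

Code:

# on a surviving node (srv-virt1 or srv-virt3): remove the dead member
pvecm delnode srv-virt2

# the ghost VM configs can then be listed, deleted, or moved to a live node
ls /etc/pve/nodes/srv-virt2/qemu-server/
mv /etc/pve/nodes/srv-virt2/qemu-server/<vmid>.conf /etc/pve/nodes/srv-virt1/qemu-server/

# on the freshly reinstalled server (a new hostname is the safer choice): join the cluster
pvecm add <IP-of-srv-virt1>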

Support for old versions and custom kernels

Hello Proxmox Community!

We have been using Proxmox for 3 years and we are very satisfied with this virtualization environment. We only have some worries about support for old versions.
Let me explain better. We are still using Proxmox VE 2.1 on Debian Squeeze and we know that the repos are still online (http://download.proxmox.com/debian/d.../binary-amd64/), but we need some reassurance that this repo will remain online for at least another year, in case we eventually need to rebuild or install new cluster servers.

How long will you keep those packages online? Should I create a mirror repo for us?

Question no. 2:

We are seriously considering subscribing to the Proxmox support service for our clusters. Unfortunately, we had to recompile the pve-kernel to include OCFS2 support, so we have a custom kernel that needs to be recompiled on every upgrade. This is the only viable option for using an iSCSI SAN without involving LVM, which is not very efficient in its use of free space (no sparse files, only fixed partitions, free space inside the VM marked as used, etc.).
By the way, we have been running OCFS2 for 3 years without any issues and we are very happy with it, even though Red Hat has dropped support for this clustered FS.

Can you provide the support service to us even with this custom kernel?

Running containers not detected correctly

I recently noticed an issue on one of the Proxmox servers where OpenVZ containers running on the host are not recognised by the host node. Basically, vzlist shows them as running, but the Proxmox web interface shows them as offline. I have upgraded to the latest version of Proxmox since I detected the problem, but it did not solve the issue.

Any help with this issue would be appreciated.
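The running/offline state shown in the web interface comes from the PVE status daemon rather than from vzlist directly, so restarting the status services is a common first check (a sketch, not a guaranteed fix):

Code:

service pvestatd restart
service pvedaemon restart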

ZFS plugin works almost perfectly on Netgear ReadyDATA

Hi Proxmox team,

you may be interested to know that I successfully tested the new ZFS plugin (comstar provider, 64k blocksize) on a ReadyDATA 516.
The ReadyDATA storage systems run ReadyDATAOS, but under the hood they are Nexenta-based.

root@Storage01:~# uname -a
SunOS Storage01 5.11 NexentaOS_134f i86pc i386 i86pc Solaris

So I thought it's worth a try.

Here's a list of things that were working out of the box:

- VM creation/deletion with multiple LUNs
- running multiple VMs under IO high load
- Snapshots: creation/deletion/reverting
- cloning
- backup/restore

All in all, I must say it works pretty stably!


There is one flaw, though. The changes that the ZFS plugin makes directly on the storage device are not reflected in the ReadyDATA GUI. I.e., from the GUI you cannot tell that a LUN has been created. You can't even tell that storage space has been reserved.
I contacted Netgear support to find out how this could happen. Actually, I didn't expect them to reply with anything other than "unsupported scenario", but they turned out to be pretty helpful. Here's what they wrote me:

* I can see why Proxmox is pursuing the ZFS plugin (allows them to address a wide variety of ZFS storage products). A CLI-created volume does show up in the UI ("Volume01"), but you may have to follow our reservation scheme for shares and LUNs to show up in the UI. Do you know what command Proxmox is using to create LUNs?
* When the plug-in supports creation of snapshots, we can work with you on a naming scheme so that the snaps show in our UI. In short, you will need to prefix the name with "m_".

Would you possibly be willing to extend the ZFS plugin in order to support the Netgear ReadyDATA devices?
I would happily do the testing or provide any further information you need.



Best,

alex

eth1 LAN vmbr1 configuration while keeping eth0=>vmbr0 alive

Two NICs:
eth0 public IP => vmbr0
eth1 LAN from pfSense with a public IP

The vmbr0 was set up automatically on install

How do I properly set up vmbr1 on eth1 for use with VMs while keeping eth0 open? I have 10 TB of bandwidth allowed on eth0.

I have connectivity, and all NICs are up on both machines.

Thanks, Rick
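A second bridge is declared in /etc/network/interfaces just like the installer-generated vmbr0; a sketch (the LAN address is a placeholder, and the existing vmbr0 stanza stays untouched):

Code:

auto vmbr1
iface vmbr1 inet static
        address  192.168.1.2
        netmask  255.255.255.0
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0

After an ifup vmbr1 (or a reboot), VMs that should sit on the pfSense LAN get their virtual NIC attached to vmbr1 instead of vmbr0; traffic on vmbr0/eth0 is unaffected.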

Simultaneous SPICE Client Connections

Apparently, experimental support for this has been in the SPICE server since spice 0.10:

http://www.spice-space.org/page/Feat...ultipleClients

One just needs to set
export SPICE_DEBUG_ALLOW_MC=1

before launching the qemu vm.

Not sure where to set this on Proxmox; I tried globally via /etc/profile, but that didn't work.

Any suggestions?
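One hypothetical way to get an environment variable in front of the kvm process (a sketch only; it is not an officially supported mechanism and the diversion has to be maintained across package upgrades) is to divert the binary and exec it from a small wrapper:

Code:

# divert the real binary and install a wrapper that exports the flag
dpkg-divert --add --rename --divert /usr/bin/kvm.real /usr/bin/kvm
cat > /usr/bin/kvm <<'EOF'
#!/bin/sh
# enable experimental multi-client SPICE support, then run the real binary
export SPICE_DEBUG_ALLOW_MC=1
exec /usr/bin/kvm.real "$@"
EOF
chmod +x /usr/bin/kvm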

Backup Failed

A few months ago, I started receiving an error message saying 1 or 2 VMs failed to back up 1-3 times a week. The error message is "ERROR: Backup of VM 105 failed - vma_co_write write error - Broken pipe". I found one thread with this problem, and it seemed that updating Proxmox to the latest version would correct it. I was on 3.0. I had a chance last week to shut everything down to upgrade to 3.1, and for the past week my backups have all been running perfectly. This morning when I arrived and checked my email, I found that 1 VM failed to back up with the same error message as before. This VM had been backing up successfully every night before the 3.1 upgrade.

Info about my backup: Backups are going to a locally attached USB hard drive. I added it to fstab as /dev/sdc1 /mnt/usb ext4 defaults 0 0
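A "Broken pipe" from vma_co_write generally means the process consuming the archive went away mid-backup, which with a USB target often points at the disk or its mount dropping out. One small hardening step (a sketch; the UUID shown is a placeholder) is to mount the backup disk by UUID so a renumbered /dev/sdX cannot end up pointing at the wrong device:

Code:

# find the filesystem UUID of the backup disk
blkid /dev/sdc1

# /etc/fstab entry using the UUID instead of the device name
UUID=01234567-89ab-cdef-0123-456789abcdef  /mnt/usb  ext4  defaults  0  0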

Error Log
Quote:

vzdump 105 --quiet 1 --mailto myemails@mydomain.com --mode snapshot --compress lzo --storage backup-5

105: Dec 06 22:30:04 INFO: Starting Backup of VM 105 (qemu)
105: Dec 06 22:30:04 INFO: status = running
105: Dec 06 22:30:05 INFO: update VM 105: -lock backup
105: Dec 06 22:30:06 INFO: backup mode: snapshot
105: Dec 06 22:30:06 INFO: ionice priority: 7
105: Dec 06 22:30:06 INFO: creating archive '/mnt/usb/prox1/dump/vzdump-qemu-105-2013_12_06-22_30_03.vma.lzo'
105: Dec 06 22:30:06 INFO: started backup task 'f22f1069-0d6a-42ab-b09b-b8abec046366'
105: Dec 06 22:30:09 INFO: status: 0% (250019840/375809638400), sparse 0% (89997312), duration 3, 83/53 MB/s
105: Dec 06 22:30:58 INFO: status: 1% (3785424896/375809638400), sparse 0% (164864000), duration 52, 72/70 MB/s
105: Dec 06 22:31:47 INFO: status: 2% (7590903808/375809638400), sparse 0% (207331328), duration 101, 77/76 MB/s
105: Dec 06 22:32:15 INFO: status: 3% (11331895296/375809638400), sparse 0% (310702080), duration 129, 133/129 MB/s
105: Dec 06 22:32:52 INFO: status: 4% (15192752128/375809638400), sparse 0% (346030080), duration 166, 104/103 MB/s
105: Dec 06 22:33:24 INFO: status: 5% (18821414912/375809638400), sparse 0% (377634816), duration 198, 113/112 MB/s
105: Dec 06 22:34:10 INFO: status: 6% (22567714816/375809638400), sparse 0% (408408064), duration 244, 81/80 MB/s
105: Dec 06 22:34:53 INFO: status: 7% (26409959424/375809638400), sparse 0% (442224640), duration 287, 89/88 MB/s
105: Dec 06 22:35:32 INFO: status: 8% (30146428928/375809638400), sparse 0% (478523392), duration 326, 95/94 MB/s
105: Dec 06 22:35:58 INFO: status: 9% (33929953280/375809638400), sparse 0% (522891264), duration 352, 145/143 MB/s
105: Dec 06 22:36:25 INFO: status: 10% (37644795904/375809638400), sparse 0% (538243072), duration 379, 137/137 MB/s
105: Dec 06 22:36:52 INFO: status: 11% (41412460544/375809638400), sparse 0% (590237696), duration 406, 139/137 MB/s
105: Dec 06 22:37:04 INFO: status: 12% (45524647936/375809638400), sparse 1% (4047892480), duration 418, 342/54 MB/s
105: Dec 06 22:37:10 INFO: status: 13% (49427906560/375809638400), sparse 2% (7951151104), duration 424, 650/0 MB/s
105: Dec 06 22:37:15 INFO: status: 14% (52835385344/375809638400), sparse 3% (11358629888), duration 429, 681/0 MB/s
105: Dec 06 22:37:37 INFO: status: 15% (56504877056/375809638400), sparse 3% (12237139968), duration 451, 166/126 MB/s
105: Dec 06 22:38:13 INFO: status: 16% (60152479744/375809638400), sparse 3% (12245917696), duration 487, 101/101 MB/s
105: Dec 06 22:39:25 INFO: status: 17% (63930105856/375809638400), sparse 3% (12271726592), duration 559, 52/52 MB/s
105: Dec 06 22:40:36 INFO: status: 18% (67694755840/375809638400), sparse 3% (12275392512), duration 630, 53/52 MB/s
105: Dec 06 22:41:41 INFO: status: 19% (71419559936/375809638400), sparse 3% (12283990016), duration 695, 57/57 MB/s
105: Dec 06 22:42:47 INFO: status: 20% (75197972480/375809638400), sparse 3% (12284067840), duration 761, 57/57 MB/s
105: Dec 06 22:43:50 INFO: status: 21% (78964326400/375809638400), sparse 3% (12284067840), duration 824, 59/59 MB/s
105: Dec 06 22:44:59 INFO: status: 22% (82699681792/375809638400), sparse 3% (12284067840), duration 893, 54/54 MB/s
105: Dec 06 22:46:08 INFO: status: 23% (86468329472/375809638400), sparse 3% (12284125184), duration 962, 54/54 MB/s
105: Dec 06 22:47:17 INFO: status: 24% (90241630208/375809638400), sparse 3% (12301291520), duration 1031, 54/54 MB/s
105: Dec 06 22:48:50 INFO: status: 25% (93997367296/375809638400), sparse 3% (12331380736), duration 1124, 40/40 MB/s
105: Dec 06 22:49:54 INFO: status: 26% (97736720384/375809638400), sparse 3% (12404240384), duration 1188, 58/57 MB/s
105: Dec 06 22:50:58 INFO: status: 27% (101480726528/375809638400), sparse 3% (12412190720), duration 1252, 58/58 MB/s
105: Dec 06 22:52:05 INFO: status: 28% (105269035008/375809638400), sparse 3% (12415393792), duration 1319, 56/56 MB/s
105: Dec 06 22:53:20 INFO: status: 28% (108476366848/375809638400), sparse 3% (12415586304), duration 1394, 42/42 MB/s
105: Dec 06 22:53:20 ERROR: vma_co_write write error - Broken pipe
105: Dec 06 22:53:20 INFO: aborting backup job
105: Dec 06 22:53:26 ERROR: Backup of VM 105 failed - vma_co_write write error - Broken pipe

Serial over LAN hangs boot until keypress on console

So I'm having a weird issue I have yet to be able to resolve. When redirecting my console over Serial over LAN, my boot process hangs. Below is the output where it hangs, right after some IRQ stuff:

Waiting for /dev to be fully populated...shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
EDAC MC: Ver: 2.1.0 Oct 14 2013
ioatdma: Intel(R) QuickData Technology Driver 4.00
ioatdma 0000:00:16.0: PCI INT A -> GSI 43 (level, low) -> IRQ 43
igb 0000:05:00.0: DCA enabled
igb 0000:05:00.1: DCA enabled
ioatdma 0000:00:16.1: PCI INT B -> GSI 44 (level, low) -> IRQ 44
igb 0000:06:00.0: DCA enabled
igb 0000:06:00.1: DCA enabled
ioatdma 0000:00:16.2: PCI INT C -> GSI 45 (level, low) -> IRQ 45
ioatdma 0000:00:16.3: PCI INT D -> GSI 46 (level, low) -> IRQ 46
ioatdma 0000:00:16.4: PCI INT A -> GSI 43 (level, low) -> IRQ 43
ioatdma 0000:00:16.5: PCI INT B -> GSI 44 (level, low) -> IRQ 44
ioatdma 0000:00:16.6: PCI INT C -> GSI 45 (level, low) -> IRQ 45
ioatdma 0000:00:16.7: PCI INT D -> GSI 46 (level, low) -> IRQ 46
i801_smbus 0000:00:1f.3: PCI INT C -> GSI 18 (level, low) -> IRQ 18
iTCO_vendor_support: vendor-support=0
iTCO_wdt: Intel TCO WatchDog Timer Driver v1.07rh
iTCO_wdt: Found a ICH10R TCO device (Version=2, TCOBASE=0x0860)
iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
input: PC Speaker as /devices/platform/pcspkr/input/input5
done.

^ here it hangs until I hit a key on the local keyboard, then it goes about its business without a problem.

Everything else redirects happily: BIOS, GRUB, and ttyS0 spawns me a shell...

I've just cloned the source for the kernel to see if this patch has been applied and/or to look for other clues
http://marc.info/?l=linux-serial&m=123438692624982

Anyone have any ideas? Anything at all at this point...


dmesg
serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
00:0a: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
00:0b: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A

inittab
t0:123:respawn:/sbin/getty -L ttyS0 115200 vt100

/etc/default/grub
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"

dmidecode
Manufacturer: Supermicro
Product Name: X8DTU-LN4+
Handle 0x0060, DMI type 38, 18 bytes
IPMI Device Information
Interface Type: KCS (Keyboard Control Style)
Specification Version: 2.0
I2C Slave Address: 0x00
NV Storage Device: Not Present
Base Address: 0x0000000000000CA2 (I/O)
Register Spacing: Successive Byte Boundaries

Support for this RAID card?

Hello.

I have a question about this RAID card:
http://www.scsi4me.com/lsi-logic-lsisas3041e-r.html
Does Proxmox support it?

And can anyone help me choose the best RAID controller for Proxmox within ~$100?

I want to use RAID 10 for better I/O performance. Now I use software RAID 1. I/O ~3-6%.

(Sorry for my bad English.)

Best setup for Debian, CentOS, FreeBSD, WinSvr2003R2, WinSvr2008R2

Hello,

I run VMs with these OSes: Debian, CentOS, FreeBSD, WinSvr2003R2, WinSvr2008R2.
Every VM runs under KVM, but I think I am doing something wrong because I get low performance.
Can anyone tell me what the "best" setup is for these OSes?
I want to know:
HDD Bus type
HDD Format
HDD Cache
CPU Type
Memory (fix size or auto)
Network model

Thank you!

(Sorry for my bad English.)
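For orientation, the settings listed above all end up as lines in /etc/pve/qemu-server/<vmid>.conf. A hypothetical sketch for a Linux guest follows (virtio disk and NIC, cache=none, fixed memory); Windows guests typically stay on IDE/e1000 until the signed virtio drivers are installed, after which virtio is usually the faster choice there as well:

Code:

# hypothetical /etc/pve/qemu-server/101.conf for a Linux guest
ostype: l26
sockets: 1
cores: 2
cpu: kvm64
memory: 2048
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
virtio0: local:101/vm-101-disk-1.raw,cache=none
bootdisk: virtio0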

Offsite Disaster Recovery

Hi all,

I have a scenario that I'm starting to plan and architect; however, I want to get someone's input on this, as I couldn't find any information about it on the forum or the mailing list. I think it's a solution that will work, but I'd like to hear how other people might be doing this.

I have the following setup configured:
Live Environment
• 3-node cluster with IPMI fencing and high availability (10Gb Ethernet backend)
• Backend storage: QNAP NAS 2U 879, redundant power, with 10Gb Ethernet backend + RAID 10

DR Environment
• 2-node cluster with IPMI (no high availability) and 2Gb link aggregation (802.3ad) backend
• Backend storage: QNAP NAS 1U 479, redundant power, with 2Gb link aggregation (802.3ad) backend

Firewall
• Kerio Control x2 - the DR firewall is configured with the same rule sets as the main live environment, however the rules are disabled (except for basic ones such as HTTPS, SSH, and RTRR (8899))


I've tested this on an internal network and it seems to work perfectly fine. The scenario I would implement is this: should the live environment ever fail, I would have the last synced data from the live QNAP environment.

I have written a script which copies the contents of the /etc/pve/qemu-server and /etc/pve/openvz folders over the WAN to the DR servers and places the files in the same location. The VMs appear on the powered-down servers and, in a worst-case scenario, can be brought online. I would probably schedule the script to copy the VM config files weekly or even nightly, since they are small, and RTRR can sync on the fly or at a specified time.
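The script itself is not shown in the post, but the copy it describes is essentially a one-way sync of two small directories; a minimal sketch over SSH (the DR hostname is a placeholder, and note that /etc/pve on the receiving side is the pmxcfs FUSE mount, which ignores most file attributes):

Code:

#!/bin/sh
# one-way copy of the VM/CT definitions to a powered-down DR node
rsync -rz --delete /etc/pve/qemu-server/ root@dr-node1:/etc/pve/qemu-server/
rsync -rz --delete /etc/pve/openvz/      root@dr-node1:/etc/pve/openvz/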

I am approaching this with Proxmox in the same sort of way that VMware does it. This isn't a joined cluster; the two clusters do not know anything about each other. The only things that know about each other are the storage arrays, which copy data from the live location to the DR location.

How are other people doing this sort of thing? This type of deployment seems ideal to me; however, some things might need to be resolved. Obviously, if the DR site has to be brought online, the firewall rules on the DR firewall need to be activated, and the external WAN DNS needs to be modified to point to the servers that have edge-facing requirements (mail, FTP, HTTP, etc.).

The servers and storage at the disaster recovery site would have the exact same configuration as that of the live environment (same IP addressing, network subnet, DNS Servers, etc) for both the public and private network.

I look at the half-million-dollar cost of doing this with VMware and its associated storage, and Proxmox just seems so much simpler and more cost-effective. I can build a solution like this for far less (around 30K), and that's with some fairly tricked-out hardware, such as the QNAP series of storage servers and Supermicro TWIN servers with 4+ nodes per chassis and onboard 10Gb Ethernet.

Can anybody point out to me if what I am doing is completely wrong because of some reason A, B, or C that you see as causing more grief than good? I've included an image of how I foresee this setup working.

Thanks for any help. Love this product
Attached Images

VM start problem

Hello, I have a problem trying to start the VM. I configured external storage; as shown in the error, the folder and the virtual machine are created successfully, but the VM does not start. I have also checked that KVM is enabled and active in the BIOS.
Sorry for my English, I use Google Translate.

Code:

kvm: -drive file=/mnt/windows/Ivan/DiscosVirtuales/images/100/vm-100-disk-1.vmdk,if=none,id=drive-sata0,format=vmdk,aio=native,cache=none: could not open disk image /mnt/windows/Ivan/DiscosVirtuales/images/100/vm-100-disk-1.vmdk: Invalid argument
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -name Windows -smp 'sockets=1,cores=2' -nodefaults -boot 'menu=on' -vga cirrus -cpu kvm64,+x2apic,+sep -k en-us -m 2048 -cpuunits 1000 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -drive 'file=/dev/cdrom,if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -device 'ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7' -drive 'file=/mnt/windows/Ivan/DiscosVirtuales/images/100/vm-100-disk-1.vmdk,if=none,id=drive-sata0,format=vmdk,aio=native,cache=none' -device 'ide-drive,bus=ahci0.0,drive=drive-sata0,id=sata0,bootindex=100' -netdev 'type=user,id=net0,hostname=Windows' -device 'e1000,mac=CE:6B:35:4C:AB:02,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime'' failed: exit code 1
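One frequent cause of "could not open disk image ... Invalid argument" with cache=none,aio=native is a backing filesystem that does not support O_DIRECT (ntfs-3g and some other FUSE or network mounts behave this way, and the /mnt/windows path suggests such a mount here). As a quick test (a sketch; the storage name is a placeholder), the cache mode on that disk line in /etc/pve/qemu-server/100.conf can be switched away from none:

Code:

sata0: <external-storage>:100/vm-100-disk-1.vmdk,cache=writethrough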

CT Exclamation Mark

Hi guys,

I am getting a big bold yellow triangle with an exclamation mark for the 2 containers I operate (image attached). What exactly does that mean? And what can be done to find out what the problem might be?

I can enter the containers from the host node and can log in via SSH without problems. But I can't see the Stop, Start, Shutdown... buttons that would normally be displayed at the top right of the container view, although I can see those options when I right-click the container.

If I double-click the container, the yellow triangle gets transferred to the server view, as can be seen in the second image.

Both containers are reachable from the outside world.

Regards

Norman
Attached Images

Problem with SAS storage

Hi all!
I connected one storage array to Proxmox via LC and one via SAS, and in Proxmox I don't see the second storage. My server is a Dell R910 and my SAS controller is a Dell PERC H800. Trying to resolve this problem, I installed Dell OpenManage. omreport storage controller shows me that Proxmox sees the controller:
ID : 1
Status : Non-Critical
Name : PERC H800 Adapter
Slot ID : PCI Slot 2
State : Degraded
Firmware Version : 12.10.4-0001
Latest Available Firmware Version : 12.10.5-0001
Driver Version : 00.00.06.14-rh1
Minimum Required Driver Version : Not Applicable
Storport Driver Version : Not Applicable
Minimum Required Storport Driver Version : Not Applicable
Number of Connectors : 2
Rebuild Rate : 30%
BGI Rate : 30%
Check Consistency Rate : 30%
Reconstruct Rate : 30%
Alarm State : Not Applicable
Cluster Mode : Not Applicable
SCSI Initiator ID : Not Applicable
Cache Memory Size : 512 MB
Patrol Read Mode : Auto
Patrol Read State : Stopped
Patrol Read Rate : 30%
Patrol Read Iterations : 0
Abort Check Consistency on Error : Disabled
Allow Revertible Hot Spare and Replace Member : Enabled
Load Balance : Auto
Auto Replace Member on Predictive Failure : Disabled
Redundant Path view : Not Applicable
CacheCade Capable : Not Applicable
Persistent Hot Spare : Disabled
Encryption Capable : Yes
Encryption Key Present : No
Encryption Mode : None
Preserved Cache : Not Applicable
Spin Down Unconfigured Drives : Disabled
Spin Down Hot Spares : Disabled
Spin Down Configured Drives : Not Applicable
Automatic Disk Power Saving (Idle C) : Not Applicable

But fdisk -l doesn't show me the second storage.

What am I doing wrong?

P.S. Sorry for my poor English.
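Something that may help narrow this down (a sketch; fdisk -l only ever shows block devices for virtual disks that actually exist, and the controller above reports its state as Degraded): list what the H800 actually exposes with OpenManage:

Code:

omreport storage vdisk controller=1    # virtual disks defined on the H800
omreport storage pdisk controller=1    # physical disks seen behind it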

How can I use *.vma.gz in VirtualBox?

How can I use a .vma.gz image in VirtualBox,
or is this image only for Proxmox?
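VirtualBox cannot read a VMA archive directly; the backup has to be unpacked on a Proxmox host first and the extracted raw disk converted. A sketch (file names are examples; the vma tool ships with pve-qemu-kvm):

Code:

# on a Proxmox host: unpack the backup
zcat backup.vma.gz > backup.vma
vma extract backup.vma /tmp/extracted

# the extracted disk name varies (disk-drive-ide0.raw, disk-drive-virtio0.raw, ...)
qemu-img convert -f raw -O vdi /tmp/extracted/disk-drive-ide0.raw vm-disk.vdi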

GlusterFS tar symlink bug

Hello,
I set up a Proxmox cluster together with a Gluster cluster. I installed glusterfs-server on each node and used Proxmox to mount the FUSE client.
I think I have the Gluster "symbolic link to empty file" tar bug:
http://www.gluster.org/pipermail/glu...ly/036513.html

When I deploy a template, many files are incorrect and the CT doesn't boot nicely.

Even though I see some suggested ways to resolve the bug, with "cluster.read-hash-mode = 2" or "cluster.choose-local = True"
(from http://www.gluster.org/community/doc...nted#Replicate),

I still always have the bug...

Can anybody help?
Regards.
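For what it's worth, those options are applied per volume from any Gluster node; a sketch (the volume name is a placeholder), keeping in mind that the linked thread treats them as workarounds rather than a real fix:

Code:

gluster volume set <volname> cluster.read-hash-mode 2
gluster volume set <volname> cluster.choose-local true
gluster volume info <volname>    # confirm the options took effect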