January 22, 2015, 9:28 am
Hi,
we recently switched to pve-kernel-3.10.0-5-pve from the pve-no-subscription repository because the Ceph documentation states that more recent kernels give better Ceph performance.
Unfortunately, live migration is not reliable when migrating VMs between Proxmox hosts running pve-kernel-3.10.0-5-pve.
This can easily be reproduced: run 4 live migrations and at least 1 will fail; sometimes they succeed, sometimes they don't.
The VMs hang and the VM status is shown as internal-error, while the interface says the migration succeeded.
As soon as we reboot the same servers with kernel pve-kernel-2.6.32-34-pve, live migrations work reliably again.
What is the status of the 3.10 kernel in Proxmox: is it supported or not?
Any idea how I can reliably live-migrate my VMs from machines running pve-kernel-3.10.0-5 back to hosts running 2.6.32? I do not want to reboot my VMs.
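For reference, the CLI form of what the GUI triggers, a minimal sketch with a placeholder VM ID and node name (it is the same online migration the GUI performs, so it will hit the same bug on the 3.10 hosts):
Code:
# migrate VM 100 live to the node running the 2.6.32 kernel (ID and node name are placeholders)
qm migrate 100 node-2632 --online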
Thanks
Christoph
↧
January 22, 2015, 9:41 am
Hi,
I have an issue with the task viewer in my GUI after a pvedaemon restart. When I double-click on any task more recent than the restart, nothing happens. Do you have an idea of how I can solve this?
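Not sure it is the fix, but a minimal thing I intend to try first, assuming a PVE 3.x host (restarting both daemons that serve the GUI and the task logs):
Code:
service pvedaemon restart
service pveproxy restart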
↧
January 22, 2015, 11:16 am
Hi,
has anyone implemented a bandwidth limit for Proxmox with a simple and stable solution?
Any suggestions?
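One approach I have seen discussed, sketched below with placeholder values; whether the per-NIC rate option (in MB/s) is available depends on the qemu-server version, so treat it as an assumption to verify:
Code:
# limit VM 100's first NIC to roughly 50 MB/s (MAC and bridge are placeholders)
qm set 100 -net0 virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,rate=50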
↧
January 22, 2015, 12:13 pm
The documentation here:
https://openvz.org/Common_Networking_HOWTOs#Venet
states:
Quote:
After [adding a venet] the host should be able to ping the VE.
Why would the host be able to ping the venet address? I thought the idea of a venet was to provide connectivity between CTs, but not to external networks?
Second (related) question:
Is there any way a venet IP could conflict with an IP outside of the Proxmox infrastructure[1]? It's been suggested that I shouldn't use 10.10.10.xxx addresses since they are routable on our LAN. My response was that venet NICs are essentially firewalled from the LAN. Who's correct?
[1] I understand that if ip-forwarding and masquerading are enabled then we'd essentially have a NAT, but the IPs would still be isolated from the LAN, right?
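For what it's worth, a quick way to see why the host can reach a venet address, a sketch assuming container ID 101 and one of the 10.10.10.x addresses discussed above:
Code:
# assign a venet IP to the container
vzctl set 101 --ipadd 10.10.10.5 --save

# vzctl adds a host route for that address via venet0, which is why the host can ping it;
# with ip_forward enabled, that same route is what could clash with a real LAN address
ip route | grep venet0
ping -c 1 10.10.10.5   # from the host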
↧
January 22, 2015, 1:56 pm
Hi,
I'm currently trying to set up a Proxmox cluster with live migration, but I am having issues with the failover IPs my host (online.net) has provided. I assign the VM the MAC address from the failover IP, but with how their system works I have to specify which physical server the failover IP points at. The VM gets an internet connection fine (though it can't ping for some reason) while the failover IP is pointed at the physical machine it runs on, but when I point the IP at the other machine (as if the VM had migrated) the connection drops completely.
Is there any tunneling setup or similar I can use to get this working? Migration is my focus; redundancy is not a huge concern.
↧
January 22, 2015, 8:15 pm
Today I discovered that sometimes a trivial curl call from bash takes too long. The request was performed from one of our servers to another, and further investigation showed that the destination server is quick and responds within milliseconds of receiving a request.
So I found that it is the curl call itself that is slow.
Armed with strace, I noticed that according to its log it is a clone() syscall that is slow.
The relevant strace looks like:
Code:
1421983762.595237 clone(child_stack=0x7f5dd957af70, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7f5dd957b9d0, tls=0x7f5dd957b700, child_tidptr=0x7f5dd957b9d0) = 23512
1421983762.595298 clock_gettime(CLOCK_MONOTONIC, {3740303, 713692778}) = 0
1421983762.595337 clock_gettime(CLOCK_MONOTONIC, {3740303, 713721600}) = 0
1421983762.595369 clock_gettime(CLOCK_MONOTONIC, {3740303, 713754163}) = 0
1421983762.595415 clock_gettime(CLOCK_MONOTONIC, {3740303, 713799226}) = 0
1421983762.595464 clock_gettime(CLOCK_MONOTONIC, {3740303, 713849127}) = 0
1421983762.595493 clock_gettime(CLOCK_MONOTONIC, {3740303, 713878231}) = 0
1421983762.595530 clock_gettime(CLOCK_MONOTONIC, {3740303, 713915333}) = 0
1421983762.595592 clock_gettime(CLOCK_MONOTONIC, {3740303, 713977699}) = 0
1421983762.595616 clock_gettime(CLOCK_MONOTONIC, {3740303, 713999637}) = 0
1421983762.595636 clock_gettime(CLOCK_MONOTONIC, {3740303, 714018267}) = 0
1421983762.595657 clock_gettime(CLOCK_MONOTONIC, {3740303, 714040341}) = 0
1421983762.595678 clock_gettime(CLOCK_MONOTONIC, {3740303, 714061528}) = 0
1421983762.595698 clock_gettime(CLOCK_MONOTONIC, {3740303, 714081372}) = 0
1421983762.595718 clock_gettime(CLOCK_MONOTONIC, {3740303, 714102478}) = 0
1421983762.595744 clock_gettime(CLOCK_MONOTONIC, {3740303, 714126708}) = 0
1421983762.595766 poll(0, 0, 4) = 0 (Timeout)
1421983762.599899 clock_gettime(CLOCK_MONOTONIC, {3740303, 718282522}) = 0
1421983762.599925 clock_gettime(CLOCK_MONOTONIC, {3740303, 718308152}) = 0
1421983762.599946 clock_gettime(CLOCK_MONOTONIC, {3740303, 718328954}) = 0
1421983762.599966 clock_gettime(CLOCK_MONOTONIC, {3740303, 718354308}) = 0
1421983762.599993 clock_gettime(CLOCK_MONOTONIC, {3740303, 718376131}) = 0
1421983762.600013 clock_gettime(CLOCK_MONOTONIC, {3740303, 718395814}) = 0
1421983762.600041 clock_gettime(CLOCK_MONOTONIC, {3740303, 718425356}) = 0
1421983762.600063 clock_gettime(CLOCK_MONOTONIC, {3740303, 718446401}) = 0
1421983762.600084 poll(0, 0, 8) = 0 (Timeout)
1421983762.608179 clock_gettime(CLOCK_MONOTONIC, {3740303, 726563969}) = 0
1421983762.608207 clock_gettime(CLOCK_MONOTONIC, {3740303, 726590821}) = 0
1421983762.608231 clock_gettime(CLOCK_MONOTONIC, {3740303, 726614935}) = 0
1421983762.608253 clock_gettime(CLOCK_MONOTONIC, {3740303, 726636738}) = 0
1421983762.608276 clock_gettime(CLOCK_MONOTONIC, {3740303, 726660186}) = 0
1421983762.608299 clock_gettime(CLOCK_MONOTONIC, {3740303, 726682901}) = 0
1421983762.608321 clock_gettime(CLOCK_MONOTONIC, {3740303, 726705342}) = 0
1421983762.608355 clock_gettime(CLOCK_MONOTONIC, {3740303, 726738892}) = 0
1421983762.608418 poll(0, 0, 16) = 0 (Timeout)
1421983762.624584 clock_gettime(CLOCK_MONOTONIC, {3740303, 742970236}) = 0
1421983762.624616 clock_gettime(CLOCK_MONOTONIC, {3740303, 743000712}) = 0
1421983762.624644 clock_gettime(CLOCK_MONOTONIC, {3740303, 743028790}) = 0
1421983762.624672 clock_gettime(CLOCK_MONOTONIC, {3740303, 743057226}) = 0
1421983762.624701 clock_gettime(CLOCK_MONOTONIC, {3740303, 743086058}) = 0
1421983762.624729 clock_gettime(CLOCK_MONOTONIC, {3740303, 743113308}) = 0
1421983762.624756 clock_gettime(CLOCK_MONOTONIC, {3740303, 743141082}) = 0
1421983762.624789 clock_gettime(CLOCK_MONOTONIC, {3740303, 743173852}) = 0
1421983762.624816 poll(0, 0, 32) = 0 (Timeout)
1421983762.656914 clock_gettime(CLOCK_MONOTONIC, {3740303, 775300475}) = 0
1421983762.656947 clock_gettime(CLOCK_MONOTONIC, {3740303, 775331344}) = 0
1421983762.656975 clock_gettime(CLOCK_MONOTONIC, {3740303, 775359920}) = 0
1421983762.657003 clock_gettime(CLOCK_MONOTONIC, {3740303, 775388013}) = 0
1421983762.657033 clock_gettime(CLOCK_MONOTONIC, {3740303, 775417392}) = 0
1421983762.657080 clock_gettime(CLOCK_MONOTONIC, {3740303, 775482768}) = 0
1421983762.657127 clock_gettime(CLOCK_MONOTONIC, {3740303, 775512398}) = 0
1421983762.657157 clock_gettime(CLOCK_MONOTONIC, {3740303, 775542220}) = 0
1421983762.657186 poll(0, 0, 64) = 0 (Timeout)
1421983762.721323 clock_gettime(CLOCK_MONOTONIC, {3740303, 839710329}) = 0
1421983762.721357 clock_gettime(CLOCK_MONOTONIC, {3740303, 839742343}) = 0
1421983762.721386 clock_gettime(CLOCK_MONOTONIC, {3740303, 839771544}) = 0
1421983762.721416 clock_gettime(CLOCK_MONOTONIC, {3740303, 839801534}) = 0
1421983762.721446 clock_gettime(CLOCK_MONOTONIC, {3740303, 839831862}) = 0
1421983762.721475 clock_gettime(CLOCK_MONOTONIC, {3740303, 839860312}) = 0
1421983762.721504 clock_gettime(CLOCK_MONOTONIC, {3740303, 839889163}) = 0
1421983762.721533 clock_gettime(CLOCK_MONOTONIC, {3740303, 839918009}) = 0
1421983762.721561 poll(0, 0, 128) = 0 (Timeout)
1421983762.849793 clock_gettime(CLOCK_MONOTONIC, {3740303, 968179530}) = 0
1421983762.849825 clock_gettime(CLOCK_MONOTONIC, {3740303, 968210363}) = 0
1421983762.849854 clock_gettime(CLOCK_MONOTONIC, {3740303, 968238959}) = 0
1421983762.849884 clock_gettime(CLOCK_MONOTONIC, {3740303, 968268828}) = 0
1421983762.849913 clock_gettime(CLOCK_MONOTONIC, {3740303, 968298108}) = 0
1421983762.849941 clock_gettime(CLOCK_MONOTONIC, {3740303, 968325991}) = 0
1421983762.849968 clock_gettime(CLOCK_MONOTONIC, {3740303, 968353342}) = 0
1421983762.849996 clock_gettime(CLOCK_MONOTONIC, {3740303, 968381281}) = 0
1421983762.850024 poll(0, 0, 256) = 0 (Timeout)
1421983763.106354 clock_gettime(CLOCK_MONOTONIC, {3740304, 224740902}) = 0
1421983763.106390 clock_gettime(CLOCK_MONOTONIC, {3740304, 224775769}) = 0
1421983763.106432 clock_gettime(CLOCK_MONOTONIC, {3740304, 224817051}) = 0
1421983763.106461 clock_gettime(CLOCK_MONOTONIC, {3740304, 224858181}) = 0
1421983763.106503 clock_gettime(CLOCK_MONOTONIC, {3740304, 224888272}) = 0
1421983763.106530 clock_gettime(CLOCK_MONOTONIC, {3740304, 224915158}) = 0
1421983763.106564 clock_gettime(CLOCK_MONOTONIC, {3740304, 224948951}) = 0
1421983763.106591 clock_gettime(CLOCK_MONOTONIC, {3740304, 224975842}) = 0
1421983763.106618 poll(0, 0, 1000) = 0 (Timeout)
1421983764.107699 clock_gettime(CLOCK_MONOTONIC, {3740305, 226087274}) = 0
1421983764.107750 clock_gettime(CLOCK_MONOTONIC, {3740305, 226135432}) = 0
1421983764.107780 clock_gettime(CLOCK_MONOTONIC, {3740305, 226164925}) = 0
1421983764.107812 clock_gettime(CLOCK_MONOTONIC, {3740305, 226197837}) = 0
1421983764.107845 clock_gettime(CLOCK_MONOTONIC, {3740305, 226230087}) = 0
1421983764.107883 clock_gettime(CLOCK_MONOTONIC, {3740305, 226267983}) = 0
1421983764.107912 clock_gettime(CLOCK_MONOTONIC, {3740305, 226296562}) = 0
1421983764.107940 clock_gettime(CLOCK_MONOTONIC, {3740305, 226325472}) = 0
1421983764.107968 poll(0, 0, 1000) = 0 (Timeout)
1421983765.109082 clock_gettime(CLOCK_MONOTONIC, {3740306, 227470940}) = 0
1421983765.109122 clock_gettime(CLOCK_MONOTONIC, {3740306, 227507330}) = 0
1421983765.109150 clock_gettime(CLOCK_MONOTONIC, {3740306, 227535425}) = 0
1421983765.109181 clock_gettime(CLOCK_MONOTONIC, {3740306, 227566130}) = 0
1421983765.109212 clock_gettime(CLOCK_MONOTONIC, {3740306, 227597925}) = 0
1421983765.109241 clock_gettime(CLOCK_MONOTONIC, {3740306, 227626474}) = 0
1421983765.109270 clock_gettime(CLOCK_MONOTONIC, {3740306, 227654569}) = 0
1421983765.109297 clock_gettime(CLOCK_MONOTONIC, {3740306, 227682015}) = 0
1421983765.109325 poll(0, 0, 1000) = 0 (Timeout)
1421983766.110429 clock_gettime(CLOCK_MONOTONIC, {3740307, 228816238}) = 0
1421983766.110464 clock_gettime(CLOCK_MONOTONIC, {3740307, 228849036}) = 0
1421983766.110492 clock_gettime(CLOCK_MONOTONIC, {3740307, 228876715}) = 0
1421983766.110522 clock_gettime(CLOCK_MONOTONIC, {3740307, 228907443}) = 0
1421983766.110552 clock_gettime(CLOCK_MONOTONIC, {3740307, 228937543}) = 0
1421983766.110580 clock_gettime(CLOCK_MONOTONIC, {3740307, 228965264}) = 0
1421983766.110608 clock_gettime(CLOCK_MONOTONIC, {3740307, 228992842}) = 0
1421983766.110635 clock_gettime(CLOCK_MONOTONIC, {3740307, 229020102}) = 0
1421983766.110663 poll(0, 0, 1000) = 0 (Timeout)
1421983767.111744 clock_gettime(CLOCK_MONOTONIC, {3740308, 230133260}) = 0
1421983767.111781 clock_gettime(CLOCK_MONOTONIC, {3740308, 230166850}) = 0
1421983767.111807 clock_gettime(CLOCK_MONOTONIC, {3740308, 230192508}) = 0
1421983767.111837 clock_gettime(CLOCK_MONOTONIC, {3740308, 230223800}) = 0
1421983767.111868 clock_gettime(CLOCK_MONOTONIC, {3740308, 230253545}) = 0
1421983767.111893 clock_gettime(CLOCK_MONOTONIC, {3740308, 230277546}) = 0
1421983767.111917 clock_gettime(CLOCK_MONOTONIC, {3740308, 230302330}) = 0
1421983767.111942 clock_gettime(CLOCK_MONOTONIC, {3740308, 230327252}) = 0
1421983767.111967 poll(0, 0, 1000) = 0 (Timeout)
1421983768.113054 clock_gettime(CLOCK_MONOTONIC, {3740309, 231444805}) = 0
1421983768.113128 clock_gettime(CLOCK_MONOTONIC, {3740309, 231515664}) = 0
1421983768.113159 clock_gettime(CLOCK_MONOTONIC, {3740309, 231544618}) = 0
1421983768.113185 clock_gettime(CLOCK_MONOTONIC, {3740309, 231569508}) = 0
1421983768.113209 clock_gettime(CLOCK_MONOTONIC, {3740309, 231593925}) = 0
1421983768.113240 socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
1421983768.113289 setsockopt(3, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
1421983768.113329 setsockopt(3, SOL_TCP, TCP_KEEPIDLE, [60], 4) = 0
1421983768.113357 setsockopt(3, SOL_TCP, TCP_KEEPINTVL, [60], 4) = 0
1421983768.113385 fcntl(3, F_GETFL) = 0x2 (flags O_RDWR)
1421983768.113411 fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0
1421983768.113435 clock_gettime(CLOCK_MONOTONIC, {3740309, 231819619}) = 0
1421983768.113461 connect(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("xxx.xxx.xxx.xxx")}, 16) = -1 EINPROGRESS (Operation now in progress)
In this example it took ~6 seconds to complete. But during experiments this delay varied from 2-3 up to 15-20 seconds.
Googling didn't reveal any obvious issues with it.
The command call is as simple as:
Code:
curl http://domain_name/request
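The backoff pattern (poll with doubling timeouts before the socket is even opened) makes me suspect the time is spent before the connection, e.g. in name resolution, rather than in the HTTP exchange. A hedged way to confirm, using curl's standard write-out variables (and --resolve, assuming the installed curl is recent enough to have it):
Code:
# where does the time go?
curl -s -o /dev/null -w 'lookup:%{time_namelookup} connect:%{time_connect} total:%{time_total}\n' http://domain_name/request

# same request with DNS bypassed (IP is a placeholder)
curl -s -o /dev/null -w 'total:%{time_total}\n' --resolve domain_name:80:xxx.xxx.xxx.xxx http://domain_name/request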
↧
January 23, 2015, 4:52 am
Hello,
I would like to delete some snapshots; I don't need them any more. The VM (Win SBS 2011) is running, the image format is qcow2, and I have two snapshots (see attached picture). So first I clicked on the last snapshot (the one without RAM). After, I don't know, about 5 minutes I got an error message:
Code:
command '/usr/bin/qemu-img snapshot -d vor_popcon /var/lib/vz/images/100/vm-100-disk-1.qcow2' failed: exit code 1
But that snapshot is gone now, and when I run qemu-img info it lists only one snapshot.
Code:
image: /var/lib/vz/images/100/vm-100-disk-1.qcow2
file format: qcow2
virtual size: 465G (499289948160 bytes)
disk size: 518G
cluster_size: 65536
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 vor_Entfernung_von_DELL_apps 0 2014-12-01 18:15:09 246:17:54.294
Format specific information:
compat: 1.1
lazy refcounts: false
So what went wrong? I have never had this problem before, and I'm not on the testing repo. Can I remove the other snapshot? It takes up 18 GB on the hard drive... or should/must I fix the problem with the last snapshot first?
Code:
proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-34-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-34-pve: 2.6.32-140
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
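If it helps, the way I was thinking of double-checking from the shell, a sketch using the same disk path as above (running qemu-img against a qcow2 that a running VM is using is risky, so I would only delete with the VM shut down or after a backup):
Code:
# list the internal snapshots still recorded in the image
qemu-img snapshot -l /var/lib/vz/images/100/vm-100-disk-1.qcow2

# remove the remaining snapshot by its tag (VM stopped / backed up first)
qemu-img snapshot -d vor_Entfernung_von_DELL_apps /var/lib/vz/images/100/vm-100-disk-1.qcow2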
Thanks and best Regards
↧
January 23, 2015, 6:40 am
Hello everyone,
I'm new to the forum, so let this post serve as my introduction. Thanks for having me.
Here is my question:
I have a Proxmox 3 server hosting three virtual machines:
- Virtual1: a Windows Server 2008 R2 64-bit that I migrated from another Proxmox host, where it worked perfectly.
- Virtual2: an Ubuntu Server 12.04 in the same situation.
- Virtual3: a Windows Server 2008 64-bit that comes from a VirtualBox migration.
The problem appears when Virtual3 is running together with another VM: it shuts down by itself. Virtual3 shuts down constantly, three or four times a day. Virtual2 always keeps running, but when it is running alongside Virtual3, the latter shuts down.
When Virtual1 and Virtual3 are both running, the two of them shut down alternately.
I have tested all the hardware and done a clean install on the same hardware and on different hardware, and it keeps doing the same thing.
What am I doing wrong? Where can I look for clues about the problem?
I'm not sure I've explained myself well.
I look forward to your help.
Thanks
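A minimal sketch of where I would start looking for clues on the host (VM IDs are placeholders for Virtual2/Virtual3; the log paths are the usual Debian ones):
Code:
# look for KVM/QEMU crashes or out-of-memory kills around the time Virtual3 dies
grep -iE 'kvm|qemu|oom' /var/log/syslog /var/log/kern.log

# compare the config of the VM that dies with one that stays up
qm config 103
qm config 102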
↧
January 23, 2015, 8:16 am
Hi, it seems I'm having issues similar to this post:
http://forum.proxmox.com/threads/205...irewall-VE-3-3
I had the Proxmox firewall working fine with only 2 basic rules allowing SSH and web GUI access to the host node from a specific IP. All was working fine until I added an additional eth/vmbrX interface in the network interfaces configuration file. After a reboot I'm not able to access the GUI or SSH. It seems like the firewall just stops working. I tried to disable the firewall via the cluster.fw file and still no change. It just looks like the Proxmox firewall is unstable when making some changes. I hope to get some help to resolve or understand what the issue might be.
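For reference, what I plan to run next from the physical/IPMI console (since SSH is blocked), a sketch using the standard pve-firewall tool:
Code:
pve-firewall status     # is it running, and with which ruleset?
pve-firewall compile    # print the generated ruleset and surface any config errors
pve-firewall stop       # temporarily stop it to regain GUI/SSH access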
--Joe
↧
January 23, 2015, 8:52 am
Hello all,
I have two nodes in a cluster.
The main node has a public IP address 198.0.x.x
The joined node has an internal IP address 10.1.x.x
When I try to migrate a VM from the main node, it causes my entire network to crater.
My network is set up as follows:
Comcast modem - switch - node1 node2
I looked through logs but I couldn't find anything really telling. I am wondering if this is caused by traversing different subnets. Any help would be appreciated.
↧
January 23, 2015, 10:50 am
I've been playing with Proxmox and absolutely love it.
If we go to production with it, I am curious how others are setting up their storage. I started out creating separate iSCSI disks on my NAS (Prod, Test, Report) and attaching those to Proxmox; each was designated for individual VMs. That worked OK and was extremely fast. Then I read through the forums that some of my misgivings about this setup would be resolved by creating LVMs on top of the iSCSI targets. Reading the forums again, I found I could easily resize my LVMs using pvresize.
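The resize flow I ended up with, a rough sketch with a placeholder device name (the LUN is first grown on the NAS side):
Code:
# rescan the open iSCSI session so the kernel sees the larger LUN
iscsiadm -m session --rescan

# grow the physical volume to fill the device, then check the volume group
pvresize /dev/sdb
vgs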
How are others utilizing this? Is there a problem with multiple iSCSI connections to a PVE server? The disadvantage I didn't like about multiple iSCSI disks was the chance of high network usage causing latency from what could be 4 open connections to the NAS. Oh yeah, I'm new to iSCSI, so I'm still learning about that as well. NFS failed quite a bit, so I didn't try to use it at all, whereas iSCSI worked great from the get-go.
Just looking to get a feel for how others set up storage and what they like and dislike. Thanks.
↧
January 23, 2015, 10:55 am
How can I configure the network in a guest VM (CentOS 6.6) that was created with KVM in "bridged mode" (with vmbr0)?
I have a dedicated server at OVH with multiple dedicated IP ranges and many OpenVZ containers, and everything is fine there. But the instructions in the OVH docs and Proxmox forum posts didn't help me get the network working correctly in my first KVM machine.
In the VM, vmbr0 wasn't recognized automatically. Do I need to add it manually and, later, the eth0? Or is an eth0 interface alone enough?
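For context, the kind of static config I have been trying inside the guest, a sketch with placeholder addresses; the exact gateway/route handling for OVH failover IPs is an assumption on my part and may need their documented host-route setup instead:
Code:
# /etc/sysconfig/network-scripts/ifcfg-eth0  (values are placeholders)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=203.0.113.10
NETMASK=255.255.255.255

# /etc/sysconfig/network-scripts/route-eth0  (host gateway is a placeholder)
203.0.113.254 dev eth0
default via 203.0.113.254 dev eth0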
(I really need to buy a subscription, but I can't afford it in an annual payment)
↧
January 23, 2015, 12:33 pm
We've migrated our complete environment to Proxmox.
One of the virtual machines is a Windows 2008 R2 terminal server.
The MS SQL Server is also running in a VM on the same host.
Performance is fine ... including disk and network ;)
Only the video performance is not the best.
E.g.: results of a database search are displayed "line by line" ...
Our old 2003 R2 terminal server didn't have that problem.
Is that due to the video driver?
Which one should I use?
Currently I'm using "Standard VGA".
Any ideas?
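One hedged thing I have not tried yet: switching the emulated display adapter (the VM ID is a placeholder; qxl assumes SPICE support in pve-qemu-kvm and matching guest drivers):
Code:
# try a different display adapter; needs a full VM stop/start to take effect
qm set 101 -vga qxl
# or
qm set 101 -vga vmware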
regards
Rico
↧
January 23, 2015, 1:34 pm
Hello,
I'm totally new to Proxmox and am trying to get my bearings. I installed it on my home network behind my ISP router, and I'm having trouble accessing the web admin from a different computer on the same network.
Observations:
- As I understand it, ProxMox does not by default get a DHCP address.
- It does set up an IP address on the vmbr0 interface with a value of 192.168.100.2.
- I'm able to ping 192.168.100.2 locally from the Proxmox machine's console.
- I'm able to successfully wget https://192.168.100.2:8006 on the Proxmox machine's console. I can see it complaining about the SSL certificate, as expected.
- I am able to ping 192.168.100.2 from a different computer on my network fine.
- I'm unable to do a wget for https://192.168.100.2:8006 from the different computer on my network. It just hangs there.
How can I access the web admin?
- As far as I can tell, locally I can only access the web admin via the command line using wget. How would someone bring up a web browser etc. locally on the ProxMox machine to view the Web Admin?
- Then, there's the question of accessing it from a different computer on the network?
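For what it's worth, a minimal sketch of what I think the bridge config should look like if it is meant to sit on the same subnet as the rest of my LAN (all addresses are placeholders for my actual router's network):
Code:
# /etc/network/interfaces  (placeholder addresses)
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

# afterwards, from another computer: https://192.168.1.50:8006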
Thanks!
↧
January 23, 2015, 1:58 pm
I'm installing a Proxmox 3.3 host machine which will later go onto another network. So, I need to be able to change its IP address.
I thought I could just put the new network config into /etc/network/interfaces. However, when I reboot, I can't get through to the management interface on the new IP. I have to use the old address.
Is there some other config I need to change as well?
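For completeness, besides the vmbr0 stanza in /etc/network/interfaces, the piece I suspect is easy to miss is /etc/hosts, since the hostname should resolve to the new management IP; a sketch with placeholder values:
Code:
# /etc/hosts  (placeholder IP and hostname)
192.168.50.10   pve1.example.com pve1 pvelocalhost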
↧
January 23, 2015, 8:02 pm
Hi,
This is Subash, and I am a FOSS technologist. I am a beginner with Proxmox. I learned to install Proxmox VE on bare metal and have been using it for the past 6 months. Now I would like to build my own Proxmox server.
I need to power up 50 VMs. My budget is 1000 USD. Please suggest appropriate components. I am looking to build a Mini-ITX server; I don't want a tower chassis.
Please help me buy the right, quality components to build a Mini-ITX Proxmox server.
Thanks in advance.
Best Regards,
Subash V
↧
January 24, 2015, 1:59 am
I configured everything and I can see the storage, but when trying to add a new VM I get the following error:
TASK ERROR: create failed - No configuration found. Install istgt on 192.168.1.249 at /usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm line 99.
I also tried to migrate one of my VM's disks to the storage; then I get the following error:
create full clone of drive ide0 (local2:100/vm-100-disk-1.qcow2)
TASK ERROR: storage migration failed: No configuration found. Install istgt on 192.168.1.249 at /usr/share/perl5/PVE/Storage/LunCmd/Istgt.pm line 99.
The ZFS storage I'm using is FreeNAS, so it's FreeBSD on the other side.
Is something wrongly configured on my side or are these known problems?
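For context, the rough shape of the /etc/pve/storage.cfg entry I would expect for ZFS over iSCSI with the istgt provider (storage name, pool, and IQN are placeholders). The error itself suggests Proxmox logs into 192.168.1.249 over SSH and cannot find an istgt configuration there, so istgt would need to be installed and exporting the target on the FreeNAS side first:
Code:
zfs: freenas-zfs
        blocksize 4k
        iscsiprovider istgt
        pool tank/proxmox
        portal 192.168.1.249
        target iqn.2005-10.org.freenas.ctl:proxmox
        content images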
↧
January 24, 2015, 2:32 am
Hi,
I have a standalone Proxmox 3.3-1 host.
CPU: 48 x Intel(R) Xeon(R) E5-2697 v2 @ 2.7GHz (2 sockets)
Host CPU usage is always around 20%, but after adding a Linux CentOS 5.5 VM with an Oracle DB, that VM is always at 100% CPU usage (configured with 4 sockets, 2 cores), which makes Oracle go down.
Can someone help?
↧