Deleting a non-existent NFS storage and its VMs

Hi!

Proxmox is a really nice piece of software.

I have been testing disaster recovery: the NFS storage is defective and a replacement storage has been connected. Now I would like to delete the no-longer-existing (defective) NFS storage and its VMs, and that is not really easy! When I try to delete a VM, Proxmox says: "TASK ERROR: mount error: mount.nfs: Connection timed out", and deleting the NFS share says: "mount error: mount.nfs: requested NFS version or transport protocol is not supported"...

Yes, I can do it on the console, but I think these things should also be possible through the web interface.
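
For the record, this is roughly what I end up doing on the console (the storage name and VM ID are placeholders for my setup):

Code:

# remove the dead NFS storage definition
pvesm remove nfs-dead                  # storage ID is a placeholder
# remove the orphaned VM config by hand, since 'qm destroy' keeps trying to mount the NFS share
rm /etc/pve/qemu-server/101.conf       # VM ID is a placeholder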

IBM x3650 server

Hello, I have an IBM x3650 server and I created a VM with 1 CPU, 1 core, and the default kvm64 CPU type.
I am trying to install Windows Server 2012 Standard from a physical DVD, but I keep getting the same
error message:
ERROR CODE 0x0000005D
Parameters
0x00000000078BF3FD
0x000000002181ABFD
0x0000000000000000
0x0000000000000000

I tried disabling the virtualization extensions, but the problem still exists. Any idea how to get it installed?
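
If it helps, the next thing I will try is changing the CPU type away from the default kvm64, since I assume stop code 0x5D means the guest is missing CPU features that Server 2012 requires:

Code:

qm set 100 --cpu host    # VM ID is a placeholder; exposes the host CPU's flags to the guest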

QLOGIC QLE2460 Fibre Channel driver

Hi
I installed Proxmox 3.0 on an HP DL380 G5 server and added a QLOGIC QLE2460 Fibre Channel card (2432 based).

Where can I find the QLA2XIP driver compiled for Proxmox so that the card can work over IP?
Any suggestions?
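
So far the only thing I can confirm from the shell is that the card is detected and the standard FC driver loads (generic commands):

Code:

lspci | grep -i qlogic    # the HBA shows up on the PCI bus
modinfo qla2xxx | head    # the stock FC driver; qla2xip is the part I am missing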

thanks

Marco

RRD API response to image

Hello, I'm writing a script based on the API. Everything works fine; the only problem I have is how to convert the RRD response data to an image (PNG).


In the response I get an array:

filename -> /var/......./xxx.png
image -> xqsd sdbfsd fs (some strange characters)

I have tried:

HTML Code:

<img src="'.$response[image].'.png" alt="cpu chart" />
but it is not working. What is the correct format?

Proxmox-VE 2.3 to 3 upgrade gone wrong ... somehow stuck

First of all, yes, I have googled everything I could think of and searched the forum as well. I guess part of the problem is that I do not understand the problem. I do see what it is telling me, but it is not making sense.

The original 2.3 install was done directly from the Proxmox VE 2.3 installer CD/ISO. To upgrade I used the script provided in the wiki. I made backups of my containers, ensured they were all powered down, and made sure all apt-get updates were fully applied.

I ran the script and it failed somewhere.

The Problem seems to trace to the following error:

[....] Restarting pve cluster filesystem: pve-cluster/usr/bin/pmxcfs: symbol lookup error: /usr/bin/pmxcfs: undefined symbol: g_mutex_lock
failed!


Sorry for not being quite so bright, but what on earth is a mutex lock? A "mutex" should be some sort of mechanism that keeps two or more things from interfering with each other, so a "lock" on one would tell me that something IS blocking something else ... but that's all theory without any real meaning here.

/usr/bin/pmxcfs seems to exist (it's a binary).

uname -a currently says:
Linux proxmox-one 2.6.32-19-pve #1 SMP Wed May 15 07:32:52 CEST 2013 x86_64 GNU/Linux
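
What I have not checked yet, but plan to (the undefined g_mutex_lock symbol makes me suspect pmxcfs was built against a newer libglib than the one that is installed, e.g. a leftover squeeze package):

Code:

ldd /usr/bin/pmxcfs | grep -i glib    # which libglib the binary actually resolves against
dpkg -l libglib2.0-0 pve-cluster      # installed versions; both should be the wheezy / PVE 3 builds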

Using kickstart files with the qm create command?

Hello experts,

So after consulting the manual and Google, I've found that I should be able to create a new VM from a kickstart file using qm create [...] --args. Unfortunately, I'm having trouble getting this to work correctly.

I'm currently trying:

Code:

qm create 143 --name testKS1 --net0 e1000 --bootdisk ide0 --ostype l26 -ide0 lvmGroup1:4 format=raw --onboot no --sockets 1 --args -append ks=url/path/to/file.ks
Which fails as such:

Code:

400 wrong number of arguments
So how should I be formatting my command to successfully pass along a ks file to the installation? Is there an alternative, perhaps better, way to accomplish this?
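
For reference, the variant I plan to try next, on the assumption that the bare format=raw token and the unquoted --args value are what trips the parser (the kickstart URL is still a placeholder):

Code:

qm create 143 --name testKS1 --net0 e1000 --bootdisk ide0 --ostype l26 \
  --ide0 lvmGroup1:4,format=raw --onboot no --sockets 1 \
  --args "-append ks=url/path/to/file.ks"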

There is a ton of information on how to accomplish what I want with virt-install, but I understand virt-install is not compatible with Proxmox. Also, I want to use kickstart files for VM creation this way so that I can recycle some scripts I previously used on my Xen machine.

(The best source of info I have found, and what I've based my current method on can be found here: http://stackoverflow.com/questions/1...on-proxmox-2-x)

Company info/name on Proxmox Web GUI

Is it possible to add a company name or info to the Proxmox web GUI? I am not asking about rebranding Proxmox with a company logo or anything; the Proxmox logo and version info should certainly stay. But it would be great if we could add our own company info, so that when a client logs in they can also see which company they are logging into.

[SOLVED] How to backup when storage isn't visible


cut power to a node

Good afternoon,

I installed Proxmox 3.0, creating a cluster of two Dell R520 servers with DRBD and fencing; the servers have iDRAC 7 cards.
I followed the manuals on the site for creating a 2-node cluster and for configuring DRBD and fencing with the IPMI agent. I had no problems with the installation and operation of the cluster; everything is OK.
I have begun testing what happens on the failure of a node, and the virtual machine (KVM) switches to the other node automatically, so that goal is achieved.
I have checked the reference manual https://alteeve.ca/w/2-Node_Red_Hat_...uster_Tutorial with regard to testing fencing.
The problem is when I cut the electrical power to a node, i.e. completely disconnect it from mains. In the log of the other node I see the fence agent being activated, but it fails, because the offline node has no electrical power, so its iDRAC card is also unpowered. Is there any fencing setting that allows the virtual machine (KVM) to be moved automatically to the other node with this type of failure?
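
I assume the same failure can be reproduced by calling the fence agent by hand from the surviving node against the unpowered node's iDRAC (address and credentials are the placeholders from my config):

Code:

fence_ipmilan -a idrac2 -l root -p XXXX -P -o status
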
Below are my cluster.conf settings.

<?xml version="1.0"?>
<cluster config_version="38" name="VM-SHIPCOM">
  <fence_daemon clean_start="0" post_fail_delay="60" post_join_delay="30"/>
  <cman expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="1"/>
  <clusternodes>
    <clusternode name="proxmox1" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device action="reboot" name="fence_shipcom1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="proxmox2" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device action="reboot" name="fence_shipcom2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" ipaddr="idrac1" lanplus="1" login="root" name="fence_shipcom1" passwd="XXXX" power_wait="5"/>
    <fencedevice agent="fence_ipmilan" ipaddr="idrac2" lanplus="1" login="root" name="fence_shipcom2" passwd="XXXX" power_wait="5"/>
  </fencedevices>
  <rm>
    <pvevm autostart="1" vmid="100"/>
    <failoverdomains>
      <failoverdomain name="failovercluster" nofailback="1" ordered="1" restricted="1">
        <failoverdomainnode name="proxmox1" priority="1"/>
        <failoverdomainnode name="proxmox2" priority="1"/>
      </failoverdomain>
    </failoverdomains>
  </rm>
</cluster>

PROXMOX node halts randomly.

Hi all,
Another Proxmox noob here. My host has been getting into a "halted" state over the past few days. There's no specific timing; sometimes it halts after 18 hours, sometimes after 2 days, but I've noticed that after rebooting the system and checking the logs, the last logged lines are always related to journal rotations. In these situations the only solution is to manually press the reset button. I'm only hosting KVM guests.


I've found some threads from people with similar issues, and they also reported journal rotation tasks right at the end of their logs.

Here is the relevant portion of my syslog (the red highlighting does not survive here: the last line before the hang is the 06:46:24 rrdcached entry, and the first one after the manual reset is the 19:07:29 imklog/rsyslogd restart):

Aug 12 00:17:01 pmx3host /USR/SBIN/CRON[94720]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 12 00:46:24 pmx3host rrdcached[1817]: flushing old values
Aug 12 00:46:24 pmx3host rrdcached[1817]: rotating journals
Aug 12 00:46:24 pmx3host rrdcached[1817]: started new journal /var/lib/rrdcached/journal//rrd.journal.1376279184.400364
Aug 12 00:46:24 pmx3host rrdcached[1817]: removing old journal /var/lib/rrdcached/journal//rrd.journal.1376271984.400472
Aug 12 00:49:53 pmx3host pvedaemon[80670]: <root@pam> successful auth for user 'crioboo@pve'
Aug 12 00:58:06 pmx3host pvedaemon[80656]: <root@pam> successful auth for user 'crioboo@pve'
Aug 12 01:17:01 pmx3host /USR/SBIN/CRON[98222]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 12 01:46:24 pmx3host rrdcached[1817]: flushing old values
Aug 12 01:46:24 pmx3host rrdcached[1817]: rotating journals
Aug 12 01:46:24 pmx3host rrdcached[1817]: started new journal /var/lib/rrdcached/journal//rrd.journal.1376282784.400572
Aug 12 01:46:24 pmx3host rrdcached[1817]: removing old journal /var/lib/rrdcached/journal//rrd.journal.1376275584.400502
Aug 12 02:17:01 pmx3host /USR/SBIN/CRON[101315]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 12 02:41:38 pmx3host pvedaemon[80670]: <root@pam> successful auth for user 'crioboo@pve'
Aug 12 02:46:24 pmx3host rrdcached[1817]: flushing old values
Aug 12 02:46:24 pmx3host rrdcached[1817]: rotating journals
Aug 12 02:46:24 pmx3host rrdcached[1817]: started new journal /var/lib/rrdcached/journal//rrd.journal.1376286384.400474
Aug 12 02:46:24 pmx3host rrdcached[1817]: removing old journal /var/lib/rrdcached/journal//rrd.journal.1376279184.400364
Aug 12 02:51:25 pmx3host pvedaemon[80670]: <crioboo@pve> starting task UPID:pmx3host:0001940D:011E1FB7:520877DD:qmstop:10 1:crioboo@pve:
Aug 12 02:51:25 pmx3host pvedaemon[103437]: stop VM 101: UPID:pmx3host:0001940D:011E1FB7:520877DD:qmstop:10 1:crioboo@pve:
Aug 12 02:51:25 pmx3host kernel: vmbr0: port 2(tap101i0) entering disabled state
Aug 12 02:51:25 pmx3host kernel: vmbr0: port 2(tap101i0) entering disabled state
Aug 12 02:51:25 pmx3host ntpd[1776]: Deleting interface #11 tap101i0, fe80::9834:6ff:feee:ff20#123, interface stats: received=0, sent=0, dropped=0, active_time=186600 secs
Aug 12 02:51:26 pmx3host pvedaemon[80670]: <crioboo@pve> end task UPID:pmx3host:0001940D:011E1FB7:520877DD:qmstop:10 1:crioboo@pve: OK
Aug 12 02:51:37 pmx3host pvedaemon[103455]: start VM 101: UPID:pmx3host:0001941F:011E247C:520877E9:qmstart:1 01:crioboo@pve:
Aug 12 02:51:37 pmx3host pvedaemon[81161]: <crioboo@pve> starting task UPID:pmx3host:0001941F:011E247C:520877E9:qmstart:1 01:crioboo@pve:
Aug 12 02:51:38 pmx3host kernel: device tap101i0 entered promiscuous mode
Aug 12 02:51:38 pmx3host kernel: vmbr0: port 2(tap101i0) entering forwarding state
Aug 12 02:51:38 pmx3host pvedaemon[81161]: <crioboo@pve> end task UPID:pmx3host:0001941F:011E247C:520877E9:qmstart:1 01:crioboo@pve: OK
Aug 12 02:51:48 pmx3host kernel: tap101i0: no IPv6 routers present
Aug 12 02:56:25 pmx3host ntpd[1776]: Listen normally on 13 tap101i0 fe80::ac1c:d3ff:fece:89ed UDP 123
Aug 12 03:17:01 pmx3host /USR/SBIN/CRON[104871]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 12 03:46:24 pmx3host rrdcached[1817]: flushing old values
Aug 12 03:46:24 pmx3host rrdcached[1817]: rotating journals
Aug 12 03:46:24 pmx3host rrdcached[1817]: started new journal /var/lib/rrdcached/journal//rrd.journal.1376289984.400500
Aug 12 03:46:24 pmx3host rrdcached[1817]: removing old journal /var/lib/rrdcached/journal//rrd.journal.1376282784.400572
Aug 12 04:17:01 pmx3host /USR/SBIN/CRON[107908]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 12 04:46:24 pmx3host rrdcached[1817]: flushing old values
Aug 12 04:46:24 pmx3host rrdcached[1817]: rotating journals
Aug 12 04:46:24 pmx3host rrdcached[1817]: started new journal /var/lib/rrdcached/journal//rrd.journal.1376293584.424462
Aug 12 04:46:24 pmx3host rrdcached[1817]: removing old journal /var/lib/rrdcached/journal//rrd.journal.1376286384.400474
Aug 12 05:17:01 pmx3host /USR/SBIN/CRON[110882]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 12 05:46:24 pmx3host rrdcached[1817]: flushing old values
Aug 12 05:46:24 pmx3host rrdcached[1817]: rotating journals
Aug 12 05:46:24 pmx3host rrdcached[1817]: started new journal /var/lib/rrdcached/journal//rrd.journal.1376297184.400686
Aug 12 05:46:24 pmx3host rrdcached[1817]: removing old journal /var/lib/rrdcached/journal//rrd.journal.1376289984.400500
Aug 12 06:17:01 pmx3host /USR/SBIN/CRON[113834]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 12 06:25:01 pmx3host /USR/SBIN/CRON[114264]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
Aug 12 06:25:02 pmx3host rsyslogd: [origin software="rsyslogd" swVersion="4.6.4" x-pid="1672" x-info="http://www.rsyslog.com"] rsyslogd was HUPed, type 'lightweight'.
Aug 12 06:46:24 pmx3host rrdcached[1817]: flushing old values
Aug 12 06:46:24 pmx3host rrdcached[1817]: rotating journals
Aug 12 06:46:24 pmx3host rrdcached[1817]: started new journal /var/lib/rrdcached/journal//rrd.journal.1376300784.400659
Aug 12 06:46:24 pmx3host rrdcached[1817]: removing old journal /var/lib/rrdcached/journal//rrd.journal.1376293584.424462
Aug 12 19:07:29 pmx3host kernel: imklog 4.6.4, log source = /proc/kmsg started.
Aug 12 19:07:29 pmx3host rsyslogd: [origin software="rsyslogd" swVersion="4.6.4" x-pid="1835" x-info="http://www.rsyslog.com"] (re)start
Aug 12 19:07:29 pmx3host kernel: Initializing cgroup subsys cpuset
Aug 12 19:07:29 pmx3host kernel: Initializing cgroup subsys cpu
Aug 12 19:07:29 pmx3host kernel: Linux version 2.6.32-19-pve (root@maui) (gcc version 4.4.5 (Debian 4.4.5-8) ) #1 SMP Wed May 15 07:32:52 CEST 2013
Aug 12 19:07:29 pmx3host kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-19-pve root=UUID=ef09aef3-8fdd-4738-80df-f5eb4caec0a5 ro quiet
Aug 12 19:07:29 pmx3host kernel: KERNEL supported cpus:
Aug 12 19:07:29 pmx3host kernel: Intel GenuineIntel
Aug 12 19:07:29 pmx3host kernel: AMD AuthenticAMD
Aug 12 19:07:29 pmx3host kernel: Centaur CentaurHauls
Aug 12 19:07:29 pmx3host kernel: BIOS-provided physical RAM map:
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 0000000000000000 - 000000000009c000 (usable)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 000000000009c000 - 00000000000a0000 (reserved)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 0000000000100000 - 000000008c012000 (usable)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 000000008c012000 - 000000008c0f0000 (ACPI NVS)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 000000008c0f0000 - 000000008c4fb000 (ACPI data)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 000000008c4fb000 - 000000008d8fb000 (ACPI NVS)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 000000008d8fb000 - 000000008f602000 (ACPI data)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 000000008f602000 - 000000008f64f000 (reserved)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 000000008f64f000 - 000000008f6e4000 (ACPI data)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 000000008f6e4000 - 000000008f6ee000 (ACPI NVS)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 000000008f6ee000 - 000000008f6f1000 (ACPI data)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 000000008f6f1000 - 000000008f7cf000 (ACPI NVS)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 000000008f7cf000 - 000000008f800000 (ACPI data)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 000000008f800000 - 0000000090000000 (reserved)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 00000000a0000000 - 00000000b0000000 (reserved)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 00000000fc000000 - 00000000fd000000 (reserved)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 00000000fed1c000 - 00000000fed45000 (reserved)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 00000000ff800000 - 0000000100000000 (reserved)
Aug 12 19:07:29 pmx3host kernel: BIOS-e820: 0000000100000000 - 0000000c70000000 (usable)
Aug 12 19:07:29 pmx3host kernel: DMI 2.5 present.
Aug 12 19:07:29 pmx3host kernel: SMBIOS version 2.5 @ 0xF0440
Aug 12 19:07:29 pmx3host kernel: DMI: Intel Corporation S5520HC/S5520HC, BIOS S5500.86B.01.00.0060.090920111354 09/09/2011
Aug 12 19:07:29 pmx3host kernel: e820 update range: 0000000000000000 - 0000000000001000 (usable) ==> (reserved)
Aug 12 19:07:29 pmx3host kernel: e820 remove range: 00000000000a0000 - 0000000000100000 (usable)
Aug 12 19:07:29 pmx3host kernel: last_pfn = 0xc70000 max_arch_pfn = 0x400000000
Aug 12 19:07:29 pmx3host kernel: MTRR default type: write-back
Aug 12 19:07:29 pmx3host kernel: MTRR fixed ranges enabled:
Aug 12 19:07:29 pmx3host kernel: 00000-9FFFF write-back
Aug 12 19:07:29 pmx3host kernel: A0000-BFFFF uncachable
Aug 12 19:07:29 pmx3host kernel: C0000-DFFFF write-through
Aug 12 19:07:29 pmx3host kernel: E0000-FFFFF write-protect
Aug 12 19:07:29 pmx3host kernel: MTRR variable ranges enabled:
Aug 12 19:07:29 pmx3host kernel: 0 base 00C0000000 mask FFC0000000 uncachable
Aug 12 19:07:29 pmx3host kernel: 1 base 00A0000000 mask FFE0000000 uncachable
Aug 12 19:07:29 pmx3host kernel: 2 base 0090000000 mask FFF0000000 uncachable
Aug 12 19:07:29 pmx3host kernel: 3 base 00B0000000 mask FFFF000000 write-combining
Aug 12 19:07:29 pmx3host kernel: 4 disabled
Aug 12 19:07:29 pmx3host kernel: 5 disabled
Aug 12 19:07:29 pmx3host kernel: 6 disabled
Aug 12 19:07:29 pmx3host kernel: 7 disabled
Aug 12 19:07:29 pmx3host kernel: x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
Aug 12 19:07:29 pmx3host kernel: last_pfn = 0x8c012 max_arch_pfn = 0x400000000
Aug 12 19:07:29 pmx3host kernel: initial memory mapped : 0 - 20000000
Aug 12 19:07:29 pmx3host kernel: init_memory_mapping: 0000000000000000-000000008c012000
Aug 12 19:07:29 pmx3host kernel: 0000000000 - 008c000000 page 2M
Aug 12 19:07:29 pmx3host kernel: 008c000000 - 008c012000 page 4k
Aug 12 19:07:29 pmx3host kernel: kernel direct mapping tables up to 8c012000 @ 8000-d000
Aug 12 19:07:29 pmx3host kernel: init_memory_mapping: 0000000100000000-0000000c70000000
Aug 12 19:07:29 pmx3host kernel: 0100000000 - 0c70000000 page 2M
Aug 12 19:07:29 pmx3host kernel: kernel direct mapping tables up to c70000000 @ b000-3e000
Aug 12 19:07:29 pmx3host kernel: RAMDISK: 3703f000 - 37fef1f6
Aug 12 19:07:29 pmx3host kernel: ACPI: RSDP 00000000000f0410 00024 (v02 INTEL )
Aug 12 19:07:29 pmx3host kernel: ACPI: XSDT 000000008f7fd120 0009C (v01 INTEL S5520HC 00000000 01000013)
Aug 12 19:07:29 pmx3host kernel: ACPI: FACP 000000008f7fb000 000F4 (v04 INTEL S5520HC 00000000 MSFT 0100000D)
Aug 12 19:07:29 pmx3host kernel: ACPI: DSDT 000000008f7f4000 06531 (v02 INTEL S5520HC 00000003 MSFT 0100000D)
Aug 12 19:07:29 pmx3host kernel: ACPI: FACS 000000008f6f1000 00040
Aug 12 19:07:29 pmx3host kernel: ACPI: APIC 000000008f7f3000 001A8 (v02 INTEL S5520HC 00000000 MSFT 0100000D)
Aug 12 19:07:29 pmx3host kernel: ACPI: MCFG 000000008f7f2000 0003C (v01 INTEL S5520HC 00000001 MSFT 0100000D)
Aug 12 19:07:29 pmx3host kernel: ACPI: HPET 000000008f7f1000 00038 (v01 INTEL S5520HC 00000001 MSFT 0100000D)
Aug 12 19:07:29 pmx3host kernel: ACPI: SLIT 000000008f7f0000 00030 (v01 INTEL S5520HC 00000001 MSFT 0100000D)
Aug 12 19:07:29 pmx3host kernel: ACPI: SRAT 000000008f7ef000 00430 (v02 INTEL S5520HC 00000001 MSFT 0100000D)
Aug 12 19:07:29 pmx3host kernel: ACPI: SPCR 000000008f7ee000 00050 (v01 INTEL S5520HC 00000000 MSFT 0100000D)
Aug 12 19:07:29 pmx3host kernel: ACPI: WDDT 000000008f7ed000 00040 (v01 INTEL S5520HC 00000000 MSFT 0100000D)
Aug 12 19:07:29 pmx3host kernel: ACPI: SSDT 000000008f7d2000 1AFC4 (v02 INTEL SSDT PM 00004000 INTL 20061109)
Aug 12 19:07:29 pmx3host kernel: ACPI: SSDT 000000008f7d1000 001D8 (v02 INTEL IPMI 00004000 INTL 20061109)
Aug 12 19:07:29 pmx3host kernel: ACPI: HEST 000000008f7d0000 000A8 (v01 INTEL S5520HC 00000001 INTL 00000001)
Aug 12 19:07:29 pmx3host kernel: ACPI: BERT 000000008f7cf000 00030 (v01 INTEL S5520HC 00000001 INTL 00000001)
Aug 12 19:07:29 pmx3host kernel: ACPI: ERST 000000008f6f0000 00230 (v01 INTEL S5520HC 00000001 INTL 00000001)
Aug 12 19:07:29 pmx3host kernel: ACPI: EINJ 000000008f6ef000 00130 (v01 INTEL S5520HC 00000001 INTL 00000001)
Aug 12 19:07:29 pmx3host kernel: ACPI: DMAR 000000008f6ee000 001C8 (v01 INTEL S5520HC 00000001 MSFT 0100000D)
Aug 12 19:07:29 pmx3host kernel: ACPI: Local APIC address 0xfee00000
Aug 12 19:07:29 pmx3host kernel: SRAT: PXM 0 -> APIC 0 -> Node 0
Aug 12 19:07:29 pmx3host kernel: SRAT: PXM 1 -> APIC 16 -> Node 1
Aug 12 19:07:29 pmx3host kernel: SRAT: PXM 0 -> APIC 2 -> Node 0
Aug 12 19:07:29 pmx3host kernel: SRAT: PXM 1 -> APIC 18 -> Node 1
Aug 12 19:07:29 pmx3host kernel: SRAT: PXM 0 -> APIC 4 -> Node 0
Aug 12 19:07:29 pmx3host kernel: SRAT: PXM 1 -> APIC 20 -> Node 1
Aug 12 19:07:29 pmx3host kernel: SRAT: PXM 0 -> APIC 6 -> Node 0
Aug 12 19:07:29 pmx3host kernel: SRAT: PXM 1 -> APIC 22 -> Node 1
Aug 12 19:07:29 pmx3host kernel: SRAT: PXM 0 -> APIC 1 -> Node 0
Aug 12 19:07:29 pmx3host kernel: SRAT: PXM 1 -> APIC 17 -> Node 1
Aug 12 19:07:29 pmx3host kernel: SRAT: PXM 0 -> APIC 3 -> Node 0
Aug 12 19:07:29 pmx3host kernel: SRAT: PXM 1 -> APIC 19 -> Node 1
Aug 12 19:07:29 pmx3host kernel: SRAT: PXM 0 -> APIC 5 -> Node 0
Aug 12 19:07:29 pmx3host kernel: SRAT: PXM 1 -> APIC 21 -> Node 1
Aug 12 19:07:29 pmx3host kernel: SRAT: PXM 0 -> APIC 7 -> Node 0

Here is my pveversion -v

# pveversion -v
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-19-pve
proxmox-ve-2.6.32: 2.3-96
pve-kernel-2.6.32-19-pve: 2.6.32-96
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-20
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-7
vncterm: 1.0-4
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-10
ksm-control-daemon: 1.1-1
#
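
Since nothing useful makes it to disk before the hang, my next step is to capture kernel messages over the network with netconsole (a rough sketch; IPs, interface and MAC address are placeholders for my LAN):

Code:

# on the Proxmox host: forward kernel messages via UDP to another machine
modprobe netconsole netconsole=6665@192.168.1.10/eth0,6666@192.168.1.20/00:11:22:33:44:55
# on the receiving machine: listen for them
nc -u -l -p 6666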


Any ideas of what could be happening?

Regards,

Feature suggestion: Network View

Not sure if there is a section for suggesting Proxmox features, so I am going to post it here.

Would a "Network View" be a good feature to add to Proxmox, much like the Server/Storage views? At a glance it would show how many bridges are set up, and how many and which VMs are connected to which bridges/bonds/NICs, etc.

Proxmox compatibility with Android

The current Proxmox 2 or 3 is still not really usable on Android:
- The horizontal scroll bar does not appear (this could easily be fixed by forcing the horizontal scroll bar to show)
- The JRE is not yet available for Android
- Proxmox is very slow on Android, even with 2 cores @ 1.5 GHz and 1.5 GB of RAM

Big problem: sda becomes sdb

Hello,

I am using Proxmox 3.
The installation is on a SATA HDD as sda.
As soon as I attach a second drive (USB or SATA), my sda becomes sdb.

All my experiments with fstab UUIDs, usbmount, etc. have failed.
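
This is the kind of entry I have been testing (the UUID is a placeholder taken from blkid):

Code:

blkid /dev/sdb1        # look up the UUID of the second drive
# /etc/fstab line referencing the UUID instead of the device name
UUID=0a1b2c3d-aaaa-bbbb-cccc-0123456789ab  /mnt/usbdisk  ext4  defaults,nofail  0  2
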
Could someone please give me a hint?

thank you

Network configuration problem with NAT and OpenVZ

Hi !

I have a server running Proxmox, with an OpenVZ setup of many VEs behind NAT.
But I have a problem with my network configuration!

We have Apache running in a VE (prod-web-1), and from this same VE I can't reach a domain hosted on it (tutu.fr for example, via the public IP A.B.C.D):
Code:

root@prod-web-1:~# telnet tutu.fr 80
Trying A.B.C.D...

It's OK when we use localhost or the private IP of the VE (192.168.0.101):
Code:

root@prod-web-1:~# telnet localhost 80
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

Code:

root@prod-web-1:~# telnet 192.168.0.101 80
Trying 192.168.0.101...
Connected to 192.168.0.101.
Escape character is '^]'.


It works from the node (tanenbaum) :
Code:

root@tanenbaum:~# telnet tutu.fr 80
Trying A.B.C.D...
Connected to tutu.fr.
Escape character is '^]'.

and from other VEs (prod-bdd-1 for example):
Code:

root@prod-bdd-1:~# telnet tutu.fr 80
Trying A.B.C.D...
Connected to tutu.fr.
Escape character is '^]'.

My configuration :

NAT :
Code:

root@tanenbaum:~# iptables -L -v -t nat
Chain PREROUTING (policy ACCEPT 69 packets, 4311 bytes)
 pkts bytes target    prot opt in    out    source              destination
  86  4584 DNAT      tcp  --  any    any    anywhere            srv1.toto.fr      tcp dpt:http to:192.168.0.101:80
    0    0 DNAT      tcp  --  eth0  any    anywhere            srv1.toto.fr      tcp dpt:ftp to:192.168.0.101:21
    0    0 DNAT      tcp  --  eth0  any    anywhere            srv1.toto.fr      tcp dpts:4242:4300 to:192.168.0.101

Chain POSTROUTING (policy ACCEPT 247 packets, 27946 bytes)
 pkts bytes target    prot opt in    out    source              destination
  13  819 SNAT      all  --  any    any    192.168.0.0/24      !192.168.0.0/24      to:A.B.C.D

Chain OUTPUT (policy ACCEPT 117 packets, 20722 bytes)
 pkts bytes target    prot opt in    out    source              destination
    0    0 DNAT      tcp  --  any    any    anywhere            srv1.toto.fr      tcp dpt:http to:192.168.0.101:80
    0    0 DNAT      tcp  --  any    any    anywhere            srv1.toto.fr      tcp dpt:ftp to:192.168.0.101:21
    0    0 DNAT      tcp  --  any    any    anywhere            srv1.toto.fr      tcp dpts:4242:4300 to:192.168.0.101

Filter :
Code:

root@tanenbaum:~# iptables -L -v
Chain INPUT (policy DROP 3 packets, 152 bytes)
 pkts bytes target    prot opt in    out    source              destination
  44  3710 ACCEPT    all  --  lo    any    anywhere            anywhere
  437 34317 ACCEPT    all  --  any    any    anywhere            anywhere            state RELATED,ESTABLISHED
    0    0 ACCEPT    tcp  --  any    any    anywhere            anywhere            tcp dpt:https state NEW
    0    0 ACCEPT    tcp  --  any    any    anywhere            anywhere            tcp dpt:6984 state NEW
    0    0 ACCEPT    tcp  --  eth0  any    cache.ovh.net        anywhere            tcp dpt:ssh
    0    0 ACCEPT    tcp  --  any    any    anywhere            anywhere            tcp dpt:8006 state NEW
  12  952 ACCEPT    icmp --  any    any    anywhere            anywhere
    0    0 ACCEPT    tcp  --  any    any    torvalds.toto.fr  anywhere            tcp dpt:mysql state NEW
    0    0 ACCEPT    all  --  any    any    192.168.0.0/24      anywhere

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target    prot opt in    out    source              destination
15265 9481K ACCEPT    all  --  any    any    192.168.0.0/24      anywhere
 1714  568K ACCEPT    all  --  any    any    anywhere            192.168.0.0/24

Chain OUTPUT (policy ACCEPT 156 packets, 26429 bytes)
 pkts bytes target    prot opt in    out    source              destination
  44  3710 ACCEPT    all  --  any    lo      anywhere            anywhere
  415  156K ACCEPT    all  --  any    any    anywhere            anywhere            state RELATED,ESTABLISHED


Ip forwarding is enabled :
Code:

root@tanenbaum:~# cat /proc/sys/net/ipv4/ip_forward
1

Network configuration :
Code:

root@tanenbaum:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address E.F.G.H
        netmask 255.255.255.0
        network E.F.G.0
        broadcast E.F.G.255
        gateway E.F.G.254
        # IP Failover
        post-up /sbin/ifconfig eth0:0 A.B.C.D netmask 255.255.255.255 broadcast A.B.C.D
        post-down /sbin/ifconfig eth0:0 down
        post-up /sbin/ifconfig eth0:1 192.168.0.1 netmask 255.255.255.0 broadcast 192.168.0.255
        post-down /sbin/ifconfig eth0:1 down

The routing table from the node :
Code:

root@tanenbaum:~# route
Kernel IP routing table
Destination    Gateway        Genmask        Flags Metric Ref    Use Iface
prod-infra-1    *              255.255.255.255 UH    0      0        0 venet0
bck-bdd-1      *              255.255.255.255 UH    0      0        0 venet0
prod-bdd-1      *              255.255.255.255 UH    0      0        0 venet0
prod-mail-1    *              255.255.255.255 UH    0      0        0 venet0
prod-web-1      *              255.255.255.255 UH    0      0        0 venet0
E.F.G.0    *              255.255.255.0  U    0      0        0 eth0
default        E.F.G.254  0.0.0.0        UG    0      0        0 eth0

I have tried a lot of things, but I'm still not able to get full access to my sites from my Apache VE (prod-web-1) :(
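
The only rule I can think of that might be missing is a hairpin (loopback) SNAT, so that connections from the private subnet to the DNATed address get their source rewritten as well; something like this (untested sketch):

Code:

iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -d 192.168.0.101 -p tcp --dport 80 -j MASQUERADE
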
Do you have an idea ?

Thanks !
Romain

Best way to use qcow2 on an iSCSI storage

Hi, we already have an iSCSI storage system installed. We're using the iSCSI LUN as an LVM group, but we would like to use qcow2 files, which can normally only be used on local (directory) storage. What we are trying is to use the iSCSI storage as a direct LUN, format it, mount it to a directory, and define that directory as a storage. Then we can use qcow2-formatted disk images for the KVM machines.
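
Roughly what we are doing right now (device name, mount point and storage ID are placeholders):

Code:

mkfs.ext4 /dev/sdX                      # the raw iSCSI LUN
mkdir -p /mnt/iscsi-qcow
mount /dev/sdX /mnt/iscsi-qcow
pvesm add dir iscsi-qcow --path /mnt/iscsi-qcow --content images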

Is this the best (or only) method, or is anybody using other methods for qcow2 files on an iSCSI storage?

Thanks
Gokalp

VE3.x: KVM Disk configuration for 14TB and a single VM

hi,

I configured an iSCSI volume (14 TB) and created a volume group on our VE 3 cluster (3 nodes). Now I want to assign the whole 14 TB to one single VM (a Bacula backup server). What is the best solution? Should I create one single 14878.72 GB image, or several disk images (for example 7x2 TB)?

cu denny

enterprise.proxmox.com 401, Can I ignore this error safely?

Code:

Err https://enterprise.proxmox.com wheezy/pve-enterprise amd64 Packages
  The requested URL returned error: 401
Ign https://enterprise.proxmox.com wheezy/pve-enterprise Translation-en_US
Ign https://enterprise.proxmox.com wheezy/pve-enterprise Translation-en
W: Failed to fetch https://enterprise.proxmox.com/debian/dists/wheezy/pve-enterprise/binary-amd64/Packages  The requested URL returned error: 401


E: Some index files failed to download. They have been ignored, or old ones used instead.

Can I ignore this error safely?
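
If the answer is yes, my plan is simply to silence apt like this (untested; the repository line is my assumption for PVE 3.x on wheezy without a subscription):

Code:

# comment out the enterprise repository, which needs a valid subscription key
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
# and use the public no-subscription repository instead
echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" >> /etc/apt/sources.list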

failing backups

I noticed my backups recently started failing:

Code:

INFO: starting new backup job: vzdump --quiet 1 --mailto  proxmox@example.com --mode snapshot --compress gzip --storage  backup0 --all 1
INFO: Starting Backup of VM 100 (openvz)
INFO: CTID 100 exist mounted running
INFO: status = running
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: trying to remove stale snapshot '/dev/pve/vzsnap-proxmox1-0'
INFO: umount: /mnt/vzsnap0: not mounted
ERROR: command 'umount /mnt/vzsnap0' failed: exit code 1
INFO:  /dev/pve/vzsnap-proxmox1-0: read failed after 0 of 4096 at 184771608576: Input/output error
INFO:  /dev/pve/vzsnap-proxmox1-0: read failed after 0 of 4096 at 184771665920: Input/output error
INFO:  /dev/pve/vzsnap-proxmox1-0: read failed after 0 of 4096 at 0: Input/output error
INFO:  /dev/pve/vzsnap-proxmox1-0: read failed after 0 of 4096 at 4096: Input/output error
INFO:  Logical volume "vzsnap-proxmox1-0" successfully removed
INFO: creating lvm snapshot of /dev/mapper/pve-data ('/dev/pve/vzsnap-proxmox1-0')
INFO:  Logical volume "vzsnap-proxmox1-0" created
INFO: creating archive '/var/lib/backup0/dump/vzdump-openvz-100-2013_08_10-03_00_02.tar.gz'
INFO: gzip: stdout: No space left on device
INFO: lvremove failed - trying again in 8 seconds
INFO: lvremove failed - trying again in 16 seconds
INFO: lvremove failed - trying again in 32 seconds
ERROR: command 'lvremove -f /dev/pve/vzsnap-proxmox1-0' failed: exit code 5
ERROR:  Backup of VM 100 failed - command '(cd /mnt/vzsnap0/private/100;find .  '(' -regex '^\.$' ')' -o '(' -type 's' -prune ')' -o -print0|sed  's/\\/\\\\/g'|tar cpf - --totals --sparse --numeric-owner --no-recursion  --one-file-system --null -T -|gzip)  >/var/lib/backup0/dump/vzdump-openvz-100-2013_08_10-03_00_02.tar.dat'  failed: exit code 1
INFO: Backup job finished with errors
TASK ERROR: job errors

This may have to do with my recent upgrade to Proxmox 3, or maybe it's just a coincidence.

I have confirmed that /mnt/vzsnap0 is a directory and nothing is mounted on it.
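
The "No space left on device" from gzip points at the backup target itself, so this is what I am checking (paths taken from the log above; the maxfiles pruning is just my assumption of how old dumps are supposed to be cleaned up):

Code:

df -h /var/lib/backup0            # free space on the backup target
ls -lh /var/lib/backup0/dump      # are old dumps piling up?
# let vzdump prune old backups by limiting how many are kept per VM
pvesm set backup0 --maxfiles 3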

Was something managing my backups by removing old dumps that is broken now? Or is it just a coincidence that my backups are now breaking because of space issues?

ProxMox 3.0 IMS fencing problem

I have found a problem on our IMS system: fencing does not work. When I try to fence node timo:
Code:

fence_node timo -vv
I get:
Code:

fence timo dev 0.0 agent fence_intelmodular result: error from agent
agent args: port=3 nodename=timo agent=fence_intelmodular ipaddr=xxx.xxx.xxx.xxx login=snmpv3user passwd=xxxxxxxxx snmp_auth_prot=SHA snmp_sec_level=auth snmp_version=3
fence timo failed

And in the /var/log/messages:
Code:

Aug 13 19:24:44 sint fence_intelmodular: Parse error: Ignoring unknown option 'nodename=timo
It looks like something is going wrong with the translation of the node name to the port number.

I have searched the internet and this forum, but I can't find anything. The code does not give me the answer either, because it is new to me. Does anyone have a pointer on where to start?
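
To narrow it down, I think the call that fenced makes can be reproduced by feeding the agent the same key=value arguments on stdin, once with and once without the offending nodename line (values are the placeholders from below):

Code:

echo "action=status
port=3
ipaddr=xxx.xxx.xxx.xxx
login=snmpv3user
passwd=xxxxxxxxx
snmp_version=3
snmp_auth_prot=SHA
snmp_sec_level=auth" | fence_intelmodular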

Additional information:
/etc/pve/cluster.conf:
Code:

<?xml version="1.0"?><cluster config_version="102" name="IMS-EVO">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>
  <fencedevices>
    <fencedevice agent="fence_intelmodular" ipaddr="xxx.xxx.xxx.xxx" login="snmpv3user" name="ims" passwd="xxxxxxxxx" snmp_auth_prot="SHA" snmp_sec_level="auth" snmp_version="3"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="sint" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="ims" port="1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="rembo" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="ims" port="2"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="timo" nodeid="3" votes="1">
      <fence>
        <method name="1">
          <device name="ims" port="3"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm/>
</cluster>

clustat:
Code:

Cluster Status for IMS-EVO @ Tue Aug 13 19:54:54 2013
Member Status: Quorate


 Member Name                                                    ID  Status
 ------ ----                                                    ---- ------
 sint                                                                1 Online, Local
 rembo                                                              2 Online
 timo                                                                3 Online

(No rgmanager, but I haven't set up any HA VMs at this point because of this.)

pvecm
Code:

Version: 6.2.0
Config Version: 102
Cluster Name: IMS-EVO
Cluster Id: 9351
Cluster Member: Yes
Cluster Generation: 94868
Membership state: Cluster-Member
Nodes: 3
Expected votes: 3
Total votes: 3
Node votes: 1
Quorum: 2
Active subsystems: 6
Flags:
Ports Bound: 0 177
Node name: sint
Node ID: 1
Multicast addresses: 239.192.36.171
Node addresses: 10.221.184.60

pveversion -v
Code:

pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-22-pve
proxmox-ve-2.6.32: 3.0-107
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-18-pve: 2.6.32-88
pve-kernel-2.6.32-7-pve: 2.6.32-60
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-23
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1

fence_tool ls
Code:

fence domain
member count  3
victim count  0
victim now    0
master nodeid 2
wait state    none
members      1 2 3

VLAN Proxmox 1.9

Hello, please help me with my datacenter!!!! Thanks!