Channel: Proxmox Support Forum

network not responding on proxmox 3.0

Hi,

I am using Proxmox 3.0 with 4 interfaces: eth0 as vmbr0, and eth1, eth2, eth3 bonded as vmbr1.

Frequently I am not able to connect to the Proxmox server and the VMs; once I restart networking it starts working again. I also cannot ping localhost, and I get "connect: No buffer space available".
I have already changed some sysctl variables:

net.ipv4.neigh.default.gc_interval = 3600
net.ipv4.neigh.default.gc_stale_time = 3600
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh1 = 1024
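
For reference, a minimal sketch (assuming a Debian-based Proxmox host; the threshold values just mirror the ones above) of how the neighbour table usage can be checked and how the raised limits can be made persistent:

Code:

# count current IPv4 neighbour (ARP) entries to see how close they get to gc_thresh3
ip -4 neigh show | wc -l

# apply the raised limits at runtime
sysctl -w net.ipv4.neigh.default.gc_thresh1=1024
sysctl -w net.ipv4.neigh.default.gc_thresh2=2048
sysctl -w net.ipv4.neigh.default.gc_thresh3=4096

# persist them by adding the same settings to /etc/sysctl.conf, then reload
sysctl -p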

syslog error:

Mar 18 16:00:05 proxmox1 kernel: __ratelimit: 3705 callbacks suppressed
Mar 18 16:00:05 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:05 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:05 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:05 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:05 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:05 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:05 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:05 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:05 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:05 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:10 proxmox1 kernel: __ratelimit: 3317 callbacks suppressed
Mar 18 16:00:10 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:10 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:10 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:10 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:10 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:10 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:10 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:10 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:10 proxmox1 kernel: Neighbour table overflow.
Mar 18 16:00:10 proxmox1 kernel: Neighbour table overflow.



regards,
asaguru

VNC connection

Can anyone help point me in the right direction for enabling users besides root@pam to connect via an external VNC client?

One of the last relevant posts I found regarding this issue was from several years back and I cannot find anything newer. The post
is here: http://forum.proxmox.com/threads/715...nal-VNC-Client

I also followed the procedure from the wiki for old VNC clients, which works and which I would like to use, though I cannot figure out how to implement the password. When I add the password
to the end of the config (nano /etc/pve/local/qemu-server/100.conf), it simply does not take effect when attempting a connection, even after restarting the services, the VM, and the host. Also,
this method seems to disable the web GUI console VNC connection...
https://pve.proxmox.com/wiki/Vnc_2.0
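
For reference, this is roughly what I tried based on that wiki page. The display number 77 and the password are just placeholders I made up, and as I understand it the password set this way lives only in the running QEMU process, so it has to be set again after every VM start:

Code:

# line added to /etc/pve/local/qemu-server/100.conf (":77" means VNC on TCP port 5977)
args: -vnc 0.0.0.0:77,password

# after starting the VM, set the password through the QEMU monitor
qm monitor 100
qm> set_password vnc mysecret
qm> quit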

Much appreciated,
the nub

Max number of nodes in a PVE 3.2 cluster

Hello,

I'm looking to build a larger cluster using PVE 3.2 with Ceph and wanted to know the maximum size I can go to.

We are currently looking at 18 nodes with:

- Two 8-core AMD CPUs
- 128 GB RAM
- 2 smaller SAS disks for the base OS
- 4 x 4 TB SAS drives as Ceph OSDs (dedicated to Ceph; may expand to 8 x 4 TB disks)
- 2 x 10 Gbit Ethernet NICs, bonded
- due to cost, the Ceph journals will probably go on each spinner...

I had heard that the max cluster size for PVE was 16 nodes. Is this still the case? If so, I guess I need to split this into two clusters of 9 nodes each.... :(

Thanks,

-Glen


On PVE V3.2, can I install a ceph MDS for CephFS?

Hello,

I was able to install and test a small cluster using PVE 3.2 and set up a Ceph cluster. I would also like to set up an MDS so I can start testing CephFS.

I know CephFS is not yet considered production-ready, but I need to do some testing so that I'm ready when it does become production-ready.

I don't see a way to set up an MDS using the new pveceph installation tool. I'm guessing I could do it by hand, but I want to make sure I don't do something wrong.

Is it possible, and what do I need to be careful of?
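
In case it helps, this is the rough by-hand procedure I had in mind, taken from the generic Ceph docs rather than anything pveceph-specific, so the MDS name, the caps and the init handling are assumptions on my part:

Code:

# name the MDS after the node and create its data directory
MDS_NAME=$(hostname -s)
mkdir -p /var/lib/ceph/mds/ceph-$MDS_NAME

# create a keyring for the new MDS daemon (broad caps, for testing only)
ceph auth get-or-create mds.$MDS_NAME \
    mds 'allow' osd 'allow rwx' mon 'allow rwx' \
    -o /var/lib/ceph/mds/ceph-$MDS_NAME/keyring

# add a matching [mds.$MDS_NAME] section with "host = <node>" to /etc/ceph/ceph.conf,
# then start the daemon via the sysvinit script
service ceph start mds.$MDS_NAME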

Thanks,

-Glen

Archive RBL / Blacklisted emails for product evaluation

Hello,

Is it possible to archive blacklisted emails for inspection? They seem to go into a black hole even after changing the action from block to archive.

OpenVZ: adding a parameter to GUEST KERNEL.

Hello,

a very short question: how can I add "--verbose" to my GUEST's kernel?

I've already tried inserting this option in the /etc/default/grub file, yet I cannot run update-grub because it exits with the following output:

root@VMLAMP01 ~# update-grub
Generating grub.cfg ...
Cannot find list of partitions! (Try mounting /sys.)
done

I built this GUEST from a template, ubuntu-10.04-turnkey-lamp_11.3-1_i386.tar.gz.
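
(For context, my understanding is that an OpenVZ CT has no kernel of its own but runs on the host's kernel, which would explain why grub has nothing to work with inside the guest. A quick check from the host; 101 is just an example CTID here:)

Code:

# kernel the host is running
uname -r

# kernel the CT sees -- the same one, since OpenVZ guests share the host kernel
vzctl exec 101 uname -r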

Please, help me.

Best Regards.

Network Problem

Hi,

I am from Vietnam and my English is not very good. I have a network problem on my CT. Please read the details below:
Quote:

Before, when I created the CT I just entered the IP address xxx.244 (venet) and my CT worked fine with internet connectivity; I did not need to create a vMAC and assign it to my CT. Earlier, I mistakenly created a vMAC in the OVH Manager with that IP address xxx.244 and then deleted the vMACs. But now my CT has no internet connectivity, and I did not change anything in my CT's configuration.

So where is the problem coming from: OVH or my node?
I really need to fix it ASAP.

Regards,

vzrestore from Ubuntu to CentOS breaks networking

Hello,

I noticed that if I have an Ubuntu container, and later do a vzrestore of a CentOS container, the OSTEMPLATE variable is not changed in the container config. The side effect is that when I start the new container, I get:

Code:

root@mc02:~# vzctl start 102
Starting container ...
Container is mounted
Adding IP address(es): 10.0.0.100
/bin/bash: line 504: /etc/network/interfaces: No such file or directory
grep: /etc/network/interfaces: No such file or directory
/bin/bash: line 517: /etc/network/interfaces: No such file or directory
/bin/bash: line 540: /etc/network/interfaces: No such file or directory
/bin/bash: line 547: /etc/network/interfaces: No such file or directory
cp: cannot stat `/etc/network/interfaces': No such file or directory
/bin/bash: line 571: /etc/network/interfaces.bak: No such file or directory
mv: cannot stat `/etc/network/interfaces.bak': No such file or directory
Setting CPU units: 1000
Setting CPUs: 1
Container start in progress...

Of course, /etc/network/interfaces is where network settings are stored on Ubuntu. vzctl is attempting to check the interfaces file, but since the container is CentOS, that file doesn't exist.

If I change:
Code:

OSTEMPLATE="ubuntu-12.04-standard_12.04-1_i386.tar.gz"
to:
Code:

OSTEMPLATE="centos-6-standard_6.3-1_i386.tar.gz"
everything then works perfectly.

Is it possible to get the variable in the config file updated when we pass the -ostemplate parameter? I think that this would be the ideal behavior.
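
In the meantime, this is the workaround I use by hand right after the restore (a rough sketch; CTID 102 and the template name are from my example above, and I'm assuming the config lives at /etc/pve/openvz/102.conf on the node):

Code:

# record the correct template in the CT config so the distro-specific
# network scripts are chosen on the next start
vzctl set 102 --ostemplate centos-6-standard_6.3-1_i386.tar.gz --save

# or simply edit the value directly in the config file
sed -i 's/^OSTEMPLATE=.*/OSTEMPLATE="centos-6-standard_6.3-1_i386.tar.gz"/' /etc/pve/openvz/102.conf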

Thanks!

Is relying on nextid for vmid via API safe?

Via the API, there doesn't appear to be a way to create a container without specifying the vmid. To get the vmid, I would do GET /api2/json/cluster/nextid. My concern with this approach is that in a situation where there are multiple people requesting resources at the same time, the same vmid could be requested. I know that the first person wins, and that subsequent create requests with the same vmid will fail, but is there a way to avoid that altogether? What workarounds are available for this situation, if any?
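
For illustration, a retry-on-collision loop would look roughly like this with pvesh (the container create parameters are only placeholders):

Code:

for attempt in 1 2 3 4 5; do
    VMID=$(pvesh get /cluster/nextid)
    # try to create a CT with that id; the create fails if someone else grabbed it first
    if pvesh create /nodes/$(hostname)/openvz -vmid $VMID \
           -ostemplate local:vztmpl/centos-6-standard_6.3-1_i386.tar.gz \
           -hostname "ct$VMID"; then
        echo "created CT $VMID"
        break
    fi
    echo "vmid $VMID was already taken, retrying..."
done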

Thanks!

KVM Console

I cannot console in to any KVM instances; I even tried restarting my host. The error is:

Network error: remote side closed connection

Please help! I can console in to the OpenVZ instances fine.

vlans in Proxmox 3.2 not functioning as expected

Hi there,

I just started playing around with Proxmox 3.2 and observed a rather odd behavior regarding vlans.

My setup is pretty simple:
eth0: external interface to network switch
vmbr0: bridges eth0 (created from Proxmox GUI)

On the switch port that eth0 is connected to I have several tagged vlans (assume vlan ids 2,3,4) besides a native vlan (let's assume 1).

I then create a VM id 100 with one non-tagged network interface.
brctl show gives:

Code:

bridge name     bridge id               STP enabled     interfaces
vmbr0           8000.0022195d7538       no              eth0
                                                        tap100i0

Thereafter I start the VM and want it to boot from PXE. The switch has a DHCP helper configured that redirects any requests to a defined IP.
The VM does receive a DHCP lease for IP 10.10.11.100/24 with gateway 10.10.11.1, together with TFTP information, and starts trying to access the TFTP server at the given IP 10.10.10.10, but only receives a timeout.

I investigated further and for that purpose I set up tcpdump listening on interfaces eth0 and tap100i0.
Outcome: After the DHCP ack, the VM sends an ARP who-has request for its gateway 10.10.11.1 in order to reach the TFTP IP 10.10.10.10, which is outside its own subnet. The requests are observed on both interfaces.
BUT: the ARP replies are only seen on interface eth0 and never make it to the VM interface tap100i0.

I tried everything imaginable to find the cause. It turns out that once I create a second VM 101 with a tagged interface for each vlan configured on the switch port connecting to eth0, AND after I have stopped and started VM 100 again, the ARP replies do reach the tap100i0 interface.

Out of curiosity I tried to replicate this behavior and removed the VM 101. I confirmed that Proxmox deleted all the associated bridges and only vmbr0 was left. I stopped and started VM 100 once again but ARP is still being received.

So I started all over again with a fresh Proxmox installation and the exact same initial config (one VM 100 with just one untagged interface), and again ARP is not received on the VM. Once I add tagged interfaces for all the tagged vlans, it works. Once I add another tagged vlan on the switch port, it stops working. Once I remove tagged vlans from the switch port, it works again.

In a typical production environment, where each VM has just one unique tagged vlan for separation, and those VMs are spread over multiple Proxmox servers that have all vlans tagged on eth0 so that VM networking keeps working after migration, this could cause a lot of trouble.

Unfortunately I have not yet been able to find out what exactly changes after the dummy interfaces are created. Of course a new bridge is added for every tagged vlan, but after removing the interfaces the bridges are gone as well, and brctl looks the same as before. Still, there must be some difference / state change left.
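
For reference, the bridge state before and after creating the tagged dummy interfaces can be compared with the standard tooling, e.g.:

Code:

# bridge membership (vmbr0 plus any vmbr0vX vlan bridges Proxmox adds for tagged interfaces)
brctl show

# MAC addresses the bridge has learned on each port
brctl showmacs vmbr0

# detailed link state of the uplink and the VM's tap device
ip -d link show eth0
ip -d link show tap100i0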

Maybe dietmar can shed some light on this?
What else differentiates the config after the tagged bridges have been created once?
Does Proxmox 3.2 introduce any changes to the bridging components or the virtual network adapter firmware? For the above tests I used the virtio network driver.

ACPI Problems? Bug?

My KVM guests sometimes fail to start, and starting them only works if I retry several times.
I am using the latest Proxmox server, installed from the ISO.


I have found several warnings in dmesg:

dmesg | grep ACPI


BIOS-e820: 00000000cca9e000 - 00000000ccaaf000 (ACPI data)
BIOS-e820: 00000000ccaaf000 - 00000000ccbd4000 (ACPI NVS)
BIOS-e820: 00000000cd7f5000 - 00000000cd838000 (ACPI NVS)
ACPI: RSDP 00000000000f0490 00024 (v02 ALASKA)
ACPI: XSDT 00000000ccaa1078 0006C (v01 ALASKA A M I 01072009 AMI 00010013)
ACPI: FACP 00000000ccaac560 0010C (v05 ALASKA A M I 01072009 AMI 00010013)
ACPI Warning: FADT (revision 5) is longer than ACPI 2.0 version, truncating length 0x10C to 0xF4 (20090903/tbfadt-288)
ACPI: DSDT 00000000ccaa1180 0B3D9 (v02 ALASKA A M I 00000022 INTL 20051117)
ACPI: FACS 00000000ccbd2080 00040
ACPI: APIC 00000000ccaac670 00092 (v03 ALASKA A M I 01072009 AMI 00010013)
ACPI: FPDT 00000000ccaac708 00044 (v01 ALASKA A M I 01072009 AMI 00010013)
ACPI: MCFG 00000000ccaac750 0003C (v01 ALASKA A M I 01072009 MSFT 00000097)
ACPI: HPET 00000000ccaac790 00038 (v01 ALASKA A M I 01072009 AMI. 00000005)
ACPI: SSDT 00000000ccaac7c8 0036D (v01 SataRe SataTabl 00001000 INTL 20091112)
ACPI: SSDT 00000000ccaacb38 009AA (v01 PmRef Cpu0Ist 00003000 INTL 20051117)
ACPI: SSDT 00000000ccaad4e8 00A92 (v01 PmRef CpuPm 00003000 INTL 20051117)
ACPI: DMAR 00000000ccaadfd8 000B0 (v01 INTEL SNB 00000001 INTL 00000001)
ACPI: Local APIC address 0xfee00000
ACPI: PM-Timer IO Port: 0x408
ACPI: Local APIC address 0xfee00000
ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
ACPI: IRQ0 used by override.
ACPI: IRQ2 used by override.
ACPI: IRQ9 used by override.
Using ACPI (MADT) for SMP configuration information
ACPI: HPET id: 0x8086a701 base: 0xfed00000
ACPI: Core revision 20090903
PM: Registering ACPI NVS region at ccaaf000 (1200128 bytes)
PM: Registering ACPI NVS region at cd7f5000 (274432 bytes)
ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
ACPI: bus type pci registered
ACPI: EC: Look up EC in DSDT
ACPI: Executed 1 blocks of module-level executable AML code
ACPI Error (psargs-0359): [RAMB] Namespace lookup failure, AE_NOT_FOUND
ACPI Exception: AE_NOT_FOUND, Could not execute arguments for [RAMW] (Region) (20090903/nsinit-347)
ACPI: Interpreter enabled
ACPI: (supports S0 S3 S4 S5)
ACPI: Using IOAPIC for interrupt routing
ACPI: Power Resource [FN00] (off)
ACPI: Power Resource [FN01] (off)
ACPI: Power Resource [FN02] (off)
ACPI: Power Resource [FN03] (off)
ACPI: Power Resource [FN04] (off)
ACPI: No dock devices found.
PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-3e])
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.RP01._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PEG0._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.RP05._PRT]
pci0000:00: Requesting ACPI _OSC control (0x1d)
pci0000:00: ACPI _OSC control (0x18) granted
ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 10 *11 12 14 15)
ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 10 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LNKC] (IRQs 3 *4 5 6 10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 *5 6 10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 10 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 *10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 6 10 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LNKH] (IRQs *3 4 5 6 10 11 12 14 15)
PCI: Using ACPI for IRQ routing
pnp: PnP ACPI init
ACPI: bus type pnp registered
pnp 00:00: Plug and Play ACPI device, IDs PNP0a08 PNP0a03 (active)
pnp 00:01: Plug and Play ACPI device, IDs PNP0c01 (active)
pnp 00:02: Plug and Play ACPI device, IDs PNP0200 (active)
pnp 00:03: Plug and Play ACPI device, IDs INT0800 (active)
pnp 00:04: Plug and Play ACPI device, IDs PNP0103 (active)
pnp 00:05: Plug and Play ACPI device, IDs PNP0c02 (active)
pnp 00:06: Plug and Play ACPI device, IDs PNP0b00 (active)
pnp 00:07: Plug and Play ACPI device, IDs INT3f0d PNP0c02 (active)
pnp 00:08: Plug and Play ACPI device, IDs PNP0c02 (active)
pnp 00:09: Plug and Play ACPI device, IDs PNP0c02 (active)
pnp 00:0a: Plug and Play ACPI device, IDs PNP0c04 (active)
pnp 00:0b: Plug and Play ACPI device, IDs PNP0303 PNP030b (active)
pnp 00:0c: Plug and Play ACPI device, IDs PNP0c02 (active)
pnp 00:0d: Plug and Play ACPI device, IDs PNP0c01 (active)
pnp: PnP ACPI: found 14 devices
ACPI: ACPI bus type pnp unregistered
acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
ACPI: Power Button [PWRB]
ACPI: Power Button [PWRF]
ACPI: Fan [FAN0] (off)
ACPI: Fan [FAN1] (off)
ACPI: Fan [FAN2] (off)
ACPI: Fan [FAN3] (off)
ACPI: Fan [FAN4] (off)
ACPI: acpi_idle yielding to intel_idle
ACPI: SSDT 00000000cca4b018 0083B (v01 PmRef Cpu0Cst 00003001 INTL 20051117)
ACPI: SSDT 00000000cca4ca98 00303 (v01 PmRef ApIst 00003000 INTL 20051117)
ACPI: SSDT 00000000cca4dc18 00119 (v01 PmRef ApCst 00003000 INTL 20051117)
ACPI: Thermal Zone [TZ00] (28 C)
ACPI: Thermal Zone [TZ01] (30 C)
ata1.00: ACPI _SDD failed (AE 0x5)
ata2.00: ACPI _SDD failed (AE 0x5)
ata1.00: ACPI _SDD failed (AE 0x5)
ata1.00: ACPI: failed the second time, disabled
ata2.00: ACPI _SDD failed (AE 0x5)
ata2.00: ACPI: failed the second time, disabled
ACPI: resource 0000:00:1f.3 [io 0xf040-0xf05f] conflicts with ACPI region SMBI [io 0xf040-0xf04f]
ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no)

----

dmesg | grep Warning
ACPI Warning: FADT (revision 5) is longer than ACPI 2.0 version, truncating length 0x10C to 0xF4 (20090903/tbfadt-288)



Any suggestions?

PVE 3.2 : no split button for console on one node

Hi,

I have a four-node cluster that was recently updated to PVE 3.2. On node 1, I cannot connect to VMs with SPICE. On the three other nodes it is working fine. I can even connect with SPICE to VMs on this node from the other nodes.

On this node, there is no 'Console viewer' option in the web interface, so I have no split button for the console on that node. I do have a split button on all the other nodes. It seems that something was not upgraded correctly on this node, but with pveversion I don't see any difference between the packages on the different nodes. VNC is working fine, and I have already rebooted the server. I don't see anything special in the logs.
Any idea?

Code:

root@srv-virt1# pveversion -v
proxmox-ve-2.6.32: 3.2-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.2-1 (running version: 3.2-1/1933730b)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-19-pve: 2.6.32-95
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-18-pve: 2.6.32-88
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-14
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

Any way to let user change Display type only (PVEVMUser modifying Display) ?

Hello, now that there is a new SPICE display option, I'd like users to have the right to decide which display to use and to change the Display from "default" to "SPICE". But it seems that among the virtual-machine-related privileges listed here https://pve.proxmox.com/wiki/User_Management there is no option like VM.Config.Display. Is there any way to let a user configure only the display and no other hardware, e.g. a PVEVMUser with permission to modify the Display setting?

Regards, RH

fence_apc issues

Hey all, we are using some AP7920s as fence devices. This has been working out pretty well until our latest cluster. It seems to only affect clusters on PVE 3.2; I don't have any 3.1 installs anymore, but I have 7 PVE 2.3 clusters and none of those nodes have issues.

On our new cluster we get this when trying to access the PDU with fence_apc:

Code:

root@fpracprox1:~# fence_apc -x -l device -p medent168 -a 10.80.5.48 -o status -n 1 -vv
Unable to connect/login to fencing device

The "Unable to connect/login to fencing device" message returns within 1 second of issuing the command, and adding an additional timeout doesn't help at all. I have also tried disabling SSHv1, disabling SSHv2, and having both enabled, all with the same results.

Now, I can get on another server here in house and run the same command. This server is running the exact same version of PVE and packages as the one above:

Code:

root@supprox2:~# fence_apc -x -l device -p medent168 -a 10.80.5.48 -o status -n 1 -vv
device@10.80.5.48's password:

As you can see, I get the password prompt, it takes the password, and I get a status of OK, exactly what I want to see. I then move on to the second node of this very same cluster and get the "Unable to connect" issue. Just like before, it returns almost instantly, so I don't think it's a timeout issue:

Code:

root@supprox1:~# fence_apc -x -l device -p medent168 -a 10.80.5.48 -o status -n 1 -vv
Unable to connect/login to fencing device

One thing I have pinned down is that this is only an issue on PVE 3.2 clusters. I have 7 PVE 2.3 clusters and not a single node has an issue with fence_apc. I am stumped at this point and unsure what to try next, but I do feel it's something related to Proxmox. We have unboxed/configured 3 brand-new PDUs, all with the same results. I think we have 10 or so in stock to try, but I don't think it's a PDU issue.

Snapshot backup can cause KVM guest freeze on proxmox 3.2

Hello. I am using Proxmox 3.2. Currently I am doing manual snapshot backups of my KVM guests. All my guests are KVM on a PVE LVM volume, and I am using a CIFS share mounted at /mnt/backup via fstab. Today my backup server was overloaded and there were some connection problems. I was expecting the backup to fail, but it was still running (stuck at some percentage), BUT one of my guests stopped working properly. It was a web server guest and some virtual hosts were not working. After some research I found some file system exceptions inside this guest (in dmesg). There were no issues in the Proxmox host's dmesg, so it is probably not a hardware issue. The problem was solved by manually stopping the running snapshot backup; after that, everything works well.

Is this issue related to "out of order" snapshot backup in Proxmox? Should I use NFS instead of CIFS?

Raider is a tool to automate linux software raid conversion

Just found this nice tool: Raider, a tool to automate Linux software RAID conversion.
See: http://raider.sourceforge.net/

I hope this tool supports Proxmox VE. Has anyone tried it?

Installing from USB Drive issues

Hello all,

I work at a data center and would like to get the 3 current versions of Proxmox to install off of a USB thumb drive. It is hard to say WHAT program I use to load all my ISOs with; I'm not sure if it is FiraDisk or grub4dos, but it all comes from Easy2Boot. I have found some threads here where people got it to work, but not HOW. So I am asking here if anyone might be able to help me get this working.

The USB drive boots, but after some time I get a boot screen with:

Code:

testing cdrom /dev/sr0
mount: mounting /dev/sr0 on /mnt failed: Invalid argument
umount: can't umount /mnt: Invalid argument

This message is repeated a few times and ends with "no cdrom found - unable to continue (type exit or CTRL-D to reboot)".

Thanks for the help,
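
For what it's worth, the only approach I know is documented for Proxmox is writing the ISO raw to the stick with dd, which bypasses Easy2Boot entirely (this wipes the stick; /dev/sdX and the ISO file name below are placeholders):

Code:

# identify the USB stick first -- double-check the device name!
lsblk

# write the Proxmox VE ISO raw to the stick (destroys the Easy2Boot setup on it)
dd if=proxmox-ve_3.2-xxxx.iso of=/dev/sdX bs=1M
sync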


virtual SME 8.1 64bit Kernel panic - not syncing: IO-APIC + timer doesn't work!
