Channel: Proxmox Support Forum

unable to mount raw file

Hello,

We have a VM (Windows Server 2008) running on a Proxmox node (version 2.3). The problem is that it stopped working with a blue screen error (we have not made any changes to the server recently). We tried rebooting, but it ends in a blue screen again. Now the server is not accessible and we are unable to retrieve the data. We tried safe mode, repair, recovery, etc., and nothing is working for us.

We need at least to get the data from the C drive. So we created another Windows VM on the same node and tried to attach the raw file (.raw) of the VM's C drive to the new VM as a secondary drive, but it does not mount. It seems the file system of the C drive has been corrupted. We can see that "fvevol.sys" is not loading when we try to start in safe mode.

Is there any way to mount this .raw file (C drive) on the same VPS or another VPS, or alternatively to recover the data from the .raw file?

Please help us as we are under pressure now.

Regards
Rahul
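
For reference, a minimal sketch of one way to expose the partitions inside a raw image directly on the Proxmox host, assuming the VM that owns the image is stopped and that the path and partition numbers below are hypothetical; everything is mounted read-only so the image is not modified further:

Code:

# map the partitions inside the raw image (kpartx comes from the kpartx package)
kpartx -av /var/lib/vz/images/100/vm-100-disk-1.raw
# -> creates /dev/mapper/loop0p1, /dev/mapper/loop0p2, ...

# mount the NTFS partition read-only (ntfs-3g may need to be installed)
mkdir -p /mnt/recovery
mount -t ntfs-3g -o ro /dev/mapper/loop0p2 /mnt/recovery

# copy the data off, then clean up
umount /mnt/recovery
kpartx -dv /var/lib/vz/images/100/vm-100-disk-1.raw

If the NTFS metadata itself is damaged, the mount may still fail, in which case a Windows recovery environment or a dedicated NTFS repair tool would be the next step.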

PCIe SSD doesn't work in Proxmox v3.1 [OCZ RevoDrive3]

Hi, I have trouble with my newly purchased SSD, an OCZ RevoDrive 3 (RVD3-FHPX4-120G). I've installed it into the PCIe riser card slot and booted the server up. As far as I can see, the device is listed in the lspci output.

Code:

root@ve1-ua:~# lspci | grep -i ocz
02:00.0 SCSI storage controller: OCZ Technology Group, Inc. Device 1021 (rev 02)
But no block device gets initialized. There is nothing at all in dmesg related to OCZ, SSD, or sdX devices.
I downloaded the official drivers from http://ocz.com/enterprise/download/drivers (Z-Drive R4 Linux Drivers). The version I chose to download was "Ubuntu 10.04 LTS 2.6.32, 62.7KB", because it suits my kernel best. I installed it using dpkg -i oczpcie-ubuntu10.04-v4.0.551.x86_64.deb.
Later on, I loaded the oczpcie and oczvca modules with modprobe, but still no device gets initialized. I tried loading/unloading the mvsas module as well. Still no go.

Also this article says kernel 2.6.32 has the definitions for RevoDrive disks: http://cateee.net/lkddb/web-lkddb/SCSI_MVSAS.html

But my Proxmox hardware node doesn't even try to initialize the PCIe device.

Code:

root@ve1-ua:~# pveversion -v
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-24 (running version: 3.1-24/060bd5a6)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-17-pve: 2.6.32-83
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-18-pve: 2.6.32-88
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-9
libpve-access-control: 3.0-8
libpve-storage-perl: 3.0-18
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1

I tried installing Ubuntu Server 12.04.3 LTS, and my device works there:
Code:

root@vox2-clu-ua:~# uname -a
Linux vox2-clu-ua 3.8.0-34-generic #49~precise1-Ubuntu SMP Wed Nov 13 18:05:00 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Code:

root@vox2-clu-ua:~# hdparm -I /dev/sdd

/dev/sdd:

ATA device, with non-removable media
        Model Number:      OCZ-REVODRIVE3                         
        Serial Number:      OCZ-OD3Q24W51H54MCVF
        Firmware Revision:  2.25   
        Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0
Standards:
        Used: unknown (minor revision code 0x0110)
        Supported: 8 7 6 5
        Likely used: 8
Configuration:
        Logical        max    current
        cylinders      16383  16383
        heads          16      16
        sectors/track  63      63
        --
        CHS current addressable sectors:  16514064
        LBA    user addressable sectors:  117231408
        LBA48  user addressable sectors:  117231408
        Logical  Sector size:                  512 bytes
        Physical Sector size:                  512 bytes
        Logical Sector-0 offset:                  0 bytes
        device size with M = 1024*1024:      57241 MBytes
        device size with M = 1000*1000:      60022 MBytes (60 GB)
        cache/buffer size  = unknown
        Nominal Media Rotation Rate: Solid State Device
Capabilities:
        LBA, IORDY(can be disabled)
        Queue depth: 32
        Standby timer values: spec'd by Standard, no device specific minimum
        R/W multiple sector transfer: Max = 16  Current = 16
        Advanced power management level: 254
        DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
            Cycle time: min=120ns recommended=120ns
        PIO: pio0 pio1 pio2 pio3 pio4
            Cycle time: no flow control=120ns  IORDY flow control=120ns
Commands/features:
        Enabled Supported:
          *    SMART feature set
                Security Mode feature set
          *    Power Management feature set
          *    Write cache
                Look-ahead
          *    Host Protected Area feature set
          *    WRITE_BUFFER command
          *    READ_BUFFER command
          *    NOP cmd
          *    DOWNLOAD_MICROCODE
          *    Advanced Power Management feature set
                Power-Up In Standby feature set
          *    SET_FEATURES required to spinup after power up
          *    48-bit Address feature set
          *    Mandatory FLUSH_CACHE
          *    FLUSH_CACHE_EXT
          *    SMART error logging
          *    SMART self-test
          *    General Purpose Logging feature set
          *    WRITE_{DMA|MULTIPLE}_FUA_EXT
          *    64-bit World wide name
          *    IDLE_IMMEDIATE with UNLOAD
                Write-Read-Verify feature set
          *    {READ,WRITE}_DMA_EXT_GPL commands
          *    Segmented DOWNLOAD_MICROCODE
          *    Gen1 signaling speed (1.5Gb/s)
          *    Gen2 signaling speed (3.0Gb/s)
          *    Gen3 signaling speed (6.0Gb/s)
          *    Native Command Queueing (NCQ)
          *    Host-initiated interface power management
          *    Phy event counters
          *    unknown 76[14]
          *    unknown 76[15]
                DMA Setup Auto-Activate optimization
          *    Software settings preservation
          *    SMART Command Transport (SCT) feature set
          *    SCT Data Tables (AC5)
          *    SET MAX SETPASSWORD/UNLOCK DMA commands
          *    Data Set Management TRIM supported (limit 1 block)
          *    Deterministic read data after TRIM
Security:
        Master password revision code = 65534
                supported
        not    enabled
        not    locked
        not    frozen
        not    expired: security count
        not    supported: enhanced erase
        2min for SECURITY ERASE UNIT.
Logical Unit WWN Device Identifier: 5e83a976b45d4d03
        NAA            : 5
        IEEE OUI        : e83a97
        Unique ID      : 6b45d4d03
Checksum: correct

Code:

Dec 19 14:59:31 vox2-clu-ua kernel: [    8.144328] scsi0 : mvsas
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.152940] sas: phy-0:2 added to port-0:0, phy_mask:0x4 ( 200000000000000)
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.152956] sas: phy-0:3 added to port-0:1, phy_mask:0x8 ( 300000000000000)
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.152973] sas: DOING DISCOVERY on port 0, pid:253
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.152975] sas: DONE DISCOVERY on port 0, pid:253, result:0
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.152979] sas: DOING DISCOVERY on port 1, pid:253
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.152991] sas: DONE DISCOVERY on port 1, pid:253, result:0
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.152996] sas: Enter sas_scsi_recover_host busy: 0 failed: 0
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.153001] sas: ata7: end_device-0:0: dev error handler
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.319865] ata7.00: ATA-8: OCZ-REVODRIVE3, 2.25, max UDMA/133
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.328236] ata7.00: 117231408 sectors, multi 16: LBA48 NCQ (depth 31/32)
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.339861] ata7.00: configured for UDMA/133
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.348378] sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 0 tries: 1
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.364553] scsi 0:0:0:0: Direct-Access    ATA      OCZ-REVODRIVE3  2.25 PQ: 0 ANSI: 5
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.373342] sas: Enter sas_scsi_recover_host busy: 0 failed: 0
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.373387] sas: ata7: end_device-0:0: dev error handler
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.373418] sas: ata8: end_device-0:1: dev error handler
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.539942] ata8.00: ATA-8: OCZ-REVODRIVE3, 2.25, max UDMA/133
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.548693] ata8.00: 117231408 sectors, multi 16: LBA48 NCQ (depth 31/32)
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.559941] ata8.00: configured for UDMA/133
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.568661] sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 0 tries: 1
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.584451] scsi 0:0:1:0: Direct-Access    ATA      OCZ-REVODRIVE3  2.25 PQ: 0 ANSI: 5
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.593382] sd 0:0:0:0: [sdd] 117231408 512-byte logical blocks: (60.0 GB/55.8 GiB)
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.593400] sd 0:0:0:0: Attached scsi generic sg3 type 0
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.593631] sd 0:0:1:0: [sde] 117231408 512-byte logical blocks: (60.0 GB/55.8 GiB)
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.593653] sd 0:0:1:0: Attached scsi generic sg4 type 0
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.593691] sd 0:0:1:0: [sde] Write Protect is off
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.593692] sd 0:0:1:0: [sde] Mode Sense: 00 3a 00 00
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.593698] sd 0:0:1:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.594423]  sde: unknown partition table
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.594518] sd 0:0:1:0: [sde] Attached SCSI disk
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.663613] sd 0:0:0:0: [sdd] Write Protect is off
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.672274] sd 0:0:0:0: [sdd] Mode Sense: 00 3a 00 00
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.672292] sd 0:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.683178]  sdd: unknown partition table
Dec 19 14:59:31 vox2-clu-ua kernel: [    8.692024] sd 0:0:0:0: [sdd] Attached SCSI disk

Code:

root@vox2-clu-ua:~# modinfo mvsas
filename:      /lib/modules/3.8.0-34-generic/kernel/drivers/scsi/mvsas/mvsas.ko
license:        GPL
version:        0.8.16
description:    Marvell 88SE6440 SAS/SATA controller driver
author:        Jeff Garzik <jgarzik@pobox.com>
srcversion:    8C2F663CC279D2FBBDE5EDD
alias:          pci:v00001B85d00001084sv*sd*bc*sc*i*
alias:          pci:v00001B85d00001083sv*sd*bc*sc*i*
alias:          pci:v00001B85d00001080sv*sd*bc*sc*i*
alias:          pci:v00001B85d00001044sv*sd*bc*sc*i*
alias:          pci:v00001B85d00001043sv*sd*bc*sc*i*
alias:          pci:v00001B85d00001042sv*sd*bc*sc*i*
alias:          pci:v00001B85d00001041sv*sd*bc*sc*i*
alias:          pci:v00001B85d00001040sv*sd*bc*sc*i*
alias:          pci:v00001B85d00001022sv*sd*bc*sc*i*
alias:          pci:v00001B85d00001021sv*sd*bc*sc*i*
alias:          pci:v00001B4Bd00009485sv*sd00009480bc*sc*i*
alias:          pci:v00001B4Bd00009445sv*sd00009480bc*sc*i*
alias:          pci:v00001B4Bd00009480sv*sd00009480bc*sc*i*
alias:          pci:v00001103d00002760sv*sd*bc*sc*i*
alias:          pci:v00001103d00002744sv*sd*bc*sc*i*
alias:          pci:v00001103d00002740sv*sd*bc*sc*i*
alias:          pci:v00001103d00002722sv*sd*bc*sc*i*
alias:          pci:v00001103d00002721sv*sd*bc*sc*i*
alias:          pci:v00001103d00002720sv*sd*bc*sc*i*
alias:          pci:v00001103d00002710sv*sd*bc*sc*i*
alias:          pci:v00009005d00000450sv*sd*bc*sc*i*
alias:          pci:v000017D3d00001320sv*sd*bc*sc*i*
alias:          pci:v000017D3d00001300sv*sd*bc*sc*i*
alias:          pci:v000011ABd00009180sv*sd*bc*sc*i*
alias:          pci:v000011ABd00009480sv*sd*bc*sc*i*
alias:          pci:v000011ABd00006485sv*sd*bc*sc*i*
alias:          pci:v000011ABd00006440sv*sd*bc*sc*i*
alias:          pci:v000011ABd00006440sv*sd00006480bc*sc*i*
alias:          pci:v000011ABd00006340sv*sd*bc*sc*i*
alias:          pci:v000011ABd00006320sv*sd*bc*sc*i*
depends:        libsas,scsi_transport_sas
intree:        Y
vermagic:      3.8.0-34-generic SMP mod_unload modversions
parm:          collector:
        If greater than one, tells the SAS Layer to run in Task Collector
        Mode.  If 1 or 0, tells the SAS Layer to run in Direct Mode.
        The mvsas SAS LLDD supports both modes.
        Default: 1 (Direct Mode).
 (int)
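
For reference, a hedged sketch of one thing to try on the Proxmox 2.6.32 kernel: manually add the controller's PCI ID (vendor:device 1b85:1021, matching the alias in the modinfo output above) to the stock mvsas driver so the kernel at least attempts to bind it. Whether that older mvsas build can actually drive the RevoDrive 3 is not guaranteed.

Code:

modprobe mvsas
echo "1b85 1021" > /sys/bus/pci/drivers/mvsas/new_id
# then watch whether the controller and its disks show up
dmesg | tail -n 50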

Questions regarding Proxmox and Dell Power Edge VRTX

Hello,

I am currently using Proxmox 1.9 on a clone desktop (acting as a server for the company) and we are very happy with the functionality it offers. The time has come to acquire machines that will act as proper servers, and with them we would like to set up a high availability environment.

So we put it out to bid, and one of the hardware providers has lent us a shared infrastructure server (http://www.dell.com/us/business/p/poweredge-vrtx/pd) so that we can carry out tests to see if it is compatible with Proxmox 3.1. We managed to install two instances of Proxmox that are to be added to the cluster, but when we want to add a storage in the web UI we cannot detect the virtual disk that is to be used as the shared storage. On a related note, we did a test where we installed Openfiler on a different machine and were successful in adding an iSCSI storage; in other words, when we added a storage of type iSCSI and set the IP address to that of the Openfiler server, Proxmox was successful in finding a target.

The questions:

1. How likely is it that there is an incompatibility between Proxmox 3.1 and the PowerEdge VRTX that is preventing us from detecting the shared storage? (This system uses a PERC8 controller for the disk drives.)

2. What type of storage (LVM, NFS, iSCSI, ...) should be used for the SAN?

3. What questions could I ask the hardware manufacturer to better understand whether Proxmox is a suitable candidate for this architecture?

Regards,

P.S.: The concept of high availability is new to me, so if I have said anything that doesn't make sense, or if more clarification is needed, feel free to correct me. I appreciate the Proxmox community and fully intend to purchase the appropriate support once I have things firmly established.

Many VGAs Assigned to One VM

I have multiple VGA cards that are used for data mining (these are not on PVE); they are the workhorse of the operation. I understand that I can use "passthrough" to assign a specific PCI card to a specific VM, but can I assign 50 cards from 10 motherboards to one VM? I need all of the cards to be available to one client for easy management.
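
For reference, a hedged sketch of how per-node PCI passthrough is configured (VM ID and PCI addresses below are hypothetical); each hostpciN entry can only reference a device that is physically present in the node the VM runs on:

Code:

# find the PCI addresses of the cards on this node
lspci | grep -i vga

# attach two of them to VM 200
qm set 200 -hostpci0 04:00.0
qm set 200 -hostpci1 05:00.0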

Brand new to proxmox

I am testing out Proxmox (I currently use VMware) and wanted to set up a small utility server to see what Proxmox has to offer. Anyway, I have installed and booted the software and so far it looks promising. However, I am not sure how to add my second hard drive for VM storage. I have an HP ML370 G6 with a Smart Array P410i. I created one mirrored drive (136 GB) for the OS, ISOs, etc., and a second RAID 5 drive with 683 GB of storage for my VMs. In VMware I would have just added the storage, but I have not been able to figure it out with Proxmox.

I can see the drive when I SSH to the server and run fdisk -l:

root@proxmox:~# fdisk -l


Disk /dev/sda: 146.8 GB, 146778685440 bytes
255 heads, 63 sectors/track, 17844 cylinders, total 286677120 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002957a


Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1048575 523264 83 Linux
/dev/sda2 1048576 286676991 142814208 8e Linux LVM


Disk /dev/sdb: 733.9 GB, 733909245952 bytes
255 heads, 63 sectors/track, 89226 cylinders, total 1433416496 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdb doesn't contain a valid partition table

Do I just create a partition and mount it somewhere on the server? Or is there a "Proxmox" way to do it via the web interface so that I can run snapshots, etc.?

Thanks in advance. I also find the subscription thing a bit confusing: can I continue to run it without a subscription, or will I eventually have to get one of some kind?
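
For reference, a minimal sketch of one common approach, assuming /dev/sdb really is the empty RAID 5 volume (the volume group name is arbitrary): put LVM on the second disk and register it as a storage, either from the shell or via Datacenter -> Storage -> Add -> LVM in the web interface.

Code:

# turn the whole disk into an LVM physical volume and create a volume group
pvcreate /dev/sdb
vgcreate vmdata /dev/sdb

# register it as an LVM storage for VM disk images
pvesm add lvm vmdata --vgname vmdata --content images

Note that plain LVM storage holds raw volumes; if qcow2-style snapshots are wanted instead, an alternative is to create a filesystem on the disk, mount it, and add it as a "Directory" storage.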

migrate vm with local devices ?

Hello everybody,

I have a VM with a local device; it's a PCI passthrough configured with the hostpci argument.
I would like to apply HA to it.

If I migrate it, I get this: "can't migrate VM which uses local devices".

The qm manual shows a -force option to migrate a VM with local devices:
http://pve.proxmox.com/wiki/Manual:_qm

Quote:

qm migrate <vmid> <target> [OPTIONS]
Migrate virtual machine. Creates a new migration task.

<vmid> integer (1 - N)

The (unique) ID of the VM.

<target> string

Target node.

-force boolean

Allow to migrate VMs which use local devices. Only root may
use this option.

-online boolean
Use online/live migration.
Is it possible to apply the -force option to a VM managed by HA?

Thank you for your answer.
Eric.
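
For reference, the command-line form of the -force option from the manual quoted above would look like this (VM ID and node name are hypothetical, and it has to be run as root):

Code:

qm migrate 105 node2 -force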

Problem during Backup

Hi,

I recently moved my VMs from an old Proxmox VE 2.1 to a 3.1.
Everything worked as expected for a few days.

Tonight a problem occurred during the backup: the backup failed, and the worst thing is that one of my VMs shut itself down.
I don't understand why a VM gets shut down if the backup was in snapshot mode...

This is the log:

Code:

INFO: starting new backup job: vzdump 104 --quiet 1 --mailto support@xxx.it --mode snapshot --compress lzo --storage BKGINFO: Starting Backup of VM 104 (qemu)
INFO: status = running
INFO: update VM 104: -lock backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/BKG/dump/vzdump-qemu-104-2013_12_20-00_30_02.vma.lzo'
INFO: started backup task 'c081f877-b954-4c8b-a9cf-78890b7960a5'
INFO: status: 0% (687865856/214748364800), sparse 0% (102694912), duration 3, 229/195 MB/s
INFO: status: 1% (2164326400/214748364800), sparse 0% (158879744), duration 10, 210/202 MB/s
INFO: status: 2% (4330094592/214748364800), sparse 0% (228446208), duration 22, 180/174 MB/s
INFO: status: 3% (6522339328/214748364800), sparse 0% (290975744), duration 76, 40/39 MB/s
INFO: status: 4% (8654553088/214748364800), sparse 0% (341929984), duration 89, 164/160 MB/s
INFO: status: 5% (10876289024/214748364800), sparse 0% (400453632), duration 102, 170/166 MB/s
INFO: status: 6% (12996444160/214748364800), sparse 0% (455823360), duration 143, 51/50 MB/s
INFO: status: 7% (15148384256/214748364800), sparse 0% (510820352), duration 169, 82/80 MB/s
INFO: status: 8% (17256939520/214748364800), sparse 0% (565186560), duration 198, 72/70 MB/s
INFO: status: 9% (19383386112/214748364800), sparse 0% (622477312), duration 212, 151/147 MB/s
INFO: status: 10% (21637234688/214748364800), sparse 0% (682991616), duration 234, 102/99 MB/s
INFO: status: 11% (23711973376/214748364800), sparse 0% (742555648), duration 262, 74/71 MB/s
INFO: status: 12% (25849757696/214748364800), sparse 0% (804048896), duration 275, 164/159 MB/s
INFO: status: 13% (28090957824/214748364800), sparse 0% (871718912), duration 289, 160/155 MB/s
INFO: status: 14% (30204493824/214748364800), sparse 0% (930516992), duration 310, 100/97 MB/s
INFO: status: 15% (32221298688/214748364800), sparse 0% (988852224), duration 348, 53/51 MB/s
INFO: status: 16% (34396569600/214748364800), sparse 0% (1050648576), duration 363, 145/140 MB/s
INFO: status: 17% (36668899328/214748364800), sparse 0% (1123053568), duration 404, 55/53 MB/s
INFO: status: 18% (38687080448/214748364800), sparse 0% (1185247232), duration 416, 168/162 MB/s
INFO: status: 19% (40854749184/214748364800), sparse 0% (1251323904), duration 467, 42/41 MB/s
INFO: status: 20% (43012259840/214748364800), sparse 0% (1312514048), duration 494, 79/77 MB/s
INFO: status: 21% (45172588544/214748364800), sparse 0% (1379643392), duration 512, 120/116 MB/s
INFO: status: 22% (47313977344/214748364800), sparse 0% (1447792640), duration 539, 79/76 MB/s
INFO: status: 23% (49392910336/214748364800), sparse 0% (1518026752), duration 549, 207/200 MB/s
INFO: status: 24% (51562610688/214748364800), sparse 0% (1585049600), duration 592, 50/48 MB/s
INFO: status: 25% (53786902528/214748364800), sparse 0% (1655934976), duration 603, 202/195 MB/s
INFO: status: 26% (56023646208/214748364800), sparse 0% (1722159104), duration 649, 48/47 MB/s
INFO: status: 27% (58039402496/214748364800), sparse 0% (1784651776), duration 676, 74/72 MB/s
INFO: status: 28% (60133212160/214748364800), sparse 0% (1849843712), duration 686, 209/202 MB/s
INFO: status: 29% (62394859520/214748364800), sparse 0% (1920684032), duration 736, 45/43 MB/s
INFO: status: 30% (64470253568/214748364800), sparse 0% (1983164416), duration 754, 115/111 MB/s
INFO: status: 31% (66623307776/214748364800), sparse 0% (2045063168), duration 771, 126/123 MB/s
INFO: status: 32% (68754079744/214748364800), sparse 0% (2107920384), duration 821, 42/41 MB/s
INFO: status: 33% (70889046016/214748364800), sparse 1% (2174889984), duration 839, 118/114 MB/s
INFO: status: 34% (73121529856/214748364800), sparse 1% (2245496832), duration 856, 131/127 MB/s
INFO: status: 35% (75184865280/214748364800), sparse 1% (2306945024), duration 879, 89/87 MB/s
INFO: status: 36% (77325402112/214748364800), sparse 1% (2374275072), duration 908, 73/71 MB/s
INFO: status: 37% (79635087360/214748364800), sparse 1% (2445668352), duration 919, 209/203 MB/s
INFO: status: 38% (81740496896/214748364800), sparse 1% (2512973824), duration 929, 210/203 MB/s
INFO: status: 39% (83772178432/214748364800), sparse 1% (2580090880), duration 974, 45/43 MB/s
INFO: status: 40% (86012002304/214748364800), sparse 1% (2650001408), duration 1007, 67/65 MB/s
INFO: status: 41% (88061837312/214748364800), sparse 1% (2721865728), duration 1016, 227/219 MB/s
INFO: status: 42% (90372767744/214748364800), sparse 1% (2793082880), duration 1059, 53/52 MB/s
INFO: status: 43% (92432498688/214748364800), sparse 1% (2854957056), duration 1069, 205/199 MB/s
INFO: status: 44% (94556127232/214748364800), sparse 1% (2921684992), duration 1109, 53/51 MB/s
INFO: status: 45% (96711475200/214748364800), sparse 1% (2988335104), duration 1119, 215/208 MB/s
INFO: status: 46% (98835824640/214748364800), sparse 1% (3055144960), duration 1166, 45/43 MB/s
INFO: status: 47% (101075255296/214748364800), sparse 1% (3130011648), duration 1178, 186/180 MB/s
INFO: status: 48% (103113555968/214748364800), sparse 1% (3190652928), duration 1189, 185/179 MB/s
INFO: status: 49% (105307766784/214748364800), sparse 1% (3256688640), duration 1233, 49/48 MB/s
INFO: status: 50% (107542806528/214748364800), sparse 1% (3330097152), duration 1244, 203/196 MB/s
INFO: status: 51% (109539753984/214748364800), sparse 1% (3398017024), duration 1285, 48/47 MB/s
INFO: status: 52% (111764307968/214748364800), sparse 1% (3472330752), duration 1297, 185/179 MB/s
INFO: status: 53% (113908711424/214748364800), sparse 1% (3556327424), duration 1329, 67/64 MB/s
INFO: status: 54% (116074217472/214748364800), sparse 1% (3666141184), duration 1342, 166/158 MB/s
INFO: status: 55% (118662103040/214748364800), sparse 2% (4723671040), duration 1350, 323/191 MB/s
INFO: status: 56% (120261902336/214748364800), sparse 2% (5278289920), duration 1358, 199/130 MB/s
ERROR: VM 104 not running
INFO: aborting backup job
ERROR: VM 104 not running
ERROR: Backup of VM 104 failed - VM 104 not running
INFO: Backup job finished with errors
TASK ERROR: job errors

Unable to login ?


I just installed a fresh version of Proxmox and I am able to log in to the server via SSH and on the console.

But when I try to log in through the web UI it returns a 401 error (Connection error 401: No ticket).

Can anyone explain what I might be doing wrong?
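
For reference, a few hedged checks that are commonly suggested for the "No ticket" error (sketch only; whether they apply depends on the installation):

Code:

# ticket validation is time-based, so make sure the clock is sane
date

# restart the API daemon and the web proxy
/etc/init.d/pvedaemon restart
/etc/init.d/pveproxy restart

# the hostname should resolve to the node's real IP, not 127.0.0.1
hostname
cat /etc/hosts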

Mail Rejected, but Sender Domain in Whitelist

I have the same mail approximately 50 times in the Proxmox tracking center, from a sender who is in the whitelist (bmd.at).

I get this message:

Dec 19 01:16:31 smtpd connect from mailgate.bmd.at[213.33.78.204]
Dec 19 01:16:31 smtpd NOQUEUE: reject: RCPT from mailgate.bmd.at[213.33.78.204]: 450 4.1.1
<XXEditedXX>: Recipient address rejected: undeliverable
address: host 192.168.222.11[192.168.222.11] said: 550 5.1.1 User unknown (in
reply to RCPT TO command); from=<XXEditedXXbmd.at>
to=<XXEditedXX> proto=ESMTP helo=<webmail.bmd.at>
Dec 19 01:16:31 smtpd disconnect from mailgate.bmd.at[213.33.78.204]




What is wrong?

Mail Rejected, but Sender Domain in Whitelist

Strange, I wrote this thread 30 minutes ago and it vanished...

I have this problem with only one user and one email:

Dec 20 10:05:40 smtpd connect from mailgate.bmd.at[213.33.78.204]
Dec 20 10:05:40 smtpd NOQUEUE: reject: RCPT from mailgate.bmd.at[213.33.78.204]: 450 4.1.1
<xEditedx.at>: Recipient address rejected: undeliverable
address: host 192.168.222.11[192.168.222.11] said: 550 5.1.1 User unknown (in
reply to RCPT TO command); from=<xEditedx@bmd.at>
to=<xEditedx.at> proto=ESMTP helo=<webmail.bmd.at>
Dec 20 10:05:40 smtpd disconnect from mailgate.bmd.at[213.33.78.204]



The sender is in the whitelist; how can this happen? I get this message every 5 minutes in the tracking center.

cluster level after-backups job/script: would it make sense? howto?

Hi,

I was just wondering if there is a way to automate, with a script, a job that copies the nightly backup files (from a PVE cluster-defined backup job) to another storage.
I know individual backups can hook a script, but I thought it would make sense to execute it after the last job has finished (i.e. when PVE sends the backup job mail report); I just don't know if there is a way...

any thoughts?

Marco
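
For what it is worth, vzdump hook scripts do receive a job-level phase, so one hedged sketch is to reference a hook from /etc/vzdump.conf (with a "script:" line) and act only on the job-end phase; the script path and rsync destination below are hypothetical:

Code:

#!/bin/sh
# /usr/local/bin/vzdump-hook.sh
# referenced from /etc/vzdump.conf as:  script: /usr/local/bin/vzdump-hook.sh
# vzdump calls the hook with the phase name as the first argument
if [ "$1" = "job-end" ]; then
    # copy the finished dumps to a second storage (destination is hypothetical)
    rsync -a /var/lib/vz/dump/ backuphost:/srv/offsite-dump/
fi

Whether this runs exactly when the per-node backup job finishes (i.e. when the mail report is sent) still depends on how the job is scheduled, so treat it as a starting point rather than a guaranteed solution.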

KSM sharing not working

Hello,

I downloaded the .iso and installed it (today), but KSM sharing is not working.
CPU: 2x L5520

root@t90:~# pveversion -v
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1

I tried restarting the service:
root@t90:~# /etc/init.d/ksmtuned restart
Stopping KSM control daemon: ksmtuned.
Starting KSM control daemon: ksmtuned.
root@t90:~# cat /sys/kernel/mm/ksm/pages_sharing
0

root@t90:~# free -m
total used free shared buffers cached
Mem: 36193 20743 15450 0 1060 362
-/+ buffers/cache: 19319 16874
Swap: 35839 0 35839
root@t90:~#



Please help :)
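
For reference, a few hedged checks (sketch only): ksmtuned only starts merging once memory pressure crosses a threshold, so with roughly 20 GB of 36 GB in use it may simply not have kicked in yet.

Code:

# is KSM currently running? (1 = running, 0 = stopped)
cat /sys/kernel/mm/ksm/run

# all KSM counters at once
grep . /sys/kernel/mm/ksm/pages_*

# the threshold is KSM_THRES_COEF in /etc/ksmtuned.conf (a percentage of free
# memory below which ksmtuned starts ksmd); lower it to test
grep THRES /etc/ksmtuned.conf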

Proxmox Ctrl Key Issue cccccccc

Hello

I work with a Mint virtual machine in Proxmox 3.0.

Whenever I press a Ctrl key combination (Ctrl+C, Ctrl+X, Ctrl+P) in a command prompt, it automatically inputs repeated characters such as cccccccc, xxxxxxxx, ppppppp.

I tested different OSes (Ubuntu, Windows); all behave the same.

Does anyone have any idea?

Thank you

Trouble connecting to Ceph cluster from Proxmox cluster

I have a fully working Ceph cluster but I am unable to add one of my RBD pools to my Proxmox cluster. I used the Ceph page in the Proxmox wiki for reference, but I just get an "rbd error: rbd: couldn't connect to the cluster! (500)" error when I add it and go into the Content tab. Here is my /etc/pve/storage.cfg:

Code:

rbd: prox-pool1       
        monhost 192.168.1.11:6789
        pool prox-pool1
        content images
        username ceph

I named the storage ID prox-pool1; this is also the pool name in my Ceph cluster. I specified username ceph, although I don't really know what this refers to since there is no password required. I copied the ceph.client.admin.keyring file off the first node in my Ceph cluster into /etc/pve/priv/ceph and renamed it prox-pool1.keyring, as the wiki instructs.
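
For reference, a hedged guess plus a sketch: the username field names the cephx user the keyring belongs to, so if the copied keyring really is ceph.client.admin.keyring, the entry would more likely need username admin (everything else kept as above):

Code:

rbd: prox-pool1
        monhost 192.168.1.11:6789
        pool prox-pool1
        content images
        username admin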

Backup Trouble after upgrade 3.0 -> 3.1

Hi, we moved from 3.0 to 3.1 because of the SPICE feature.
Unfortunately, the old backup jobs no longer work.
I used local storage on the same RAID 1 that the VMs are on, and a CIFS share for external storage.
Both worked fine. After the upgrade, each backup job started but never ended. No backup data has been written, neither to the local storage nor to the CIFS share.
A few days ago I plugged a 3 TB USB drive into the server; now some VMs get a backup and some do not.
I'm highly confused, because since starting with v2.x, backup has been a fire-and-forget thing for me.

Any hints are welcome; because of the coming holidays I may have time for testing.

help understanding vlan in a proxmox cluster scenario

Hi all,

I have read it over and over but I just can't get my head around it...

http://pve.proxmox.com/wiki/Network_Model

Quote:

Configuring VLAN in a cluster

For the simplest way to create VLAN follow the link: VLAN
Goal:

  • Have two separate networks on the same NIC
  • Another host (firewall) manages the routing and rules for access to these VMs (out of scope for this doc)

Suppose this scenario:

  • A cluster with two nodes
  • Each node has two NICs
  • We want to bond the NICs
  • We use two networks: one untagged, 192.168.1.0/24, and one tagged (VLAN ID = 53), 192.168.2.0/24; we must configure the switch with port VLANs.
  • We want to separate these networks at layer 2

Create bond0

First of all we create bond0 (switch-assisted 802.3ad) in the Proxmox web interface, following the video.
At the end we have an /etc/network/interfaces like this:
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto bond0
iface bond0 inet manual
slaves eth0 eth1
bond_miimon 100
bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
address 192.168.1.1
netmask 255.255.255.0
gateway 192.168.1.250
bridge_ports bond0
bridge_stp off
bridge_fd 0
Configure your switch appropriately. If you're using a bond of multiple links, you need to tell this to your switch and put the switch ports in a Link Aggregation Group or Trunk.
Create VLAN

We have two methods to follow:
First explicit method

auto vlan53
iface vlan53 inet manual
vlan_raw_device bond0
Second method

We can directly use the NIC dot VLAN ID notation, like bond0.53
I prefer the first one!
Create manually the bridge

Now we create manually the second bridge.
auto vmbr1
iface vmbr1 inet static
address 192.168.2.1
netmask 255.255.255.0
network 192.168.2.0
bridge_ports vlan53
bridge_stp off
bridge_fd 0
post-up ip route add table vlan53 default via 192.168.2.250 dev vmbr1
post-up ip rule add from 192.168.2.0/24 table vlan53
post-down ip route del table vlan53 default via 192.168.2.250 dev vmbr1
post-down ip rule del from 192.168.2.0/24 table vlan53
NOTE:

  • We must not specify the gateway; we must manually modify the routing table using iproute2
  • The whole configuration must be replicated on the other cluster node; the only change is the IP of the node.

Create the table in ip route 2

We must change the file /etc/iproute2/rt_tables, add the following line:
# Table for vlan53
53 vlan53
use these commands to add:
echo "# Table for vlan53" >> /etc/iproute2/rt_tables
echo "53 vlan53" >> /etc/iproute2/rt_tables
Create the vlan on switch

For example on a HP Procurve 52 ports we use the following instructions to create the vlan.
Suppose:

  • Ports 47-48 trunk (switch assisted 802.3ad) for gateway
  • Ports 1-2 trunk (switch assisted 802.3ad) for the first node of cluster proxmox
  • Ports 3-4 trunk (switch assisted 802.3ad) for the second node

Enter in configuration mode and type:
trunk 1-2 Trk1 LACP
trunk 3-4 Trk2 LACP
trunk 47-48 Trk3 LACP
vlan 2
name "Vlan2"
untagged Trk1-Trk3
ip address 192.168.1.254 255.255.255.0
exit
vlan 53
name "Vlan53"
tagged Trk1-Trk3
exit
Test the configuration

Reboot the cluster node one by one for testing this configuration.
Unsupported Routing

A physical NIC (e.g., eth1) cannot currently be made available exclusively to a particular KVM guest / container, i.e., without a bridge and/or bond.
Naming Conventions

  • Ethernet devices: eth0 - eth99
  • Allowable bridge names: vmbrn, where 0 ≤ n ≤ 4094
  • Bonds: bond0 - bond9
  • VLANs: Simply add the VLAN number to the ethernet device name, separated by a period. For example "eth0.50"


I have a 3 node cluster...

I would like the administration network not to be in a VLAN. The administration network is where I connect to the Proxmox nodes using either the web UI or SSH for setup.

I will be using this 3-node cluster for hosting, and it could be that I will be hosting other clients' infrastructure, like virtual file, DC, and mail servers and so on. Therefore I need to know how to use VLANs for virtual machines, since each client will have to be on their own VLAN...

My problem is that I just can't get my head around the network model documentation.

My plan is to create a sub-interface for each client on my firewall. Each sub-interface will be assigned a VLAN tag. From there I will have two switches in an LACP configuration that will carry the VLAN tags to the 3 Proxmox nodes.

My assumption is that the only thing I need to configure is to have, say, VLAN 10 tagged from the firewall through the switches and tagged to each cluster node, NOT configuring anything in any of the cluster nodes' interfaces files, and then configure VLAN 10 on the different VMs' virtual network cards...

Is this assumption right? And will it work in a failover scenario? Or do I have to set something up in the cluster nodes' interfaces file?

THANKS

Casper
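
For reference, a hedged sketch of the per-VM side of that assumption, with the VLAN tag set on the VM's virtual NIC rather than in the node's interfaces file (VM ID and MAC address are just examples):

Code:

# /etc/pve/qemu-server/110.conf (excerpt) -- vmbr0 tags/untags VLAN 10 for this NIC
net0: virtio=DE:AD:BE:EF:00:10,bridge=vmbr0,tag=10

# or from the command line
qm set 110 -net0 virtio=DE:AD:BE:EF:00:10,bridge=vmbr0,tag=10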

Failover domain not working

Hello,

I have set up a 3 node cluster:
- node1 : for quorum
- node2 : hoster
- node3 : hoster

I have created a failover domain between node2 and node3; here is the cluster.conf:
Code:

<?xml version="1.0"?>
<cluster config_version="11" name="local">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1" votes="1">
    </clusternode>
    <clusternode name="node2" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="fence2"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node3" nodeid="3" votes="1">
      <fence>
        <method name="1">
          <device name="fence3"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="test" ipaddr="192.168.48.202" login="root" name="fence2" passwd="root"/>
    <fencedevice agent="test" ipaddr="192.168.48.203" login="root" name="fence3" passwd="root"/>
  </fencedevices>
  <rm>
    <failoverdomains>
      <failoverdomain name="Node2-Node3" nofailback="1" ordered="1" restricted="1">
        <failoverdomainnode name="node2" priority="2"/>
        <failoverdomainnode name="node3" priority="1"/>
      </failoverdomain>
    </failoverdomains>
    <pvevm autostart="1" vmid="100"/>
  </rm>
</cluster>

I don't want the VM to go to node1, so I have not declared node1 in the failover domain.

When the VM is on node2 and I crash node2, the VM goes to node3 ==> OK
When the VM is on node3 and I crash node3, the VM goes to node1 ==> NOT OK, it should go to node2

Do you have any ideas?

Thank you
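
For reference, a hedged observation plus a sketch: in the cluster.conf above, the pvevm entry never references the Node2-Node3 failover domain, so rgmanager is free to place the service anywhere. Binding the VM to the domain would look roughly like this (only the rm section shown):

Code:

<rm>
  <failoverdomains>
    <failoverdomain name="Node2-Node3" nofailback="1" ordered="1" restricted="1">
      <failoverdomainnode name="node2" priority="2"/>
      <failoverdomainnode name="node3" priority="1"/>
    </failoverdomain>
  </failoverdomains>
  <pvevm autostart="1" vmid="100" domain="Node2-Node3"/>
</rm>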

HA migration on node failure restarts VMs

Hello,

I am trying to set up Two-Node HA ( https://pve.proxmox.com/wiki/Two-Nod...bility_Cluster ).
I have 2 identical machines (Dell R720 with iDRAC7) and I have set up a PVE cluster with these two plus a quorum disk via an iSCSI target from a third machine.
Everything seems to work fine: I can do live migration with no packet loss, and if I manually fence a node or crash it on purpose, the VM running on the "broken" node gets moved to the operational one. But I get this in the logs:
Code:

Dec 22 02:01:11 rgmanager State change: proxmox2 DOWN
Dec 22 02:01:34 rgmanager Marking service:gfs2-2 as stopped: Restricted domain unavailable
Dec 22 02:01:34 rgmanager Starting stopped service pvevm:101
Dec 22 02:01:34 rgmanager [pvevm] VM 100 is running
Dec 22 02:01:35 rgmanager [pvevm] Move config for VM 101 to local node
Dec 22 02:01:36 rgmanager Service pvevm:101 started
==
Dec 22 02:01:11 fenced fencing node proxmox2
Dec 22 02:01:33 fenced fence proxmox2 success
==

I have 2 VMs (100, 101), both CentOS 6 (101 is actually a clone of 100), with which I ran these tests.
The setup consists of a DRBD resource between the 2 nodes, on top of which I run GFS2 (no LVM involved). I had a hard time mounting this resource at startup, and my cluster.conf looks like this now:
Code:

<?xml version="1.0"?>
<cluster config_version="39" name="Cluster">
  <cman expected_votes="3" keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="1"/>
  <quorumd allow_kill="0" interval="1" label="cluster_qdisk" tko="10" votes="1"/>
  <totem token="1000"/>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" ipaddr="192.168.162.90" login="fence" name="proxmox1-drac" passwd="123456" secure="1"/>
    <fencedevice agent="fence_ipmilan" ipaddr="192.168.162.91" login="fence" name="proxmox2-drac" passwd="123456" secure="1"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="proxmox1" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="proxmox1-drac"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="proxmox2" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="proxmox2-drac"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
  <failoverdomains>
        <failoverdomain name="node1" nofailback="0" ordered="0" restricted="1">
            <failoverdomainnode name="proxmox1"/>
        </failoverdomain>
        <failoverdomain name="node2" nofailback="0" ordered="0" restricted="1">
            <failoverdomainnode name="proxmox2"/>
        </failoverdomain>
    </failoverdomains>
  <resources>
    <clusterfs name="gfs2" mountpoint="/gfs2" device="/dev/drbd0" fstype="gfs2" force_unmount="1" options="noatime,nodiratime,noquota"/>
  </resources>
  <service autostart="1" name="gfs2-1" domain="node1" exclusive="0">
    <clusterfs ref="gfs2"/>
  </service>
  <service autostart="1" name="gfs2-2" domain="node2" exclusive="0">
    <clusterfs ref="gfs2"/>
  </service>
    <pvevm autostart="1" vmid="100"/>
    <pvevm autostart="1" vmid="101"/>
  </rm>
</cluster>

(All the mess with failover domains and the two services was the only solution I found to get the cluster to mount drbd0.)
So the question is: why does the VM get restarted? As I see in rgmanager.log, it says the service was marked as stopped and then started again...

Thank you,

Teodor

Subscription Key transfer to new proxmox install on the same server

Hello!

Due to a hard drive crash, I have reinstalled Proxmox on the same server. How can I transfer my subscription key now?

Error Message: "Invalid Server ID"
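
For reference, a hedged sketch: the key is bound to a server ID, which changes with a reinstall, so the key typically has to be reset/re-issued by the license issuer first; once that is done it can be applied again from the shell (the key below is a placeholder):

Code:

# show the current server ID and subscription status
pvesubscription get

# apply the re-issued key
pvesubscription set pve2c-XXXXXXXXXX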

changing vm from ide to virtio

Hi,
I am new to Proxmox.
I created a VM running Debian 7.
For the HDD I used IDE emulation. Is it possible to change from IDE to virtio without reinstalling the VM?


Thanks for the help
Sven
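
For reference, a hedged sketch of the usual approach (VM ID and disk volume below are hypothetical): Debian 7 ships the virtio_blk driver in its stock kernel, so normally only the VM configuration needs to change, as long as /etc/fstab in the guest mounts by UUID rather than by /dev/sda*.

Code:

# stop the VM, then edit its config
qm stop 101
nano /etc/pve/qemu-server/101.conf
#   ide0: local:101/vm-101-disk-1.raw   ->   virtio0: local:101/vm-101-disk-1.raw
#   bootdisk: ide0                      ->   bootdisk: virtio0
qm start 101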