Channel: Proxmox Support Forum

ISO Image Storage for a couple of servers -- how to integrate

Hi,

we have a huge storage volume holding every ISO we have ever used for installations, and I would like to use it directly when installing new VMs. But how?

The way seems to be to define a directory storage, flag it as ISO storage, and afterwards mount the ISO share into its ./template/iso subdirectory.
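A sketch of those steps on the command line, assuming the archive is exported over NFS (the export path and storage name here are hypothetical):

```shell
# Mount the existing ISO archive on the Proxmox host
# (nas:/export/isos is a placeholder for the actual export)
mkdir -p /mnt/iso-archive
mount -t nfs nas:/export/isos /mnt/iso-archive

# Register it as directory storage holding ISO images; PVE will then
# look for the images under /mnt/iso-archive/template/iso
pvesm add dir iso-archive --path /mnt/iso-archive --content iso
```

If the ISOs live at the top level of the share, they still need to end up in the template/iso subdirectory (bind mount or move them) before PVE will list them.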

Immo

Proxmox Firewall for NAT

Hello
is it possible to use the Proxmox firewall for NAT?

I imagine using the single WAN IP and translating its ports to (x) private IP addresses on different containers and KVM guests.
If my provider changed my router IP I would have downtime, but that is not a problem. I think that as soon as I have updated the InterNIC nameserver entry to the new router IP and changed the firewall rule, it should work again after a delay.

I hope these thoughts would work.
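The Proxmox firewall itself only filters traffic; the translation step is usually done with plain iptables DNAT rules on the host. A minimal sketch, with placeholder interface and addresses:

```shell
# Forward TCP port 8080 arriving on the WAN bridge to a guest at
# 192.168.1.10:80 (bridge name and addresses are examples)
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 8080 \
    -j DNAT --to-destination 192.168.1.10:80

# Rewrite the source of outbound guest traffic to the host's WAN IP
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o vmbr0 -j MASQUERADE
```

If the provider changes the router IP, only the DNS record needs updating; the DNAT rules refer to the interface, not the address.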

have a nice day
vinc

issues when trying to type something

A friend and I purchased a Dell PowerEdge 2950 to serve as a remote file storage server and as an environment for testing. After researching type 1 hypervisors I chose Proxmox because I did not want to have to use Windows to manage a Linux server (as with ESXi), and it was pretty much the only type 1 alternative.

I really liked the interface and how everything worked, for the most part. The biggest issue I had was that when typing, keys constantly "stick", for lack of a better description, on anything from a login screen to a terminal. If I typed a password normally, 75% of the time it would come out something like: paaaaaaaassworddddddddd. If I then tried to correct it with a couple of backspaces, the whole line would disappear. Applying a system update improved it a little, but I lost the ability to use the iPad to view VMs and could only see the Proxmox environment. I did not subscribe to the latest updates and have no intention of doing so for a server that is basically just for personal entertainment. Is there a setting or tweak that would alleviate the issue, or has anyone else experienced this?

too low open file limit

Hi all,

I'm installing log software in one of my KVM VMs, which has plenty of RAM and disk space, but I'm getting the following error:

There are ElasticSearch nodes in the cluster that have a too low open file limit. (below 64000) This will be causing problems that can be hard to diagnose. Read how to raise the maximum number of open files in the documentation.

Inside the VM, the system-wide open file limit does not seem to be the issue:

root@logserver1:~/graylog2-setup# cat /proc/sys/fs/file-max
818835


Where do I need to set this limit? I have searched in various places but had no luck.
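For what it's worth, /proc/sys/fs/file-max is the kernel-wide limit; Elasticsearch is usually complaining about the per-process limit (ulimit -n) instead. A sketch of raising it, assuming the service runs as a user named elasticsearch (the user name is an assumption):

```shell
# Check the per-process limit the service actually inherits
ulimit -n    # often only 1024 by default

# Raise it persistently via PAM limits
# (user name "elasticsearch" is a placeholder for the actual service user)
cat >> /etc/security/limits.conf <<'EOF'
elasticsearch soft nofile 64000
elasticsearch hard nofile 64000
EOF
# Restart the service from a fresh login session to pick up the new limit
```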

Any help is much appreciated.

Corosync memory leak.

Hi. I have set up what I thought was a fair system design, but it is proving to be very dumb. I have two Proxmox servers.

One is the main server and the other is the backup. The backup server only stays on for about three hours at night to rsync the first server.

These two servers are in a cluster, but the cluster is not really functional. I set this up to take advantage of centralized management of the virtual machines and one-click migration in the web UI.

However, the fact that the second server is not always on destroys the stability of the cluster. When I want to use the mentioned functionality I need to restart the cman and pve-cluster services on both nodes.

The real problem is something else: recently server 1, which stays on all day, began to fail approximately every two weeks because corosync has a memory leak, and at some point it uses nearly 75% of the memory. At that point users notice the server is slow and notify me, I restart the corosync process, and everything is back to normal.

I'm on my way to dismantling this dumb cluster setup, but in the meantime... corosync shouldn't behave like that, even under these circumstances.

Any idea how to debug the issue?

Proxmox VE on Debian Wheezy, everything up to date.
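To put numbers on the leak before users notice it, a crude watcher like this (just a sketch) logs corosync's resident memory over time:

```shell
# Append corosync's RSS (in kB) to a log every 10 minutes
while true; do
    echo "$(date '+%F %T') $(ps -C corosync -o rss=)" >> /var/log/corosync-rss.log
    sleep 600
done
```

Correlating the growth rate with cluster membership changes may show whether the leak tracks the nightly join/leave events of the backup node.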

URGENT SSH / TCP Wrappers active by default on proxmox

Some of my nodes are under attack via SSH, and I am trying to block access.

I am trying to allow SSH only from certain IPs on my Proxmox v3.3-1 nodes.

hosts.allow looks like this, where X.X.X.X is my IP and Y.Y.Y.Y is the cluster network:

sshd: X.X.X.X Y.Y.Y.Y/255.255.255.0

hosts.deny

sshd: ALL

For some reason this is not working. Is TCP Wrappers turned on by default, or am I missing something?

Also, if I want to change SSH ports, do I just change the Port XXX line in /etc/ssh/ssh_config and sshd_config?
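One quick check before debugging hosts.allow: TCP Wrappers only take effect if sshd is linked against libwrap. If it is not, an iptables rule is an alternative way to restrict SSH (X.X.X.X as in the post):

```shell
# Is sshd built with TCP Wrappers support?
ldd /usr/sbin/sshd | grep libwrap || echo "no libwrap: hosts.allow is ignored"

# Fallback: restrict SSH with iptables instead
iptables -A INPUT -p tcp --dport 22 -s X.X.X.X -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```

Remember to add an ACCEPT for the cluster network as well before the DROP rule, or inter-node SSH will break.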

Thanks,
Eric


Windows 2008r2 and 2012r2 change from virtio-blk to virtio-scsi does not work

Hello,

we would like to change the driver in Windows from virtio-blk to the better virtio-scsi driver: http://www.ovirt.org/Features/Virtio-SCSI
Here is what I did:

Changed the SCSI controller type to VirtIO in the PVE web interface (VM --> Options --> SCSI Controller Type).
Added a new hard drive with bus type "SCSI".
Installed the SCSI driver from the virtio driver ISO in Windows --> the new drive works.
Shut down the Windows server and changed the boot drive's bus from VirtIO to SCSI.
Now Windows can't boot anymore.

but...

with a fresh Windows installation it works fine, on both Windows 2008r2 and 2012r2.
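For reference, the config-file equivalent of the bus switch is roughly the following (VM ID 100 and the volume name are examples, and this is an untested sketch):

```shell
# Rename the boot disk entry from virtio-blk to virtio-scsi in the VM config.
# Before: virtio0: local:100/vm-100-disk-1.raw
# After:  scsi0: local:100/vm-100-disk-1.raw
sed -i 's/^virtio0:/scsi0:/' /etc/pve/qemu-server/100.conf
# Keep the boot-disk pointer and controller type in sync
sed -i 's/^bootdisk: virtio0/bootdisk: scsi0/' /etc/pve/qemu-server/100.conf
grep -q '^scsihw:' /etc/pve/qemu-server/100.conf || \
    echo 'scsihw: virtio-scsi-pci' >> /etc/pve/qemu-server/100.conf
```

The boot failure itself is typically Windows not having the SCSI driver registered as boot-critical; booting once with the system disk still on VirtIO and a dummy SCSI disk attached (as in the steps above) is what normally registers it before the final switch.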

Any suggestions?

Best Regards
Fireon

Proxmox 3.3 on Dell Poweredge VRTX

Hi guys,
the problem is this: the PowerEdge VRTX uses the SPERC8 driver for its internal storage, and no Linux kernel before 3.14 includes a working driver for it. So, is there a Proxmox kernel around 3.14 that we can use? Has anyone recompiled such a kernel to work with the Dell VRTX? Or is there a package with the PERC8 driver that we can integrate into a 2.6 kernel?
Thanks a lot

Three serial ports for guest

Hi all,
I am running PVE node 3.3-1/a06c9f73. On this node I have three serial ports: a real one (/dev/ttyS0), a USB one (/dev/ttyUSB0), and a virtual one created with socat (/dev/pts/2). I need to propagate all three COM ports into a KVM machine running Windows 2003 Server 32-bit. I use this QEMU machine config:


serial0: /dev/ttyS0
serial1: /dev/ttyUSB0
serial2: /dev/pts/2

The problem is that only two of these ports work. In the Windows guest there are only COM1 and COM2. Their order depends on their order in the config file, and they work: I am able to send data through them using PuTTY. But manually adding another COM port in Windows does not work; I still see only COM1 and COM2.
I also tried to specify all ports using

-args serial /dev/ttyS0

and I also tried some combinations of both methods (the serial option together with args), but none of this worked.

Is there any way to propagate three serial devices as COM ports to a Windows virtual machine?
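If the args route is attempted again, the syntax in the VM config file is an args: line whose contents are passed verbatim to KVM; a hypothetical sketch for VM 101 (whether Windows 2003 then enumerates a third COM port is exactly the open question):

```shell
# Append raw -serial options to the VM config (the qm config escape hatch)
cat >> /etc/pve/qemu-server/101.conf <<'EOF'
args: -serial /dev/ttyS0 -serial /dev/ttyUSB0 -serial /dev/pts/2
EOF
```

Remove the serial0/serial1/serial2 lines first, so the same devices are not defined twice.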

Thank you.

Change IP on hypervisor

Hi,

I have read all the threads here on how to change the IP of a PVE server node. They all boil down to the same thing: the .members file should update itself once the correct entry is added to /etc/hosts. I have tried this several times, but my first node always remains on the old, wrong IP.
I changed the subnet; the first node picked up its correct IP, but the other one refuses to update.
pvecm status reports the correct IP for each host. The cluster.conf has the correct names. /etc/hosts and DNS have the correct entries, and resolv.conf checks hosts first.
DRBD works, and I can live-migrate from the first node to the second, but not the other way, since the IP is wrong...

Expected IPs are
beast 10.6.5.46
beauty 10.6.5.47

But beast always remains on the old IP 10.6.6.46 in the .members file on beauty.

What more can I check? Is there a step I have missed?
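One thing that sometimes helps (a hedged suggestion, not a guaranteed fix): .members is maintained by pmxcfs, so restarting the cluster stack on the node that shows the stale address forces it to be rebuilt:

```shell
# On the node reporting the stale IP; do this only with no HA-managed
# resources running, since restarting cman briefly drops cluster membership
service cman restart
service pve-cluster restart
cat /etc/pve/.members   # verify the address was refreshed
```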

# pveversion
pve-manager/3.3-5/bfebec03 (running kernel: 2.6.32-32-pve)

root@beast:/etc/pve# pvecm status
Version: 6.2.0
Config Version: 8
Cluster Name: sthlm-bb
Cluster Id: 28546
Cluster Member: Yes
Cluster Generation: 76
Membership state: Cluster-Member
Nodes: 2
Expected votes: 2
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: beast
Node ID: 1
Multicast addresses: 239.192.111.241
Node addresses: 10.6.5.46

root@beauty:/etc/pve# pvecm status
Version: 6.2.0
Config Version: 8
Cluster Name: sthlm-bb
Cluster Id: 28546
Cluster Member: Yes
Cluster Generation: 76
Membership state: Cluster-Member
Nodes: 2
Expected votes: 2
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: beauty
Node ID: 2
Multicast addresses: 239.192.111.241
Node addresses: 10.6.5.47

root@beauty:/etc/pve# cat .members
{
"nodename": "beauty",
"version": 9,
"cluster": { "name": "sthlm-proxmox", "version": 8, "nodes": 2, "quorate": 1 },
"nodelist": {
"beast": { "id": 1, "online": 1, "ip": "10.6.6.46"},
"beauty": { "id": 2, "online": 1, "ip": "10.6.5.47"}
}
}

root@beauty:/etc/pve# cat cluster.conf
<?xml version="1.0"?>
<cluster name="sthlm-proxmox" config_version="8">


<cman keyfile="/var/lib/pve-cluster/corosync.authkey">
</cman>


<clusternodes>
<clusternode name="beast" votes="1" nodeid="1"/>
<clusternode name="beauty" votes="1" nodeid="2"/></clusternodes>


</cluster>

Best regards,
Johan

Backup failed on VM with error "backup_complete_cb -5"

Hello,

First of all, I am a newbie with Proxmox (and VMs generally), taking over from someone who handed me the baby and left the country (literally).

I tried to understand how it all works, and I think I have the layout figured out.

We run Proxmox 2.3.13 on a cluster of 2 servers; all VMs are on Server1. The server uptime is over 600 days (!)

We have 2 backup jobs:

- one doing a daily lzo snapshot at midnight of the essential VMs, retaining only 2 backups
- one doing a weekly gzip snapshot on Sunday at noon of the others.

The backups are stored on an NFS mount with 2 TB in use and 1.4 TB free.

For the past 4 days, our Active Directory VM couldn't be backed up. I checked the qemu log file and I see this:

Quote:

Nov 21 00:00:02 INFO: Starting Backup of VM 100 (qemu)
Nov 21 00:00:02 INFO: status = running
Nov 21 00:00:02 INFO: backup mode: snapshot
Nov 21 00:00:02 INFO: bandwidth limit: 1000000 KB/s
Nov 21 00:00:02 INFO: ionice priority: 7
Nov 21 00:00:02 INFO: skip unused drive 'local:100/vm-100-disk-4.raw' (not included into backup)
Nov 21 00:00:02 INFO: creating archive '/mnt/pve/backup/dump/vzdump-qemu-100-2014_11_21-00_00_02.vma.lzo'
Nov 21 00:00:02 INFO: started backup task '976dd1ea-74e2-4b39-8dde-f475dc663196'
Nov 21 00:00:05 INFO: status: 0% (33751040/1039382085632), sparse 0% (10899456), duration 3, 11/7 MB/s
Nov 21 00:00:13 INFO: status: 0% (33882112/1039382085632), sparse 0% (10899456), duration 11, 0/0 MB/s
Nov 21 00:00:13 ERROR: backup_complete_cb -5
Nov 21 00:00:13 INFO: aborting backup job
Nov 21 00:00:14 ERROR: Backup of VM 100 failed - backup_complete_cb -5


Being a newbie, and this VM being the DC holding the network shares, it is in use most of the day, so I don't know what to check without interrupting or breaking anything.

This error only occurs on this particular VM, and only for the past 4 days. All other VMs back up fine.

I looked around on the forums and on Google, but I can't seem to find anything that clears this up for me.

Can anyone help me out? I'm sorry if I sound like a beginner, which I am in this particular field, but I'd really like your input.
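If the -5 in the log is an errno, it would be EIO (I/O error), which points at the backup target rather than the VM itself. A low-risk check that does not touch the guest:

```shell
# Verify the NFS dump area is still fully writable from the node
dd if=/dev/zero of=/mnt/pve/backup/dump/.writetest bs=1M count=100 conv=fsync \
    && echo "NFS write OK"
rm -f /mnt/pve/backup/dump/.writetest
```

Checking dmesg and the NFS server's logs around the failure time (00:00:13) would also be non-disruptive.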

Thanks in advance.

passthrough: onboard sound card to container

Hi.
I run Proxmox on a Gigabyte Z87X-D3H + Core i7-4770 + 16 GB RAM.
Code:

# pveversion -v
proxmox-ve-2.6.32: 3.3-138 (running kernel: 2.6.32-33-pve)
pve-manager: 3.3-2 (running version: 3.3-2/995e687e)
pve-kernel-2.6.32-33-pve: 2.6.32-138
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-35
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-9
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

I'm trying to pass the onboard audio card through to an OpenVZ container, but it doesn't work.
This motherboard uses the Haswell microarchitecture; I don't know whether that is a problem.
Proxmox detects the hardware and loads the modules:
Code:

# lspci -v
[...]
00:03.0 Audio device: Intel Corporation Haswell HD Audio Controller (rev 06)
    Subsystem: Intel Corporation Device 2010
    Flags: bus master, fast devsel, latency 0, IRQ 35
    Memory at f0434000 (64-bit, non-prefetchable) [size=16K]
    Capabilities: [50] Power Management version 2
    Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit-
    Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
    Kernel driver in use: snd_hda_intel
[...]

# tree /dev/snd
/dev/snd
├── by-path
│  ├── pci-0000:00:03.0 -> ../controlC1
│  └── platform-pcspkr -> ../controlC0
├── controlC0
├── controlC1
├── hwC1D0
├── pcmC0D0p
├── pcmC1D3p
├── pcmC1D7p
├── pcmC1D8p
└── timer

Steps I took:
0. Googled and read every entry about "openvz sound card passthrough"
1. Created a container (Debian Wheezy template)
2. Edited the container conf to add DEVNODES="snd/controlC1:rw snd/hwC1D0:rw snd/pcmC1D3p:rw snd/pcmC1D7p:rw snd/pcmC1D8p:rw snd/timer:rw"
3. Installed and configured x11vnc, xvfb, openbox, xterm and alsa* in the container to get a minimal desktop (accessible through VNC)
4. alsamixer does not recognize the sound card
5. The /dev/snd nodes are created inside the container, but with root:root ownership and 600 permissions
6. Tried disabling udev inside the container
7. Tried pulseaudio instead of alsa
8. Tried udev rules to change ownership (GROUP=audio MODE=0666)
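For step 2, a vzctl-based equivalent (CTID 101 is an example), plus a host-side fix for the permission problem seen in step 5:

```shell
# Grant the container access to the host sound devices (step 2 via vzctl);
# if this vzctl version does not accept repeated --devnodes flags, run one
# "vzctl set ... --save" per device instead
vzctl set 101 --devnodes snd/controlC1:rw --devnodes snd/hwC1D0:rw \
    --devnodes snd/pcmC1D3p:rw --devnodes snd/timer:rw --save

# Loosen ownership and permissions on the in-container nodes (step 5)
vzctl exec 101 'chgrp audio /dev/snd/* && chmod 660 /dev/snd/*'
```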

I have also tried following the wiki instructions for QEMU passthrough and it works, but guest performance is not good.
I'm still looking for a way to make audio work inside a container.
Any thoughts?

Unable to install PROXMOX 3.x on HP DL380 G9 (UEFI)

Hi All,

I just joined the forum. I am trying to install Proxmox on an HP DL380 G9 but can't; it seems to be a UEFI compatibility issue. I have tried burning the image to CD as well as USB, and have tried 3.2 and 3.3. Is Proxmox UEFI compatible?

I'm trying to meet a deadline and would appreciate a quick reply.

Regards

Waseem

Proxmox HA cluster with 2 Proxmox nodes + Storage cluster with 2 GlusterFS nodes

I have 2 Proxmox 3 nodes and 2 Debian machines with GlusterFS.
I'd like to have an HA cluster with Proxmox. I know Proxmox needs three nodes for a fully HA configuration, but I'd like to know whether it is possible to use one of my Debian machines as a "fake node" just for replication and HA.
I found https://pve.proxmox.com/wiki/Two-Nod...bility_Cluster but it is about Proxmox 2, and I am not sure the cluster configuration is the same under Proxmox 3.

Could not access KVM kernel module: No such file or directory

Code:

root@xray:~# qm start 101
Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
start failed: command '/usr/bin/kvm -id 101 -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/101.vnc,x509,password -pidfile /var/run/qemu-server/101.pid -daemonize -smbios 'type=1,uuid=e7c0a8aa-46b3-4cd7-8e03-6675c332c691' -name teste -smp 'sockets=1,cores=1' -nodefaults -boot 'menu=on' -vga cirrus -cpu kvm64,+lahf_lm,+x2apic,+sep -k en-us -m 512 -cpuunits 1000 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:974c66ee34e8' -drive 'file=/var/lib/vz/template/iso/discos-2.0.iso,if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -device 'ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7' -drive 'file=/var/lib/vz/images/101/vm-101-disk-1.raw,if=none,id=drive-sata0,format=raw,aio=native,cache=none,detect-zeroes=on' -device 'ide-drive,bus=ahci0.0,drive=drive-sata0,id=sata0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' -device 'e1000,mac=D6:CF:BB:D7:83:EE,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1

Quote:

root@xray:~# modprobe kvm-intel
ERROR: could not insert 'kvm_intel': Operation not supported
Quote:

root@xray:~# pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-34-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-34-pve: 2.6.32-139
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
Quote:

CPU flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm
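The CPU flags above include vmx, so the silicon supports VT-x; the usual suspects for "Operation not supported" on modprobe are VT-x disabled in the BIOS/UEFI or a conflicting module. A few non-invasive checks:

```shell
egrep -o 'vmx|svm' /proc/cpuinfo | sort -u   # virtualization flag visible to the kernel?
dmesg | grep -i kvm                          # often reports "disabled by bios"
lsmod | grep kvm                             # anything already (half) loaded?
```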

PROXMOX MGMT behind Load Balancer

Hello all. Below is what I have. My overall goal is for a module on my web site to still be able to communicate with the cluster if a node goes down. Also, using DNAT I can still use the Proxmox firewall effectively. I have already reached out to support for the Zen Load Balancer, but wanted to see if anyone here could spot anything I might be missing to get this working 100%.

External block of IPs coming in through Charter (NAT'ed to subinterfaces/VLANs)
Cisco 1841 with subinterfaces 10 and 11
Cisco 3560G VLANs 10 and 11
4 Proxmox servers trunked to the 3560G for VLANs 10 and 11
prox1 (eth0 10.10.10.201 / no gateway, mgmt network) (eth1.11 10.10.11.201 / gw 10.10.11.254, backend)
prox2 (eth0 10.10.10.202 / no gateway, mgmt network) (eth1.11 10.10.11.202 / gw 10.10.11.254, backend)
prox3 (eth0 10.10.10.203 / no gateway, mgmt network) (eth1.11 10.10.11.203 / gw 10.10.11.254, backend)
prox4 (eth0 10.10.10.204 / no gateway, mgmt network) (eth1.11 10.10.11.204 / gw 10.10.11.254, backend)

2 Zen load balancers set up in cluster mode (virtual machines inside the Proxmox cluster)
proxlb1 10.10.10.211 eth0 mgmt ip
proxlb2 10.10.10.212 eth0 mgmt ip
10.10.10.213 vip for cluster services
proxlb1 eth1 10.10.11.252
proxlb2 eth1 10.10.11.253
10.10.11.254 vip for gateway for backend computers
10.10.10.221 vip for farm lx4nat with dnat

I have tried using 10.10.10.221 and 10.10.11.254 to access my web GUI. Both pass me to https://proxmoxip:8006 and I can log in and do things. The fact that I had to set the LB VIP (10.10.11.254) as gateway on the Proxmox NICs has caused a lot of trouble. HA does not seem to be working at all, and when I simulate a failed server and turn it back on, it refuses to communicate with the cluster. Below are some troubleshooting commands.

root@prox1:~# clustat
Cluster Status for StratoCluster1 @ Fri Nov 21 13:45:55 2014
Member Status: Quorate


Member Name ID Status
------ ---- ---- ------
prox1 1 Online, Local, rgmanager
prox2 2 Online, rgmanager
prox3 3 Online, rgmanager
prox4 4 Online


Service Name Owner (Last) State
------- ---- ----- ------ -----
pvevm:100 (prox4) stopped
pvevm:103 (prox4) stopped

cluster.conf
<?xml version="1.0"?>
<cluster config_version="16" name="StratoCluster1">
<cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>
<rm>
<failoverdomains>
<failoverdomain name="proxfailover" nofailback="0" ordered="0" restricted="1">
<failoverdomainnode name="prox1"/>
<failoverdomainnode name="prox2"/>
<failoverdomainnode name="prox3"/>
<failoverdomainnode name="prox4"/>
</failoverdomain>
</failoverdomains>
<pvevm autostart="1" vmid="103" domain="proxfailover" recovery="relocate"/>
<pvevm autostart="1" vmid="100" domain="proxfailover" recovery="relocate"/>
</rm>
<clusternodes>
<clusternode name="prox1" nodeid="1" votes="1"/>
<clusternode name="prox2" nodeid="2" votes="1"/>
<clusternode name="prox3" nodeid="3" votes="1"/>
<clusternode name="prox4" nodeid="4" votes="1"/>
</clusternodes>
</cluster>

root@prox1:~# fence_tool ls
fence domain
member count 4
victim count 0
victim now 0
master nodeid 1
wait state none
members 1 2 3 4

root@prox4:~# service rgmanager status
rgmanager (pid 3388 3387) is running...

root@prox4:~# ps 3388
PID TTY STAT TIME COMMAND
3388 ? D<l 0:00 rgmanager
root@prox4:~# ps 3387
PID TTY STAT TIME COMMAND
3387 ? S<Ls 0:00 rgmanager

The machine prox4 has been rebooted multiple times. rgmanager shows as running in one place but not the other. I can SSH to it from prox1, but in the GUI I get connection refused for prox4.

Any help is greatly appreciated!

Exclude directories during backup

Is there any way to exclude directories from being backed up by the built-in backup facilities? This would be very useful for backing up VMs that contain a large amount of data, especially if the data doesn't change much over time. An example would be a virtualized NAS server used to store archival video. The NAS itself might only be 1 GB in size, but might hold hundreds of GB of video. The server would occasionally be updated with software patches, so it would make sense to back it up, but it would make no sense to include the video files; it would make more sense to back those up with regular backup software like Bacula.

Or, am I making a big, wrong assumption that the filesystems that hold the video files would be stored in the VM itself? Would it make more sense to have a directory managed directly by Proxmox, expose it via the virtio driver to the VM, and then mount the directory inside the VM? I've not actually tried this myself, but assume it could work.

Maybe then the better question is how best to manage storage in a virtualized environment - inside the VMs vs in Proxmox itself. Comments?

How to configure cluster 2 nodes with DRBD

Hi, this is my first post.

I'm going to configure a 2-node cluster using DRBD. I first need to configure node1 and, two weeks later, configure node2. Here is my problem: I don't know how to configure the DRBD service for the first node alone and add the second node later.

When I try to start the drbd service I get this error:

Quote:

root@nodo1:~# service drbd start
Starting DRBD resources:drbd.d/r0.res:1: resource r0 in:
Missing section ' on <PEER>{...}'.
resource r0: cannot change network config without knowing my peer.
[0]: State change failed: (- 2) Need access to UpToDate data
Command ' / sbin/drbdsetup 0 primary' terminated with exit code 17
0: State change failed: (- 2) Need access to UpToDate data
Command ' / sbin/drbdsetup 0 primary' terminated with exit code 17
0: State change failed: (- 2) Need access to UpToDate data
Command ' / sbin/drbdsetup 0 primary' terminated with exit code 17
0: State change failed: (- 2) Need access to UpToDate data
Command ' / sbin/drbdsetup 0 primary' terminated with exit code 17
0: State change failed: (- 2) Need access to UpToDate data
Command ' / sbin/drbdsetup 0 primary' terminated with exit code 17
root@nodo1:~#
When I try to create the PV, I get this error:

Quote:

root@nodo1:~#pvcreate /dev/drbd0
Device /dev/drbd0 not found (or ignored by filtering).
root@nodo1:~#
My lvm.conf filter is configured:
Quote:

filter = [ "r|/dev/sdb1|", "r|/dev/disk/|", "a|.*|" ]

Surely my problem is in the global DRBD config or in r0.res, because it cannot connect to node2. What should I do to be able to start the service using only node1 and configure node2 afterwards?
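For reference, the "Missing section 'on <PEER>'" error means r0.res must contain both "on <hostname>" sections even if the peer machine does not exist yet. A minimal two-node sketch (hostnames, backing disk, and addresses are placeholders to adapt):

```
resource r0 {
    protocol C;
    startup { degr-wfc-timeout 60; }
    on nodo1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on nodo2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```

With the peer absent, the resource can then be brought up on node1 alone (drbdadm create-md r0, start the service, and force it primary with drbdadm -- --overwrite-data-of-peer primary r0, at least on DRBD 8.3); once node2 exists, it will sync from node1.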

Thank you very much for the time and help

PVE: backup cron don't work on a node

Hi all,

I have a PVE cluster with 3 nodes.
All is working well, but for the past 2 days a backup cron job for 2 VMs running on one node doesn't start, and no error is reported...

Any ideas?

Thanks
PieroB