
No Network Connection after install

Hello, I'm new to Proxmox and Linux in general, so sorry if this is a stupid question. After installing Proxmox I have no network connection. I have three network adapters: two on the motherboard and one Intel NIC. The internet is plugged into the Intel NIC, but after the install I can't ping Google and my other computer can't reach the web interface (it times out). There is no blinking light on the adapter while Proxmox is running, so I suspect it's trying to use the wrong network adapter.

I tried installing Proxmox on top of Debian instead, since there I can choose which network adapter to use. I'm able to use the internet, install packages, ping, etc., but following this guide (I can't post links yet, sorry to make you google it): /wiki/Install_Proxmox_VE_on_Debian_Jessie, I can't add the Proxmox VE repository. When I enter the echo command from the wiki to add the repository, nothing happens; the shell simply prompts me for a new command as if I hadn't entered the last one.
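The command in question is roughly this (typed from memory of the wiki, so the exact repository line may differ):
Code:

echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
# the > redirects the line into the file, so nothing is printed on success
cat /etc/apt/sources.list.d/pve-install-repo.list   # check that the line was actually written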


I'm thinking my solution might be to manually put the files on the drive where they would normally have been downloaded, and I'll try this when I get home from work, but I would love some advice. Is there a way to choose which adapter Proxmox uses, maybe after the install? Or is that not the problem I'm having?

Thank you!

Bandwidth Reports

Is there a place in Proxmox that has some type of bandwidth summary?

I know you have the graph under Summary, but it would be nice to have a report of the number of bytes sent/received in Proxmox without having to log into the VM/CT and look at ifconfig.

Something that would show the number of bytes would be great, maybe with the same intervals as the summary graphs (Hour / Day / Week / Month / Year), or something like Today / Yesterday / Last 7 Days / This Month / Last Month / Last 30 Days / This Year / Last Year / Last 365 Days.

If there is something out there that will do this and break it down per CT and VM, can someone let me know?
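The closest workaround I've found so far is reading the host-side interface counters, assuming the usual tapVMIDi0 naming for VM NICs and vethVMIDi0 for container NICs (the counters reset when the guest or interface restarts):
Code:

# print bytes per guest interface, as seen from the host
# note: rx/tx are from the host's point of view (rx = traffic sent by the guest)
for dev in /sys/class/net/tap*i* /sys/class/net/veth*i*; do
    [ -e "$dev/statistics/rx_bytes" ] || continue
    rx=$(cat "$dev/statistics/rx_bytes")
    tx=$(cat "$dev/statistics/tx_bytes")
    printf '%-12s rx=%s bytes  tx=%s bytes\n' "$(basename "$dev")" "$rx" "$tx"
done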

Thanks

[SOLVED] Problems migrating OpenVZ containers Proxmox 2->4 (ssh PTY allocation request failed)

I have not so far found any mention of the two problems we experienced migrating our OpenVZ containers into LXC when we upgraded to Proxmox 4, so I am memorializing them here in case they are useful to others:

Scenario: a CentOS 6 container (of uncertain template origin) running in OpenVZ under Proxmox 2.x. We followed the normal migration procedure (Proxmox wiki: Convert_OpenVZ_to_LXC), but:

1) ssh into the CT gives the error
PTY allocation request failed on channel 0
shell request failed on channel 0

on the client side and

Nov 13 20:12:14 abc-vm sshd[1626]: error: openpty: No such file or directory
Nov 13 20:12:14 abc-vm sshd[1633]: error: session_pty_req: session 0 alloc failed

on the server/container side.

Short form: inside the container, remove /dev/ptmx and symlink it to /dev/pts/ptmx.
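In command form:
Code:

# run inside the affected container
rm /dev/ptmx
ln -s /dev/pts/ptmx /dev/ptmx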


2) During the conversion, some of our CTs would not be recognized as CentOS. (I have no idea whether this is relevant to anyone other than those who used the same templates we did, but I'm noting it in case anyone else comes across it.)

The detection code in /usr/share/perl5/PVE/LXC/Setup.pm gets diverted if it finds the file $rootfs/etc/lsb-release; when that file turns out not to be Ubuntu's, we never get to checking for /etc/redhat-release to detect RedHat or CentOS. As a result, the conversions appropriate to CentOS are never done and the converted container will not start.

This quick and dirty change resolved the problem for us (though the indentation does not look quite like it does on screen, so the cut and paste may have mangled something):


(removed patch text)
(In short: change the logic so it keeps trying the other indicators instead of falling through to the end when lsb-release is present but Ubuntu is not named in it.)

Hopefully this is somewhat useful to someone.

PCI passthrough of LSI HBA not working properly

I don't believe I'm the only one having this issue, but tracking it down to a specific cause is eluding me, and I don't know whether the dev team is aware of it. I have an HP Z820 with an onboard LSI SAS/SATA controller that I have flashed to the P19 LSI firmware. I'm trying to pass it through to a Linux VM, specifically Debian Jessie, but I've also tried Fedora 23 Server.

When I do this under Proxmox, following the wiki, the controller passes through but the filesystems on my disks act strangely. I can see the card on the Proxmox host switch from the mpt2sas driver to vfio, and the card and drives appear in the guest VM. Sometimes the disks mount in the guest, but existing plain-text files are reported as binary and full of gibberish. Other times the disks refuse to mount, reporting problems with the superblock and other odd things. I can mount the disks just fine on the hypervisor itself, so I know the data is good.

I also know this is not a hardware problem, because I can pass the controller and disks through perfectly under both XenServer 6.5 and ESXi 6.0 with no corruption. So something is amiss with Proxmox 4.0. I'm inclined to stay with XenServer, but the allure of Proxmox 4.0 with LXC container support and native HTML5 web management is strong.
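For reference, what I have configured per the wiki boils down to roughly this (the PCI address is just an example, yours will differ):
Code:

# /etc/default/grub, followed by update-grub and a reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/pve/qemu-server/<vmid>.conf
hostpci0: 05:00.0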

Configuring Proxmox VE 4 as an iSCSI target

Hello all, first post here, so if there's a key piece of documentation or an existing post that I missed, I'll be happy to read it. (I'm just using this on a single host for testing/learning at home, and as a chance to jump into ZFS as well.)

Proxmox VE 4 latest (4.2.2-1-pve), using ZFS for local.

I was trying to present one of the ZFS volumes as an iSCSI target for another client to access, but that part is probably irrelevant at the moment. (To be clear, I'm NOT using an iSCSI initiator on the Proxmox host; I'm trying to install/configure an iSCSI target on the host.) Right now I'm just trying to share a simple folder (e.g. /storage/DirIWantToShare).

For Proxmox specifically, one thread* is the only thing I found that seems relevant. I've found several various threads for issues w/ the "iscsitarget" package on Debian, but nothing I can quite make work.

I've managed to get the iSCSI target DKMS module installed (by adding the no-subscription repo), I've tried using tgt instead, and I've tried compiling SCST as well. Has anyone been able to get any of these working? If someone wants to work through this with me, I'll be happy to get back to a clean state and give more details.

*I don't have link posting privs yet, thread is serverfault.com/questions/557013/basic-iscsi-target-setup-on-proxmox-ve-debian-7-not-working
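For reference, with tgt the kind of definition I've been trying looks roughly like this (the IQN, zvol path, and allowed subnet are just placeholders):
Code:

# /etc/tgt/conf.d/pve-zvol.conf
<target iqn.2015-11.local.pve:share1>
    # export a ZFS zvol as the LUN
    backing-store /dev/zvol/rpool/data/share1
    # only allow initiators from this subnet
    initiator-address 192.168.1.0/24
</target>

# then reload the target configuration
tgt-admin --update ALL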

port forwarding + masquerading + firewall

What is the standard way to redirect a port to a virtual machine and do masquerading?
I tried the commands below. They work, but if the firewall is enabled on the virtual machine's interface, masquerading breaks whenever I change anything in the firewall via the GUI.

iptables -t nat -A POSTROUTING -s '192.168.1.0/24' -o vmbr0 -j MASQUERADE
# or: iptables -t nat -A POSTROUTING -s '192.168.1.0/24' -o vmbr0 -j SNAT --to-source abcd

iptables -t nat -A PREROUTING -i vmbr0 -p tcp -d $ext_ip --dport 22 -j DNAT --to 192.168.1.2:22
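For reference, the way I would persist these is via /etc/network/interfaces, roughly like this (vmbr1 is my internal bridge; addresses and ports are examples):
Code:

auto vmbr1
iface vmbr1 inet static
    address 192.168.1.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s 192.168.1.0/24 -o vmbr0 -j MASQUERADE
    post-up   iptables -t nat -A PREROUTING  -i vmbr0 -p tcp --dport 22 -j DNAT --to 192.168.1.2:22
    post-down iptables -t nat -D PREROUTING  -i vmbr0 -p tcp --dport 22 -j DNAT --to 192.168.1.2:22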

Only RBD Storage

Right now my hosts only have 8 GB of local storage for the operating system. The primary storage for the containers and VMs is Ceph RBD. I have a vanilla Ubuntu server that I'm currently using for backups. Now that I'm looking to implement Proxmox for all my systems, how do I go about having shared storage for the ISOs and container templates? Do I need to create an NFS share from my backup server, where I have plenty of raw space? I don't need a lot of space for them; why can't they just reside within the Ceph storage cluster?
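If NFS is the way to go, I assume it would just be an /etc/pve/storage.cfg entry roughly like this (the name, IP, and export path are placeholders):
Code:

nfs: iso-templates
    server 192.168.1.50
    export /srv/pve-share
    path /mnt/pve/iso-templates
    content iso,vztmpl
    options vers=3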

As for backups: since I'm using Ceph, it supports snapshots. Can those snapshots be archived to a slower Ceph pool (spinning disks) or somewhere else? If so, how would Proxmox handle the restore process?

Thanks for your time. Proxmox 4.0 is looking good so far, and I'm glad I'm able to revisit Proxmox as a possible solution.

Nodes going red

We have had an issue with nodes going red on the PVE web page for at least a week.
We have a 3-node cluster, all software up to date; corosync uses a separate network.

From the PVE web pages: every morning at least two of the nodes show the other nodes red, while usually one of the nodes shows all green.

From the CLI, status looks OK on all three nodes.

The red-node issue can be fixed by running: /etc/init.d/pve-cluster restart

The network can get busy overnight with PVE backups and other rsync cron jobs.

We have the red-node issue right now.

Here is more information:
Code:

dell1  /var/log # cat /etc/pve/.members
{
"nodename": "dell1",
"version": 94,
"cluster": { "name": "cluster-v4", "version": 13, "nodes": 3, "quorate": 1 },
"nodelist": {
  "sys3": { "id": 1, "online": 1, "ip": "10.1.10.42"},
  "dell1": { "id": 3, "online": 1, "ip": "10.1.10.181"},
  "sys5": { "id": 4, "online": 1, "ip": "10.1.10.19"}
  }
}

Code:

dell1  ~ # pvecm status
Quorum information
------------------
Date:            Sun Nov 15 07:51:39 2015
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000003
Ring ID:          11448
Quorate:          Yes

Votequorum information
----------------------
Expected votes:  3
Highest expected: 3
Total votes:      3
Quorum:          2 
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000004          1 10.2.8.19
0x00000001          1 10.2.8.42
0x00000003          1 10.2.8.181 (local)

Code:

dell1  ~ # pveversion -v
proxmox-ve: 4.0-21 (running kernel: 4.2.3-2-pve)
pve-manager: 4.0-57 (running version: 4.0-57/cc7c2b53)
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-4.2.3-1-pve: 4.2.3-18
pve-kernel-4.2.3-2-pve: 4.2.3-21
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-24
qemu-server: 4.0-35
pve-firmware: 1.1-7
libpve-common-perl: 4.0-36
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-29
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.4-12
pve-container: 1.0-21
pve-firewall: 2.0-13
pve-ha-manager: 1.0-13
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.4-3
lxcfs: 0.10-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve6~jessie


Multicast tests have been done and seem to be OK
Code:


dell1  /etc # omping -c 10000 -i 0.001 -F -q  sys3-corosync sys5-corosync dell1-corosync
sys3-corosync : waiting for response msg
sys5-corosync : waiting for response msg
sys5-corosync : joined (S,G) = (*, 232.43.211.234), pinging
sys3-corosync : joined (S,G) = (*, 232.43.211.234), pinging
sys3-corosync : given amount of query messages was sent
sys5-corosync : given amount of query messages was sent

sys3-corosync :  unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.073/0.101/0.282/0.020
sys3-corosync : multicast, xmt/rcv/%loss = 10000/9993/0% (seq>=8 0%), min/avg/max/std-dev = 0.069/0.107/0.291/0.021
sys5-corosync :  unicast, xmt/rcv/%loss = 10000/10000/0%, min/avg/max/std-dev = 0.060/0.099/3.637/0.073
sys5-corosync : multicast, xmt/rcv/%loss = 10000/9993/0% (seq>=8 0%), min/avg/max/std-dev = 0.059/0.107/3.645/0.073

dell1  /etc # omping -c 600 -i 1 -q  sys3-corosync sys5-corosync dell1-corosync
sys3-corosync : waiting for response msg
sys5-corosync : waiting for response msg
sys3-corosync : waiting for response msg
sys5-corosync : waiting for response msg
sys5-corosync : joined (S,G) = (*, 232.43.211.234), pinging
sys3-corosync : joined (S,G) = (*, 232.43.211.234), pinging
sys3-corosync : given amount of query messages was sent
sys5-corosync : given amount of query messages was sent

sys3-corosync :  unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.108/0.251/0.382/0.035
sys3-corosync : multicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.112/0.253/0.779/0.041
sys5-corosync :  unicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.125/0.216/1.754/0.071
sys5-corosync : multicast, xmt/rcv/%loss = 600/600/0%, min/avg/max/std-dev = 0.116/0.210/1.762/0.067

As long as our VMs keep working, I'll leave the nodes red in order to supply more information.

I've been checking syslog on each node and cannot work out what is causing the issue.

Any suggestions to try to get this fixed?

best regards, Rob Fantini

Backing up proxmox (the host itself) - what and how?

I've found only a single thread so far dealing with this question, but no definitive answer, so I'm opening my own: http://forum.proxmox.com/threads/215...(boot-and-lvm)

If I've somehow missed this information, feel free to point me towards it. I'll also happily edit the wiki if I get useful replies here.

My intention is not to discuss backup storage and targets, or the tool used for the backup, as that depends on everyone's preferences.

My goal is:
If the worst-case scenario happens and I need to completely restore a Proxmox server, I want to be able to do a bare-metal ISO install, restore a backup, maybe change the hostname and network configuration, and have Proxmox in exactly the state it was in at the last backup (without any VMs or containers, of course).

What to back up?
Since Proxmox doesn't run any databases (right?), I thought I'd use a tool like duplicity, borgbackup, or attic and back up /.
Now I'm considering what to exclude from the backup. This is my list so far; any suggestions on what to exclude or include?

EXCLUDE LIST
*/cache
*/cache/supercache
*/image_cache
*/lost+found
*/Maildir/.spam
*/Maildir/.Trash
*/rrd
*/tmp
*/web/serverstats
/cdrom
/dev
/initrd
/media
/mnt
/opt
/proc
/run
/srv
/sys
/usr/share/man
/var/backups
/var/cache
/var/empty
/var/lib/mysql
/var/lock
/var/log
/var/run
/var/spool
/var/webmin
/var/lib/lxc/*/rootfs/media/*
/var/lib/lxc/*/rootfs/mnt/*
/var/lib/lxc/*/rootfs/proc/*
/var/lib/lxc/*/rootfs/run/*
/var/lib/lxc/*/rootfs/sys/*
/var/lib/lxcfs/cgroup
/var/lib/vz


I'm not sure whether I'm excluding too much or too little; any hints are welcome.
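For context, the kind of invocation I have in mind is roughly this (the repository path is a placeholder; /etc/pve is listed explicitly because it is a FUSE mount and would otherwise be skipped by --one-file-system):
Code:

# back up the host root plus the cluster configuration filesystem
borg create --stats --one-file-system \
    --exclude-from /root/backup-excludes.txt \
    /mnt/backup/pve-host::"pve-$(date +%F)" \
    / /etc/pve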

I also found another thread, https://pve.proxmox.com/wiki/Backup_...re_of_LVM_data, which says to use vgcfgbackup. Would that still be needed with my proposed approach above?

Windows 10 guests cannot install Fall update. Anyone else?

Has anyone else experienced this? I'm attempting to run the Fall update in a Windows 10 Pro VM and it's failing during the upgrade: it just freezes. I've tried twice, left it overnight, and it freezes somewhere around 29% each time.

The VM uses a qcow2 image on a ZFS filesystem, with writeback caching enabled, virtio disk/network drivers, and the Default CPU type.

Code:

pveversion -v
proxmox-ve: 4.0-16 (running kernel: 4.2.2-1-pve)
pve-manager: 4.0-50 (running version: 4.0-50/d3a6b7e5)
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-23
qemu-server: 4.0-31
pve-firmware: 1.1-7
libpve-common-perl: 4.0-32
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-27
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-10
pve-container: 1.0-10
pve-firewall: 2.0-12
pve-ha-manager: 1.0-10
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
zfsutils: 0.6.5-pve4~jessie

drbd 8.4.6 on pve 4.0?

Hello,

PVE 4.0 ships with DRBD9 already included in the pve kernel.
Is it possible to use drbd 8.4.6 (compiled from source) instead?
I'm trying to compile it against 4.2.3-2-pve but I get the following error:

Quote:

Need a git checkout to regenerate drbd/.drbd_git_revision
make[1]: Entering directory '/root/drbd-8.4.6/drbd'

Calling toplevel makefile of kernel source tree, which I believe is in
KDIR=/lib/modules/4.2.3-2-pve/build

make -C /lib/modules/4.2.3-2-pve/build SUBDIRS=/root/drbd-8.4.6/drbd modules
CC [M] /root/drbd-8.4.6/drbd/drbd_debugfs.o
CC [M] /root/drbd-8.4.6/drbd/drbd_bitmap.o
CC [M] /root/drbd-8.4.6/drbd/drbd_proc.o
/root/drbd-8.4.6/drbd/drbd_proc.c: In function ‘drbd_seq_show’:
/root/drbd-8.4.6/drbd/drbd_proc.c:292:4: error: implicit declaration of function ‘bdi_rw_congested’ [-Werror=implicit-function-declaration]
bdi_rw_congested(&device->rq_queue->backing_dev_info);
^
cc1: some warnings being treated as errors
scripts/Makefile.build:258: recipe for target '/root/drbd-8.4.6/drbd/drbd_proc.o' failed
make[3]: *** [/root/drbd-8.4.6/drbd/drbd_proc.o] Error 1
Makefile:1398: recipe for target '_module_/root/drbd-8.4.6/drbd' failed
make[2]: *** [_module_/root/drbd-8.4.6/drbd] Error 2
Makefile:103: recipe for target 'kbuild' failed
make[1]: *** [kbuild] Error 2
make[1]: Leaving directory '/root/drbd-8.4.6/drbd'
Makefile:90: recipe for target 'module' failed
make: *** [module] Error 2
The PVE headers and build-essential are already installed.

Thanks

PVE4 - VM Firewalls not working

I've got a clean install of Proxmox 4 with a newly created LXC container. I enabled the firewall for the datacenter and the node, and I see the expected rules when I run iptables-save.

Next, as a test, I enabled the container's firewall and added two rules, one to DROP POP3 and the other to REJECT IMAP. When I view the rules using iptables-save, no rules have been added for POP3 or IMAP. Also, pve-firewall simulate shows that the rules are not working.

What am I missing?

# cat 100.fw:
Code:

[OPTIONS]
enable: 1
[RULES]
IN POP3(DROP) -i net0
IN IMAP(REJECT) -i net0

# cat cluster.fw:
Code:

[OPTIONS]
enable: 1

# pve-firewall simulate -to ct100 -dport 143
Code:

Test packet:
  from    : outside
  to      : ct100
  proto  : tcp
  dport  : 143
ACTION: ACCEPT

# pve-firewall simulate -to ct100 -dport 110
Code:

Test packet:
  from    : outside
  to      : ct100
  proto  : tcp
  dport  : 110
ACTION: ACCEPT

[SOLVED] GUI/CONF Network configuration being ignored

I spent a couple of hours today testing a couple of restores of OpenVZ/LXC containers and ran into some network trouble. I was able to figure out the cause, and I hope the notes below help others as well.

Symptoms: the network config would not be written into the container, although CentOS 7 could still be configured manually.

Cause: Undefined subroutine &PVE::Network::is_ip_in_cidr
Solution: manually update the Perl module PVE::Network in /usr/share/perl5/PVE/Network.pm from git (the forum won't let me post the link).
Why: it appears some changes were made to /usr/share/perl5/PVE/LXC/Setup/Redhat.pm that depend on an updated /usr/share/perl5/PVE/Network.pm, specifically the is_ip_in_cidr function, which was not yet available via apt-get update && apt-get upgrade; this was causing an error on LXC container start. Once Network.pm was updated, that part worked, mostly, but then I hit a separate problem (maybe something specific to my environment).

Version: Proxmox VE 4, updated to latest via apt-get update && apt-get upgrade.

This was pretty much a fresh install which I then updated, and I had this issue. If there is a place where I should file a bug report, let me know. It's working fine on my system after this quick fix, which took a while to track down.

I generated a patch to save people some digging, hoping it helps, but the site wouldn't let me include it even inside a CODE tag; I guess it looked like it contained links.

[SOLVED] CentOS Container network not starting

Problem: can't bring up the network in the container. The CentOS scripts fail with an error saying the IP address is already in use. The OS scripts on this distribution do some checking that may be tripping over something specific to my environment, but regardless, I'm careful to assign IP addresses that are not already in use, so I don't want the OS refusing to bring up an interface on a false positive.

I don't know whether this helps anyone else, but hopefully it does. If it's recommended that I file a bug report, let me know; I'm happy to do so.

Container Type: LXC
Container OS: CentOS 7
Proxmox version: 4, updated to latest with apt-get update && apt-get upgrade

The site won't let me post the patch file I made (because I'm new), but I can describe it, so if someone else hits this problem you will know what to do.

Solution: add two $data .= "ARPCHECK=no\n"; lines to /usr/share/perl5/PVE/LXC/Setup/Redhat.pm, one for the IPv4 block and one for IPv6, right after the code that writes the static IPv4/IPv6 addresses. With the container OS not doing the ARP check, the interfaces come up normally and there are no problems. This mattered because, now that I had fixed the GUI/<ID>.conf network settings being applied, they would overwrite my custom config, and the interface wouldn't come up without this check disabled.
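The net effect is that the generated ifcfg file inside the container ends up with ARPCHECK=no, roughly like this (addresses are examples):
Code:

# /etc/sysconfig/network-scripts/ifcfg-eth0 (inside the CT)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.50
PREFIX=24
GATEWAY=192.168.1.1
# skip the arping duplicate-address check on ifup
ARPCHECK=no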

Proxmox 4.0 CT deployment error

Hi all,

I'm having an issue deploying a container on a local ZFS storage.

Code:

CT 104 create:


Use of uninitialized value $path in pattern match (m//) at /usr/share/perl5/PVE/LXC.pm line 1969.
Use of uninitialized value $path in pattern match (m//) at /usr/share/perl5/PVE/LXC.pm line 1969.
extracting archive '/var/lib/vz/template/cache/ubuntu-15.04-standard_15.04-1_amd64.tar.gz'
Total bytes read: 521256960 (498MiB, 53MiB/s)
Detected container architecture: amd64
Creating SSH host key 'ssh_host_dsa_key' - this may take some time ...
Creating SSH host key 'ssh_host_ed25519_key' - this may take some time ...
Creating SSH host key 'ssh_host_key' - this may take some time ...
Creating SSH host key 'ssh_host_ecdsa_key' - this may take some time ...
Creating SSH host key 'ssh_host_rsa_key' - this may take some time ...
Use of uninitialized value $path in pattern match (m//) at /usr/share/perl5/PVE/LXC.pm line 1969.
Use of uninitialized value $path in pattern match (m//) at /usr/share/perl5/PVE/LXC.pm line 1969.
Use of uninitialized value $path in pattern match (m//) at /usr/share/perl5/PVE/LXC.pm line 1969.
TASK OK

I've deployed two containers so far and they both went off without any issues, so I'm not sure what's going on here. I've allocated 4096 GB of disk space to the container. I'm using 3x 2 TB disks in a simple zpool for now; Proxmox reports 5.22 TB free, and df -h confirms this. The container is created successfully, for some reason, but it won't start.

Code:


CT 104 start:

Use of uninitialized value $path in concatenation (.) or string at /usr/share/perl5/PVE/LXC.pm line 1110.

lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 346 To get more details, run the container in foreground mode.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.
TASK OK

running in foreground mode:

root@proxmox0:~# lxc-start -F --name storage
lxc-start: lxc_start.c: main: 295 Executing '/sbin/init' with no configuration file may crash the host



I'm not sure if there's anything else that's relevant.

pveproxy has performance problems with keep-alive connections

When many people use noVNC, the API server (web GUI, noVNC) stops responding to new connections. The noVNC sessions that are already open keep working, and the command 'pveproxy status' shows the service is running.

After editing /usr/share/perl5/PVE/Service/pveproxy.pm and changing the 'max_workers' option, pveproxy can accept more connections.

Code:

my %daemon_options = (
    max_workers => 3, # change it to 7
    restart_on_error => 5,
    stop_wait_time => 15,
    leave_children_open_on_reload => 1,
    setuid => 'www-data',
    setgid => 'www-data',
    pidfile => '/var/run/wayproxy/wayproxy.pid',
);

I also came across this:
Quote:

Proxmox VE 3.0 completely replaces Apache2 with a new event driven API server (pveproxy). This allows efficient support for HTTP keep-alive.
Maybe pveproxy cannot handle keep-alive connections very well?

Is there a way to replace pveproxy with nginx?
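What I had in mind is leaving pveproxy on port 8006 and putting nginx in front of it as a reverse proxy, roughly like this (the certificate paths and hostname are placeholders):
Code:

server {
    listen 443 ssl;
    server_name pve.example.com;

    ssl_certificate     /etc/nginx/ssl/pve.crt;
    ssl_certificate_key /etc/nginx/ssl/pve.key;

    location / {
        proxy_pass https://127.0.0.1:8006;
        proxy_http_version 1.1;
        # required for the noVNC websocket connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600s;
    }
}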

tlsv1 alert unknown ca

I'm seeing thousands of lines like this in syslog:
Code:

Nov 16 07:30:17 10.160.5.11 pveproxy[88404]: problem with client 10.160.5.12; ssl3_read_bytes: tlsv1 alert unknown ca
Nov 16 07:30:17 10.160.5.11 pveproxy[88404]: Can't call method "timeout_reset" on an undefined value at /usr/share/perl5/PVE/HTTPServer.pm line 225.

The "tlsv1 alert unknown ca" message is also shown as a pop-up alert in the web GUI every time it occurs.

Proxmox backup fails "Got timeout"

Hello,

Sometimes my backup fails with the following error:

206: Nov 13 20:55:12 INFO: Starting Backup of VM 206 (qemu)
206: Nov 13 20:55:12 INFO: status = running
206: Nov 13 20:55:12 INFO: update VM 206: -lock backup
206: Nov 13 20:55:13 INFO: backup mode: snapshot
206: Nov 13 20:55:13 INFO: ionice priority: 7
206: Nov 13 20:55:13 INFO: creating archive '/var/lib/vz/dump/vzdump-qemu-206-2015_11_13-20_55_12.vma.lzo'
206: Nov 13 20:55:16 ERROR: got timeout
206: Nov 13 20:55:16 INFO: aborting backup job
206: Nov 13 20:55:17 ERROR: Backup of VM 206 failed - got timeout

-----------------------------------------------


# pveversion -v
proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2

My backups go to a local directory. Can someone help me solve this?

LXC - Giving absolute maximum CPU

Hi,

Is the value cpuunits = 0 the best option to give a container all of the CPU's performance, with no usage limits?

Best regards
Egner

How to disable "New software packages available" Mailing

Hey,

How can I disable the "New software packages available" mail in Proxmox 3.4?
I want to receive error messages and other important messages, but I don't need these update notifications.


root@proxmox01:~# pveversion
pve-manager/3.4-6/102d4547 (running kernel: 2.6.32-38-pve)
root@proxmox01:~# pveversion -v
proxmox-ve-2.6.32: 3.4-155 (running kernel: 2.6.32-38-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-37-pve: 2.6.32-150
pve-kernel-2.6.32-38-pve: 2.6.32-155
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-17
qemu-server: 3.4-5
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

thanks for help
gruner