Channel: Proxmox Support Forum

Restrict outside access to the machine to only one port/service and specific IPs?

Hey guys.
Even though I'm CCNA certified, I can't seem to figure out the Proxmox firewall config.

Scenario:
My main host IP is 11.11.11.11, and the Nessus machine is set to bridged networking and receives the public IP address 22.22.22.22.
So if I type https://22.22.22.22:8834/#/ from anywhere in the world, I can access it, and that is BAD.

I want to allow access only for several specific IPs, and only on port 8834; everything else should be blocked for everyone.
Obviously I only want to restrict access from the outside; the machine itself still needs to be able to reach the internet.
Can I get some help with this, please?
Thank you!

PS: Even if I have zero rules in every "Firewall" tab, when I run "pve-firewall status" I get: "ip6tables-save v1.4.14: Cannot initialize: Address family not supported by protocol".
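In case it helps anyone hitting the same problem: a minimal sketch of a per-VM firewall file for this kind of setup. The file name, ipset name and source addresses are only examples, and the firewall still has to be enabled at datacenter level and on the VM's network device for the rules to take effect.

Code:

# /etc/pve/firewall/<vmid>.fw  (VMID of the Nessus guest)
[OPTIONS]
enable: 1
policy_in: DROP
policy_out: ACCEPT

# the specific source IPs that may reach Nessus
[IPSET trusted-admins]
203.0.113.10
203.0.113.11

[RULES]
IN ACCEPT -source +trusted-admins -p tcp -dport 8834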

Migrating a physical Windows 2008 server to Proxmox

Hello, I have a problem migrating our Windows server to a Proxmox VM. It has a 1TB drive with about 900GB of free space, but when I try to copy the whole disk it transfers at about 20MB/s or less, and I only have a 30-hour maintenance window on weekends to do the migration.
Is there a way to copy only the used data on the disk?
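One approach that avoids copying free space, sketched here with example paths and VMID: image only the used blocks on the Windows side (for instance with Sysinternals Disk2vhd) and then convert the resulting VHD on the Proxmox host.

Code:

# -f vpc is qemu-img's name for the VHD format; adjust paths/VMID to your setup
qemu-img convert -p -f vpc -O qcow2 /mnt/transfer/server2008.vhd /var/lib/vz/images/100/vm-100-disk-1.qcow2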

Speed issue with an NFS mount in a KVM guest

Hi all, I have a question and hope you can help me.

My host system has a storage box attached over gigabit Ethernet and mounted via NFS; the speed there is 99-110 MB/s. On the same host I have a KVM guest running Debian with the same storage mounted, but if I run the same test there I only get about a quarter of the speed, 30 MB/s at most. Why is that, and how can I reach the same speed as the host, or faster?
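One thing worth checking first (a sketch; the VMID and bridge are examples): an emulated NIC model such as e1000 or rtl8139 often caps a guest at a fraction of gigabit speed, whereas virtio usually gets close to the host.

Code:

qm config 101 | grep net0              # which NIC model is the guest using?
qm set 101 -net0 virtio,bridge=vmbr0   # switch net0 to virtio (note: regenerates the MAC unless one is specified)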

regards

cannot view console of VM running on another node in 4.x

Running pve-manager/4.0-26/5d4a615b with two node cluster. Host names pve1 and pve2.

When I connect to the web interface on pve1, I cannot view a console of a VM running on pve2. It just hangs at 'starting VNC handshake'. To view a console on pve2 I need to log into the web interface on pve2.

Also, when logged into the console this way the "power control" menu does not function. That is, no menu shows up when clicking on it. The console works great on pve1 for VMs running on pve1.
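When the console of a guest on the other node hangs at the VNC handshake, one common cause is broken root SSH between the nodes, since the console for remote guests is tunnelled over SSH. A quick check from pve1, using the hostnames from this setup:

Code:

ssh -o BatchMode=yes root@pve2 /bin/true; echo $?   # should print 0 without asking for a password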

Possible to change 4 processor license to 2x 2 processor license?

It appears that we purchased the correct number of processor licenses but in the wrong sizes. Is it possible to change the 4-processor license into two 2-processor licenses?

Proxmox 4.0 beta1: "unable to copy ssh ID" when running pvecm add

I installed Proxmox 4.0 beta1 on three nodes and updated to the latest version.
Creating a cluster on the master node went without problems.
Adding either of the other two nodes returns an error:

root@pve2:~# pvecm add 10.25.3.45
root@10.25.3.45's password:
unable to copy ssh ID

pvecm status on master node:

root@pve1:/var/log# pvecm status
Quorum information
------------------
Date: Fri Aug 21 22:56:45 2015
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 0x00000001
Ring ID: 4
Quorate: Yes


Votequorum information
----------------------
Expected votes: 1
Highest expected: 1
Total votes: 1
Quorum: 1
Flags: Quorate


Membership information
----------------------
Nodeid Votes Name
0x00000001 1 10.25.3.45 (local)

Also syslog contains these errors:

root@pve1:/var/log# tail syslog
Aug 21 22:41:46 pve1 pmxcfs[1369]: [status] crit: cpg_send_message failed: 9
Aug 21 22:41:46 pve1 pmxcfs[1369]: [status] crit: cpg_send_message failed: 9
Aug 21 22:41:46 pve1 pmxcfs[1369]: [status] crit: cpg_send_message failed: 9
Aug 21 22:41:46 pve1 pmxcfs[1369]: [status] crit: cpg_send_message failed: 9
Aug 21 22:41:56 pve1 pmxcfs[1369]: [status] crit: cpg_send_message failed: 9
Aug 21 22:41:56 pve1 pmxcfs[1369]: [status] crit: cpg_send_message failed: 9
Aug 21 22:41:56 pve1 pmxcfs[1369]: [status] crit: cpg_send_message failed: 9
Aug 21 22:41:56 pve1 pmxcfs[1369]: [status] crit: cpg_send_message failed: 9
Aug 21 22:41:56 pve1 pmxcfs[1369]: [status] crit: cpg_send_message failed: 9
Aug 21 22:41:56 pve1 pmxcfs[1369]: [status] crit: cpg_send_message failed: 9

root@pve1:/var/log# pveversion
pve-manager/4.0-26/5d4a615b (running kernel: 4.1.3-1-pve)

root@pve1:/var/log# service --status-all
[ + ] apparmor
[ + ] atd
[ - ] bootlogd
[ - ] bootlogs
[ - ] bootmisc.sh
[ - ] checkfs.sh
[ - ] checkroot-bootclean.sh
[ - ] checkroot.sh
[ + ] console-setup
[ + ] cron
[ + ] dbus
[ + ] hdparm
[ - ] hostname.sh
[ - ] hwclock.sh
[ + ] kbd
[ + ] keyboard-setup
[ - ] killprocs
[ + ] kmod
[ - ] lvm2
[ - ] motd
[ - ] mountall-bootclean.sh
[ - ] mountall.sh
[ - ] mountdevsubfs.sh
[ - ] mountkernfs.sh
[ - ] mountnfs-bootclean.sh
[ - ] mountnfs.sh
[ + ] networking
[ + ] nfs-common
[ + ] open-iscsi
[ + ] postfix
[ + ] procps
[ + ] pve-cluster
[ + ] pve-firewall
[ + ] pve-manager
[ + ] pvedaemon
[ + ] pvefw-logger
[ + ] pveproxy
[ + ] pvestatd
[ + ] rc.local
[ - ] rmnologin
[ + ] rpcbind
[ + ] rrdcached
[ - ] rsync
[ + ] rsyslog
[ - ] sendsigs
[ + ] spiceproxy
[ + ] ssh
[ - ] stop-bootlogd
[ - ] stop-bootlogd-single
[ + ] udev
[ + ] udev-finish
[ - ] umountfs
[ - ] umountiscsi.sh
[ - ] umountnfs.sh
[ - ] umountroot
[ + ] urandom
[ - ] x11-common
[ + ] zfs-import
[ + ] zfs-mount
[ + ] zfs-share
[ + ] zfs-zed





I am not sure where else to look for clues.
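For what it's worth, a few checks that may narrow this down (a sketch; the "cpg_send_message failed: 9" messages point at pmxcfs/corosync trouble, which would also leave /etc/pve read-only and break the key copy):

Code:

# is the cluster filesystem writable?
touch /etc/pve/writetest && rm /etc/pve/writetest

# does plain root SSH to the master work from the node being added?
ssh root@10.25.3.45 /bin/true

# root's authorized_keys is kept on the cluster filesystem
ls -l /root/.ssh/authorized_keys /etc/pve/priv/authorized_keys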

qcow2 resize without LVM

Hi all,

I have a KVM guest with a qcow2 disk. I resized the disk in the web interface; fdisk -l shows the new size, but I can't grow the filesystem inside the guest.

How can I resize the partition inside the guest?

Code:

fdisk -l

Disk /dev/vda: 161.1 GB, 161061273600 bytes
255 heads, 63 sectors/track, 19581 cylinders, total 314572800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00029a9a

  Device Boot      Start        End      Blocks  Id  System
/dev/vda1  *        2048  201129983  100563968  83  Linux
/dev/vda2      201132030  209713151    4290561    5  Extended
/dev/vda5      201132032  209713151    4290560  82  Linux swap / Solaris

Code:

pvresize /dev/vda1
-bash: pvresize: Kommando nicht gefunden. (command not found)

Code:

resize2fs /dev/vda1
resize2fs 1.42.5 (29-Jul-2012)
Das Dateisystem ist schon 25140992 Blöcke groß. Nichts zu tun! (The filesystem is already 25140992 blocks long. Nothing to do!)
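The filesystem can only grow after the partition itself has been enlarged, and in this layout the extended partition (vda2) with the swap (vda5) sits directly behind vda1, so it is in the way. A rough sketch only, not a drop-in recipe; back up and double-check sector numbers before touching the partition table:

Code:

# inside the guest, as root
swapoff -a                        # stop using vda5
parted /dev/vda print free        # see where the unallocated space is
# delete vda2/vda5 and recreate them at the end of the disk (or drop swap entirely),
# then grow partition 1 into the freed space:
parted /dev/vda resizepart 1 <new end>
partprobe /dev/vda                # or reboot so the kernel re-reads the table
resize2fs /dev/vda1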

best regards

LVM for VMs on PVE Volume Group

In the past, I've configured my servers with a smaller-capacity RAID 1 for the Proxmox installation and used a separate, larger RAID 10 array for the VMs on an LVM volume. For this build I have just one large RAID 10, and only a tiny percentage of the array is used for the swap, root, and data partitions. Using the GUI, I added an LVM volume on the existing pve volume group for VM storage.

Is this the appropriate way to make use of the additional disk space that was available on the array at install time, or should this live in a separate volume group on a separate physical volume? I guess my question is: after the Proxmox installation has finished, what's the optimal way to set up the rest of the available disk space for local VM storage? It's a 3.64 TB array and only about 120GB is being used by the Proxmox installation; I'd like to use the rest as LVM storage for KVM.
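For reference, adding the remaining space of the existing pve volume group as VM storage boils down to an entry like the following in /etc/pve/storage.cfg (the storage name is an example; Datacenter -> Storage -> Add -> LVM in the GUI does the same thing):

Code:

vgs pve                  # how much free space is left in the volume group

# /etc/pve/storage.cfg
lvm: vmstore
        vgname pve
        content images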

Storages appear empty in the web interface

Hello everyone. I'm running Proxmox 3.0 and I have a problem. I went quite a while without needing to access the web interface, but today, when I logged in to check the backups, I noticed that none of the storages (local and NFS) show their contents anymore. At first I thought it might be a bug in the web service, so I restarted it from the interface, but the problem persists.


After that I logged into the Proxmox host via SSH and looked at the storage folders, /var/lib/vz/dump and /mnt/backupvms/dump, and both still have their content locally.


Has anyone seen this before, or does anyone know what it could be?


Thank you in advance for the help.


PS: I have not restarted the node, because it is not in a cluster and stopping it would mean at least 20 minutes of downtime for an application.
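A couple of things that can be checked from the SSH session without rebooting the node (a sketch; pvestatd is the daemon that feeds storage content and usage to the GUI):

Code:

pvesm status                 # does the CLI still see the storages and their usage?
pvesm list local             # can it list the contents of a storage?
service pvestatd restart
service pvedaemon restart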

Easiest way to set up a private switch/network for test VMs

Can someone suggest the easiest way to create a private virtual switch for my VMs, so that they get their IPs from a DHCP server set up specifically for that private network?
Basically, I want to test out a DHCP and Git setup. I already have a DHCP server on my main subnet, so I want a private network/subnet that will not interfere with my main network.
Also, if possible, there should be a gateway through vmbr0 or vmbr1 in case some packages need to be downloaded/installed from the internet.
Thanks in advance.
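One common way to do this is a bridge with no physical port plus NAT through vmbr0, roughly like the following in /etc/network/interfaces on the host. The vmbr2 name and the 192.168.50.0/24 subnet are only examples; the DHCP server for the test network would then be a VM attached to vmbr2.

Code:

auto vmbr2
iface vmbr2 inet static
        address 192.168.50.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s 192.168.50.0/24 -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s 192.168.50.0/24 -o vmbr0 -j MASQUERADE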

Is there now support for USB?


Backup error -5 input/output error

I get this error when backing up one of my machines:
Code:

INFO: starting new backup job: vzdump 101 --remove 0 --mode snapshot --compress lzo --storage usb --node orac
INFO: Starting Backup of VM 101 (qemu)
INFO: status = running
INFO: update VM 101: -lock backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/media/backup/dump/vzdump-qemu-101-2015_08_22-16_37_56.vma.lzo'
INFO: started backup task '1c994266-b1ba-4988-acbb-a8a08ab0891a'
INFO: status: 0% (23855104/107374182400), sparse 0% (2519040), duration 3, 7/7 MB/s
INFO: status: 1% (1087111168/107374182400), sparse 0% (228904960), duration 85, 12/10 MB/s
INFO: status: 2% (2157182976/107374182400), sparse 0% (237760512), duration 168, 12/12 MB/s
INFO: status: 3% (3235250176/107374182400), sparse 0% (331046912), duration 233, 16/15 MB/s
INFO: status: 4% (4310761472/107374182400), sparse 0% (347684864), duration 311, 13/13 MB/s
INFO: status: 5% (5413666816/107374182400), sparse 0% (425197568), duration 352, 26/25 MB/s
INFO: status: 6% (6474563584/107374182400), sparse 0% (425197568), duration 384, 33/33 MB/s
INFO: status: 7% (7517896704/107374182400), sparse 0% (522604544), duration 425, 25/23 MB/s
INFO: status: 8% (8590721024/107374182400), sparse 0% (522604544), duration 505, 13/13 MB/s
INFO: status: 9% (9666428928/107374182400), sparse 0% (615350272), duration 576, 15/13 MB/s
INFO: status: 10% (10738073600/107374182400), sparse 0% (615464960), duration 648, 14/14 MB/s
INFO: status: 11% (11820924928/107374182400), sparse 0% (713023488), duration 720, 15/13 MB/s
INFO: status: 12% (12897091584/107374182400), sparse 0% (713023488), duration 804, 12/12 MB/s
INFO: status: 13% (13969653760/107374182400), sparse 0% (806354944), duration 871, 16/14 MB/s
INFO: status: 14% (15039594496/107374182400), sparse 0% (806354944), duration 948, 13/13 MB/s
INFO: status: 15% (16109993984/107374182400), sparse 0% (900157440), duration 1017, 15/14 MB/s
INFO: status: 16% (17183277056/107374182400), sparse 0% (908517376), duration 1090, 14/14 MB/s
INFO: status: 17% (18260164608/107374182400), sparse 0% (1002291200), duration 1158, 15/14 MB/s
INFO: status: 18% (19331743744/107374182400), sparse 0% (1002291200), duration 1233, 14/14 MB/s
INFO: status: 19% (20413284352/107374182400), sparse 1% (1095925760), duration 1296, 17/15 MB/s
INFO: status: 20% (21485977600/107374182400), sparse 1% (1095925760), duration 1362, 16/16 MB/s
INFO: status: 21% (22576431104/107374182400), sparse 1% (1189421056), duration 1404, 25/23 MB/s
INFO: status: 22% (23657971712/107374182400), sparse 1% (1189421056), duration 1437, 32/32 MB/s
INFO: status: 23% (24704057344/107374182400), sparse 1% (1282719744), duration 1464, 38/35 MB/s
INFO: status: 24% (25792675840/107374182400), sparse 1% (1282719744), duration 1495, 35/35 MB/s
INFO: status: 25% (26896957440/107374182400), sparse 1% (1375895552), duration 1540, 24/22 MB/s
INFO: status: 26% (27922792448/107374182400), sparse 1% (1375895552), duration 1574, 30/30 MB/s
INFO: status: 27% (29049618432/107374182400), sparse 1% (1491714048), duration 1600, 43/38 MB/s
INFO: status: 28% (30070407168/107374182400), sparse 1% (1509806080), duration 1636, 28/27 MB/s
INFO: status: 29% (31153389568/107374182400), sparse 1% (1603641344), duration 1679, 25/23 MB/s
INFO: status: 30% (32227459072/107374182400), sparse 1% (1603641344), duration 1709, 35/35 MB/s
INFO: status: 31% (33313062912/107374182400), sparse 1% (1702100992), duration 1751, 25/23 MB/s
INFO: status: 32% (34371600384/107374182400), sparse 1% (1709268992), duration 1800, 21/21 MB/s
INFO: status: 33% (35469262848/107374182400), sparse 1% (1808715776), duration 1860, 18/16 MB/s
INFO: status: 34% (36523802624/107374182400), sparse 1% (1809051648), duration 1908, 21/21 MB/s
INFO: status: 34% (37403885568/107374182400), sparse 1% (1880457216), duration 1947, 22/20 MB/s
ERROR: job failed with err -5 - Input/output error
INFO: aborting backup job
ERROR: Backup of VM 101 failed - job failed with err -5 - Input/output error
INFO: Backup job finished with errors
TASK ERROR: job errors

I tried increasing the "size" value in vzdump.conf to 108000 (my disk is 100GB), but it didn't help. I also ran a "badblocks" scan of the backup disk; nothing showed up, and the same disk successfully backs up two other machines (one of them bigger).

Help!
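Since err -5 is a low-level I/O error, it helps to find out whether the read side (the storage the VM's disk lives on) or the write side (the USB backup disk) is reporting it. A couple of quick checks right after a failed run, with placeholder device names:

Code:

dmesg | tail -n 50            # kernel I/O errors around the time of the failure
smartctl -a /dev/sdX          # SMART data for the disk backing the VM's storage (and for the USB disk)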

NFS Mount in LXC, v4.0 beta.

I'm trying to wrap my mind around mounting NFS file systems to LXC containers running under Proxmox 4.0b. I tried adding the following to /etc/pve/lxc/<container id>/config

Code:

lxc.mount = /var/lib/lxc/<container id>/fstab
and then using standard fstab entries in that file for mounting the NFS file systems. No such luck.

Is this possible in proxmox's implementation of LXC at this time? Documented anywhere?
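Mounting NFS from inside a container is typically blocked by the container's AppArmor profile, so a common workaround is to mount the export on the host and bind it into the container. A sketch with example paths and server address:

Code:

# on the host
mount -t nfs 192.168.1.10:/export/media /mnt/nfs-media

# in /etc/pve/lxc/<container id>/config (target path is relative to the container's rootfs)
lxc.mount.entry = /mnt/nfs-media srv/media none bind,create=dir 0 0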

My server with the latest Proxmox keeps going down

It went down twice today :( What's going on? Where can I look to find the cause of the problem?
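A first place to look after an unexpected reboot or hang, assuming a stock PVE install with rsyslog:

Code:

grep -iE 'panic|oom|segfault|i/o error' /var/log/syslog /var/log/kern.log | tail -n 100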

VPS assigned IP not showing inside the web GUI

Apologies in advance if this is a dumb question; I'm still new to Proxmox (and a paying subscriber). Is there no way to see the IPs actually assigned to each container, for example under Status? I'm used to SolusVM, where the assigned IPs are easy to see. I'm using the Proxmox module from ModulesGarden for deployment and it's really slick, but after deployment I can't find the IPs assigned to each container or to each NIC anywhere obvious in the web GUI (Status, Hardware, Options, etc.).

OpenVZ & KVM VPS creation

In Proxmox, can I create both KVM and OpenVZ VPSes? That is, one or two VPSes on KVM and another one or two on OpenVZ. Is that possible in Proxmox? It is possible in XenServer, but I do not know about Proxmox.

RAID controllers and SSDs

Hi all,

Just read an interesting article on admin-magazine.com about RAID and SSDs. According to the article, the controller's write-back cache and read-ahead cache should both be disabled: disabling write-back cache gained 33% performance and disabling read-ahead cache gained 40%. The test was made with Intel DC S3500 drives and an Avago (formerly LSI) Mega-RAID 9365. Performance-wise, mdadm-based RAID was on par with the RAID controller.

The conclusion must be that if you only use SSDs in your RAID, you gain very little by spending 400-600 $ on a RAID controller, and if you are limited on SATA ports, stick to an HBA and spend the saved money on more SSDs.

ZFS device names change after export/import

Hi!

I'm currently playing around with ZFS and noticed some very odd behavior.
The pool's name is storage and my vdev aliases are vdisk1 to vdisk6.

If I create a new pool using /dev/disk/by-id or /dev/disk/by-vdev, the pool is created as it should be, and zpool status storage shows the IDs or aliases used.
Code:

#zpool status storage
  pool: storage
 state: ONLINE
  scan: none requested
config:


        NAME        STATE    READ WRITE CKSUM
        storage    ONLINE      0    0    0
          raidz2-0  ONLINE      0    0    0
            vdisk1  ONLINE      0    0    0
            vdisk2  ONLINE      0    0    0
            vdisk3  ONLINE      0    0    0
            vdisk4  ONLINE      0    0    0
            vdisk5  ONLINE      0    0    0
            vdisk6  ONLINE      0    0    0

But if I do a zpool export storage and then a zpool import storage, suddenly some of my disks are listed as sdb, sdc, and so on.
Code:

#zpool status storage
  pool: storage
 state: ONLINE
  scan: none requested
config:


        NAME        STATE    READ WRITE CKSUM
        storage    ONLINE      0    0    0
          raidz2-0  ONLINE      0    0    0
            sdb    ONLINE      0    0    0
            vdisk2  ONLINE      0    0    0
            sdc    ONLINE      0    0    0
            sde    ONLINE      0    0    0
            vdisk5  ONLINE      0    0    0
            sdg    ONLINE      0    0    0


errors: No known data errors

But if I do a zpool import -d /dev/disk/by-vdev/ storage, then all is fine again!
I don't quite see what changed for ZFS to import the wrong device names, so my question is: why is this happening, and is there any way I can avoid it?
Is it possible to make ZFS use -d /dev/disk/by-vdev/ at startup, so the pool is imported with the right devices?
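On ZFS on Linux this is usually governed by the search path used for the import at boot; pointing it at by-vdev first tends to keep the alias names. A sketch (on Debian-based ZoL installs the variable normally sits, commented out, in /etc/default/zfs):

Code:

# /etc/default/zfs
ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"

# then re-import the pool once with the wanted names
zpool export storage
zpool import -d /dev/disk/by-vdev storage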

LXC - Container memory usage is above 90%

Hi all

I have been experimenting with LXC, in particular OpenMediaVault on Debian 7 in a container, and I've found that after using the container for a while Proxmox shows memory usage above 90% (sometimes 100%), but running the free command inside the container shows that most of this usage is just I/O cache.

My question is: if I had a total of 4GB of RAM on the host and ran 6 containers with a maximum of 1GB each, would some of the containers be starved of RAM they actually need to run processes while other containers hold on to RAM as I/O cache? (Or will the I/O cache from other containers be released to free up RAM for any container that truly needs it?)
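As a quick way to confirm that the number Proxmox reports is mostly page cache rather than process memory, the container's memory cgroup can be inspected directly (cgroup v1 paths as on PVE 4.0; the container ID and exact path are examples and may differ):

Code:

# on the host
grep -E '^(cache|rss) ' /sys/fs/cgroup/memory/lxc/101/memory.stat

# inside the container, for comparison
free -m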

Thanks

Matt