Proxmox Support Forum

cluster with different filesystems

Hello,

I was toying around with an old PowerEdge R200 that was given to me and tried to make a cluster with my existing Proxmox instance.
Both run PVE 4.0-48, but my "main" box uses ZFS. As I seemingly couldn't get that to work on the R200, the latter was installed using ext4.
I successfully made a two-node cluster, but when I tried to migrate a VM or CT to the R200, both failed.
Do both nodes need to use the same filesystem?
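For what it's worth, storage definitions in /etc/pve/storage.cfg are cluster-wide unless restricted with a nodes list, and a migration fails if the target node does not have the storage the guest's disks live on. A hedged sketch (storage IDs, pool and node names are made up for illustration) of how a ZFS storage that only exists on one box is typically limited to that node, while a directory storage stays available everywhere:

Code:

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        nodes mainbox

dir: local
        path /var/lib/vz
        content images,rootdir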

Regards,

Job

Proxmox Supermicro A1SRi

Hello, I bought the Supermicro A1SRi ( https://www.supermicro.com/products/...1sri-2558f.cfm ) and am running pfSense on it as a router; it works great. It has 4 Atom cores at 2.4 GHz.

I also have two E3-1231v2 machines on Supermicro boards with Proxmox in a cluster. From what I understand, in a cluster it is much better to have three nodes rather than two because of quorum issues.

So I thought I would virtualize pfSense on the A1SRi. The question is: will the A1SRi handle the load? I have a 250/100 internet connection and run OpenVPN (only I use it), nothing else.
I was thinking about giving pfSense 2 cores and 4 GB of RAM.
pfSense will run on an SSD connected to the A1SRi.
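A minimal sketch of how that sizing could be expressed when creating the VM from the CLI (the VM ID, storage IDs, ISO name and bridge names are placeholders, not recommendations):

Code:

qm create 200 --name pfsense --memory 4096 --sockets 1 --cores 2 \
  --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 \
  --virtio0 local:32 --ostype other \
  --cdrom local:iso/pfSense-2.2.5-RELEASE-amd64.iso

Two cores of the C2558 plus 4 GB should be generous for a 250/100 line with a single OpenVPN user, but that is an expectation, not a measurement.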

Or is there any possibility of running it as a container? (I think that would be the best option, but I haven't found anyone doing that.)

Upgrade Proxmox 3.4 to premium repositories: quorum problem

I have a 2 node proxmox 3.4 cluster, with separate quorum device.

I upgraded the proxmox nodes using premium repositories, with success.

Nevertheless, after the reboots, one node loses quorum and the other, for mysterious reasons, interferes with another cluster we have that runs Proxmox 2.3:

proxmox 2.3 cluster: /var/log/syslog
Nov 25 10:59:45 a84 rgmanager[641850]: [pvevm] VM 41245 is running
Nov 25 10:59:45 a84 rgmanager[641864]: [pvevm] VM 243 is running
Nov 25 10:59:46 a84 corosync[3031]: [TOTEM ] Retransmit List: 298b1 298b2 298b3 298b4 298b5 298b6 298b7 298b8 298b9 298ba 298bb 298bc 298bd 298be 298bf 298c0 298c1

If I power down the 3.4 nodes, the messages disappear from the neighbouring 2.3 cluster and it works normally again with no issues at all.

At first I thought the problem was multicast related, so I reconfigured the cluster for unicast, modified some firewall rules and checked the switches. After several tests and downtimes I discovered that if I boot the Proxmox 3.4 premium nodes with the old kernel, the quorum problem no longer exists and all nodes and clusters work fine.
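For reference, multicast between the nodes can be checked directly with omping; a hedged sketch, to be run in parallel on each node of the affected cluster with the real hostnames substituted:

Code:

omping -c 600 -i 1 -q node1 node2 node3

If packets are lost or duplicated between the 2.3 and 3.4 machines (for example because both clusters ended up on the same multicast address/port on the same switch), it should show up here.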

stock kernel: 2.6.32-39-pve
premium kernel: 2.6.32-43-pve

Short story:

- One Proxmox 2.3 cluster working normally.
- Another Proxmox 3.4 cluster.
  |-> After upgrading the 3.4 nodes, the Proxmox 2.3 cluster loses rgmanager connectivity and its syslog is flooded with 'Retransmit List' messages and the like. The 3.4 cluster never reaches Quorate status (it stays inquorate).

Powering down the 3.4 nodes, or booting the 3.4 nodes with the stock kernel, makes the problem go away on the 2.3 cluster.

Please help me with this issue.

Regards,

Alfredo Luco.

Live migration failed (Proxmox 3.3)

I tried to do a live migration of a VM a couple hours ago and it failed (full output attached).
Code:

ERROR: failed to clear migrate lock: no such VM ('104')
It was a live migration from a Proxmox node named vm6 to a node named vm4. The GUI shows VM 104 in the list for vm4, but displays the error "no such VM ('104') (500)" when I click on it. VM 104 no longer shows in the GUI on the original node (vm6); however, it did still show as being on vm6 in /etc/pve/, so I tried a manual move of the config file:
Code:

root@vm4:~# mv /etc/pve/nodes/vm6/qemu-server/104.conf /etc/pve/nodes/vm4/qemu-server/104.conf
The move has been running for 3 hours and has still not finished. qm status shows the same error:
Code:

root@vm4:~# qm status 104
no such VM ('104')

I have a quorum of nodes (6 in the cluster, 5 actively running now), so the quorum is 4 nodes.

I cannot interrupt the manual `mv` I kicked off; even `kill -9` is ignored. The VM I tried to move is not running and cannot be started: "no such VM".

EDIT: one thing I forgot to mention is that I had a large (~1TB) VM disk image from a different VM being moved during the time I tried to live migrate the above VM.

I guess the question at this point is: how do I recover?
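A few read-only checks that usually help narrow this kind of thing down: whether pmxcfs (the process behind /etc/pve) is still healthy on the nodes involved, and whether the stuck mv is sitting in uninterruptible sleep, which kill -9 cannot touch. A hedged sketch for Proxmox 3.x:

Code:

pvecm status                              # quorum as this node sees it
service pve-cluster status                # pmxcfs, the service backing /etc/pve
grep -i pmxcfs /var/log/syslog | tail -n 50
ps -o pid,stat,wchan:30,cmd -C mv         # 'D' in STAT = uninterruptible sleep

If pmxcfs turns out to be wedged (possibly by the large disk move mentioned in the edit), restarting pve-cluster on the affected node is the usual next step, but that is a judgement call rather than something this sketch can decide.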

Thanks,
Omen

ProxMox PVE 4 Only allows 6 Drives in ZFS

Is it normal behavior for ZFS in Proxmox 4 to only allow 6 drives? My server has 8 drives, but I am only allowed to add 6 of them to any sort of Proxmox RAID configuration.
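If the limit turns out to be only in the dialog and the extra drives are meant for a data pool (not the boot pool), the pool can also be created by hand after installation and then registered as storage. A rough sketch; the pool name, RAID level and device paths are placeholders:

Code:

zpool create -f -o ashift=12 tank raidz2 \
  /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4 \
  /dev/disk/by-id/DISK5 /dev/disk/by-id/DISK6 /dev/disk/by-id/DISK7 /dev/disk/by-id/DISK8
pvesm add zfspool tank-vm --pool tank --content images,rootdir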

Proxmox 4 three-server cluster crashes with BUG error

I have a 3-node Proxmox 4 installation with 3 Ethernet cards, 1 dedicated to NFS. All three of these servers were part of a Proxmox 3.4 cluster with no issues. I pulled them from the other cluster, reformatted, and did a fresh install of Proxmox 4.

About 2 days after clustering the 3 servers together I started getting the following log error: BUG: Bad page map in process pve-firewall pte:10000000 pad:8383d0067

After this I lose access to all three servers and their corresponding VMs. The only way to bring them back is a hard reboot. After this went on for about a week I pulled all three boxes and brought them back to the lab. I have reinstalled Proxmox 4 from ISO on each machine and received the same bug. I then reinstalled each machine with a base version of Debian Jessie and did the install from the repository. Still the same bug. I have no idea what is happening, other than that it is a bug in Proxmox. Any help would be appreciated.

Server Motherboard compatibility - SUPERMICRO MBD-X10SLH-F-O uATX

I have been testing Proxmox 3.x on a mini-ITX board with a Celeron J1900. I'm running one Untangle VM and two CentOS 7 VMs. It's working much better than I would have thought.

Now that I have done the testing and am more familiar with Proxmox, I am going to be setting up two new identical Proxmox 4 servers for my client. They will be single-socket LGA 1150 Xeon servers. This is the hardware:
SUPERMICRO MBD-X10SLH-F-O uATX Server Motherboard LGA 1150 Intel C226 DDR3 1600
Intel Xeon E3-1241 v3 Haswell 3.5 GHz 8MB L3 Cache LGA 1150 80W
Crucial 16GB (2 x 8GB) 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1600 (PC3 12800)
(quantity=2) TOSHIBA P300 HDWD120XZSTA 2TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5"

(The drives will be set up as RAID 1 using the onboard SATA RAID controller.)

The main purpose of the servers will be the Linux-based email server Atmail (retail version). One server will be at the main location and one at an offsite location as a kind of hot backup for Atmail. Basically, a backup of the Proxmox VM will be sent to the remote location either daily or weekly. (Sending the backup is only temporary until Atmail has their server mirroring technology finished next year.)

There will probably be no more than 2 VMs on each Proxmox server. So my question is: "Is this hardware/motherboard setup compatible with Proxmox?" Is it well-supported compatibility, or a "should work" kind of compatibility?

I currently have a ticket open with Atmail about whether their software is compatible with Proxmox or whether it can even be run as a VM.

pve has been removed

Hi there, I need some help here. I did an installation of corosync and it removed all my pve packages:

The following extra packages will be installed:
libcfg4 libconfdb4 libcoroipcc4 libcoroipcs4 libcpg4 libevs4 liblogsys4 libpload4 libquorum4 libsam4 libtotem-pg4 libvotequorum4
The following packages will be REMOVED:
corosync-pve libpve-access-control librados2-perl proxmox-ve pve-cluster pve-container pve-firewall pve-ha-manager pve-manager
qemu-server
The following NEW packages will be installed:
corosync libcfg4 libconfdb4 libcoroipcc4 libcoroipcs4 libcpg4 libevs4 liblogsys4 libpload4 libquorum4 libsam4 libtotem-pg4
libvotequorum4


Is there any way to restore it? Now I am unable to log in to the Proxmox GUI.
Attached: Screen Shot 2015-12-01 at 4.26.38 PM.jpg, Screen Shot 2015-12-01 at 4.29.10 PM.jpg
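A hedged sketch of the usual way back, assuming the package repositories are still configured correctly: remove the plain Debian corosync that displaced the PVE stack and reinstall the proxmox-ve meta-package, which should pull corosync-pve and the other removed packages back in:

Code:

apt-get update
apt-get remove corosync
apt-get install proxmox-ve

Checking the result with pveversion -v and then rebooting (or at least restarting the pve-cluster and pveproxy services) would be the follow-up.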

Very high load after a few days

Hello everyone,
I have quite a big problem with extremely high load on a Proxmox 4 server, and I don't know where it comes from.

The problem is that the load goes up after a few days to somewhere between 4 and 10(!). The only thing I could find in the logs is this:

Code:

Dec  1 09:03:08 PVE01 kernel: [  44.659864] kvm [1228]: vcpu0 unhandled rdmsr: 0xc001100d
Dec  1 09:03:08 PVE01 kernel: [  44.659984] kvm [1228]: vcpu0 unhandled rdmsr: 0xc0010112
Dec  1 09:03:09 PVE01 kernel: [  44.828058] kvm [1228]: vcpu1 unhandled rdmsr: 0xc001100d
Dec  1 09:03:09 PVE01 kernel: [  44.840424] kvm [1228]: vcpu2 unhandled rdmsr: 0xc001100d
Dec  1 09:03:09 PVE01 kernel: [  44.856567] kvm [1228]: vcpu3 unhandled rdmsr: 0xc001100d

Rebooting also gives me an error; the server hangs during the reboot with the following message:

Code:

reached target Shutdown.
hpwdt: Unexpected close, not stopping watchdog

After this message nothing happens, so I have to cut the power.

The strangest thing about it is that it happens randomly, and I sadly have no idea where to look for the problem.

Hopefully you guys can help.

Thanks!

Edit:
Could this be caused by a temporarily unavailable NFS share? I have a share that is not always available, and I found this topic about the problem:
http://forum.proxmox.com/threads/504...nhandled-rdmsr
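If the missing NFS share is the suspect, one hedged check is to look for processes stuck in uninterruptible sleep ('D' state), since those count towards the load average even though they use no CPU, and to see which NFS mounts the host still thinks it has:

Code:

ps axo pid,stat,wchan:30,cmd | awk '$2 ~ /D/'
grep ' nfs' /proc/mounts

A pile of D-state processes hanging on the unavailable share would fit the load pattern described above.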


Edit 2:
Here is the installed CPU:
Code:

processor      : 3
vendor_id      : AuthenticAMD
cpu family      : 15
model          : 65
model name      : Dual-Core AMD Opteron(tm) Processor 2218
stepping        : 2
microcode      : 0x62
cpu MHz        : 2600.000
cache size      : 1024 KB
physical id    : 1
siblings        : 2
core id        : 1
cpu cores      : 2
apicid          : 3
initial apicid  : 3
fpu            : yes
fpu_exception  : yes
cpuid level    : 1
wp              : yes
flags          : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good nopl extd_apicid pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy 3dnowprefetch vmmcall
bugs            : apic_c1e fxsave_leak sysret_ss_attrs
bogomips        : 5200.17
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes  : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp tm stc

Cluster, Corosync problem: JOIN or LEAVE message was thrown away during flush operation


Hello,

I caused the problem myself by running pvecm delnode px2 and pvecm delnode px3.
Nodes px1 and px2 are back and online.

Code:

root@px1 ~ > pvecm status
Quorum information
------------------
Date:            Tue Dec  1 12:23:13 2015
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000001
Ring ID:          1888
Quorate:          Yes

Votequorum information
----------------------
Expected votes:  3
Highest expected: 3
Total votes:      2
Quorum:          2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.0.7 (local)
0x00000002          1 192.168.0.8

The corosync config on nodes px1 and px2:
Code:

root@px1 /etc/pve > cat corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: px3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: px3
  }

  node {
    name: px2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: px2
  }

  node {
    name: px1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: px1
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: Domainname
  config_version: 7
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.0.7
    ringnumber: 0
  }

}


Node px3 does not come back; the log:
Code:

root@px3 ~ > grep corosync /var/log/syslog
Dec  1 10:27:45 px3 corosync[32014]: Starting Corosync Cluster Engine (corosync): [FAILED]
Dec  1 10:27:45 px3 systemd[1]: corosync.service: control process exited, code=exited status=1
Dec  1 10:27:45 px3 systemd[1]: Unit corosync.service entered failed state.
Dec  1 10:36:38 px3 pvedaemon[11768]: <root@pam> starting task UPID:px3:000018DF:0033D07C:565D6A26:srvstart:corosync:root@pam:
Dec  1 10:36:38 px3 pvedaemon[6367]: starting service corosync: UPID:px3:000018DF:0033D07C:565D6A26:srvstart:corosync:root@pam:
Dec  1 10:36:38 px3 corosync[6374]:  [MAIN  ] Corosync Cluster Engine ('2.3.5'): started and ready to provide service.
Dec  1 10:36:38 px3 corosync[6374]:  [MAIN  ] Corosync built-in features: augeas systemd pie relro bindnow
Dec  1 10:36:38 px3 corosync[6375]:  [TOTEM ] Initializing transport (UDP/IP Multicast).
Dec  1 10:36:38 px3 corosync[6375]:  [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Dec  1 10:36:38 px3 corosync[6375]:  [TOTEM ] The network interface [192.168.0.9] is now up.
Dec  1 10:36:38 px3 corosync[6375]:  [SERV  ] Service engine loaded: corosync configuration map access [0]
Dec  1 10:36:38 px3 corosync[6375]:  [QB    ] server name: cmap
Dec  1 10:36:38 px3 corosync[6375]:  [SERV  ] Service engine loaded: corosync configuration service [1]
Dec  1 10:36:38 px3 corosync[6375]:  [QB    ] server name: cfg
Dec  1 10:36:38 px3 corosync[6375]:  [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Dec  1 10:36:38 px3 corosync[6375]:  [QB    ] server name: cpg
Dec  1 10:36:38 px3 corosync[6375]:  [SERV  ] Service engine loaded: corosync profile loading service [4]
Dec  1 10:36:38 px3 corosync[6375]:  [QUORUM] Using quorum provider corosync_votequorum
Dec  1 10:36:38 px3 corosync[6375]:  [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Dec  1 10:36:38 px3 corosync[6375]:  [QB    ] server name: votequorum
Dec  1 10:36:38 px3 corosync[6375]:  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Dec  1 10:36:38 px3 corosync[6375]:  [QB    ] server name: quorum
Dec  1 10:36:38 px3 corosync[6375]:  [TOTEM ] JOIN or LEAVE message was thrown away during flush operation.
Dec  1 10:36:38 px3 corosync[6375]:  [TOTEM ] JOIN or LEAVE message was thrown away during flush operation.
Dec  1 10:36:38 px3 corosync[6375]:  [TOTEM ] A new membership (192.168.0.9:1880) was formed. Members joined: 3
Dec  1 10:36:38 px3 corosync[6375]:  [QUORUM] Members[1]: 3
Dec  1 10:36:39 px3 corosync[6375]:  [MAIN  ] Completed service synchronization, ready to provide service.
Dec  1 10:36:39 px3 corosync[6375]:  [TOTEM ] A new membership (192.168.0.7:1884) was formed. Members joined: 1 2
Dec  1 10:36:39 px3 corosync[6375]:  [CMAP  ] Received config version (6) is different than my config version (5)! Exiting
Dec  1 10:36:39 px3 corosync[6375]:  [SERV  ] Unloading all Corosync service engines.
Dec  1 10:36:39 px3 corosync[6375]:  [QB    ] withdrawing server sockets
Dec  1 10:36:39 px3 corosync[6375]:  [SERV  ] Service engine unloaded: corosync vote quorum service v1.0
Dec  1 10:36:39 px3 corosync[6375]:  [QB    ] withdrawing server sockets
Dec  1 10:36:39 px3 corosync[6375]:  [SERV  ] Service engine unloaded: corosync configuration map access
Dec  1 10:36:39 px3 corosync[6375]:  [QB    ] withdrawing server sockets
Dec  1 10:36:39 px3 corosync[6375]:  [SERV  ] Service engine unloaded: corosync configuration service
Dec  1 10:36:39 px3 corosync[6375]:  [QB    ] withdrawing server sockets
Dec  1 10:36:39 px3 corosync[6375]:  [SERV  ] Service engine unloaded: corosync cluster closed process group service v1.01
Dec  1 10:36:39 px3 corosync[6375]:  [QB    ] withdrawing server sockets
Dec  1 10:36:39 px3 corosync[6375]:  [SERV  ] Service engine unloaded: corosync cluster quorum service v0.1
Dec  1 10:36:39 px3 corosync[6375]:  [SERV  ] Service engine unloaded: corosync profile loading service
Dec  1 10:36:39 px3 corosync[6375]:  [MAIN  ] Corosync Cluster Engine exiting normally
Dec  1 10:37:39 px3 corosync[6369]: Starting Corosync Cluster Engine (corosync): [FAILED]
Dec  1 10:37:39 px3 systemd[1]: corosync.service: control process exited, code=exited status=1
Dec  1 10:37:39 px3 systemd[1]: Unit corosync.service entered failed state.
Dec  1 10:37:39 px3 pvedaemon[6367]: command 'systemctl start corosync' failed: exit code 1

The path /etc/pve on node px3 is read-only.
Can I repair node px3 without reinstalling it?
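The log above shows px3 exiting because its local config version is older than the one the cluster runs (the px1/px2 config shown earlier is at config_version 7). A hedged sketch of the usual repair, assuming px3 is still listed in the cluster's nodelist (it is, in the config above) and only the local copy is stale; run on px3:

Code:

systemctl stop pve-cluster corosync
scp root@px1:/etc/pve/corosync.conf /etc/corosync/corosync.conf
systemctl start corosync
systemctl start pve-cluster

Once corosync joins with the current config, pmxcfs should sync and /etc/pve should become writable again; if it does not, removing and re-joining the node is the safer route.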

LXC Devnodes PCI Passthrough

I just upgraded to Proxmox 4.0 by backing up my containers and VMs and then restoring them. I have a few questions about LXC.

1) In OpenVZ I was able to do PCI "passthrough" using "devnodes", for example (vzctl set 1000 --devnodes device:rw --save), to send my PCI TV tuner through to an OpenVZ container. I didn't see any concrete information about whether this is possible in LXC. I saw some information about using "cgroup" settings to allow this, but wanted to see if anyone knew for sure whether it is possible before I started changing things (see the config sketch below the list).

2) It looks like cgroups are also how I set up tun/tap devices; searching the forums, it looks like users were having issues with that and needed to create a script. Is there official documentation on this somewhere that I haven't found?

3) I haven't really looked into this much since it isn't a big deal, but for all my LXC containers (I haven't checked my VMs, it might affect them as well) I have no CPU graph or network graph, nothing showing at all, and the CPU usage meter doesn't move past 0.0%.
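For questions 1 and 2, a hedged sketch of the kind of raw LXC options that are commonly appended to a container's config under /etc/pve/lxc/ (the container ID, device path and major:minor numbers are placeholders; check them with ls -l on the host, and test on a throw-away container first, since this is not an officially supported feature):

Code:

# /etc/pve/lxc/1000.conf  (appended below the PVE-generated options)
lxc.cgroup.devices.allow: c 212:* rwm
lxc.mount.entry: /dev/dvb dev/dvb none bind,optional,create=dir
lxc.cgroup.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file

The first pair exposes a DVB tuner (character devices with major 212), the second pair the tun device for VPN software inside the container.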

Thanks!

3.4 -> 4.0 with Ceph Server installed

I'm running a three node 3.4 cluster with integrated ceph server.

Upgrading another test cluster showed that one has to be very careful, as you have to create a new cluster. So I have some questions:

  • Did anybody already upgrade a running 3.4 Ceph cluster to 4.0?
  • Will I lose my Ceph configuration?
  • What is the right order of steps so as not to completely lose the Ceph cluster and all its OSDs (if possible)?

Thanks for any help.
Birger

VM Migration Issue - LVM on CentOS 7 Guest

Good afternoon, all!

In the process of evaluating Proxmox for work, I've been testing the migration of vSphere 5.5 VMs to Proxmox. I've been able to successfully migrate CentOS 6 VMs with little difficulty (process outlined below), but I've yet to have a CentOS 7 VM migrate properly. Every time I migrate a CentOS 7 VM, it fails to boot, dropping to the dracut emergency prompt because the root and swap LVM volumes cannot be found.

Code:

dracut-initqueue[283]: Warning: Could not boot.
dracut-initqueue[283]: Warning: /dev/centos_emby/root does not exist
dracut-initqueue[283]: Warning: /dev/centos_emby/swap does not exist
dracut-initqueue[283]: Warning: /dev/mapper/centos_emby-root does not exist

When I boot from a CentOS 7 ISO into rescue mode, it is able to find the system just fine.

My migration process (which works well for CentOS 6) is as follows:
  1. Remove MAC and UUID references from /etc/sysconfig/network-scripts/ifcfg-eth0.
  2. Remove MAC reference from the /etc/udev/rules.d/70-persistent-net.rules file.
  3. Cleanly shut down the guest.
  4. Move .vmdk and -flat.vmdk to storage accessible by Proxmox.
  5. Create new Proxmox VM with similar structure to vSphere VM.
  6. Edit configuration file to point to the .vmdk file as its hard drive.
  7. Boot VM.


For CentOS 7, however, this does not work. Nor does using clonezilla to clone the VMware VM to the new Proxmox VM.
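One hedged thing to try, based on the fact that CentOS 7 builds a host-only initramfs by default (it only contains drivers for the VMware hardware it was installed on, unlike the more generic CentOS 6 initrd): boot the CentOS 7 ISO into rescue mode, which as noted above finds the system fine, chroot into it and rebuild the initramfs with all drivers included:

Code:

chroot /mnt/sysimage
dracut -f --no-hostonly --regenerate-all
exit
reboot

The --no-hostonly flag trades a larger initramfs for one that also contains the virtio/KVM drivers the Proxmox VM needs; if the guest then boots, a later host-only rebuild inside the running guest slims it back down.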

Any help would be greatly appreciated. Thanks!

Installation of Proxmox on top of KVM/QEMU

Hey guys,

much to my regret, I found out that the new Proxmox 4.0 isn't able to work as a two-node cluster with an additional quorum disk, due to the new Corosync version. So now I'm working on the proposed solution of adding a third adequate cluster member. In practice I want to set up the third member as a very small VM without any real storage/resources on our existing KVM/QEMU server, just to get quorum and the necessary safety against split-brain problems. The two other nodes will deliver computing power and local storage (synced via DRBD) to the VMs on the cluster.

So I'm trying to do a regular install on my additional server via libvirt, but I'm not able to get libvirt to work with the provided Proxmox VE 4.0 ISO image:
Code:

virt-install -n pxmx-ext0 -r 512 --disk path=/dev/vg01/pxmx-ext0-disk,sparse=false,bus=virtio,cache=none --disk path=/dev/vg01/pxmx-ext0-swap,sparse=false,bus=virtio,cache=none -b br1,model=virtio -l /media/proxmox -x console=ttyS0,115200 --force --debug
It won't find any usable boot entry (the traceback shows that it's trying various options) and stops with the following message:
Code:

Error validating install location: Could not find an installable distribution at '/media/proxmox'
Has anyone already tried to use the official install image in a libvirt environment, or should I just stick to the easier solution and install a regular Debian (which works fine with the netinstall image) and add the Proxmox packages on top? I would naturally prefer the first solution because it would save me some time. Thanks in advance for your help!
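One hedged direction worth trying before falling back to Debian plus packages: -l/--location expects a Debian- or Fedora-style installation tree, which the Proxmox ISO is not, so booting the ISO as a CD-ROM with a graphical console (the PVE installer is graphical and will not talk to ttyS0) may get further. A sketch with the same disks; the ISO path is a placeholder:

Code:

virt-install -n pxmx-ext0 -r 1024 \
  --disk path=/dev/vg01/pxmx-ext0-disk,sparse=false,bus=virtio,cache=none \
  --disk path=/dev/vg01/pxmx-ext0-swap,sparse=false,bus=virtio,cache=none \
  --network bridge=br1,model=virtio \
  --cdrom /path/to/proxmox-ve_4.0.iso \
  --graphics vnc --noautoconsole

This is untested here, just a direction; installing Debian Jessie and adding the Proxmox packages on top remains the documented fallback.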

Cheers, Johannes

cluster not ready after pvecm create cluster

After I ran the command pvecm create pve2:
Screen Shot 2015-12-02 at 12.14.43 PM.png

I am having problems performing any action on Proxmox, such as creating a backup or cloning a machine; any action that needs to write to the cluster filesystem keeps showing "cluster not ready - no quorum? (500)".
I tried to create a file under /etc/pve and received an error message: Error writing 1: Permission denied

Screen Shot 2015-12-02 at 12.10.40 PM.png

Is there any way to undo the pvecm create pve2?
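Two hedged options, depending on what is actually wanted. If the goal is just to make the single node quorate again so that /etc/pve becomes writable:

Code:

pvecm expected 1

If the goal is really to undo the cluster creation and return to a standalone node, the usual sequence looks roughly like the following; it is destructive, so double-check it against the "separate a node" section of the Proxmox cluster documentation for your exact version before running it:

Code:

systemctl stop pve-cluster corosync
pmxcfs -l                      # start the cluster filesystem in local mode
rm /etc/pve/corosync.conf
rm -rf /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster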

HP ProLiant random reboots

Hello guys!
I have 3 cluster nodes: 2 HP ProLiant servers and 1 more machine, with HA and NFS for shared storage.

The HP servers reset without reason (triggered by Proxmox?) and I don't know why. I checked the logs and found nothing. I have blacklisted hpwdt too, and I don't have any problems on the network. Version:

Code:

proxmox-ve: 4.0-22 (running kernel: 4.2.3-2-pve)
pve-manager: 4.0-57 (running version: 4.0-57/cc7c2b53)
pve-kernel-4.2.3-2-pve: 4.2.3-22
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-24
qemu-server: 4.0-35
pve-firmware: 1.1-7
libpve-common-perl: 4.0-36
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-29
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.4-12
pve-container: 1.0-21
pve-firewall: 2.0-13
pve-ha-manager: 1.0-13
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.4-3
lxcfs: 0.10-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve6~jessie
The resets happen more or less every 24 hours.
Help? Thank you! :D
And sorry for my English!

A specific ZFS pool should not be mounted at boot time

How can I prevent a particular ZFS pool from being mounted at boot time?
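A hedged sketch of two common approaches (the pool name "backup" is a placeholder): either keep the pool imported but stop its datasets from mounting automatically, or keep the pool out of the cache file so it is not imported at boot at all and import it by hand when needed:

Code:

# option 1: pool stays imported, datasets do not auto-mount
zfs set canmount=noauto backup

# option 2: pool is not imported at boot; import it manually when needed
zpool set cachefile=none backup
zpool export backup
zpool import backup      # later, when it is actually needed

Option 2 is the stronger guarantee; option 1 still imports the pool at boot, it just skips the mounts.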

Upgrade proxmox 3.4 to 4.0 problems repository

Hi. Before anything else: great work on Proxmox 4 with kernel 4.2 on Debian Jessie. OpenVZ is obsolete.

I tried to upgrade Proxmox 3.4 to 4.0 following this guide:

https://pve.proxmox.com/wiki/Upgrade_from_3.x_to_4.0

But at this step:

apt-get install pve-kernel-4.2.2-1-pve pve-firmware

the packages pve-kernel-4.2.2-1-pve and pve-firmware are not in the repo...

I added the new jessie repos, but it did not work:

sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
sed -i 's/wheezy/jessie/g' /etc/apt/sources.list.d/pve-enterprise.list
apt-get update
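For reference, after the wheezy-to-jessie switch the Proxmox repository entries usually end up looking like this (a hedged sketch; with a valid subscription the enterprise line is used instead of the no-subscription one):

Code:

# /etc/apt/sources.list (or a file under /etc/apt/sources.list.d/)
deb http://download.proxmox.com/debian jessie pve-no-subscription

# enterprise repository, only usable with a valid subscription key:
# deb https://enterprise.proxmox.com/debian jessie pve-enterprise

If only the enterprise list was rewritten by sed and there is no subscription, apt will not be able to fetch the new kernel packages at all.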

In the end, I installed Proxmox 4.0 from scratch...

Still struggling with concepts of Container VS VM in proxmox

First, I haven't set up my system (home server) yet, as the last parts are due today.

But I'm still unclear on how Proxmox differentiates a VM from a container.

As I understand it, a VM is the whole OS: it simulates booting from power-off to desktop/CLI at full boot. But a container is NOT the full OS; it's just a captured install that builds on the underlying OS, so it relies on the underlying OS for the hardware interface.

If that is true, then in cases where you'd want PCI passthrough, it would seem to make more sense to use a container.

Does that work in practice?

Or am I completely wrong here?

Moved Proxmox Drives from Dead Server to New - now no Network.

I have spent the last two days googling; everything is about the VMs, but my problem is the host.

What file(s) do I need to change to make Proxmox (I think v2.3) work with new NICs?

My configuration from the old server: the interfaces file is very basic, something like:

...vmbr0
...address info
...bridge eth0
...etc

I found I needed to edit a file that states which MAC address is eth0, etc. I changed the entries around (what had come up as eth2 is now eth0)... and now ifconfig shows the physical hardware as eth0, vmbr0 shows the same HWADDR, and the IP is showing again.
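For reference, on that generation of Proxmox/Debian the file in question is normally /etc/udev/rules.d/70-persistent-net.rules, with one line per NIC along these lines (the MAC address here is a placeholder); a reboot after editing is the simplest way to apply the new names:

Code:

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:ff", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

The name in /etc/network/interfaces (the bridge_ports line of vmbr0) then has to match whatever NAME the rule assigns.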

I run 'route' and everything looks right, but I cannot ping 10.1.0.1, 8.8.8.8, etc. I edited 151.conf in the hope of just getting that VM working (changed its eth0 HWADDR - no luck)... so now I bow my head and ask: HELP. :D

Thanks for any assistance on getting this server back online!
Ozz