Channel: Proxmox Support Forum

Container backup snapshot fails

First of all, thanks for the great product. I'm running into a problem with our first container (all other VMs are KVM-based) where snapshot backups on a regular volume (not NFS) fail. I searched the forums for the mode failure, but the other posts all had configurations that don't match ours (i.e. we did not change any defaults that could cause this, and the CT is not running on an NFS share).

Any help on how to get snapshot backups working is really appreciated.

Here are all the configuration and logging details you might need:

Code:

INFO: starting new backup job: vzdump 201 --remove 0 --mode snapshot --compress lzo --storage storage1 --node vmhost1
INFO: Starting Backup of VM 201 (openvz)
INFO: CTID 201 exist mounted running
INFO: status = running
INFO: mode failure - unable to detect lvm volume group
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: starting first sync /mnt/ssdraid/private/201/ to /mnt/pve/storage1/dump/vzdump-openvz-201-2015_01_24-13_07_39.tmp
INFO: Number of files: 23869
INFO: Number of files transferred: 18439
INFO: Total file size: 664082539 bytes
INFO: Total transferred file size: 663612639 bytes
INFO: Literal data: 663612639 bytes
INFO: Matched data: 0 bytes
INFO: File list size: 537978
INFO: File list generation time: 0.001 seconds
INFO: File list transfer time: 0.000 seconds
INFO: Total bytes sent: 665030106
INFO: Total bytes received: 368804
INFO: sent 665030106 bytes received 368804 bytes 12437362.80 bytes/sec
INFO: total size is 664082539 speedup is 1.00
INFO: first sync finished (53 seconds)
INFO: suspend vm
INFO: Setting up checkpoint...
INFO: suspend...
INFO: get context...
INFO: Checkpointing completed successfully
INFO: starting final sync /mnt/ssdraid/private/201/ to /mnt/pve/storage1/dump/vzdump-openvz-201-2015_01_24-13_07_39.tmp
INFO: Number of files: 23869
INFO: Number of files transferred: 1
INFO: Total file size: 664082551 bytes
INFO: Total transferred file size: 937 bytes
INFO: Literal data: 237 bytes
INFO: Matched data: 700 bytes
INFO: File list size: 537978
INFO: File list generation time: 0.001 seconds
INFO: File list transfer time: 0.000 seconds
INFO: Total bytes sent: 540189
INFO: Total bytes received: 1957
INFO: sent 540189 bytes received 1957 bytes 72286.13 bytes/sec
INFO: total size is 664082551 speedup is 1224.91
INFO: final sync finished (7 seconds)
INFO: resume vm
INFO: Resuming...
INFO: vm is online again after 7 seconds
INFO: creating archive '/mnt/pve/storage1/dump/vzdump-openvz-201-2015_01_24-13_07_39.tar.lzo'
INFO: Total bytes written: 680591360 (650MiB, 11MiB/s)
INFO: archive file size: 334MB
INFO: Finished Backup of VM 201 (00:02:18)
INFO: Backup job finished successfully
TASK OK

df -h is looking fine so far:

Code:

root@vmhost1:~# df -h
Filesystem                    Size  Used Avail Use% Mounted on
udev                          10M    0  10M  0% /dev
tmpfs                        6.3G  412K  6.3G  1% /run
/dev/mapper/pve-root          95G  1.6G  89G  2% /
tmpfs                        5.0M    0  5.0M  0% /run/lock
tmpfs                          13G  53M  13G  1% /run/shm
/dev/mapper/pve-data          1.7T  109G  1.6T  7% /var/lib/vz
/dev/sda2                    494M  81M  388M  18% /boot
/dev/sdb1                    470G  146G  301G  33% /mnt/ssdraid
/dev/fuse                      30M  44K  30M  1% /etc/pve
192.168.100.251:/volume1/vms  9.1T  2.5T  6.6T  28% /mnt/pve/storage1
/mnt/ssdraid/private/201      50G  691M  50G  2% /var/lib/vz/root/201
tmpfs                        103M  36K  103M  1% /var/lib/vz/root/201/run
tmpfs                        5.0M    0  5.0M  0% /var/lib/vz/root/201/run/lock
tmpfs                        410M    0  410M  0% /var/lib/vz/root/201/run/shm

pvdisplay:

Code:

  --- Physical volume ---
  PV Name              /dev/sda3
  VG Name              pve
  PV Size              1.82 TiB / not usable 3.98 MiB
  Allocatable          yes
  PE Size              4.00 MiB
  Total PE              476707
  Free PE              4095
  Allocated PE          472612
  PV UUID              hlQC95-5T3B-Ttse-rWzc-In9S-4Iy8-ift0ck

vgdisplay:

Code:

  --- Volume group ---
  VG Name              pve
  System ID           
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  32
  VG Access            read/write
  VG Status            resizable
  MAX LV                0
  Cur LV                3
  Open LV              3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size              1.82 TiB
  PE Size              4.00 MiB
  Total PE              476707
  Alloc PE / Size      472612 / 1.80 TiB
  Free  PE / Size      4095 / 16.00 GiB
  VG UUID              Dp1ReU-8zbQ-6bLy-zoM8-R1fl-dQHu-qpd3qz

lvdisplay:

Code:

  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                Qcejfv-2xqD-nsRy-fQM0-AuSa-6uyO-tt7cOL
  LV Write Access        read/write
  LV Creation host, time proxmox, 2014-09-22 23:52:16 +0200
  LV Status              available
  # open                1
  LV Size                62.00 GiB
  Current LE            15872
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:1
 
  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                BNH5WT-FiU8-UL3w-kjeR-zjEM-MCB2-l0r5DZ
  LV Write Access        read/write
  LV Creation host, time proxmox, 2014-09-22 23:52:17 +0200
  LV Status              available
  # open                1
  LV Size                96.00 GiB
  Current LE            24576
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:0
 
  --- Logical volume ---
  LV Path                /dev/pve/data
  LV Name                data
  VG Name                pve
  LV UUID                DJUdaN-DFfU-BBFw-xYhm-E2Xu-iA6o-i7zXfO
  LV Write Access        read/write
  LV Creation host, time proxmox, 2014-09-22 23:52:17 +0200
  LV Status              available
  # open                1
  LV Size                1.65 TiB
  Current LE            432164
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:2

Storage config:

Code:

dir: local
    path /var/lib/vz
    shared
    content images,iso,vztmpl,backup,rootdir
    maxfiles 5

nfs: storage1
    path /mnt/pve/storage1
    server 192.168.100.251
    export /volume1/vms
    options vers=3
    content images,iso,vztmpl,rootdir,backup
    maxfiles 5

dir: ssdraid
    path /mnt/ssdraid
    content images,iso,vztmpl,backup,rootdir
    maxfiles 1
    nodes vmhost1,vmhost2

vm config:

Code:

root@vmhost1:~# cat /etc/pve/openvz/201.conf
ONBOOT="yes"

PHYSPAGES="0:1024M"
SWAPPAGES="0:1024M"
KMEMSIZE="465M:512M"
DCACHESIZE="232M:256M"
LOCKEDPAGES="512M"
PRIVVMPAGES="unlimited"
SHMPAGES="unlimited"
NUMPROC="unlimited"
VMGUARPAGES="0:unlimited"
OOMGUARPAGES="0:unlimited"
NUMTCPSOCK="unlimited"
NUMFLOCK="unlimited"
NUMPTY="unlimited"
NUMSIGINFO="unlimited"
TCPSNDBUF="unlimited"
TCPRCVBUF="unlimited"
OTHERSOCKBUF="unlimited"
DGRAMRCVBUF="unlimited"
NUMOTHERSOCK="unlimited"
NUMFILE="unlimited"
NUMIPTENT="unlimited"

# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="50G:55G"
DISKINODES="10000000:11000000"
QUOTATIME="0"
QUOTAUGIDLIMIT="0"

# CPU fair scheduler parameter
CPUUNITS="1000"
CPUS="2"
HOSTNAME="dbsql.artway.de"
SEARCHDOMAIN="artway.de"
NAMESERVER="8.8.8.8 4.4.4.4"
NETIF="ifname=eth0,bridge=vmbr0,mac=AE:FE:A6:10:0B:0C,host_ifname=veth201.0,host_mac=5A:0F:DF:2C:BB:03;ifname=eth1,bridge=vmbr1,mac=D2:EA:43:44:AF:52,host_ifname=veth201.1,host_mac=4E:27:56:AF:B4:D0"
VE_ROOT="/var/lib/vz/root/$VEID"
VE_PRIVATE="/mnt/ssdraid/private/201"
OSTEMPLATE="debian-7.0-standard_7.0-2_amd64.tar.gz"

lsblk:

Code:

root@vmhost1:~# lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                  8:0    0  1.8T  0 disk
├─sda1                8:1    0    1M  0 part
├─sda2                8:2    0  510M  0 part /boot
└─sda3                8:3    0  1.8T  0 part
  ├─pve-root (dm-0) 253:0    0    96G  0 lvm  /
  ├─pve-swap (dm-1) 253:1    0    62G  0 lvm  [SWAP]
  └─pve-data (dm-2) 253:2    0  1.7T  0 lvm  /var/lib/vz
sdb                  8:16  0 476.9G  0 disk
└─sdb1                8:17  0 476.9G  0 part /mnt/ssdraid
sr0                  11:0    1  1024M  0 rom
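(For reference, a minimal way to check whether the container's private area is backed by an LVM logical volume with free space in its volume group — which is what vzdump's snapshot mode looks for — using the paths from above:)

Code:

# which block device backs the container's private area?
df -P /mnt/ssdraid/private/201 | awk 'NR==2 {print $1}'

# list LVM logical volumes and the free space in their volume groups;
# a plain partition such as /dev/sdb1 will not appear here
lvs -o lv_path,vg_name,vg_free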

Did I mess up setting up the ssdraid volume somewhere?

Sebastian

Backup script

1. Is it possible to add parameters to vzdump via the GUI when doing backups (bwlimit, ionice)? Is it planned to put this in the GUI in the next release? It would be very useful.
2. How do I set Proxmox 3.3 to use lzop (2.03), or lzop with GNU parallel?

Regards
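For reference, until the GUI exposes them, both bwlimit and ionice can be set globally in /etc/vzdump.conf (which should also apply to jobs started from the GUI) or passed per run on the command line — a sketch:

Code:

# /etc/vzdump.conf -- global defaults for all backup jobs
bwlimit: 51200    # bandwidth limit in KB/s (here ~50 MB/s)
ionice: 7

# or per invocation on the command line
vzdump 201 --mode snapshot --compress lzo --storage storage1 --bwlimit 51200 --ionice 7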

Partition alignment tool for Windows XP/2k3?

hello all,

I have a Windows 2003 machine whose partition is not properly aligned (4K blocks), so disk performance is not optimal.
I would like to know if there is a tool (preferably free) to properly align the NTFS partition to 4K sectors inside the Windows VM.
I can do either an offline or an online conversion, but I would like to avoid creating the VM from scratch or copying it to another VM.
This VM resides on LVM.
Can anyone suggest such a tool?
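In case it helps, the current alignment can at least be verified from inside the guest before and after any conversion — a sketch using the built-in wmic tool:

Code:

rem an aligned partition has a StartingOffset that is a multiple of 4096
wmic partition get Name, StartingOffset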

OpenVZ performance issues

Hi,

I've migrated a large web site from a dedicated server to an OpenVZ VM and am having issues with very slow response times. I've increased the CPU units to 500000 and given it all 8 cores and 32GB of RAM, but there is still only a very small improvement.

Is there anything else I can configure to give the maximum resources to this VM?
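For reference, a sketch of how the limits can be inspected and raised from the host with vzctl (the CT ID 101 is an assumption); a non-zero failcnt in the beancounters usually points at a limit being hit:

Code:

# failcnt > 0 on any row means the container hit that resource limit
vzctl exec 101 cat /proc/user_beancounters

# raise CPU and memory limits (vSwap-style, vzctl 4.x) and persist them
vzctl set 101 --cpus 8 --cpuunits 500000 --ram 32G --swap 4G --save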

unable to create CT / already exists

Hello!


It seems that my proxmox config is broken somewhere.


When I try to create a CT, or restore one with another CT ID, I get this message:


TASK ERROR: unable to create CT 120 - directory '/var/lib/vz/root/120' already exists


ID 120 is the first available number. I currently do not have a CT with this ID.


If I manually use another (available) number, it works.

The cluster of 6 nodes uses Proxmox VE 3.3.


Does anyone know where to look to correct this?
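For what it's worth, a minimal way to check whether '/var/lib/vz/root/120' is just a stale, empty leftover mountpoint on the node in question (an assumption), and to clear it if so:

Code:

# make sure nothing is mounted there and it is really empty
mountpoint /var/lib/vz/root/120
ls -la /var/lib/vz/root/120

# if it is only an empty leftover directory, removing it frees the CT ID again
rmdir /var/lib/vz/root/120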


Sincerely,


Stephane

ceph firefly ceph.conf thread

Lowering Ceph scrub I/O priority

Using version 0.80.8+.
By default, the disk I/O priority of a Ceph OSD's scrub thread is the same as that of all other threads. It can be lowered for all OSDs with the ioprio options:

* In ceph.conf:
Code:

        osd_disk_thread_ioprio_class  = idle
        osd_disk_thread_ioprio_priority = 7

* To apply at runtime without restarting the OSDs:
Code:

ceph tell osd.* injectargs '--osd_disk_thread_ioprio_priority 7'
ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle'

See http://dachary.org/?p=3268
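To confirm the values took effect, one option (a sketch, OSD id assumed) is to query an OSD's admin socket:

Code:

ceph daemon osd.0 config show | grep ioprio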

novnc character issues with finnish keyboard layout

With the noVNC console on a Finnish keyboard layout, and Finnish defined as the layout in the datacenter options, it is impossible to produce the & or ^ characters: Shift-6 produces a square and the ^ key produces nothing.

Moving proxmox to another network

Hi all,
I'm going to move my Proxmox server to another location and I was wondering how to set up my /etc/network/interfaces.
The current /etc/network/interfaces was set up by my old webhost.

The webhost where I'm going provided me with 2 uplinks that go to my 1800-24G switch (default setup),
and I have my 3 nodes connected to the switch.

How do I set up my /etc/network/interfaces now?
vmbr159 and vmbr168 are VLANs provided by my old host;
they should become one bridge that allows all my incoming IP addresses from the new provider.

Will removing vmbr168 and changing vmbr159 to vmbr0 work? I guess 0 or 1 is the default VLAN on my switch.


This is the old one:
---------------------

auto lo
iface lo inet loopback


auto eth0.159
iface eth0.159 inet manual


auto eth0.168
iface eth0.168 inet manual


auto eth1
iface eth1 inet manual


auto vmbr159
iface vmbr159 inet manual
bridge_ports eth0.159
bridge_stp off
bridge_fd 0


auto vmbr168
iface vmbr168 inet manual
bridge_ports eth0.168
bridge_stp off
bridge_fd 0


auto vmbr907
iface vmbr907 inet static
address 10.90.7.10
netmask 255.255.255.0
gateway 10.90.7.1
bridge_ports eth1
bridge_stp off
bridge_fd 0
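For reference, a minimal sketch of what the new /etc/network/interfaces could look like if the new uplinks arrive untagged and everything goes through a single vmbr0 — the addresses below are placeholders, not real values from the new provider:

Code:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

# single bridge for the host and all VMs; replace address/gateway with
# the values supplied by the new provider
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0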

Proxy to the webinterface on a local ip

Support 2 WAN IP's to single interface Eth0

Hi, I have a hosted dedicated server with 2 NICs, eth0 & eth1 (eth0 is my public-facing WAN interface, and eth1 is for the private IPs which will support all internal VMs).

I'd like to see some examples of how to configure the WAN interfaces and how the internal VMs route to either of the WAN subnets. I'd prefer a setup where I control the VM network configurations (but would welcome seeing all options).

Note: I've read through these wiki pages:
https://pve.proxmox.com/wiki/Vlan
https://pve.proxmox.com/wiki/Vlans (I see this article uses bonding, but I have not been able to get it to work... Also, I want to understand when to use one vs. the other, especially with my need to support the 2 WAN subnets).
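For context, the general shape I have in mind is something like this — a sketch with placeholder addresses, one bridge per NIC (public on vmbr0, private on vmbr1):

Code:

auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

# public / WAN bridge (placeholder address; the second WAN subnet would
# also be carried on this bridge)
auto vmbr0
iface vmbr0 inet static
    address 198.51.100.10
    netmask 255.255.255.0
    gateway 198.51.100.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

# private bridge for the internal VMs
auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0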

Appreciate any sample configs posted here - Thanks!

--Joe

Connecting an external USB Hard drive for backups

I was wondering if I could connect a USB hard drive to my Proxmox host and use it for backups.
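For reference, a minimal sketch of the usual approach, assuming the drive shows up as /dev/sdc1 and is already formatted: mount it somewhere permanent and register that path as a directory storage restricted to backups:

Code:

mkdir -p /mnt/usb-backup
mount /dev/sdc1 /mnt/usb-backup    # add an /etc/fstab entry to make this persistent

# register the mount point as a directory storage for backups
pvesm add dir usbbackup --path /mnt/usb-backup --content backup --maxfiles 3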

Sata 3 Compatible PCI Card

Hi all,

Just got Proxmox VE v3.3 set up nicely on a Dell 2950 III server. I currently have the Dell PERC 6i RAID controller installed, which is serving a simple RAID 1 for the main storage and backup drives. I also have a Samsung 850 SSD installed on the 6i, but I am reaching the limit of the controller (about 190 MB/s each way).

Is there a simple and well-supported SATA 3 PCI card that is recommended? I don't need any RAID etc. as it will only be hosting the single Samsung SSD.

I am also looking at installing a USB 3 card for backups in the future. Are there any well-supported cards for that as well?

Thanks,

Jonathan

Trying to get a VPN to work within my container (but also have it accessible lan)

I have Proxmox VE installed and have an Ubuntu 13.10 OpenVZ container running (192.168.1.106 on my lan).

I want to be able to access the container using the local address, BUT I also want all outgoing traffic destined for outside my lan to go through a VPN.

I've found some tutorials online, but they mostly deal with sending ALL traffic through the VPN without having another network interface for my local traffic. If it matters, I'm using OpenVPN with the install script from Private Internet Access (PIA). I can provide the OpenVPN configuration files here if it matters.

Even after following all the instructions from the tutorials (changing the IPTABLES modules in vz.conf, making sure 'tun' is loaded on the host, and changing the configuration of the container to use the same block), I'm still having issues running 'openvpn ./config.conf'.
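For reference, the host-side part of those tutorials generally boils down to something like this (a sketch, CT ID 106 assumed):

Code:

# load the tun module on the host and allow the container to use it
modprobe tun
vzctl set 106 --devnodes net/tun:rw --capability net_admin:on --save
vzctl restart 106

# inside the container, make sure the device node exists
vzctl exec 106 mkdir -p /dev/net
vzctl exec 106 'test -c /dev/net/tun || mknod /dev/net/tun c 10 200'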

Windows 7 and "required cd/dvd drive device driver is missing"

Hi all,
I'm trying to install Windows 7 in Proxmox 3 and I don't think Windows can see the virtual hard disk. I've tried several combinations of storage formats and shortly after I click "Install Now", I get "..a required cd/dvd drive device driver is missing". I'm not sure what that means because it's reading the CDROM just fine. I can only assume it's talking about the virtual hard drive.

I've tried changing the hard disk to virtio (tried SCSI and IDE too) and added a second CDROM with the Red Hat VirtIO driver ISO loaded. I can then get as far as browsing the VirtIO CDROM, but when I click the Win7/AMD64 virtio drivers to install them, Windows finds the VirtIO SCSI, ethernet and Balloon drivers, grinds around for about 20 seconds and just repeats the same error message above.

Any idea what's up?

Upgrade 3.0 to current fails

root@vwsrv2:/etc/apt/sources.list.d# cat pve-enterprise.list
deb https://enterprise.proxmox.com/debian wheezy pve-enterprise
root@vwsrv2:/etc/apt/sources.list.d# apt-get update
Hit http://security.debian.org wheezy/updates Release.gpg
Hit http://security.debian.org wheezy/updates Release
Hit http://ftp.de.debian.org wheezy Release.gpg
Hit http://ftp.de.debian.org wheezy Release
Hit http://download.proxmox.com wheezy Release.gpg
Hit http://security.debian.org wheezy/updates/main amd64 Packages
Hit http://security.debian.org wheezy/updates/contrib amd64 Packages
Hit http://security.debian.org wheezy/updates/contrib Translation-en
Hit http://security.debian.org wheezy/updates/main Translation-en
Hit http://download.proxmox.com wheezy Release
Hit http://ftp.de.debian.org wheezy/main amd64 Packages
Hit http://download.proxmox.com wheezy/pve amd64 Packages
Hit http://ftp.de.debian.org wheezy/contrib amd64 Packages
Hit http://ftp.de.debian.org wheezy/contrib Translation-en
Hit http://ftp.de.debian.org wheezy/main Translation-en
Ign http://download.proxmox.com wheezy/pve Translation-en_US
Ign http://download.proxmox.com wheezy/pve Translation-en
Ign https://enterprise.proxmox.com wheezy Release.gpg
Ign https://enterprise.proxmox.com wheezy Release
Err https://enterprise.proxmox.com wheezy/pve-enterprise amd64 Packages
The requested URL returned error: 401
Ign https://enterprise.proxmox.com wheezy/pve-enterprise Translation-en_US
Ign https://enterprise.proxmox.com wheezy/pve-enterprise Translation-en
W: Failed to fetch https://enterprise.proxmox.com/debia...amd64/Packages The requested URL returned error: 401

E: Some index files failed to download. They have been ignored, or old ones used instead.
root@vwsrv2:/etc/apt/sources.list.d#

We have bought Basic support!!

Type: Proxmox VE Community Subscription 2 CPUs/year
Subscription Key: pve2c-xxxxxxxxxx
Status: Active
Server ID: B8A14902748BA03F4049686DC220983E
Sockets: 2
Last checked: 2015-01-26 19:10:37

Running system:
root@vwsrv2:/etc/apt/sources.list.d# pveversion -v
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-22-pve
proxmox-ve-2.6.32: 3.0-107
pve-kernel-2.6.32-17-pve: 2.6.32-83
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-19-pve: 2.6.32-93
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.0-20
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1
root@vwsrv2:/etc/apt/sources.list.d#
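For reference, a couple of things worth checking from the node while the enterprise repository keeps returning 401 — a sketch; the pve-no-subscription repository is not recommended for production and is only meant here as an interim way to complete the upgrade:

Code:

# confirm the subscription key is actually activated on this node
pvesubscription get

# interim alternative: use the no-subscription repository for the upgrade
echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list
# and comment out the entry in /etc/apt/sources.list.d/pve-enterprise.list
apt-get update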

ProxMox shows 11% RAM usage but top and free -g show 99% usage

Hi,

I have been running Proxmox with OpenVZ containers for 2 months now and love it. I am having an issue with my RAM usage: the free -g command displays 99% usage while the Proxmox web interface displays 11% usage.

The htop program also displays about 11% usage, and I know htop takes buffers and cache into account when calculating actual RAM usage.

Code:

# free -g
            total      used      free    shared    buffers    cached
Mem:          118        116          1          0          2        100
-/+ buffers/cache:        13        104
Swap:            0          0          0

top output

Code:

MiB Mem:    120872 total,  119718 used,    1153 free,    2756 buffers
MiB Swap:      976 total,      76 used,      900 free,  103206 cached

htop output

Code:

|13735/120872MB
ScoutRealtime app shows

Code:

memory used

  13760 MB of 120872 MB




meminfo output

Code:

~# cat /proc/meminfo
MemTotal:      123773160 kB
MemFree:        1159692 kB
Buffers:        2822964 kB
Cached:        105687992 kB
SwapCached:        25696 kB
MemCommitted:  36065280 kB
VirtualSwap:    1336768 kB
Active:        35571388 kB
Inactive:      81330396 kB
Active(anon):    5329632 kB
Inactive(anon):  3611956 kB
Active(file):  30241756 kB
Inactive(file): 77718440 kB
Unevictable:      10928 kB
Mlocked:          10928 kB
SwapTotal:      1000440 kB
SwapFree:        921840 kB
Dirty:            89756 kB
Writeback:            16 kB
AnonPages:      7051040 kB
Mapped:          610028 kB
Shmem:            544780 kB
Slab:            5275068 kB
SReclaimable:    5056124 kB
SUnreclaim:      218944 kB
KernelStack:      14960 kB
PageTables:      129856 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    62887020 kB
Committed_AS:  19869256 kB
VmallocTotal:  34359738367 kB
VmallocUsed:      253984 kB
VmallocChunk:  34359371884 kB
HardwareCorrupted:    0 kB
AnonHugePages:        0 kB
HugePages_Total:      0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:      2048 kB
DirectMap4k:        8128 kB
DirectMap2M:    125820928 kB

I am running

Code:

# pveversion
pve-manager/3.3-1/a06c9f73 (running kernel: 2.6.32-32-pve)

What am I missing?
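For what it's worth, the 11% the web interface shows is very close to the '-/+ buffers/cache' row above (about 13 GB of 120 GB), i.e. usage with buffers and page cache excluded. A one-liner to print just that number:

Code:

# "used" with buffers and page cache excluded (old procps free output format)
free -m | awk '/buffers\/cache/ {print $3 " MB used (excluding buffers/cache)"}'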

VM Create Failing due to lock in CEPH.

Hey,
Has anyone seen this before? Sometimes we are seeing a VM create (or clone) fail due to the following error:
Code:

TASK ERROR: create failed - rbd error: got lock timeout

When this happened earlier today, I could reproduce the issue until I ran some rbd commands to check whether I could create or copy an image directly in Ceph.
After I ran these commands, Proxmox could create images again.
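(For reference, the kind of check I mean — a sketch, the pool name 'rbd' is an assumption:)

Code:

rbd -p rbd create locktest --size 1024
rbd -p rbd info locktest
rbd -p rbd rm locktest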
This is only happening intermittently.
Any help or insight on this would be appreciated.
Using Proxmox v3.3

Two node cluster keeps fencing

Hi all,

I'm trying to set up a two-node high-availability cluster and so far everything seems OK. The trouble is that when I test fencing with:
Code:

fence_node NODENAME -vv
it is successful, but I have trouble re-adding the fenced node back into the cluster. I'm fencing using an HP 1910-16G switch, and when I re-enable the fenced port, the fenced node immediately fences the first node! I have to move the fenced node to another switch port and restart rgmanager.

Now, I've not added DRBD or the quorum disk yet. Could this be the trouble? At the moment I'm just trying to test fencing. I don't have any VMs yet.

So, is this expected behaviour and am I trying to test too soon?

Many thanks,

NTB

Best Server Backup Software?

We are running many Windows Server 2012 R2 machines. Apparently, MS's built-in backup software is sub-par, so I was wondering if anyone had any suggestions for what we should use.

Can anyone recommend an easy to use backup Software?

Thanks

What is modifying /etc/sysconfig/network?

After a reboot, my routing scheme breaks:
Code:

# route -n
Kernel IP routing table
Destination    Gateway        Genmask        Flags Metric Ref    Use Iface
172.16.0.110    172.16.0.109    255.255.255.255 UGH  0      0        0 venet0
10.24.106.0    0.0.0.0        255.255.254.0  U    0      0        0 eth0
0.0.0.0        0.0.0.0        0.0.0.0        U    0      0        0 eth0

becomes:
Code:

# route -n
Kernel IP routing table
Destination    Gateway        Genmask        Flags Metric Ref    Use Iface
172.16.0.110    172.16.0.109    255.255.255.255 UGH  0      0        0 venet0
10.24.106.0    0.0.0.0        255.255.254.0  U    0      0        0 eth0
0.0.0.0        0.0.0.0        0.0.0.0        U    0      0        0 venet0

The template was centos-6-x86_64-minimal and I enabled venet during creation.
After the first boot I added a veth interface for external network access, then created a routing file route-venet0:0 with '172.16.0.110 via 172.16.0.109',
and modified /etc/sysconfig/network to define the default gateway device as follows:
Code:

NETWORKING="yes"
GATEWAYDEV="eth0"
NETWORKING_IPV6="yes"
IPV6_DEFAULTDEV="eth0"
HOSTNAME="xxxxxxxx.xxx.xxx.xxx"
NOZEROCONF=yes

After a reboot, the file looks like this:
Code:

NETWORKING="yes"
GATEWAYDEV="venet0"
NETWORKING_IPV6="yes"
IPV6_DEFAULTDEV="eth0"
HOSTNAME="xxxxxxxx.xxx.xxx.xxx"
NOZEROCONF=yes

Why is this file being re-written during reboot?
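This rewrite most likely comes from vzctl's distribution scripts, which regenerate the container's network settings for its venet IPs on every start. One way to keep the working default route regardless of what GATEWAYDEV gets set to is to pin it in a route-eth0 file — a sketch matching the routing table that works above:

Code:

# /etc/sysconfig/network-scripts/route-eth0  (ip-route argument syntax)
# keeps the default route on eth0 even if GATEWAYDEV is rewritten to venet0
default dev eth0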