November 2, 2014, 8:01 am
Hi,
I would like to share my troublesome experience with you.
I set up two Proxmox VE 3.3 nodes installed on top of Debian Wheezy. They are connected over a Gigabit LAN via a crossover cable, and the QEMU disks live on a DRBD block device. HA is enabled, the failover domain is properly configured, there is a quorum disk on a third server, and fencing is set up. Up to here everything works great: I can live-migrate, and rgmanager behaves as expected.
Now, in order to experiment with HA OpenVZ containers, I have tried different setups with no luck. I tried a GlusterFS volume spanning the same two machines over the same LAN, backing up and restoring containers onto it, but I can't properly start the containers from there, and I get weird errors when trying to migrate them between my two nodes.
I also tried mounting the GlusterFS volume over NFS, with the same result: containers seem to hang at startup with no visible errors. Stopping them takes longer than usual; they appear stuck.
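For clarity, by "mounting over NFS" I mean something along these lines (the volume name and mount point below are just placeholders for my actual ones):
Code:
mount -t nfs -o vers=3,nolock node1:/ctstore /mnt/ctstore   # Gluster's built-in NFS server only speaks NFSv3; 'ctstore' is a made-up volume name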
What could cause this behavior?
Thank you in advance.
↧
November 2, 2014, 2:30 pm
Hello,
I use Proxmox at home with a VM (Fedora 20) and I connect to it with SPICE.
Everything works fine, but the sound quality is very bad.
Is there a configuration option to improve sound quality in the SPICE console? This VM will be my everyday PC, so I need good sound quality.
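For context, the VM currently gets its emulated sound card through the "args" line in its config file; I was wondering whether switching to the HDA model would already help. Something like this is what I have in mind (the IDs and the PCI address are just my guesses from the QEMU docs, so treat it as a sketch):
Code:
# /etc/pve/qemu-server/<vmid>.conf -- hypothetical example; IDs and addr may need adjusting
args: -device intel-hda,id=snd0,bus=pci.0,addr=0x18 -device hda-duplex,id=snd0-codec,bus=snd0.0,cad=0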
Thanks for your help.
↧
November 2, 2014, 3:08 pm
I just created a CT. Ubuntu 12.04, from one of the default templates. Gave it a password, bridged networking, and named it "test". Let's see.
So it's definitely running. In the status it says the name has become "test.local". Why? Well, let's ping it anyway.
Code:
Ping request could not find host test. Please check the name and try again.
Code:
Ping request could not find host test.local. Please check the name and try again.
That doesn't work. So I move over to the Network tab to figure out what's wrong. It says IP address/name: "eth0". Very helpful.
Alright, no network then. Local console it is. I start up the console using the noVNC option. This is what it says:
Code:
Attached to CT 100 (ESC . to detach)
And that's it. Is it stuck? How do I make it.... go? How do I access it?
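Is the intended way to get a shell something like this from the host? (I'm guessing at the standard OpenVZ commands here; I haven't confirmed they are the right ones.)
Code:
vzlist -a         # list containers with their state and IP
vzctl enter 100   # open a root shell inside CT 100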
/edit
Forgot to mention, this is a freshly installed Proxmox, completely virgin. All I have done so far is run apt-get updates to bring the software up to date.
↧
November 2, 2014, 6:51 pm
I have my Proxmox VE 3.3 cluster set up. I have created a VM, and from inside my network I can open its console, but externally it will not come up. I researched the Java issue and followed the steps, but that was no help. I have played with pve-firewall, but at this point I have turned it off until I can get this working. I am using a Cisco 1841 for port forwarding, which I would like to continue to use. Here is what I had when the firewall was enabled:
[OPTIONS]
enable: 0
[IPSET client-44]
[RULES]
IN SSH(ACCEPT) -source 50.31.1.62
IN SSH(ACCEPT) -source 122.181.3.130
IN SSH(ACCEPT) -source 203.197.151.138
IN SSH(ACCEPT) -source 203.200.152.147
IN SSH(ACCEPT) -source 10.10.10.0/24
IN SSH(ACCEPT) -source 172.17.254.220
IN SSH(ACCEPT) -source 10.66.66.0/24
IN ACCEPT -p tcp -dport 80
IN ACCEPT -p tcp -dport 443
IN ACCEPT -p tcp -dport 8006
IN ACCEPT -p tcp -dport 5900,5901,5902,5903,5904,5905,5906,5907,5908,5909,5910
IN ACCEPT -p tcp -dport 3128
[group client-44]
[group client-net]
[group dmz]
[group host-net]
[group int-mgmt]
and here is my config for my router
no ip http server
no ip http secure-server
ip nat pool VNC 10.10.10.201 10.10.10.201 netmask 255.255.255.0 type rotary
ip nat inside source list NAT-ACL interface FastEthernet0/0 overload
ip nat inside source static tcp 10.10.10.201 22 interface FastEthernet0/0 22
ip nat inside source static udp 10.66.66.47 1194 interface FastEthernet0/0 1194
ip nat inside source static tcp 10.10.10.201 3128 interface FastEthernet0/0 3128
ip nat inside source static tcp 10.10.10.201 8006 interface FastEthernet0/0 8006
ip nat inside destination list NAT-ACL pool VNC
!
ip access-list extended NAT-ACL
permit ip 10.10.0.0 0.0.255.255 any
permit ip 10.66.66.0 0.0.0.255 any
permit tcp any any range 5900 5999
Any help would be greatly appreciated.
↧
November 3, 2014, 1:18 am
Dear colleagues,
I was really excited about the new OVS integration, as it promises to further simplify and centralize the administration of cluster installations.
So I tried to switch to OVS. I run two NICs that form a bond, and an OVS bridge connected to that bond. This part is working so far.
Now I want to isolate the networks of a group of (KVM) VMs, so that the groups can no longer talk to each other but can still use the internet connection.
I was thinking of something like this: vm1 => vlan1, vm2 => vlan2, vm3+4+5 => vlan3.
How does this setup map onto the Proxmox network configuration?
The /etc/network/interfaces of my test-setup looks like this:
Code:
iface eth2 inet manual
iface eth1 inet manual
allow-vmbr0 bond0
iface bond0 inet manual
ovs_bonds eth1 eth2
ovs_type OVSBond
ovs_bridge vmbr0
ovs_options bond_mode=balance-slb
auto vmbr0
iface vmbr0 inet static
address 192.168.178.63
netmask 255.255.255.0
gateway 192.168.178.1
ovs_type OVSBridge
ovs_ports bond0
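For the guests themselves, my understanding (possibly wrong) is that the VLAN tag can simply be set per virtual NIC in the VM configuration, for example (the VM ID and MAC address are placeholders):
Code:
# /etc/pve/qemu-server/101.conf -- hypothetical example
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,tag=3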
I've read the OVS tutorial in the wiki. Do I really need to use an OVSIntPort for every VLAN? That would make a reboot necessary after adding each new VLAN, which would not be practical for production setups.
Chris
↧
November 3, 2014, 4:33 am
Hello,
We are using the method of installing Proxmox VE on top of an existing Debian installation (for flexibility). But because this is a production environment, we need to install from the enterprise repository right from the beginning. So we need to use the licence we bought before Proxmox VE is even installed. My questions are:
- How do we obtain the hardware/server ID of this machine (the hypervisor) that is needed to request the server key (is there a tool for this)?
- How do we apply this server key so that the enterprise repository can be used?
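For reference, the only relevant commands I'm aware of are the ones below, but as far as I can tell they only exist once the proxmox-ve packages are installed, which is exactly our chicken-and-egg problem:
Code:
pvesubscription get          # prints the server ID and the current subscription status
pvesubscription set <KEY>    # uploads and activates the subscription key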
Thanks
P.B.
↧
November 3, 2014, 5:45 am
Hi,
We've recently configured a Proxmox cluster with two hosts and shared storage. All three servers are connected directly via 10Gbit Intel 82599EB-based SFP+ links. The storage server runs the latest OpenMediaVault and provides 32TB of RAID10 (mdadm-based) space over NFS for backups, ISOs and VMs, and it also hosts a 1GB quorum disk shared via iSCSI (fileio). The whole configuration is pretty simple and looks like this: NODE1 -> STORAGE <- NODE2. Unfortunately we couldn't afford a decent 10Gbit switch, which is why the storage server contains a dual-port card and acts as a switch between NODE1 and NODE2: it's a simple bridge configured with brctl. All three machines are Supermicro Xeon-based servers, with 16GB of RAM in the storage box and 64GB in each node.
Each machine is absolutely up to date, and there is only one VM, a Windows 2012 R2 guest configured in HA mode. Everything works perfectly fine except for backups (we're using LZO compression, if that matters). During the backup task something strange happens: the host currently running the VM gets evicted from the cluster, the backup job itself is interrupted, the VM gets stuck in a "locked" state and everything goes south.
Could anybody please take a look at the logs below and help us track the problem?
In this specific case sul-node0001 was hosting the VM and performing the backup while sul-node0002 was completely idle acting as a backup host (and part of the cluster, of course).
Logs for NODE1:
Code:
Nov 3 12:22:05 sul-node0001 pvedaemon[4016]: <root@pam> successful auth for user 'root@pam'
Nov 3 12:22:06 sul-node0001 rgmanager[11957]: [pvevm] VM 100 is running
Nov 3 12:22:41 sul-node0001 pvestatd[4037]: WARNING: command 'df -P -B 1 /mnt/pve/NFS_BACKUP' failed: got timeout
Nov 3 12:22:43 sul-node0001 pvestatd[4037]: WARNING: command 'df -P -B 1 /mnt/pve/NFS_MSSQL' failed: got timeout
Nov 3 12:22:45 sul-node0001 pvestatd[4037]: WARNING: command 'df -P -B 1 /mnt/pve/NFS_STORAGE' failed: got timeout
Nov 3 12:22:45 sul-node0001 pvestatd[4037]: status update time (6.066 seconds)
Nov 3 12:22:46 sul-node0001 rgmanager[12045]: [pvevm] VM 100 is running
Nov 3 12:23:06 sul-node0001 rgmanager[12099]: [pvevm] VM 100 is running
Nov 3 12:23:26 sul-node0001 rgmanager[12152]: [pvevm] VM 100 is running
Nov 3 12:23:57 sul-node0001 rgmanager[12284]: [pvevm] VM 100 is running
Nov 3 12:24:06 sul-node0001 rgmanager[12320]: [pvevm] VM 100 is running
Nov 3 12:24:09 sul-node0001 pveproxy[4042]: worker 9158 finished
Nov 3 12:24:09 sul-node0001 pveproxy[4042]: starting 1 worker(s)
Nov 3 12:24:09 sul-node0001 pveproxy[4042]: worker 12353 started
Nov 3 12:24:36 sul-node0001 kernel: kvm: exiting hardware virtualization
Nov 3 12:24:36 sul-node0001 kernel: sd 0:0:1:0: [sdb] Synchronizing SCSI cache
Logs for NODE2:
Code:
Nov 3 12:18:51 sul-node0002 pmxcfs[3247]: [status] notice: received log
Nov 3 12:19:29 sul-node0002 qdiskd[3448]: qdisk cycle took more than 1 second to complete (1.340000)
Nov 3 12:22:05 sul-node0002 pmxcfs[3247]: [status] notice: received log
Nov 3 12:22:34 sul-node0002 pvestatd[4430]: WARNING: command 'df -P -B 1 /mnt/pve/NFS_BACKUP' failed: got timeout
Nov 3 12:22:36 sul-node0002 pvestatd[4430]: WARNING: command 'df -P -B 1 /mnt/pve/NFS_MSSQL' failed: got timeout
Nov 3 12:22:38 sul-node0002 pvestatd[4430]: WARNING: command 'df -P -B 1 /mnt/pve/NFS_STORAGE' failed: got timeout
Nov 3 12:22:38 sul-node0002 pvestatd[4430]: status update time (6.056 seconds)
Nov 3 12:22:44 sul-node0002 pvestatd[4430]: WARNING: command 'df -P -B 1 /mnt/pve/NFS_BACKUP' failed: got timeout
Nov 3 12:22:46 sul-node0002 pvestatd[4430]: WARNING: command 'df -P -B 1 /mnt/pve/NFS_MSSQL' failed: got timeout
Nov 3 12:22:48 sul-node0002 pvestatd[4430]: WARNING: command 'df -P -B 1 /mnt/pve/NFS_STORAGE' failed: got timeout
Nov 3 12:22:48 sul-node0002 pvestatd[4430]: status update time (6.056 seconds)
Nov 3 12:24:30 sul-node0002 qdiskd[3448]: Assuming master role
Nov 3 12:24:31 sul-node0002 qdiskd[3448]: Writing eviction notice for node 1
Nov 3 12:24:32 sul-node0002 qdiskd[3448]: Node 1 evicted
Nov 3 12:25:29 sul-node0002 corosync[3397]: [TOTEM ] A processor failed, forming new configuration.
Nov 3 12:25:31 sul-node0002 corosync[3397]: [CLM ] CLM CONFIGURATION CHANGE
Nov 3 12:25:31 sul-node0002 corosync[3397]: [CLM ] New Configuration:
Nov 3 12:25:31 sul-node0002 corosync[3397]: [CLM ] #011r(0) ip(10.10.10.222)
Nov 3 12:25:31 sul-node0002 corosync[3397]: [CLM ] Members Left:
Nov 3 12:25:31 sul-node0002 corosync[3397]: [CLM ] #011r(0) ip(10.10.10.221)
Nov 3 12:25:31 sul-node0002 corosync[3397]: [CLM ] Members Joined:
Nov 3 12:25:31 sul-node0002 corosync[3397]: [QUORUM] Members[1]: 2
Nov 3 12:25:31 sul-node0002 corosync[3397]: [CLM ] CLM CONFIGURATION CHANGE
Nov 3 12:25:31 sul-node0002 corosync[3397]: [CLM ] New Configuration:
Nov 3 12:25:31 sul-node0002 corosync[3397]: [CLM ] #011r(0) ip(10.10.10.222)
Nov 3 12:25:31 sul-node0002 corosync[3397]: [CLM ] Members Left:
Nov 3 12:25:31 sul-node0002 corosync[3397]: [CLM ] Members Joined:
Nov 3 12:25:31 sul-node0002 corosync[3397]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Nov 3 12:25:31 sul-node0002 rgmanager[3888]: State change: sul-node0001 DOWN
Nov 3 12:25:31 sul-node0002 corosync[3397]: [CPG ] chosen downlist: sender r(0) ip(10.10.10.222) ; members(old:2 left:1)
Nov 3 12:25:31 sul-node0002 pmxcfs[3247]: [dcdb] notice: members: 2/3247
Nov 3 12:25:31 sul-node0002 pmxcfs[3247]: [dcdb] notice: members: 2/3247
Nov 3 12:25:31 sul-node0002 kernel: dlm: closing connection to node 1
Nov 3 12:25:31 sul-node0002 corosync[3397]: [MAIN ] Completed service synchronization, ready to provide service.
Nov 3 12:25:31 sul-node0002 fenced[3603]: fencing node sul-node0001
Nov 3 12:25:32 sul-node0002 fence_ipmilan: Parse error: Ignoring unknown option 'nodename=sul-node0001
Nov 3 12:25:47 sul-node0002 fenced[3603]: fence sul-node0001 success
Nov 3 12:25:48 sul-node0002 rgmanager[3888]: Starting stopped service pvevm:100
Nov 3 12:25:48 sul-node0002 rgmanager[9284]: [pvevm] Move config for VM 100 to local node
Nov 3 12:25:48 sul-node0002 pvevm: <root@pam> starting task UPID:sul-node0002:00002458:00050553:5457663C:qmstart:100:root@pam:
Nov 3 12:25:48 sul-node0002 task UPID:sul-node0002:00002458:00050553:5457663C:qmstart:100:root@pam:: start VM 100: UPID:sul-node0002:00002458:00050553:5457663C:qmstart:100:root@pam:
Nov 3 12:25:48 sul-node0002 task UPID:sul-node0002:00002458:00050553:5457663C:qmstart:100:root@pam:: VM is locked (backup)
Nov 3 12:25:48 sul-node0002 pvevm: <root@pam> end task UPID:sul-node0002:00002458:00050553:5457663C:qmstart:100:root@pam: VM is locked (backup)
Nov 3 12:25:48 sul-node0002 rgmanager[3888]: start on pvevm "100" returned 1 (generic error)
Nov 3 12:25:48 sul-node0002 rgmanager[3888]: #68: Failed to start pvevm:100; return value: 1
Nov 3 12:25:48 sul-node0002 rgmanager[3888]: Stopping service pvevm:100
Nov 3 12:25:49 sul-node0002 rgmanager[9306]: [pvevm] VM 100 is already stopped
Nov 3 12:25:49 sul-node0002 rgmanager[3888]: Service pvevm:100 is recovering
Nov 3 12:25:49 sul-node0002 rgmanager[3888]: #71: Relocating failed service pvevm:100
Nov 3 12:25:49 sul-node0002 rgmanager[3888]: Service pvevm:100 is stopped
There's nothing extraordinary in the storage server's logs; nfsd doesn't get messed up and everything seems perfectly fine on that side. Could it be that the 10Gbit link gets over-saturated and drops some vital packets, which in turn leads node0002 to believe that node0001 has gone down? This seems both possible and highly unlikely to me, because peak HDD transfers on the storage server oscillate around 350MB/s, which shouldn't choke those Intel 82599s, right?
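If it is saturation, one mitigation we're considering is capping vzdump's bandwidth in /etc/vzdump.conf (assuming I've understood the bwlimit option correctly; the value below, in KB/s, is just a first guess at roughly 100 MB/s):
Code:
# /etc/vzdump.conf
bwlimit: 100000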
Could you please provide me with any tips?
↧
November 3, 2014, 6:13 am
Hey Guys,
I have some problems with my server, so I hope you can help me. (Sorry for my English, it's not the best. ;) )
I have Proxmox VE 3.3 configured with eth0 set up manually and vmbr0 with its bridge ports set to eth0:
Quote:
auto lo
iface lo inet loopback
iface eth0 inet manual
auto vmbr0
iface vmbr0 inet static
address 188.165.xxx.xxx
netmask 255.255.255.0
gateway 188.165.xxx.xxx
bridge_ports eth0
bridge_stp off
bridge_fd 0
auto vmbr2
iface vmbr2 inet static
address 192.168.0.1
netmask 255.255.255.0
network 192.168.0.0
broadcast 192.168.0.255
bridge_ports none
bridge_stp off
bridge_fd 0
All of the VMs' network adapters are attached to vmbr2, so I push the traffic from vmbr2 (the VMs' "local" network) out through vmbr0 (and thus eth0):
Quote:
-A POSTROUTING -s 192.168.0.0/24 -o vmbr0 -j MASQUERADE
The VMs can all reach the web and can talk to each other on the vmbr2 network (192.168.0.0/24).
vmbr0 forwards some ports to the VMs via iptables PREROUTING DNAT rules like this:
Quote:
-A PREROUTING -i vmbr0 -p udp -m udp --dport 9987 -j DNAT --to-destination 192.168.0.101:9987 (this is my TeamSpeak port; the other ports are forwarded to the VMs in exactly the same way)
This works very well: I can have 10 VMs with their SSH ports forwarded, so internally I can use them normally and externally I can reach them all through my single IP address.
BUT now that I have a VPN (OpenVPN) server on one of the VMs, I can't connect over the web to the other VMs while on the VPN.
What I mean is:
Laptop with VPN (client) --> Internet --> VPN (server) (VM on the Host) --> Internet --> Host (My 1 IP i have) --> (PREROUTE/DNAT) --> another VM on my Host
This doesn't work and I don't know why. ^^
Hope you can help me.
Hopefully
ChoosenEye aka Ivo
↧
November 3, 2014, 6:51 am
Hi There
I was hoping someone might be able to tell me how to set default values during VM creation. For instance, I would like disks to default to VirtIO and raw rather than IDE and qcow2, and I would like the default NIC to be VirtIO.
Thanks for any help
James
↧
November 3, 2014, 7:12 am
Hi There
For various reasons I really need to disable thin provisioning of disks for the raw and qcow2 formats. I can't seem to find an option to do this; is there one?
Thanks
↧
November 3, 2014, 7:27 am
I'm trying to configure OVS VLANs (it's not working with Linux VLANs either) that are accessible across my Proxmox cluster.
Searching around for Proxmox and VLANs, I find several different examples of how to set them up. Most show setting up bonding, and I plan on doing that too, but I've been waiting on our new L3 switch. So in the meantime I've just set up a crossover cable on the second NIC to practice getting this configured. While I can get the virtual switch configured on eth1, and VMs connected to that virtual switch can communicate with each other, they do not communicate with the other Proxmox server, which has the same VLAN configuration.
My setup is: vmbr1 is configured as a bridge with both vlan20 and eth1 as ports. I've configured this several different ways based on example configurations I've found online, but none of them seem to pass packets over the NIC to the other server. The crossover cable itself does work: if I just assign an IP address to eth1 on both servers, they communicate fine.
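Roughly, the variant I'm testing at the moment looks like the following (addresses are placeholders, and I may well be misusing OVSIntPort here):
Code:
allow-vmbr1 eth1
iface eth1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1

allow-vmbr1 vlan20
iface vlan20 inet static
    address 10.20.0.1
    netmask 255.255.255.0
    ovs_type OVSIntPort
    ovs_bridge vmbr1
    ovs_options tag=20

auto vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports eth1 vlan20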
I'm trying to figure out what I'm doing wrong that it does not forward packets. :/
↧
November 3, 2014, 7:52 am
Hi, when I reinstalled my Proxmox to get it working, I followed someone's commands, which left the root disk with only around 10 GB. Today when I got home I got loads of errors because that disk was full.
I ran apt-get clean and freed up around 120 MB (it's still about 98% full).
Code:
/dev/fuse 30M 16K 30M 1% /etc/pve
root@Caelus:/# du -ax --max-depth=3 / | sort -n | awk '{if($1 > 102400) print $1/1024 "MB" " " $2 }'
106.926MB /lib/modules/2.6.32-32-pve
106.93MB /lib/modules
125.445MB /usr/bin
132.203MB /var
165.723MB /lib
218.043MB /usr/lib
253.891MB /usr/share
664.152MB /usr
981.985MB /
What I was thinking of doing is moving /usr onto one of the storage mounts that has terabytes of disk space.
How would I go about doing this in the best and simplest way?
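What I had in mind was roughly the following (copy everything over, then bind-mount over the old path), though I'm not at all sure this is safe for /usr specifically, since /storage would have to be mounted before anything needs /usr at boot:
Code:
rsync -aHAX /usr/ /storage/usr/                     # copy, preserving hardlinks, ACLs and xattrs
echo '/storage/usr  /usr  none  bind  0  0' >> /etc/fstab
mount -o bind /storage/usr /usr                     # or just reboot to pick up the fstab entry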
Additional information that might be useful:
Code:
root@Caelus:/# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 1.6G 552K 1.6G 1% /run
/dev/mapper/pve-root 9.9G 9.2G 192M 98% /
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 3.2G 28M 3.1G 1% /run/shm
/dev/mapper/pve-data 67G 8.3G 59G 13% /var/lib/vz
/dev/sda2 494M 36M 433M 8% /boot
/dev/md0 7.3T 560G 6.7T 8% /storage
tmpfs 7.2G 0 7.2G 0% /tmp
/dev/fuse 30M 16K 30M 1% /etc/pve
Code:
root@Caelus:/# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=2035237,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=1630212k,mode=755)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro,barrier=1,data=ordered)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=3260400k)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/mapper/pve-data on /var/lib/vz type ext4 (rw,relatime,barrier=1,data=ordered)
/dev/sda2 on /boot type ext4 (rw,relatime,barrier=1,data=ordered)
/dev/md0 on /storage type ext4 (rw,relatime,barrier=0,stripe=512,data=ordered)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,relatime,size=7454720k)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
beancounter on /proc/vz/beancounter type cgroup (rw,relatime,blkio,name=beancounter)
container on /proc/vz/container type cgroup (rw,relatime,freezer,devices,name=container)
fairsched on /proc/vz/fairsched type cgroup (rw,relatime,cpuacct,cpu,cpuset,name=fairsched)
↧
November 3, 2014, 10:00 am
Hi,
I'm trying to configure an OpenVZ container in Proxmox.
My container can ping my server, but it can't access the internet. I'm not very good at networking, so could someone check whether my bridge is correctly configured?
Host configuration (/etc/network/interfaces):
Code:
iface eth0 inet static
address my.host.IP.adress
netmask 255.255.255.0
network x.x.x.x
broadcast x.x.x.255
gateway x.x.x.254
auto vmbr0
iface vmbr0 inet static
address 10.10.10.1
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
post-up echo > & /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -1 POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
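(Rereading this, I suspect the two post-up lines above aren't even valid and should perhaps read as follows, but I'm not sure of the syntax:)
Code:
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE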
In the Proxmox CT network settings I chose routed mode (venet) with the address 10.10.10.2.
Can anyone tell me what's wrong?
Thanks
↧
November 3, 2014, 11:40 am
Hey there,
I was wondering if someone could help with a question that has been chasing me for about a week now.
I need to set up a server with Proxmox. The server has 3 NICs and 3 public IPv4 addresses.
I need to achieve the following setup:
Code:
INTERNET ---NIC1(IPV4_1)---|                            |----VM1 (Linux)   [IPV4_1]
                           |                            |----VM2 (Linux)   [IPV4_2:80]
INTERNET ---NIC2(IPV4_2)---+----PROXMOX [IPV4_2:8006]---+
                           |                            |----VM3 (Linux)   [IPV4_2:8080]
INTERNET ---NIC3(IPV4_3)---|                            |----VM4 (Windows) [IPV4_3]
I hope my intention is clear from this scheme.
What should my network configuration look like for this setup?
This is what it currently looks like, but somehow with this configuration I lose all connectivity to the internet.
Do I have to set up routing at this point, since I'm using more than one NIC?
Code:
auto lo
iface lo inet loopback
#Nic1
auto eth1
allow-hotplug eth0
iface eth0 inet static
address 141.xxx.xxx.49
netmask 255.255.254.0
gateway 141.xxx.xxx.1
dns-nameservers 141.xx.xx.3 141.xx.xx.4
#Nic2
auto eth1
iface eth0 inet static
address 141.xxx.xxx.50
netmask 255.255.254.0
gateway 141.xxx.xxx.1
dns-nameservers 141.xx.xx.3 141.xx.xx.4
#Nic3
auto eth2
iface eth2 inet static
address 141.xxx.xxx.51
netmask 255.255.254.0
gateway 141.xxx.xxx.1
dns-nameservers 141.xx.xx.3 141.xx.xx.4
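Or does it need to look more like the sketch below, with each NIC in its own bridge, only one default gateway on the host, and the VMs configuring IPV4_2/IPV4_3 themselves? (This is just my guess; addresses are placeholders.)
Code:
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual

auto vmbr0
iface vmbr0 inet static
    address 141.xxx.xxx.49
    netmask 255.255.254.0
    gateway 141.xxx.xxx.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0

auto vmbr2
iface vmbr2 inet manual
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0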
I would appreciate any help.
MrBrown
↧
November 3, 2014, 3:13 pm
I tried to use the xtables-addons DKMS package from Debian. The iptables geoip match is important for blocking bad traffic to a server without maintaining huge IP lists.
Code:
dpkg-reconfigure xtables-addons-dkms   # no success
Code:
Done.
Loading new xtables-addons-1.42 DKMS files...
Building for 2.6.32-32-pve and 3.10.0-5-pve
Building initial module for 2.6.32-32-pve
Error! Build of xt_ACCOUNT.ko failed for: 2.6.32-32-pve (x86_64)
Consult the make.log in the build directory
/var/lib/dkms/xtables-addons/1.42/build/ for more information.
Code:
expr: syntax error
make: Entering directory `/usr/src/linux-headers-2.6.32-32-pve'
CC [M] /usr/src/xtables-addons-1.42/compat_xtables.o
CC [M] /usr/src/xtables-addons-1.42/xt_CHAOS.o
CC [M] /usr/src/xtables-addons-1.42/xt_DELUDE.o
CC [M] /usr/src/xtables-addons-1.42/xt_DHCPMAC.o
CC [M] /usr/src/xtables-addons-1.42/xt_DNETMAP.o
/usr/src/xtables-addons-1.42/xt_DNETMAP.c: In function 'dnetmap_tg':
/usr/src/xtables-addons-1.42/xt_DNETMAP.c:318:14: warning: unused variable 'net' [-Wunused-variable]
CC [M] /usr/src/xtables-addons-1.42/xt_ECHO.o
/usr/src/xtables-addons-1.42/xt_ECHO.c: In function 'echo_tg6':
/usr/src/xtables-addons-1.42/xt_ECHO.c:36:16: error: storage size of 'fl' isn't known
/usr/src/xtables-addons-1.42/xt_ECHO.c:99:37: error: implicit declaration of function 'flowi6_to_flowi' [-Werror=implicit-function-declaration]
/usr/src/xtables-addons-1.42/xt_ECHO.c:119:2: error: implicit declaration of function 'ip6_local_out' [-Werror=implicit-function-declaration]
/usr/src/xtables-addons-1.42/xt_ECHO.c:36:16: warning: unused variable 'fl' [-Wunused-variable]
cc1: some warnings being treated as errors
make[1]: *** [/usr/src/xtables-addons-1.42/xt_ECHO.o] Error 1
make: *** [_module_/usr/src/xtables-addons-1.42] Error 2
make: Leaving directory `/usr/src/linux-headers-2.6.32-32-pve'
Any idea how to solve this? Thanks a lot!
↧
November 3, 2014, 10:30 pm
Hello sir, today I bought a dedicated server with a Debian 6 64-bit OS and I want to install Proxmox on it. Please tell me the installation process in detail, step by step, and if possible please send me a video tutorial. Please, sir, I am a noob at this. Thanks.
↧
November 3, 2014, 11:52 pm
We just released hotfix 3.1-5892 for our Proxmox Mail Gateway 3.1.
Download
http://www.proxmox.com/downloads/category/service-packs
Release Notes
03.11.2014: Proxmox Mail Gateway 3.1-5892
- proxmox-mailgateway (3.1-16)
- disable SSLv3 to protect against POODLE attacks
- rsyslog update
- bash update
04.09.2014: Proxmox Mail Gateway 3.1-5874
- proxmox-mailgateway (3.1-15)
- fix problem with 7z unpacker
29.08.2014: Proxmox Mail Gateway 3.1-5866
- proxmox-mailgateway (3.1-14)
- use 7z to unpack zip archives
- Spamassassin updates (3.4.0-1)
04.06.2014: Proxmox Mail Gateway 3.1-5853
- proxmox-mailgateway (3.1-13)
- apache ldap auth: correctly update ldap database
- SAV update
- Spamassassin rule updates
03.03.2014: Proxmox Mail Gateway 3.1-5829
- proxmox-mailgateway (3.1-11)
- improve email parser and quote links to avoid XSS reflection
- fix menu activation with firefox
- proxmox-spamassassin (3.3.2-4), updated ruleset
06.06.2013: Proxmox Mail Gateway 3.1-5773
- ClamAV 0.97.8
- Avira SAV bug fix (license check)
- Spamassassin rule updates
19.04.2013: Proxmox Mail Gateway 3.1-5741
- fix admin permissions for cluster nodes (admin can now reboot slaves)
07.11.2012: Proxmox Mail Gateway 3.1-5695
- Add the possibility to flush the address verification database
27.09.2012: Proxmox Mail Gateway 3.1-5673
- named: use forward only mode (always contact configured servers)
14.09.2012: Proxmox Mail Gateway 3.1-5670
- Commtouch AV updates
- Small bug fixes
__________________
Best regards,
Martin Maurer
↧
November 4, 2014, 1:02 am
Hi,
I'd really like to start a thread for everyone who is searching for a new RAID controller. It would be very nice if everyone could post their controller type, the hard drives used, and the performance measurements for that setup.
I hope many people reply and share this information with others, to make hardware purchasing decisions easier.
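To keep the numbers comparable, it would help if everyone also included the output of pveperf run against their VM storage, e.g.:
Code:
pveperf /var/lib/vz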
↧
November 4, 2014, 1:12 am
Hi guys,
I am in the process of trying to set up a Proxmox network configuration with the following requirements:
- Multiple dev environments with *no* public facing IP address - all sharing a single IP using NAT. This is working perfectly using the configuration below.
- One single VM using a DEDICATED IP, which is external-facing and attached only to this VM. I would prefer this to be tied as directly as possible to the interface. The catch is that this server has only one physical NIC, so I have two IPs routed over one NIC (hence the attempt below to add the second IP as an alias).
If anyone could point me in the right direction I would greatly appreciate it! :)
I know how to do this with eth0:0 type alias interfaces on Debian but I'm not sure how to tie that in with Proxmox.
Code:
# cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
broadcast 94.76.xxx.xxx
network 94.76.xxx.xx
dns-nameservers 217.112.xx.xxx 217.112.xx.xxx
auto vmbr0
iface vmbr0 inet static
address 94.76.xxx.xxx
netmask 255.255.255.192
gateway 94.76.xxx.xxx
bridge_ports eth0
bridge_stp off
bridge_fd 0
post-up ip addr add 85.234.xxx.xxx/32 dev vmbr0
post-down ip addr del 85.234.xxx.xxx/32 dev vmbr0
auto vmbr2
iface vmbr2 inet static
address 10.0.0.1
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
post-up iptables -A POSTROUTING -t nat -s '10.0.0.0/24' -o vmbr0 -j MASQUERADE
post-up iptables -A POSTROUTING -t mangle -p udp --dport bootpc -j CHECKSUM --checksum-fill
post-down iptables -D POSTROUTING -t nat -s '10.0.0.0/24' -o vmbr0 -j MASQUERADE
post-down iptables -D POSTROUTING -t mangle -p udp --dport bootpc -j CHECKSUM --checksum-fill
↧
November 4, 2014, 1:56 am
Hi, if I use SPICE for a Kubuntu or openSUSE VM, Xorg inside the VM uses 100% CPU and the VM becomes unusable. I have the QXL driver and vdagent installed. After connecting to the VM everything is fine and smooth, but after a short time the CPU goes to 100%. If I set the VMware-compatible display and use noVNC, the VM runs fine.
# pveversion -v
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
Two servers in a cluster with DRBD, dual-socket Xeon E5-2620 v2, 6x 10k SAS HDDs (hardware RAID 10).
Is this a Proxmox bug? Or is there some additional VM setup needed?
Thanks.
↧