
Network lag

Hi,

I'm running Proxmox 2.3 with the latest updates installed (though I noticed the same issue on previous 2.3 builds).

When I ping an OpenVZ container (from another server) while the container has no network load, the first ping shows high latency:
Code:

# ping vm-centos6-2
PING vm-centos6-2.domain.tld (172.26.200.250) 56(84) bytes of data.
64 bytes from vm-centos6-2.domain.tld (172.26.200.250): icmp_seq=1 ttl=64 time=227 ms
64 bytes from vm-centos6-2.domain.tld (172.26.200.250): icmp_seq=2 ttl=64 time=0.096 ms
64 bytes from vm-centos6-2.domain.tld (172.26.200.250): icmp_seq=3 ttl=64 time=0.088 ms
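
For reference, whether the pinging host still has an ARP entry for the container can be checked with something like the following (purely illustrative; this uses arp from net-tools):
Code:

# arp -n | grep 172.26.200.250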

Digging deeper, it appears that the delay occurs before the ARP reply packet:
Code:

17:32:24.289426 ARP, Request who-has 172.26.200.250 tell 172.26.1.14, length 46
17:32:25.040691 ARP, Reply 172.26.200.250 is-at 00:30:48:79:20:d6, length 28
17:32:25.040763 IP 172.26.1.14 > 172.26.200.250: ICMP echo request, id 36915, seq 1, length 64
17:32:25.040812 IP 172.26.200.250 > 172.26.1.14: ICMP echo reply, id 36915, seq 1, length 64
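
(For completeness, a trace like the one above can be captured along these lines; the interface name is an assumption:)
Code:

# tcpdump -n -i vmbr0 'arp or icmp'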

Code:

# cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
        address  172.26.1.36
        netmask  255.255.0.0
        gateway  172.26.1.1
        bridge_ports eth0.1
        bridge_stp off
        bridge_fd 0

auto vmbr12
iface vmbr12 inet static
        address  172.27.1.36
        netmask  255.255.0.0
        bridge_ports eth0.12
        bridge_stp off
        bridge_fd 0
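
In case it is relevant, the bridge and neighbour state on the host can be inspected with something like this (commands for illustration only):
Code:

# brctl show vmbr0
# brctl showmacs vmbr0
# ip neigh show dev vmbr0

brctl showmacs lists an ageing timer per MAC, which should show whether the container's entry has expired from the bridge forwarding table while the container is idle.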

Any ideas why this happens? Is there anything that could be tweaked to fix it?
