Channel: Proxmox Support Forum

Basic public IP config for guests?

I have a default install of Proxmox 3.3. I have told my ISP that I intend to virtualize my hardware, and asked them if they are OK with having multiple MAC addresses on the same NIC. They say they are. I am currently using 5 public IPv4 addresses on a physical machine which I intend to replace with new hardware running Proxmox.

I intend to do a P2V of the Ubuntu Server install on the old box into a VM on the new one. I'll then need to replicate the networking on that guest, which currently looks like this:


Code:

auto lo
iface lo inet loopback


auto eth0
iface eth0 inet static
        address xx.xx.40.154
        netmask 255.255.255.0
        network xx.xx.40.0
        broadcast xx.xx.40.255
        gateway xx.xx.40.1


auto eth0:0
iface eth0:0 inet static
        address xx.xx.40.155
        netmask 255.255.255.0


auto eth0:1
iface eth0:1 inet static
        address xx.xx.40.156
        netmask 255.255.255.0


In this case, what do I put in /etc/network/interfaces if I want to run KVM guests? Right now it has the default of:

Code:

auto lo
iface lo inet loopback
iface eth0 inet manual
 
auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

So I assume I replace the vmbr0 address with that of the host (which is one of the public addresses I have), and the netmask and gateway provided by my ISP.
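For a bridged setup like this, one approach (a minimal sketch; the xx.xx.40.x values are placeholders for the ISP-supplied address, netmask and gateway, and which of the five public IPs goes on the host is your choice) is to give vmbr0 one public address and let each guest configure its own public IPs, exactly as on the old physical box:

Code:

auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        # one of your five public IPs (placeholder)
        address xx.xx.40.150
        netmask 255.255.255.0
        gateway xx.xx.40.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

With the bridge transparent like this, each KVM guest appears on the ISP's network with its own MAC address, so a P2V'd guest can in principle keep its existing static configuration unchanged.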

Can I keep the networking config the same on the guest after the P2V is complete? Will that work? I ask because I’m only going to get one chance to set this up in the data centre.

Thanks for any help. For some reason I can't find any info on this that doesn't involve more complicated things like firewalls and such, which I don't have.

127.0.53.53 Help with network

Hello!

I have a problem with an OpenVZ VM on my Proxmox server.
Proxmox version: 3.2-1/1933730b

I set up an external IP on the VM:

root@alfatell:~# vzlist
CTID NPROC STATUS IP_ADDR HOSTNAME
109 73 running 217.23.86.148 voip.prod


But when an app inside the VM tries to connect to external resources (for example SIP registration), all traffic goes to the lo interface and all connections are NATed to the address 127.0.53.53.
I don't want NAT to 127.0.53.53. How can I fix this?


root@alfatell:~# vzctl enter 109
entered into CT 109
[root@voip /]# tcpdump -i lo -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes
13:10:08.291244 IP 127.0.0.1.37205 > 127.0.0.1.agentx: Flags [S], seq 205119582, win 32792, options [mss 16396,sackOK,TS val 224785158 ecr 0,nop,wscale 7], length 0
13:10:08.291258 IP 127.0.0.1.agentx > 127.0.0.1.37205: Flags [R.], seq 0, ack 205119583, win 0, length 0
13:10:16.536140 IP 127.0.0.1.35932 > 127.0.0.1.5038: Flags [S], seq 3536973848, win 32792, options [mss 16396,sackOK,TS val 224793402 ecr 0,nop,wscale 7], length 0
13:10:16.536153 IP 127.0.0.1.5038 > 127.0.0.1.35932: Flags [S.], seq 1690861501, ack 3536973849, win 32768, options [mss 16396,sackOK,TS val 224793402 ecr 224793402,nop,wscale 7], length 0
13:10:16.536160 IP 127.0.0.1.35932 > 127.0.0.1.5038: Flags [.], ack 1, win 257, options [nop,nop,TS val 224793402 ecr 224793402], length 0
13:10:16.536233 IP 127.0.0.1.35932 > 127.0.0.1.5038: Flags [F.], seq 1, ack 1, win 257, options [nop,nop,TS val 224793403 ecr 224793402], length 0
13:10:16.536306 IP 127.0.0.1.5038 > 127.0.0.1.35932: Flags [.], ack 2, win 256, options [nop,nop,TS val 224793403 ecr 224793403], length 0
13:10:16.536421 IP 127.0.0.1.5038 > 127.0.0.1.35932: Flags [P.], seq 1:28, ack 2, win 256, options [nop,nop,TS val 224793403 ecr 224793403], length 27
13:10:16.536446 IP 127.0.0.1.35932 > 127.0.0.1.5038: Flags [R], seq 3536973850, win 0, length 0
13:10:19.021754 IP 127.0.53.53.sip > 127.0.53.53.sip: SIP, length: 517
13:10:19.021777 IP 127.0.53.53.sip > 127.0.53.53.sip: SIP, length: 3
13:10:19.021834 IP 127.0.53.53.sip > 127.0.53.53.sip: SIP, length: 464
13:10:23.306396 IP 127.0.0.1.37208 > 127.0.0.1.agentx: Flags [S], seq 775776305, win 32792, options [mss 16396,sackOK,TS val 224800173 ecr 0,nop,wscale 7], length 0
13:10:23.306422 IP 127.0.0.1.agentx > 127.0.0.1.37208: Flags [R.], seq 0, ack 775776306, win 0, length 0
^C
14 packets captured
28 packets received by filter


Thank you!

memory usage

Corosync Error

Hi there;

I want to set up a two-node configuration:

cl1 = 192.168.123.10
cl2 = 192.168.123.11

I changed the hosts file and ping works fine!

Then I created a cluster on cl1 and added cl2,

following this tutorial:
http://www.jamescoyle.net/how-to/911...luster-proxmox

but I got a quorum failure...

so I ran pvecm expected 1.


pvecm status on cl1:

Quote:

root@cl1:/etc# pvecm status
Version: 6.2.0
Config Version: 2
Cluster Name: cl1proxmoxclu
Cluster Id: 5801
Cluster Member: Yes
Cluster Generation: 301100
Membership state: Cluster-Member
Nodes: 1
Expected votes: 1
Total votes: 1
Node votes: 1
Quorum: 1
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: cl1
Node ID: 1
Multicast addresses: 239.192.22.191
Node addresses: 192.168.123.10

pvecm status on cl2:

Quote:


root@cl2:~# pvecm status
cman_tool: Cannot open connection to cman, is it running ?
So next I tried to start it manually:
Code:

service cman start
Quote:

root@cl2:~# service cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... /usr/sbin/ccs_config_validate: line 186: 3197 Segmentation fault (core dumped) ccs_config_dump > $tempfile


Unable to get the configuration
corosync [MAIN ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
corosync [MAIN ] Corosync built-in features: nss
corosync [MAIN ] Successfully read config from /etc/cluster/cluster.conf
corosync died with signal: 11 Check cluster logs for details
[FAILED]
Please, can you help me?

Is it possible to run Proxmox 3.1 on node cl1 and 3.3 on node cl2?

Greetings from Germany!

stephan

Spice & Custom Self Signed Certificate

Hi!

Just updated my Proxmox certs (self-signed) as described in https://pve.proxmox.com/wiki/HTTPSCe...eConfiguration without any problems - the browser interface and everything else works fine, except connecting to my Win7 VM through SPICE. I had it completely working before.

Debugging the SPICE connection gives me a self-signed certificate error:

Quote:

C:\Program Files\VirtViewer v2.0256\bin>remote-viewer.exe Download.vv --debug
C:\Program Files\VirtViewer v2.0256\bin>(remote-viewer.exe:256): remote-viewer-DEBUG: fullscreen display 0: 0
(remote-viewer.exe:256): remote-viewer-DEBUG: Opening display to Download.vv
(remote-viewer.exe:256): remote-viewer-DEBUG: Guest (null) has a spice display
(remote-viewer.exe:256): remote-viewer-DEBUG: After open connection callback fd=-1
(remote-viewer.exe:256): remote-viewer-DEBUG: Opening connection to display at Download.vv
(remote-viewer.exe:256): remote-viewer-DEBUG: New spice channel 0000000001026010 SpiceMainChannel 0
(remote-viewer.exe:256): remote-viewer-DEBUG: notebook show status 000000000101B840
((null):256): Spice-Warning **: ../../../spice-common/common/ssl_verify.c:429:openssl_verify: Error in certificate chain verification: self signed certificate in certificate chain (num=19:depth1:/CN=Proxmox Virtual Environment/OU=17cb65412addccef86c0ce1865be41ce/O=PVE Cluster Manager CA)
(remote-viewer.exe:256): GSpice-WARNING **: main-1:0: SSL_connect: error:00000001:lib(0):func(0):reason(1)
(remote-viewer.exe:256): remote-viewer-DEBUG: Disposing window 0000000003B450A0
(remote-viewer.exe:256): remote-viewer-DEBUG: Set connect info: (null),(null),(null),-1,(null),(null),(null),0
Can't really figure out how to handle that error...

Also tried copying the CA to %APPDATA%\spicec\spice_truststore.pem, but no change...

Any hints on how to get this back working?

4 network segments on one machine and VLANs

Hi! I've been fighting with this since last week and it has utterly evaded me so far.

Our hosting provider has given us Proxmox machines before; both times they put the IP segment for the VMs on the same subnet as the Proxmox machine itself, and configuration was a cinch, giving the VMs the --ipaddress directly.

This time they gave us a new setup: the host machine has one IP, and we got 3 other small network segments (two of 32 IPs and one of 64), each on its own VLAN with its own netmask/gateway, on a second NIC.

I have been following guides all over the net, from the ones in the wiki (here, here and here) to others everywhere on the web, and still the VMs can't ping anything.

It seems there isn't any up-to-date documentation for this case; even the wiki mostly references very old information.

Can somebody give me some info on how to do this correctly, down to the exact VM network configuration? All I have clear is that these more complex configs use veth rather than venet, and that bonding may be required (although it didn't seem to help). Once I have one VM online I can take it from there.

The host is running Proxmox 3.3 and the VMs are all going to be Debian 7. If more information is needed I can provide it.
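As a sketch of one common layout (the names and numbers here are assumptions, not values from the post: eth1 as the second NIC, VLAN IDs 100/200/300, one bridge per segment; substitute the provider's real VLAN tags), the host can create a VLAN sub-interface per segment on the second NIC and bridge each one, then attach each container's veth to the bridge for its segment:

Code:

auto eth1.100
iface eth1.100 inet manual

auto vmbr1
iface vmbr1 inet manual
        bridge_ports eth1.100
        bridge_stp off
        bridge_fd 0

# repeat the pattern for eth1.200/vmbr2 and eth1.300/vmbr3

This relies on the vlan package (8021q module) being present on the host. The bridges carry no host IP; each guest configures the address, netmask and gateway of its own segment internally, so the tagging stays invisible to the VM.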

Host doesn't boot after creating a Win8/2012 guest

Hello...


We have only one guest (Windows Server 2012), which I migrated from a VMware guest. The guest ran the whole day without problems, so I activated the auto-start option.
After a restart, the host hangs during the boot process with the message "Waiting for vmbr0 to get ready (MAXWAIT 2 seconds)."


The web interface and console are not accessible.


I tried to boot from a Knoppix CD, but I failed to mount the system disk.


How can I fix this?

Proxmox WebGUI Issues

Proxmox friends,
I have been using Proxmox for a long time, but recently my web GUI becomes unreachable intermittently for a few minutes at a time; after a few minutes I can access it again. Could any of you guide me on where to look exactly (I mean which log files)? If any of you have faced the same problem, can you post the solution?




Thanks
Persevere

General Proxmox Setup Advice Needed

Hello,

I am starting to set up Proxmox for testing; I am relatively new to this and have a few questions about how best to proceed.

I am using 3 x DL360 G6, with 1 TB WD VelociRaptor drives to store VMs and Samsung 120 GB enterprise SSDs for the OS.

1. I am trying to decide whether it's best to set up my system as a plain cluster or as HA, using local drives or DAS.

Right now I want to go with a plain cluster, since it seems easier to set up and manage, and potentially more reliable. HA introduces added complexity, and if I use bacula4hosts incremental backup then I believe I can build a system that lets me restore VMs quickly, giving me reliability as close to HA as I can get.

Additionally, a plain cluster provides greater storage capacity, since I don't have to use disks to store copies of all running VMs across all nodes.

2. Is HA more complex than a plain cluster configuration, and if so, less reliable?

Any feedback or insights on the above would be greatly appreciated.

3. Should I RAID the SSDs storing the Proxmox/Debian OS?

4. Should I RAID my storage drives?

5. Is Proxmox robust enough to handle retail VPS hosting?

6. Has anyone used bacula4hosts for incremental backup? What has your experience with b4h been like?


Thanks in advance!
G

Windows Server 2012 R2 not shutting down

Juniper or Netgear Switch and Fencing ?

Hi guys,

Has anyone tried to do fencing with a Juniper SRX100 or another Juniper device?
Will this go smoothly with Junipers or not?

Cluster broken after netsplit - rgmanager fails to stop, dlm_controld too

Hello,

we've had a 2-node cluster running privately and recently went to a 4-node cluster. The initial setup went smoothly; DRBD and the fencing devices are all fine (and working!). Last week we decided to change the VLAN for the eth1 interface on all machines simultaneously, i.e. we caused a very, very short netsplit on the whole cluster.

Corosync noticed this but recovered quickly. But the toxic duo of rgmanager and/or dlm_controld seems unforgiving. rgmanager has been down on all machines since then:

Code:

clustat
Cluster Status for Proxmox @ Tue Feb 10 12:10:17 2015
Member Status: Quorate


 Member Name                                              ID  Status
 ------ ----                                              ---- ------
 server-01                                                      1 Online
 server-02                                                      2 Online
 server-03                                                      3 Online, Local
 server-04                                                      4 Online

Stopping rgmanager via the init script hangs and fails every time; it's only possible with killall -9. Same thing with dlm_controld:

Code:

service cman stop
Stopping cluster:
  Leaving fence domain... [  OK  ]
  Stopping dlm_controld...
[FAILED]

On startup we got the following message:

Code:

dlm: rgmanager: group join failed -512 0
And

Code:

dlm_controld process_uevent online@ error -17 errno 11
We've started rgmanager in the foreground with the debugging switch on. It gives no output at all.

Rebooting the nodes doesn't fix this, and we're wondering why a reboot is the supposed solution. We have several clusters running Corosync + Pacemaker, and a reboot was never ever the solution for cluster issues.

Any hints are greatly appreciated.

Two gateways for Proxmox VE 3.3

Hello everyone!
I have 2 providers with 2 external IPs:
1st: very fast, but only 5 GB of traffic
2nd: slow, but unlimited.

I want to use the 1st provider for the Proxmox web management interface and some other services,

and the 2nd provider for the VM network: updates/upgrades, web services and so on.

I made 3 VLANs for this:
Vlan 200 - eth0 - vmbr0 - 1st provider
Vlan 300 - eth1 - vmbr1 - 2nd provider
Vlan 400 - dummy0 - vmbr40 - local network
It all works with only 1 gateway. Can I separate the networks?
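Separating the two uplinks needs source-based policy routing on top of the single default gateway: traffic sourced from the second provider's address has to leave via the second provider's gateway. A minimal sketch (the table name isp2, the 203.0.113.x addresses and the gateway are illustrative assumptions, not values from the post; the table must first be declared with a line like "100 isp2" in /etc/iproute2/rt_tables):

Code:

auto vmbr1
iface vmbr1 inet static
        address 203.0.113.10
        netmask 255.255.255.0
        post-up ip rule add from 203.0.113.10 table isp2
        post-up ip route add default via 203.0.113.1 table isp2
        pre-down ip rule del from 203.0.113.10 table isp2

The main routing table keeps the first provider's default gateway for management traffic, while connections and replies sourced from the second provider's address follow the isp2 table and its own gateway.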

Two-node Proxmox cluster with ZFS and DRBD support

Hi,

I have created a two-node Proxmox cluster and configured ZFS and DRBD.

I followed this documentation.

http://jsmcomputers.biz/wp/?p=559

I noticed there is an option to add ZFS directly in the web GUI. Does that method support live migration? Or do I need to combine DRBD and ZFS to enable live migration?

IPv6

Is there a how-to for using IPv6 in containers? It seems to work fine for bridged containers, but I would also like it on venet-based containers.

Issues with LACP and VMs

Hey all, I'm having trouble configuring a VM bridge on top of a LACP interface. I'll post my /etc/network/interfaces below, as I'm sure that will help. I'm using a TP-Link TL-SG1016DE with a 2-port trunk, connecting to a PRO/1000 dual-port NIC. I'm not having any trouble with the interface or the actual LACP setup itself: I can reach the node's IP address (192.168.1.51) assigned to the bond and get both SSH and the web UI. My issue is that when I switch a KVM machine (I don't have any OpenVZ set up, so I haven't tried that) over to vmbr1, which I set up on top of bond0, it can't send or receive any packets. The minute I switch it back to vmbr0, which is set up on the onboard NIC (a Realtek, if memory serves), packets flow fine. It's not a huge issue, as I'm using this in a lab environment, but I'd like to be able to stop using the onboard NIC for anything outside my management VLAN.

# cat /etc/network/interfaces
Code:

# network interface settings
auto lo
iface lo inet loopback


iface eth0 inet manual


iface eth1 inet manual


iface eth2 inet manual


auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond-miimon 100
        bond-mode 802.3ad
        bond-downdelay 200
        bond-updelay 200
        bond-lacp-rate 4


auto vmbr0
iface vmbr0 inet static
        address  192.168.1.50
        netmask  255.255.254.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0


auto vmbr1
iface vmbr1 inet static
        address  192.168.1.51
        netmask  255.255.254.0
        gateway  192.168.0.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

I'm sure I'm missing something simple. I'm still very new to non-basic network topics (anything above subnetting, to be honest).
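Two things in the config above are worth checking. First, bond-lacp-rate takes only slow/fast (0/1), so the value 4 is invalid. Second, vmbr0 and vmbr1 both have addresses in the same 192.168.0.0/23 network, which puts the host on one subnet twice and can send return traffic out the wrong interface. A hedged sketch of a corrected bond0/vmbr1 pair, keeping the post's addresses (treat this as a suggestion, not a verified fix):

Code:

auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond-miimon 100
        bond-mode 802.3ad
        bond-lacp-rate 1

auto vmbr1
iface vmbr1 inet static
        address  192.168.1.51
        netmask  255.255.254.0
        gateway  192.168.0.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

It's also worth confirming whether the TL-SG1016DE trunk is really an 802.3ad (LACP) group rather than a static trunk; if the switch only does static trunking, a static bond mode such as balance-xor is the matching host-side setting instead of 802.3ad.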

Thanks,
Sgt

Cron daemon errors

Good morning,

today, for the second time since we updated PMG to version 4.0, we received six emails from the mail gateway with:

Subject: Cron <root@mailgw> test -x /usr/lib/atsar/atsa1 && /usr/lib/atsar/atsa1

Content of the message: rm: cannot remove `/var/log/atsar': Is a directory


Am I missing something? Today I also applied the patch "hotfix_4.0-eb35aa9e.bin".

Any advice would be appreciated.

Thank you

Ceph Calamari - Vagrant base for Proxmox Debian

Does anyone know where to find the right Vagrant base box for building Debian packages of Ceph Calamari for Proxmox's Debian?

tg3 timeouts with KVM

Recently I've received several Dell R730 servers at Hetzner for an existing project. Unfortunately the built-in NICs don't play well with KVM virtual machines: as soon as I put some traffic (20-50 Mbps) on a VM, the host dumps a backtrace and disables the interface. The problem isn't new and is well described on various mailing lists, but I couldn't find any reliable workaround other than a new NIC (Intel works great). Am I missing something, or is the problem just not worth fighting? The host works fine without KVM guests; there's no problem with OpenVZ guests using veth interfaces, or with no VMs at all.

Here's an excerpt from the logs:
Code:

Feb 11 08:53:24 s13 kernel: tap116i0: no IPv6 routers present
Feb 11 08:53:29 s13 kernel: vmbr1: port 2(tap116i0) entering learning state
Feb 11 08:53:44 s13 kernel: vmbr1: topology change detected, sending tcn bpdu
Feb 11 08:53:44 s13 kernel: vmbr1: port 2(tap116i0) entering forwarding state
Feb 11 08:55:24 s13 kernel: ------------[ cut here ]------------
Feb 11 08:55:24 s13 kernel: WARNING: at net/sched/sch_generic.c:267 dev_watchdog+0x28a/0x2a0() (Not tainted)
Feb 11 08:55:24 s13 kernel: Hardware name: PowerEdge R730
Feb 11 08:55:24 s13 kernel: NETDEV WATCHDOG: eth1 (tg3): transmit queue 0 timed out
Feb 11 08:55:24 s13 kernel: Modules linked in: dlm configfs xt_state ip_set vzethdev vznetdev pio_nfs pio_direct pfmt_raw pfmt_ploop1 ploop simfs vzrst nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 vzcpt nf_conntrack vzdquota vzmon vzdev ip6t_REJECT ip6table_mangle ip6table_filter ip6_tables xt_length xt_hl xt_tcpmss xt_TCPMSS iptable_mangle iptable_filter xt_multiport xt_limit xt_dscp ipt_REJECT ip_tables vhost_net tun macvtap macvlan nfnetlink_log kvm_intel nfnetlink kvm vzevent nfsd nfs nfs_acl auth_rpcgss fscache lockd sunrpc ipv6 ext2 fuse snd_pcsp iTCO_wdt iTCO_vendor_support snd_pcm snd_page_alloc snd_timer dcdbas snd soundcore lpc_ich mfd_core shpchp wmi power_meter ext4 jbd2 mbcache sg ahci tg3 ptp pps_core megaraid_sas [last unloaded: configfs]
Feb 11 08:55:24 s13 kernel: Pid: 0, comm: swapper veid: 0 Not tainted 2.6.32-34-pve #1
Feb 11 08:55:24 s13 kernel: Call Trace:
Feb 11 08:55:24 s13 kernel: <IRQ> [<ffffffff810733b7>] ? warn_slowpath_common+0x87/0xe0
Feb 11 08:55:24 s13 kernel: [<ffffffff810734c6>] ? warn_slowpath_fmt+0x46/0x50
Feb 11 08:55:24 s13 kernel: [<ffffffff8149e01a>] ? dev_watchdog+0x28a/0x2a0
Feb 11 08:55:24 s13 kernel: [<ffffffff81015319>] ? sched_clock+0x9/0x10
Feb 11 08:55:26 s13 kernel: [<ffffffff8106c3da>] ? scheduler_tick+0xfa/0x240
Feb 11 08:55:26 s13 kernel: [<ffffffff8149dd90>] ? dev_watchdog+0x0/0x2a0
Feb 11 08:55:26 s13 kernel: [<ffffffff81087b76>] ? run_timer_softirq+0x176/0x370
Feb 11 08:55:26 s13 kernel: [<ffffffff8107d24b>] ? __do_softirq+0x11b/0x260
Feb 11 08:55:26 s13 kernel: [<ffffffff8100c4cc>] ? call_softirq+0x1c/0x30
Feb 11 08:55:26 s13 kernel: [<ffffffff81010215>] ? do_softirq+0x75/0xb0
Feb 11 08:55:26 s13 kernel: [<ffffffff8107d525>] ? irq_exit+0xc5/0xd0
Feb 11 08:55:26 s13 kernel: [<ffffffff8156404a>] ? smp_apic_timer_interrupt+0x4a/0x60
Feb 11 08:55:26 s13 kernel: [<ffffffff8100bcd3>] ? apic_timer_interrupt+0x13/0x20
Feb 11 08:55:26 s13 kernel: <EOI> [<ffffffff812e8dcb>] ? intel_idle+0xdb/0x160
Feb 11 08:55:26 s13 kernel: [<ffffffff812e8da9>] ? intel_idle+0xb9/0x160
Feb 11 08:55:26 s13 kernel: [<ffffffff81446994>] ? cpuidle_idle_call+0x94/0x130
Feb 11 08:55:26 s13 kernel: [<ffffffff81009219>] ? cpu_idle+0xa9/0x100
Feb 11 08:55:26 s13 kernel: [<ffffffff81536001>] ? rest_init+0x85/0x94
Feb 11 08:55:26 s13 kernel: [<ffffffff81c33ce1>] ? start_kernel+0x3ff/0x40b
Feb 11 08:55:26 s13 kernel: [<ffffffff81c3333b>] ? x86_64_start_reservations+0x126/0x12a
Feb 11 08:55:26 s13 kernel: [<ffffffff81c3344d>] ? x86_64_start_kernel+0x10e/0x11d
Feb 11 08:55:26 s13 kernel: ---[ end trace 3eb6af1e220fb20d ]---
Feb 11 08:55:26 s13 kernel: Tainting kernel with flag 0x9
Feb 11 08:55:26 s13 kernel: Pid: 0, comm: swapper veid: 0 Not tainted 2.6.32-34-pve #1
Feb 11 08:55:26 s13 kernel: Call Trace:
Feb 11 08:55:26 s13 kernel: <IRQ> [<ffffffff81073269>] ? add_taint+0x69/0x70
Feb 11 08:55:26 s13 kernel: [<ffffffff810733d9>] ? warn_slowpath_common+0xa9/0xe0
Feb 11 08:55:26 s13 kernel: [<ffffffff810734c6>] ? warn_slowpath_fmt+0x46/0x50
Feb 11 08:55:26 s13 kernel: [<ffffffff8149e01a>] ? dev_watchdog+0x28a/0x2a0
Feb 11 08:55:26 s13 kernel: [<ffffffff81015319>] ? sched_clock+0x9/0x10
Feb 11 08:55:26 s13 kernel: [<ffffffff8106c3da>] ? scheduler_tick+0xfa/0x240
Feb 11 08:55:26 s13 kernel: [<ffffffff8149dd90>] ? dev_watchdog+0x0/0x2a0
Feb 11 08:55:26 s13 kernel: [<ffffffff81087b76>] ? run_timer_softirq+0x176/0x370
Feb 11 08:55:26 s13 kernel: [<ffffffff8107d24b>] ? __do_softirq+0x11b/0x260
Feb 11 08:55:26 s13 kernel: [<ffffffff8100c4cc>] ? call_softirq+0x1c/0x30
Feb 11 08:55:26 s13 kernel: [<ffffffff81010215>] ? do_softirq+0x75/0xb0
Feb 11 08:55:26 s13 kernel: [<ffffffff8107d525>] ? irq_exit+0xc5/0xd0
Feb 11 08:55:26 s13 kernel: [<ffffffff8156404a>] ? smp_apic_timer_interrupt+0x4a/0x60
Feb 11 08:55:26 s13 kernel: [<ffffffff8100bcd3>] ? apic_timer_interrupt+0x13/0x20
Feb 11 08:55:26 s13 kernel: <EOI> [<ffffffff812e8dcb>] ? intel_idle+0xdb/0x160
Feb 11 08:55:26 s13 kernel: [<ffffffff812e8da9>] ? intel_idle+0xb9/0x160
Feb 11 08:55:26 s13 kernel: [<ffffffff81446994>] ? cpuidle_idle_call+0x94/0x130
Feb 11 08:55:26 s13 kernel: [<ffffffff81009219>] ? cpu_idle+0xa9/0x100
Feb 11 08:55:26 s13 kernel: [<ffffffff81536001>] ? rest_init+0x85/0x94
Feb 11 08:55:26 s13 kernel: [<ffffffff81c33ce1>] ? start_kernel+0x3ff/0x40b
Feb 11 08:55:26 s13 kernel: [<ffffffff81c3333b>] ? x86_64_start_reservations+0x126/0x12a
Feb 11 08:55:26 s13 kernel: [<ffffffff81c3344d>] ? x86_64_start_kernel+0x10e/0x11d
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: transmit timed out, resetting
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: 0x00000000: 0x165f14e4, 0x00100406, 0x02000000, 0x00800000
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: 0x00000010: 0x91b0000c, 0x00000000, 0x91b1000c, 0x00000000
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: 0x00000020: 0x91b2000c, 0x00000000, 0x00000000, 0x1f5b1028
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: 0x00000030: 0xfffc0000, 0x00000048, 0x00000000, 0x0000020e
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: 0x00000040: 0x00000000, 0xe2000000, 0xc8035001, 0x64002008
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: 0x00000050: 0x818c5803, 0x78000000, 0x0086a005, 0x00000000
[many lines just like these]
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: 0: Host status block [00000005:000000e4:(0000:0723:0000):(0000:00ab)]
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: 0: NAPI info [000000e2:000000e2:(0097:00ab:01ff):0000:(073b:0000:0000:0000)]
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: 1: Host status block [00000001:0000000b:(0000:0000:0000):(0aff:0000)]
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: 1: NAPI info [000000d9:000000d9:(0000:0000:01ff):0acd:(02cd:02cd:0000:0000)]
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: 2: Host status block [00000001:00000015:(09e5:0000:0000):(0000:0000)]
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: 2: NAPI info [000000de:000000de:(0000:0000:01ff):09ae:(01ae:01ae:0000:0000)]
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: 3: Host status block [00000001:0000003a:(0000:0000:0000):(0000:0000)]
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: 3: NAPI info [0000001d:0000001d:(0000:0000:01ff):0180:(0180:0180:0000:0000)]
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: 4: Host status block [00000001:000000c6:(0000:0000:00a1):(0000:0000)]
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: 4: NAPI info [0000009d:0000009d:(0000:0000:01ff):0078:(0078:0078:0000:0000)]
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: tg3_stop_block timed out, ofs=1400 enable_bit=2
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: tg3_stop_block timed out, ofs=c00 enable_bit=2
Feb 11 08:55:26 s13 kernel: tg3 0000:01:00.1: eth1: Link is down
Feb 11 08:55:27 s13 kernel: vmbr1: port 1(eth1) entering disabled state
Feb 11 08:55:27 s13 kernel: vmbr1: topology change detected, propagating
Feb 11 08:55:30 s13 kernel: tg3 0000:01:00.1: eth1: Link is up at 1000 Mbps, full duplex
Feb 11 08:55:30 s13 kernel: tg3 0000:01:00.1: eth1: Flow control is off for TX and off for RX
Feb 11 08:55:30 s13 kernel: tg3 0000:01:00.1: eth1: EEE is disabled
Feb 11 08:55:30 s13 kernel: vmbr1: topology change detected, propagating
Feb 11 08:55:30 s13 kernel: vmbr1: port 1(eth1) entering forwarding state
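A workaround often suggested for tg3 transmit timeouts under bridged KVM traffic is to disable hardware offloads on the affected NIC. Whether it helps on this particular firmware is not guaranteed, so treat this as something to try rather than a confirmed fix (eth1 matches the interface in the logs; requires the ethtool package on the host):

Code:

auto eth1
iface eth1 inet manual
        post-up ethtool -K eth1 tso off gso off gro off

If the timeouts stop with offloads disabled, that points at the tg3 offload path rather than the bridge or KVM setup itself.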

HELP - VM start fails after running package update

Hi,

I have just updated the packages of PVE 3.3.
The update was interrupted and the server became unresponsive.

Then I did a hard reset of the server and finished the package update via the CLI with apt-get update.
However, I cannot start VMs.

In the syslog I can see the latest entry for the VM that is supposed to start with the host.
But the VM is effectively not starting, and I cannot find any indication of the failure in the syslog.

What else should I check for additional information on the root cause?
What other logs are relevant?

My next step is to start the VM via the CLI using qm start <vmid>.
Is there an option that could be used to get more details during startup, displaying errors etc.?

THX