Channel: Proxmox Support Forum

Cluster problem. Node is red, but online

Hello! I have a cluster with 10 nodes, and all of them show as red in the web interface.

(Screenshot of the web interface attached.)

Code:

root@node0:~# pvecm status
Version: 6.2.0
Config Version: 10
Cluster Name: Cluster0
Cluster Id: 57240
Cluster Member: Yes
Cluster Generation: 5140
Membership state: Cluster-Member
Nodes: 10
Expected votes: 10
Total votes: 10
Node votes: 1
Quorum: 6 
Active subsystems: 5
Flags:
Ports Bound: 0 
Node name: node0
Node ID: 3
Multicast addresses: 239.192.223.120
Node addresses: 172.16.187.10

Code:

root@node0:~# pveversion -v
proxmox-ve-2.6.32: 3.3-147 (running kernel: 2.6.32-37-pve)
pve-manager: 3.4-1 (running version: 3.4-1/3f2d890e)
pve-kernel-2.6.32-37-pve: 2.6.32-147
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-16
qemu-server: 3.3-20
pve-firmware: 1.1-3
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-31
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-12
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

Code:

root@node0:~# cat /etc/pve/.members
{
"nodename": "node0",
"version": 19,
"cluster": { "name": "Cluster0", "version": 10, "nodes": 10, "quorate": 1 },
"nodelist": {
  "node1": { "id": 1, "online": 1, "ip": "172.16.187.11"},
  "node3": { "id": 2, "online": 1, "ip": "172.16.187.13"},
  "node0": { "id": 3, "online": 1, "ip": "172.16.187.10"},
  "pmox4": { "id": 4, "online": 1, "ip": "172.16.187.24"},
  "pmox2": { "id": 5, "online": 1, "ip": "172.16.187.22"},
  "pmox3": { "id": 7, "online": 1, "ip": "172.16.187.23"},
  "pmox1": { "id": 8, "online": 1, "ip": "172.16.187.20"},
  "pmox5": { "id": 6, "online": 1, "ip": "172.16.187.21"},
  "node2": { "id": 9, "online": 1, "ip": "172.16.187.12"},
  "pmox0": { "id": 10, "online": 1, "ip": "172.16.187.30"}
  }
}

Code:

node0 :  unicast, xmt/rcv/%loss = 1/1/0%, min/avg/max/std-dev = 0.596/0.596/0.596/0.000
node0 : multicast, xmt/rcv/%loss = 1/1/0%, min/avg/max/std-dev = 0.620/0.620/0.620/0.000
node1 :  unicast, xmt/rcv/%loss = 1/1/0%, min/avg/max/std-dev = 0.560/0.560/0.560/0.000
node1 : multicast, xmt/rcv/%loss = 1/1/0%, min/avg/max/std-dev = 0.570/0.570/0.570/0.000
node3 :  unicast, xmt/rcv/%loss = 1/1/0%, min/avg/max/std-dev = 0.573/0.573/0.573/0.000
node3 : multicast, xmt/rcv/%loss = 1/1/0%, min/avg/max/std-dev = 0.585/0.585/0.585/0.000
pmox0 :  unicast, xmt/rcv/%loss = 1/1/0%, min/avg/max/std-dev = 0.322/0.322/0.322/0.000
pmox0 : multicast, xmt/rcv/%loss = 1/1/0%, min/avg/max/std-dev = 0.333/0.333/0.333/0.000
pmox1 :  unicast, xmt/rcv/%loss = 1/1/0%, min/avg/max/std-dev = 0.181/0.181/0.181/0.000
pmox1 : multicast, xmt/rcv/%loss = 1/1/0%, min/avg/max/std-dev = 0.230/0.230/0.230/0.000
pmox3 :  unicast, xmt/rcv/%loss = 1/1/0%, min/avg/max/std-dev = 0.403/0.403/0.403/0.000
pmox3 : multicast, xmt/rcv/%loss = 1/1/0%, min/avg/max/std-dev = 0.454/0.454/0.454/0.000
pmox4 :  unicast, xmt/rcv/%loss = 1/1/0%, min/avg/max/std-dev = 0.191/0.191/0.191/0.000
pmox4 : multicast, xmt/rcv/%loss = 1/1/0%, min/avg/max/std-dev = 0.249/0.249/0.249/0.000
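
The omping results above suggest multicast itself is fine. When every node shows red in the GUI even though pvecm reports quorum, a common first check is the PVE status daemons on the affected nodes; a minimal sketch of restarting them on PVE 3.x (a generic suggestion, not a confirmed fix for this cluster):

Code:

# restart the daemons that feed node/VM status to the web interface
service pvestatd restart
service pvedaemon restart
service pveproxy restart

# then re-check cluster state
pvecm status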


TRIM in KVM VM on LVM and software raid?

Hello,

I have LVM running on top of an mdadm software RAID 1 with two Samsung 850 Pro SSDs.

As far as I know, both LVM and mdadm support TRIM (I am using Proxmox with kernel 3.10).

Does anybody know if and how I can TRIM the ext4 filesystems inside my KVM machines?
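
For reference, a minimal sketch of the pieces usually involved in getting TRIM from a guest down to the SSDs; the VM ID, disk name and storage are placeholders, and discard support depends on the disk bus and QEMU version in use:

Code:

# host: expose the disk with discard enabled (raw + virtio-scsi is a common combination)
qm set 100 --scsi0 local:100/vm-100-disk-1.raw,discard=on

# guest (Linux): trim the mounted ext4 filesystem manually
fstrim -v /

# host: issue_discards in /etc/lvm/lvm.conf only controls discards issued by LVM
# itself (lvremove etc.); fstrim from the guest passes through device-mapper/md
# independently of that setting
grep issue_discards /etc/lvm/lvm.conf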

Thank you.

Backup on Storwize v3700 with LVM

Hi,

In our company we have 3 IBM x3650 M4 servers in a cluster (each server has 2 SAS ports) and 2 storage devices: IBM Storwize V3700 (both also have 8 SAS ports; I only use the first canister).
Each server is connected to both storage devices:
- server1: SAS port 1 connected to Storwize1 SAS port 1
           SAS port 2 connected to Storwize2 SAS port 1
- server2: SAS port 1 connected to Storwize1 SAS port 2
           SAS port 2 connected to Storwize1 SAS port 2
- server3: SAS port 1 connected to Storwize1 SAS port 3
           SAS port 2 connected to Storwize2 SAS port 3
I made the first storage a "shared LVM storage" in Proxmox for all servers (visible on all servers, VM migration is also possible), and everything is OK.
My question is: how can I make the second storage a "shared, visible and backup storage" for all servers?
If I configure LVM on storage2, the backup option in Proxmox isn't possible, and only the raw format for VMs is available.
If I configure ext4 on storage2, the backup files get corrupted after the backup (when backups run from 2 servers at the same time) or are only visible after an umount/mount.
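
For reference, this is roughly how the two kinds of storage compare in /etc/pve/storage.cfg: shared LVM over the SAS LUN can only hold raw VM disks, while backups need a file-based storage that every node can safely write to at the same time (a plain ext4 directory mounted on several nodes at once is not a cluster filesystem, which explains the corruption described above). Names, addresses and paths below are placeholders, and the NFS example assumes something on the network can act as an NFS server, since the Storwize boxes only provide block storage over SAS:

Code:

# shared LVM on the SAS LUN: raw VM images only, no backup files
lvm: storwize1-lvm
        vgname vg_storwize1
        content images
        shared 1

# file-based storage for backups/ISOs, exported over NFS
nfs: backup-nfs
        server 192.0.2.50
        export /export/pve-backup
        path /mnt/pve/backup-nfs
        content backup,iso
        maxfiles 3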

Any ideas?

can vnc resolution be scaled/zoomed if too large?

Hi, I don't know if and how I can do this.
I have a KVM server which I use at work, with the desktop set to a 1600x1200 resolution, which fits my computer display at work well.
But if I need to connect from another PC (e.g. through a VPN from elsewhere), I could have a smaller display, like 1024x768, and logging in could be difficult because the opened browser window is 1600x1200.
Currently the display is set to "default". Would another choice help in this respect?

I believe that VNC cannot adapt the remote resolution dynamically like RDP (on Windows) or SPICE, but is there any way to make at least the VM "scale/zoom" the offered resolution through VNC/Java so it better fits my current computer display?
Since the PC in use might not support SPICE or RDP, I would prefer a more compatible solution. Would noVNC help in this respect? This server is still on 3.1, btw.

Marco

Cluster failover problem after network failure

Hello community!

A few days back, I experienced an issue with automatic recovery of my HA Proxmox cluster.
We're using a 3 node cluster with manual fencing.

There was a short network failure because of a broken switch power adaptor.
After the switch worked again, the cluster reconnected, but fence_tool showed "wait state" messages.
It turned out everything worked, but DLM was waiting for fencing to occur. This also blocked operations that use rgmanager, e.g. starting or migrating VMs.
After hard resetting each node one after another (rebooting was not possible because of the blocking rgmanager), fence_tool was fine, but VM actions like starting or migrating were still not possible (error code 1).
It seems the cluster was still waiting for something, so hard resetting all nodes in the cluster simultaneously worked for me, and all operations were working again.
But this can't be a proper solution for my production cluster. I don't want non-HA VMs being reset just because there was a network failure.

Is there a better way to repair the cluster, considering the dependencies between the services? E.g. restarting services in a particular order so they don't block each other, or using specific CLI commands.
Thanks for your feedback!

My cluster.conf:

Code:

<?xml version="1.0"?>
<cluster config_version="14" name="MJ">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu"/>
  <fencedevices>
    <fencedevice agent="fence_manual" name="fenceProxmox01"/>
    <fencedevice agent="fence_manual" name="fenceProxmox02"/>
    <fencedevice agent="fence_manual" name="fenceProxmox03"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="proxmox-test-cluster1" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="fenceProxmox01"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="proxmox-test-cluster2" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="fenceProxmox02"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="proxmox-test-cluster3" nodeid="3" votes="1">
      <fence>
        <method name="1">
          <device name="fenceProxmox03"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <pvevm autostart="1" vmid="107"/>
  </rm>
</cluster>
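
For what it's worth, with fence_manual as configured above, the fence domain stays in a wait state until the fence is acknowledged by hand, and DLM/rgmanager block until then. A rough sketch of the commands involved; the exact fence_ack_manual syntax differs between cluster versions, so treat this as an assumption to check against the man pages on the nodes:

Code:

# show fence domain state and any pending fence operations
fence_tool ls

# acknowledge a pending manual fence for a node you know has been reset
# (older versions expect: fence_ack_manual -n <nodename>)
fence_ack_manual proxmox-test-cluster2

# once fenced/DLM are clean again, restart rgmanager last
service rgmanager restart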

Very slow Windows 2008 R2 (CPU usage high)

Hi,

We have been using a Proxmox cluster for 3 years now.

Here are the specs of the two physical servers:
2 * 16 cores Xeon E5-2640 v2 @ 2 GHz
64 GB of RAM
RAID 10 of 4 * SAS 10K disks
4 * 1 Gbit/s NICs bonded with LACP

And we use a third node, which does not host any VMs, just for quorum.

We are running PVE 3.2 (not updated yet)
Code:

proxmox-ve-2.6.32: 3.2-126 (running kernel: 2.6.32-29-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-29-pve: 2.6.32-126
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1
Until this year we were exclusively running Linux KVM guests.

We started some Windows Server 2008 R2 VMs on the cluster and we are experiencing performance issues inside them.

RDP sessions are very slow (sometimes they freeze) and we see high CPU usage even though nothing is running on the system:
- VMID.conf:
Code:

bootdisk: ide0
cores: 4
cpuunits: 100000
ide0: local:114/vm-114-disk-1.qcow2,format=qcow2,size=30G
ide1: none,media=cdrom
ide2: none,media=cdrom
memory: 8192
name: ****
net0: e1000=D6:E1:CE:71:B4:75,bridge=vmbr0
ostype: win7
sockets: 1
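
For comparison, a commonly used shape for a Windows 2008 R2 guest config uses paravirtualized devices (virtio disk and NIC, which require the virtio drivers installed inside Windows) and a more conventional cpuunits value. This is only a reference layout with placeholder disk path and MAC, not a confirmed fix, and several of these changes were already tried (see the list further down):

Code:

bootdisk: virtio0
cores: 4
cpuunits: 1000
virtio0: local:114/vm-114-disk-1.raw,format=raw,size=30G
ide2: none,media=cdrom
memory: 8192
name: ****
net0: virtio=D6:E1:CE:71:B4:75,bridge=vmbr0
ostype: win7
sockets: 1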

(Screenshots of the VM Hardware, VM Options and CPU Usage tabs were attached.)


In Task Manager we see that svchost.exe is taking 30% of the CPU and 1 GB of RAM, but nothing that explains why RDP sessions are unusable.

On the same physical node, Linux KVM guests are running without any issue.

What we tried:
- Increase CPU units ==> No change
- Change disk to virtio ==> No change (better disk performance, but nothing related to CPU usage)
- Change disk format from qcow2 to raw ==> No change (better disk performance, but nothing related to CPU usage)
- Give 2 sockets and 4 cores ==> No change
- Change CPU type to host ==> No change

Here is a dd command on the physical node:
Code:

dd if=/dev/zero of=/tmp/testfile bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 4.51222 s, 238 MB/s

Any help would be appreciated.

Regards

Help with making a functional Proxmox setup out of this

I'm assisting a community organization to virtualize their small infrastructure. I've used Proxmox primarily for personal use so far.

Through some awesome local support and donations from IT companies they got some older, but IMO still very usable, hardware:
2x Dell R610 servers that are almost identically configured, with 16 GB and 24 GB of RAM respectively, and about 600 GB of space in each server (4x 150 GB drives).
1x 16 port GigE switch.
1x fiber channel storage switch with 4x HBAs.
A number of workstations but nothing good for virtualization.

Goals:
- Virtualize their existing "servers", which are just desktops right now and keep breaking. They host a website, shared file server, calendar, CRM app, some other webapps, Windows apps, etc.
- It's used hardware, so make it easy to recover from a hardware failure (e.g. one R610 dying). Manual fail-over is fine...
- Backup critical files (in VMs) and VM images themselves. Surprisingly they do have an LTO3 tape drive in one of the existing desktops.
- Rely on open-source technology as much as possible (limited budget so can't afford much in terms of licensing).

I've been reading up on Proxmox clusters and replicated storage. A little (OK, a LOT) overwhelmed with the choices.

What could be done with what they've got? I would welcome any advice and suggestions. Thanks!

Spice Without Web Console?

Is it possible to connect to a VM using Spice without using the Proxmox Web Console?

If so, how would it be done?
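
It can be done by calling the API's spiceproxy endpoint directly and feeding the result to a SPICE client such as remote-viewer. A rough outline using curl; node name, VM ID, user and host are placeholders, and turning the returned JSON into a .vv file is only hinted at here, so treat this as a sketch rather than a ready-made script:

Code:

# 1) get an authentication ticket and CSRF token
curl -k -d "username=root@pam&password=SECRET" \
    https://pve-host:8006/api2/json/access/ticket

# 2) request SPICE connection details for the VM (POST needs the ticket cookie and CSRF token)
curl -k -b "PVEAuthCookie=<ticket>" -H "CSRFPreventionToken: <token>" \
    -X POST https://pve-host:8006/api2/json/nodes/NODENAME/qemu/100/spiceproxy

# 3) write the returned fields (proxy, tls-port, password, ca, ...) into a
#    [virt-viewer] .vv file and open it
remote-viewer spice.vv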

Venom vulnerability

Adding a node

Hi guys,

How can I add a new node to my PVE cluster?
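
For reference, on PVE 3.x a node is joined from the new node itself with pvecm; a minimal sketch, where the IP is a placeholder for any existing cluster member:

Code:

# run on the NEW node, pointing it at an existing cluster member
pvecm add 172.16.0.1

# then verify
pvecm status
pvecm nodes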

Thanks in advance!

Best regards.

Proxmox VE extremely slow when one NFS share is gone

Hello,

We have a Proxmox VE cluster (3 nodes at the moment) and are using Ceph/RBD for our primary storage. We also have some NFS shares for non-critical data, for example ISO images and backups (our Ceph cluster is all-SSD, so not ideal for storing ISO images and backups). While Ceph is redundant, our NFS shares are not, because they are non-critical.

However, when an NFS share goes away without the share being disabled in Proxmox VE first, Proxmox VE becomes very slow and shows the VMs as black (as if they were off/shut down). In fact the VMs are still online and the VMs themselves are working fine, but working in Proxmox VE as an admin/user is next to impossible because it's terribly slow and doesn't give good status information about the VMs. Even when I then disable the NFS share that is down, it doesn't help at that moment. The only thing that fixes it is bringing the NFS share up again and then disabling it in Proxmox VE before it goes down again (for whatever reason). If the NFS share is disabled in Proxmox VE before it goes down, there are no issues. But since this is non-redundant storage, that's not ideal.

Can this be fixed in a future release? To me this doesn't seem to be normal behavior. I can understand it being slow when trying to access an NFS share that is down (waiting for the timeout or whatever), but when I disable it in Proxmox VE it should work as before (except that the NFS share is gone, of course). Besides that, I don't think the whole of Proxmox VE should be slow and show VMs as down when they are not.
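
As a stop-gap for when this happens, the share can also be disabled from the command line, which is handy when the GUI itself is crawling. A sketch, assuming the NFS storage is named iso-nfs; the option name should be double-checked against pvesm on your version:

Code:

# mark the storage as disabled so pvestatd stops polling it
pvesm set iso-nfs --disable 1

# re-enable it later once the NFS server is back
pvesm set iso-nfs --disable 0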

Thank you,

Bug? - Web GUI shows Network Bonds as Inactive

Hello. I've been upgrading my 2.2 cluster to 3.4 and I've noticed that my network bonds all show as inactive in the GUI. I've double-checked my 70-persistent-net.rules and my interfaces config files and everything looks good.

The bond0 and bond1 status output looks good as well. The network bonds are working as intended.

Code:

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)


Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0


802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
        Aggregator ID: 1
        Number of ports: 4
        Actor Key: 17
        Partner Key: 55190
        Partner Mac Address: 00:1e:2a:ce:aa:06


Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:21:9b:92:2f:a8
Aggregator ID: 1
Slave queue ID: 0


Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:21:9b:92:2f:aa
Aggregator ID: 1
Slave queue ID: 0


Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:21:9b:92:2f:ac
Aggregator ID: 1
Slave queue ID: 0


Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:21:9b:92:2f:ae
Aggregator ID: 1
Slave queue ID: 0

(GUI screenshot pve-network-bond.png attached.)

can't create journal device for CEPH

I've got two 10GB LUNs and a 1.1TB LUN exposed to my server as /dev/sda, /dev/sdb and /dev/sdc.
PVE 3.4 was installed to /dev/sda.
I want to create a CEPH OSD on /dev/sdc with /dev/sdb as the journal, but I'm unable to do so.

Firstly, the GUI only offers to let me use /dev/sda as a journal device (!). This isn't what I want - that would completely screw things up!

The command-line pveceph tool has a different problem, I see this:

Quote:

root@pvetemp:~# pveceph createosd /dev/sdc -journal_dev /dev/sdb
create OSD on /dev/sdc (xfs)
using device '/dev/sdb' for journal
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
The operation has completed successfully.
Error: /dev/sdb: unrecognised disk label
ceph-disk: Error: weird parted units:
command 'ceph-disk prepare --zap-disk --fs-type xfs --cluster ceph --cluster-uuid 57ee93f9-e7bc-4aba-a103-a1fdf35db5ac --journal-dev /dev/sdc /dev/sdb' failed: exit code 1
What on earth? Why do I need a disk label on /dev/sdb, when I'm about to use the raw device as a journal?

OK, so I use gdisk to zap the MBR & GPT labels, then parted to mklabel a new, empty, GPT label on /dev/sdb.
Then pveceph gets happier, but with some scary warnings:

Quote:

root@pvetemp:~# gdisk /dev/sdb
GPT fdisk (gdisk) version 0.8.5


Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: not present


Creating new GPT entries.


Command (? for help): x


Expert command (? for help): z
About to wipe out GPT on /dev/sdb. Proceed? (Y/N): y
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Blank out MBR? (Y/N): y
root@pvetemp:~# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted)
Information: You may need to update /etc/fstab.


root@pvetemp:~# pveceph createosd /dev/sdc -journal_dev /dev/sdb
create OSD on /dev/sdc (xfs)
using device '/dev/sdb' for journal
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.


******************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
******************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
The operation has completed successfully.
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.


meta-data=/dev/sdc1 isize=2048 agcount=4, agsize=74792895 blks
= sectsz=512 attr=2, projid32bit=0
data = bsize=4096 blocks=299171579, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=146079, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
The operation has completed successfully.
Um. In the Disks tab of the PVE GUI, sdc shows as "osd.0", but sdb shows as "partitions", not "journal" or anything like that.
Have I successfully got a journaled OSD now? "ceph osd tree" doesn't show the journal, but I'm not sure if it should or not.
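
One way to check where the journal actually ended up is to look inside the OSD's data directory on that node; a quick sketch, assuming the OSD is osd.0:

Code:

# the OSD data dir contains a 'journal' entry; for an external journal it is a
# symlink (usually via /dev/disk/by-partuuid/...) that should resolve to a
# partition on /dev/sdb
ls -l /var/lib/ceph/osd/ceph-0/journal

# partitions that ceph-disk created on the two devices
parted /dev/sdb print
parted /dev/sdc print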

Thanks,
-Adam Thompson
athompso@athompso.net

mitigation for vulnerabilities in kvm and vnc?

Fails to boot after fresh install

Hi,
I used a brand new machine and installed from a disk. It finished installing, but before restarting, it says:

waiting for /dev to be fully populated ... acer-wmi No WMI interface, unable to load

Then, after restarting it, it does not find the boot drive (system).

I am new to Linux; could you tell me how to fix it?
I've tried it a few times, and the only thing I can think of is this "acer-wmi" message. The files are there.

It is an Acer desktop with Core i7 & 8GB memory & 1TB HDD.

Please help. This cool program got me interested in Linux; I am so excited, but I cannot even start...
Thank you!

Feature Request: KSM Graph

I just think it would be nice to be able to see how KSM is doing with a graph rather than watching a number.
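
Until something like that exists, the number shown in the GUI can be sampled from the kernel's KSM counters under /sys and graphed with any external tool; a minimal sketch (interval and log path are arbitrary):

Code:

# append a timestamped sample of the KSM sharing counters once a minute
while true; do
    echo "$(date +%s) $(cat /sys/kernel/mm/ksm/pages_sharing) $(cat /sys/kernel/mm/ksm/pages_shared)"
    sleep 60
done >> /var/log/ksm-samples.log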

VZ Container networking problems, when not in /var/lib/vz

Hi,

I think I must be missing something, but when I choose a different storage for OpenVZ containers, I am not able to run any TCP services in them.
SSH example:

Starting OpenBSD Secure Shell server: sshdPRNG is not seeded
failed!


As soon as I move them to /var/lib/vz/ everything works fine.

Using the latest Proxmox.
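
For what it's worth, "PRNG is not seeded" from sshd usually means it cannot read the kernel random devices, so a first thing to compare between the two storages is whether /dev/random and /dev/urandom exist inside the running container; a quick sketch, with 101 as a placeholder CTID:

Code:

# run from the host against the running container
vzctl exec 101 ls -l /dev/random /dev/urandom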

Second Hard Disk Cannot Be Migrated To?

Hi,

PVE. Great product! Big fan! Many thanks to the Proxmox team!!

My apologies if this has been answered elsewhere but I'm 5 hours deep into Proxmox forums, wiki and Google and have not found a conclusive answer to my question.


BACKGROUND ----
I have a Proxmox cluster where all nodes have only local storage. All guests are KVM, and the nodes run Proxmox 3.3 or 3.4. Migration between nodes is possible, but only offline due to the local-storage limitation; this is acceptable for our needs. All nodes use suitable combinations of hardware RAID 5 or RAID 1. Assume that any single disk discussed here is actually a RAID array, and that beyond that the physical combination and redundancy of disks is both unknown to the Proxmox host and presumably irrelevant to my question.

One of the nodes (node 2) has an 800GB drive with Proxmox installed and joined to the cluster. There was space for more disks, so a 2TB drive was added as LVM storage. The 2TB drive is visible in the web GUI and VMs can be created on it.


ISSUE ----
What I can't seem to do is directly migrate a VM from node 1 to the new 2TB storage location on node 2, at least not from the web GUI that I can see. From node 1 I can select node 2, but without further options it will migrate the VM to the original local storage area on the 800GB drive. This is problematic when the VM I want to migrate exceeds the available free space, or in my case the entire capacity, of the 800GB drive.

In looking at the qm manual, I see that qm migrate doesn't seem to allow you to specify a target storage location on the destination node, and qm move_disk seems to be a local-only command, i.e. a target node cannot be specified.

I've seen some suggestions that you can backup the VM and then during the restore you can specify the target storage area. This seems a bit inefficient and requires a shared storage (3rd) device.
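
For completeness, the backup/restore route does let you pick the target storage at restore time, and it doesn't strictly need a third shared device if the dump file is simply copied between nodes. A sketch with placeholder VMID, paths and storage names; note that VMIDs are cluster-wide, so the original VM has to be removed (or a different VMID used) before restoring:

Code:

# on node 1: dump the (stopped) VM to a local directory
vzdump 101 --dumpdir /var/lib/vz/dump --compress lzo

# copy the archive to node 2 over the network
scp /var/lib/vz/dump/vzdump-qemu-101-*.vma.lzo node2:/var/lib/vz/dump/

# on node 2: restore onto the 2TB LVM storage
qmrestore /var/lib/vz/dump/vzdump-qemu-101-*.vma.lzo 101 --storage lvm2tb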

I guess I could create NFS shares on the nodes and copy or rsync the VMs between them directly? I'm hoping there is a simpler, more direct approach. If not, well, so be it.


QUESTION ----
Does the PVE web GUI or terminal command(s) provide the functionality I require here?
If so, what am I missing?



TIA

Assign static IPs to KVM guests

I have searched around the web for the last couple of days, but can't seem to find what I am looking for.

Here is my situation:

I have a machine installed in a datacenter with a /26 of public IPs assigned to it (on eth0).

I set up a bridge within Proxmox and everything works as intended (assigning the public IPs within the KVM guests).

My concern is that some clients who have access to some of the KVM guests could potentially change their guest's IP to anything on the bridge (whether intentionally or accidentally) and cause issues.

Is it possible within the Proxmox host to build the network in such a way that each KVM guest on the host can only use a specific IP address from the bridge?
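
One host-side approach people use for this is ebtables rules on the guest's tap interface, dropping frames whose source IP (and optionally MAC) doesn't match what was assigned to that guest. A rough sketch only: tap name, MAC and IP are placeholders, ARP would also need filtering to make it watertight, and the exact ebtables option spelling should be verified against the man page:

Code:

# tap100i0 = first NIC of VM 100; drop IPv4 frames from this guest whose
# source address is not the one assigned to it
ebtables -A FORWARD -i tap100i0 -p IPv4 --ip-src ! 192.0.2.10 -j DROP

# optionally pin the source MAC as well
ebtables -A FORWARD -i tap100i0 -s ! aa:bb:cc:dd:ee:01 -j DROP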

I thought about setting up a pfSense guest to do the networking, but I would like to keep it all within Proxmox if possible.

Is this possible?

Connection between clients and Proxmox virtual machines

Hello, I'm starting out with Proxmox. I'm trying it all on virtual machines.
My doubt is:
If I have a few VMs on the server, how can a client (thin client) connect to a certain virtual machine? I use ThinLinc for the connection (it has a few options).
I think it's something around LDAP, but I'm not sure.
Can someone help me?
Thanks.