Is VirtualGL working correctly?

Forum / NoMachine for Linux / Is VirtualGL working correctly?

Viewing 12 posts - 1 through 12 (of 12 total)
  • #50153
    boospy
    Participant

    Hello all,

    • I’ve downloaded NoMachine Workstation EVAL 8.14.2_1.
    • Server: HPE Proliant ML350G10
    • GPU Nvidia RTX A2000 (Full featured)
    • OS Kubuntu 24.04 LTS
    • Driver Nvidia 550.107.02

    Basically with normal X11 Login everything is working fine with the card. If I log in with NoMachine virtual desktop, applications such as the desktop, video editing, games, do not use the graphics card. Only 3 processes:

    • Displaymanager
    • Xserver
    • Nxnode

    On the NoMachine Client I see this line:

    2552×1324, 60FPS, HW encoding, H.264, NVENC, SW decoding

    Doesn’t that actually mean that it should work? VirtualGL is enabled and X11VectorGraphics is disabled.

    glxinfo shows me the following (there doesn’t seem to be a way to insert a code block here, so I’ve pasted it in like this):

    ————————————————————————-

    server glx vendor string: SGI

    client glx vendor string: Mesa Project and SGI

    GLX_MESA_copy_sub_buffer, GLX_MESA_gl_interop, GLX_MESA_query_renderer,

    GLX_MESA_copy_sub_buffer, GLX_MESA_query_renderer, GLX_SGIS_multisample,

    Extended renderer info (GLX_MESA_query_renderer):

    Vendor: Mesa (0xffffffff)

    OpenGL vendor string: Mesa

    OpenGL renderer string: llvmpipe (LLVM 17.0.6, 256 bits)

    ————————————————————————-

    According to the datasheet, NoMachine Workstation can handle GPUs. Am I doing something wrong?

    #50161
    Britgirl
    Keymaster

    All NoMachine Linux Terminal Server products support VirtualGL, including the evaluation version. You need to enable VirtualGL first, so this is the first thing you should check. How to do that is explained in the following article:

    How to enable VirtualGL support on Linux in NoMachine
    https://kb.nomachine.com/AR05P00982

    In short, the commands to use are sudo /etc/NX/nxserver --virtualglinstall and sudo /etc/NX/nxserver --virtualgl yes. But please consult the article for full steps.

    Let us know if following those steps helps.

    In your client output we can see that HW encoding is working correctly. On the client side the session is using software decoding, not hardware decoding. If this is not what you expected on the client side (we are not sure from what you wrote), you should send us logs from the device you are connecting from and we will check them. See this article (step 4): https://kb.nomachine.com/DT07S00243.

    #50169
    boospy
    Participant

    Thank you for your reply. I have already set up VirtualGL according to these instructions.

    What I would like is that applications in the KDE desktop, such as Steam, KDEnlive, video player…, or even the whole desktop uses the Nvidia GPU. That was my plan.

    Attached is the log I created according to the instructions and a Screenshot from the NX Client on my local machine.

    #50172
    boospy
    Participant

    What I also noticed is that the variable has no value here, but it should according to the instructions.

    env | grep LD_PRELOAD

    LD_PRELOAD=

     

    #50233
    Britgirl
    Keymaster

    Can you send us the node.cfg file of the NoMachine server, and also the output of this command:

    /usr/NX/scripts/vgl/vglrun glxinfo | grep -i "renderer\|vendor"

    In the logs we can see that the server sets the LD_PRELOAD library. Maybe it is not set in your user profile?

    #50244
    boospy
    Participant

    Hello @Britgirl and thank you for your answer. I’ve executed the command in an active NoMachine session on the Kubuntu desktop:

    ——–

    Authorization required, but no authorization protocol specified

    [VGL] ERROR: Could not open display :0.

    ——–
    I find the output strange, I mean… I’m on the desktop and the command says that no display can be opened.

    In the logs we can see that the server sets the LD_PRELOAD library. Maybe it is not set in your user profile?

    The variable is displayed/read out correctly when I log in on the physical screen, but unfortunately not in the virtual NoMachine session. node.cfg is attached.

    Attachments:
    #50260
    fra81
    Moderator

    Hi @boospy

    [VGL] ERROR: Could not open display :0.

    Did you get that in the physical desktop or in the NoMachine virtual desktop? Can you run 'env | grep DISPLAY' in the physical desktop? If you find out that the display number is, e.g., :1 instead of :0, you can adjust the vglrun command in this way:

    /usr/NX/scripts/vgl/vglrun -d :1 glxinfo | grep -i "renderer\|vendor"

    Or, even better, you can instruct VirtualGL to use the GPU directly without going through the X server. This is a new feature of VirtualGL. So, if the path to your video device is e.g. /dev/dri/card0, you can use this command for vglrun:

    /usr/NX/scripts/vgl/vglrun -d /dev/dri/card0 glxinfo | grep -i "renderer\|vendor"

    Using the -d option is equivalent to setting the VGL_DISPLAY variable in the environment (e.g. VGL_DISPLAY=:0 or VGL_DISPLAY=/dev/dri/card0).

    What’s the output of the vglrun command after these adjustments?

    The variable is displayed/read out correctly when I log in on the physical screen, but unfortunately not in the virtual NoMachine session.

    This is indeed strange, I suspect that something in your user profile is unsetting the LD_PRELOAD variable (I see that it is correctly set by the server at the moment it launches the desktop environment in the virtual desktop). Could you check if this is the case in your user profile files (.profile, .bashrc, .bash_profile, /etc/profile, /etc/profile.d/*, etc)? Or are you aware of anything that might be interfering?
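A quick way to carry out that check is a one-liner like the following. This is only a sketch: the file list covers the usual bash startup files mentioned above, so adjust the paths for your shell and distribution.

```shell
#!/bin/sh
# Grep the common shell startup files for anything that sets, clears
# or unsets LD_PRELOAD. Missing files are silently skipped.
grep -Hn 'LD_PRELOAD' \
    ~/.profile ~/.bashrc ~/.bash_profile \
    /etc/profile /etc/environment /etc/profile.d/*.sh 2>/dev/null
```

Any hit that assigns an empty value or runs `unset LD_PRELOAD` would explain the empty variable in the virtual session.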

    #50261
    boospy
    Participant

    I’ve done all the things. Attached are the outputs. I’ve also set the ENV in .profile, .bashrc and /etc/environment.

    But the variable is never set in the NX virtual desktop, while it works on the physical screen. I have also tried to set it manually in the virtual desktop:

    ————————————————-

    LD_PRELOAD=/usr/NX/lib/libnxegl.so

    source .bashrc

    ————————————————-

    Even when I log in with SSH, the ENV variable is set. I have to admit, this is an exciting topic.

    #50264
    boospy
    Participant

    BTW: 

    This is a new feature of VirtualGL.

    This is an amazing feature! So I can at least start individual apps directly with GPU. Very high praise to NoMachine! I also tried to start the desktop directly with GPU, but unfortunately it didn’t work. But that would be an idea, wouldn’t it?

    env XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop /usr/NX/scripts/vgl/vglrun -d /dev/dri/card0 /usr/bin/dbus-launch --sh-syntax --exit-with-session /usr/bin/startplasma-x11

    #50282
    fra81
    Moderator

    So now we know that VirtualGL works when used through vglrun; the only problem is with the environment, the reason for which is still unclear. Can you confirm that no system script overwrites, unsets, or otherwise affects the LD_PRELOAD variable?

    LD_PRELOAD=/usr/NX/lib/libnxegl.so

    Please note that the libnxegl library has a different purpose (it’s used for capturing the screen from the physical display when Wayland is in use); it should only be set in the physical desktop and is not useful in virtual desktops. In virtual desktops, when VirtualGL is enabled, LD_PRELOAD should look like this:

    LD_PRELOAD=/usr/NX/scripts/vgl/libdlfaker.so:/usr/NX/scripts/vgl/libvglfaker.so

    env XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop /usr/NX/scripts/vgl/vglrun -d /dev/dri/card0 /usr/bin/dbus-launch --sh-syntax --exit-with-session /usr/bin/startplasma-x11

    This is definitely possible and, maybe, it could help with working around the environment problem. Note that you will have to disable VirtualGL in the server configuration (/usr/NX/bin/nxserver --virtualgl no) after putting the vglrun command in the DefaultDesktopCommand key.

    However, I would recommend a slightly different default desktop command from the one you posted:

    env XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop /usr/bin/dbus-launch --sh-syntax --exit-with-session /usr/NX/scripts/vgl/vglrun -d /dev/dri/card0 /usr/bin/startplasma-x11
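For reference, the node.cfg change might look like this. This is only a sketch: the DefaultDesktopCommand key and the vglrun command are taken from this thread, but check your own node.cfg for the exact quoting convention and the correct /dev/dri device path on your system.

```
# In the server's node.cfg (e.g. /usr/NX/etc/node.cfg):
DefaultDesktopCommand "env XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop /usr/bin/dbus-launch --sh-syntax --exit-with-session /usr/NX/scripts/vgl/vglrun -d /dev/dri/card0 /usr/bin/startplasma-x11"
```

Remember to also run `sudo /usr/NX/bin/nxserver --virtualgl no` afterwards, as explained above.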

    #50309
    boospy
    Participant

    Many thanks for all the information. The command:

    env XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop /usr/bin/dbus-launch --sh-syntax --exit-with-session /usr/NX/scripts/vgl/vglrun -d /dev/dri/card0 /usr/bin/startplasma-x11

    unfortunately has no effect. I have built a small workaround by placing this file in the Plasma Autostart.

    #!/bin/bash
    killall plasmashell
    # export is needed so that plasmashell inherits the variable
    export LD_PRELOAD=/usr/NX/scripts/vgl/libdlfaker.so:/usr/NX/scripts/vgl/libvglfaker.so
    plasmashell

    This starts Plasma with full GPU support. All other applications are also started with GPU support. I also tested Steam Remoteplay. It is even really usable for some games such as Hitman3. Tekken 8 is probably a bit too big for the protocol.

    Kerberos login works with NX as it should. Unfortunately, the automount of Samba shares via Kerberos only works with a workaround: I edited the file /etc/pam.d/nx as follows:

    auth       include       su
    account    include       su
    password   include       su
    # session    include       su

    session required pam_loginuid.so
    session optional pam_env.so
    session optional pam_umask.so
    session required pam_unix.so
    session optional pam_mount.so disable_interactive

    Unfortunately, something seems to be missing, because then there is no sound and non-Qt/Plasma applications crash with a segfault.

    I will continue to test whether I can still solve the Kerberos automounts with the pam_mount. In any case, many thanks for the support. The GPU works, so the thread can be closed here.

    #50311
    boospy
    Participant

    And I have now bought NoMachine Workstation. The issues can certainly be solved.


This topic was marked as solved, you can't post.