Hardware acceleration support


Viewing 4 posts - 1 through 4 (of 4 total)
  • #46518

Q1. How does hardware acceleration support compare on Windows vs Linux?

I’m asking after reading a lot about “H.264 hardware encoding”, e.g.
    https://kb.nomachine.com/DT11R00174 “NoMachine Workstation – Installation and Configuration Guide”

    > “H.264 hardware encoding is possible when the Workstation host machine has an hardware accelerated video cards (GPU) with Nvidia Kepler microarchitecture onward or Intel Quick Sync processors or AMD card (at the moment only on Windows).”

On Linux, it seems NoMachine acceleration is possible when it creates its own display.

How do these setup issues apply to a Windows server?

Q2. Are there differences between the free and commercial versions of the software in terms of “hardware acceleration” / “hardware encoding” / VirtualGL GPU access?

According to https://kb.nomachine.com/AR10K00702 “Differences between NoMachine Free Edition for Linux and NoMachine Workstation for Linux”, the free version supports hardware acceleration just the same.
    Is that really true?

Q3. How would “hardware acceleration” work if a NoMachine server (Free, Workstation or another edition) were running in a Windows guest in Proxmox?

I’ve read quite a bit but could not answer these questions.
    If there are knowledge sources I’ve missed, please reference them in your answer.




Q1. How does hardware acceleration support compare on Windows vs Linux?

Hardware encoding is available in NoMachine sessions on all supported operating systems, provided the graphics card supports it. It’s available in both the free and commercial products.

    More about GPU acceleration, H.264 encoding and decoding in NoMachine software is available here:

    H.264 hardware and software encoding/decoding in NoMachine remote desktop sessions
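
    As a quick first check before relying on hardware encoding, you can probe whether the usual encoder diagnostic tools are installed on a Linux host. This is a small sketch of my own, not a NoMachine utility: `nvidia-smi` ships with the NVIDIA driver (NVENC requires Kepler or newer), and `vainfo` from libva-utils reports Intel Quick Sync / VA-API capabilities.

    ```python
    # Capability probe (a sketch, not a NoMachine tool): report which
    # hardware-encoder diagnostic tools are on PATH. Finding the tool
    # does not by itself guarantee NVENC/Quick Sync support, but it is
    # a quick first check before enabling HW encoding.
    import shutil

    def probe_encoder_tools(tools=("nvidia-smi", "vainfo")):
        """Return a {tool: full_path_or_None} map for each diagnostic tool."""
        return {tool: shutil.which(tool) for tool in tools}

    if __name__ == "__main__":
        for tool, path in probe_encoder_tools().items():
            print(f"{tool}: {path or 'not installed'}")
    ```

    If a tool is found, run it directly (e.g. `vainfo | grep -i h264`) to confirm the H.264 profiles your GPU actually exposes.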

    Q2. Are there differences between the free and commercial versions of the software in terms of “hardware acceleration” / “hardware encoding” / VirtualGL GPU access ?

There are no differences between the free and commercial versions when it comes to GPU/hardware acceleration/encoding/decoding. Check the article I mentioned before.

However, VirtualGL support and GPU acceleration/HW encoding support are not the same thing. The latter leverages the GPU capabilities available out-of-the-box to accelerate NoMachine’s encoding, whereas VirtualGL allows OpenGL applications running in NoMachine “virtual desktop sessions” on NoMachine Terminal Server to use the server-side graphics hardware installed on the application server. VirtualGL is a tool that gives your OpenGL applications the ability to run with hardware acceleration even when they are using an X11 display that doesn’t have those capabilities: the VirtualGL library redirects the 3D primitives to capable graphics hardware, so that OpenGL is rendered by the GPU, if present, rather than by software rendering. VirtualGL is a feature you can enable if you have a NoMachine Terminal Server product installed on your Linux machine.
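
    To make the VirtualGL side concrete, here is a minimal launcher sketch of my own. `vglrun` is VirtualGL’s real launch wrapper; the fallback to plain execution (software rendering) is this sketch’s assumption, not NoMachine behaviour.

    ```python
    # Sketch: build the command line for running an OpenGL app through
    # VirtualGL. vglrun is VirtualGL's launch wrapper; falling back to
    # plain execution (software rendering) is this sketch's own choice.
    import shutil

    def launch_command(app_cmd):
        """Prefix app_cmd with vglrun when VirtualGL is installed."""
        if shutil.which("vglrun"):
            return ["vglrun", *app_cmd]   # OpenGL redirected to the server GPU
        return list(app_cmd)              # plain execution, software rendering
    ```

    For example, inside a NoMachine virtual desktop session, `launch_command(["glxgears"])` yields the command to run with `subprocess.run(...)`.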

    More about VirtualGL is available here:

    How rendering of applications is done in NoMachine

    How to enable VirtualGL support on Linux in NoMachine

    Why VirtualGL requires access to the display :0

    I hope you find this additional info helpful.


Thanks, that helps; it’s consistent with my understanding so far.

I have a specific use case to satisfy: using NoMachine to work with Proxmox guest VMs.

Proxmox is a Linux hypervisor, somewhat similar to VirtualBox, but it is installed as a headless host and offers browser-based administration.

One needs graphical access to the guest VMs – either a Windows VM or an Ubuntu desktop VM.

    One solution seems to be “GPU passthrough”: GPU cards are made available to VMs, either wholly or … piecewise

There are many articles about this, for example:

    * https://www.reddit.com/r/homelab/comments/b5xpua/the_ultimate_beginners_guide_to_gpu_passthrough/

    One key application is for software development work (using an IDE, for example) but also for high quality video calls.

I’m trying to decide between GPU passthrough and using NoMachine on the guest VM (server) and the Proxmox host (client) (I could install a thin display and window manager …)

More of the discussion is captured here:
    * https://forum.proxmox.com/threads/perfect-developer-desktop.133437/post-620198 (start)

    * https://forum.proxmox.com/threads/perfect-developer-desktop.133437/post-620594 (end)

I’ve read the articles you recommended above about how hardware-based encoding and decoding are done, and I’m wondering whether the NoMachine solution makes sense, especially given that encoding and decoding would happen on the same machine’s hardware, just to cross the guest-VM/host virtualization layer.

    Any thoughts please?
I believe the combination of Proxmox and NoMachine could fill a nice market niche.


I’m not quite sure what to make of what you wrote. With HW encoding/decoding enabled, NoMachine uses the graphics card available on the host in order to encode or decode. If that’s not possible, NoMachine falls back to H.264 software encoding. This is explained in the material that I pointed to.
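
    The fallback order can be summed up in a tiny sketch (my own simplification – the real NoMachine negotiation also depends on the client’s decoding capabilities):

    ```python
    # Simplified sketch of the fallback described above: prefer hardware
    # H.264 encoding on the host's GPU, otherwise use software H.264.
    # (Illustrative only; not NoMachine's actual negotiation code.)
    def pick_encoder(hw_h264_available: bool) -> str:
        return "H.264 (hardware)" if hw_h264_available else "H.264 (software)"
    ```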

    As for hypervisor technologies, NoMachine works out-of-the-box with most of them and we test with quite a few including Proxmox.


This topic was marked as solved.