Forum Replies Created
fra81Moderator
In theory yes, in practice unfortunately not. It’s true that the 49 ms needed to transfer the same frame on a 1 Gbps network would become 4.9 ms on a 10 Gbps network, but this is only in theory. In practice, moving the data from the network layer to the video RAM is much, much more expensive. We did our experimentation, of course, and the real frame rate that we were able to achieve with uncompressed data, on a 10 Gbps network with a dedicated switch and only the client and server on the physical layer, was close to 20 frames per second. And this only at times, with a sustained rate much lower than that. And this with the code that is inside the production NoMachine software, code that is optimized to be zero-copy (except the copy from the network layer to the video RAM, of course).
The fact is that data transfers are expensive. Operating systems can use DMA, but doing that in user-level code is basically impossible and, even if it were possible, it would only work under conditions that would greatly reduce the number of systems where the “feature” could be leveraged and where users could make real use of it. The point, in the end, is that even if we did such an “uncompressed encoding”, and even if it worked, it would be of so little use to almost the totality of our users that it would just be a quirk, something to mess about with. Different is the approach we have taken: continued work on algorithmically improving the “end quality” of the output, so much so that it appears visually lossless.
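For anyone who wants to check the gap between theory and practice, here is a minimal back-of-the-envelope sketch in Python. The frame size, the 4.9 ms figure and the ~20 fps measurement are the values quoted in this thread; nothing below is NoMachine code.

frame_bytes = 1280 * 960 * 4                # ~4.8 MB per uncompressed 32-bit frame
link_bytes_per_sec = 10_000_000_000 / 10    # 10 Gbps, counting 10 bits per byte for overhead

transfer_ms = frame_bytes / link_bytes_per_sec * 1000
theoretical_fps = 1000 / transfer_ms

print(round(transfer_ms, 1), "ms per frame in theory")   # ~4.9 ms
print(round(theoretical_fps), "fps in theory")           # ~203 fps
print("~20 fps measured in practice, limited by the copies into video RAM")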
fra81Moderator
Hi 🙂
Let’s do some math considering a 1 Gbps Ethernet connection and a screen resolution of 1280×960, which, you will agree, is pretty low by today’s standards.
For a start, we can calculate how many MB can be transferred on the network per second (dividing by 10 bits per byte as a rough allowance for transmission overhead):
(1000000000 / 10) / 1024 = 97656.25 KB/s
1 Gbps ≈ 97.66 MB/s
The size of a single 1280×960 frame, at 32 bits (4 bytes) per pixel, is given by:
(1280 * 960 * 4) / 1024 = 4800 KB
Size of 1 frame = 4.8 MB
Now we can calculate how many frames can be transferred on the network per second:
97656.25 / 4800.00 = 20.34
It is possible to transfer 20.34 frames per second, which is very far from the 60 fps that would be considered a good frame rate.
We can also calculate how much time is needed to transfer a single frame, which directly affects the latency:
1000 / 20.34 = 49.16
So it takes 49.16 ms to transfer a single frame over the Ethernet. This assumes a direct gigabit Ethernet link between only the two computers; any other computer on the same network could add more latency. And we can easily imagine what would happen with a Full HD (1920×1080) or 4K resolution.
This explains why what you say makes perfect sense in theory but not really in practice. It is far better to make the CPU and GPU do the work of reducing the transferred size, as they will always be faster than any network.
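The same arithmetic as a small Python sketch, in case you want to plug in other resolutions or link speeds; this is just the numbers above, not anything NoMachine-specific.

link_kb_per_sec = (1_000_000_000 / 10) / 1024   # 97656.25 KB/s on a 1 Gbps link
frame_kb = (1280 * 960 * 4) / 1024              # 4800 KB, i.e. ~4.8 MB per frame

fps = link_kb_per_sec / frame_kb                # ~20.34 frames per second
ms_per_frame = 1000 / fps                       # ~49.16 ms to transfer one frame

print(round(fps, 2), "fps,", round(ms_per_frame, 2), "ms per frame")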
fra81Moderator
Hi,
I’m not sure I understand the scenario. Are you playing a video on the server and watching it on the client through the NoMachine connection? Can you provide some more details?
If you could record a video showing the issue, that would be very useful.
fra81Moderator
Hi,
are the two physical monitors turned on? How are they connected to the server (HDMI, DisplayPort…)?
From the logs it seems you are connecting to the server’s login screen. Do you still get a black screen if the server is logged on to the user’s desktop?
Can you try to disable hardware encoding as shown in https://knowledgebase.nomachine.com/DT11R00180#2.5?
fra81Moderator
Hi,
in your case I would not change any of the default settings and would let NoMachine adapt automatically to network conditions and available hardware resources, with one exception: check the ‘Disable client side image post-processing’ option, as post-processing can be a heavy operation for your Pi. Do not choose a specific codec or a specific frame rate, as NoMachine will use, at any moment, the best values to optimize performance.
fra81Moderator
Hi Jon,
just for the record, x264 is the software H.264 encoder, which doesn’t make use of the GPU, while you want to leverage the hardware encoding made available by the graphics card, namely NVENC. Hardware encoding support is not available in virtual desktop sessions when X11 vector graphics mode is enabled, as explained here. You can try to disable X11 vector graphics, so that hardware encoding will be used, and compare the results. It will mostly depend on the applications used.
The graphics card is also used to accelerate the applications running in the virtual desktop, by means of VirtualGL support (https://knowledgebase.nomachine.com/AR05P00982). This would only be useful if you run applications that use OpenGL for rendering.
fra81Moderator
Hi,
are you running a virtual desktop session or connecting to the physical display of the server? Do you confirm that the ‘setxkbmap -print’ command you showed is run in the remote session and not on the client machine? And could you also run it on the other side?
Logs can be useful. Please find how to gather them in https://knowledgebase.nomachine.com/AR10K00697. You can send them to forum[at]nomachine[dot]com.
Finally, please do a test. Instead of creating a new KDE desktop (assuming you are connecting to a virtual desktop, as per my initial question), try to ‘Create a new custom session’, by selecting to run the console ‘in a virtual desktop’. Is the keyboard layout correct there?
March 1, 2022 at 09:31 in reply to: NXFrameBuffer failed to start on headless node after upgrade to version 7.7.4 #37741
fra81Moderator
Hi Axel,
did you check the system logs? They could show some hints. You can use the ‘journalctl -b’ command for that. Feel free to send the output to us so we can check.
fra81Moderator
Hi,
we would need client and server logs to investigate. Please collect them by following the instructions in https://knowledgebase.nomachine.com/AR10K00697. You can send them to forum[at]nomachine[dot]com.
fra81Moderator
The current behaviour has been in place since NoMachine version 7.0.
fra81Moderator
Can you try to measure your network latency, for example with the ping command? You can also send us the logs, in case they provide any hints. You can find instructions in https://knowledgebase.nomachine.com/AR10K00697 and send everything to forum[at]nomachine[dot]com, referencing this thread.
fra81Moderator
There is no evidence of a NoMachine problem in the logs, so this really looks like a problem with the drivers, or in any case at the system level.
fra81Moderator
Hi,
the NoMachine player translates the Alt+Scroll combination into a horizontal scroll on purpose. This is probably what you are experiencing. I understand this might not be the expected behaviour, even though it can be handy in some cases, and we will evaluate changing it in the next NoMachine version.
February 18, 2022 at 09:29 in reply to: Issues with 7.8.2 on Ubuntu 21.10 and Nvidia graphics #37616
fra81Moderator
Sorry, I made a mistake in my last post. To disable hardware decoding the key should be like this:
<option key="Enable hardware accelerated decoding" value="disabled" />
fra81Moderator
Thanks for the info. However, the reasons I mentioned above remain true. OBS is a different kind of application.