Forum Replies Created
fra81
Moderator
I can’t find any documentation for how to enable this feature.
No configuration is needed. NoMachine automatically selects the encoding method that provides the best performance among those available. The hardware encoder is chosen as the preferred one whenever it is supported.
Does it work with AVC?
It will work with any software or hardware decoder you may have on the client side, the NoMachine AVC pack included.
Does performance update rate improve if I use AVC and a server side graphics card?
Using the hardware encoder on the server side can improve performance and, most importantly, will offload the CPU. AVC on the client side should not be needed if you are running a Windows or Mac client: NoMachine will in any case use the hardware decoder provided by your client computer.
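As a rough way to verify that encoding is really being offloaded to the GPU (assuming an NVIDIA card with the standard driver tools installed), you can watch the encoder utilization while a session is active:
nvidia-smi dmon -s u
The ‘enc’ column should show non-zero values while NoMachine is encoding on the GPU.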
Lastly, will this work with Kepler based GPUs on AWS EC2?
Unfortunately only Maxwell-based (or newer) GPUs are supported. The hardware encoder is not used on Kepler or older GPUs due to their insufficient feature level.
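If in doubt about which GPU your instance exposes, a quick check (assuming the NVIDIA driver is already installed) is:
nvidia-smi --query-gpu=name --format=csv,noheader
For example, a GRID K520, common on g2 instances, is Kepler based and therefore not supported by the hardware encoder.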
fra81
Moderator
Hi Ventec,
of course, even on a slow network a 15-second delay would have been very difficult to explain, so something must be very wrong.
We didn’t receive your email. Did you send it already?
fra81
Moderator
Can you rule out a bandwidth problem exacerbated by the more graphics-intensive desktop environment of CentOS 7? In that scenario, a test with a lightweight desktop (Xfce) would help to clarify things.
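If you want to try that, a minimal sketch for CentOS 7 (assuming the EPEL repository is enabled) would be:
sudo yum groupinstall "Xfce"
and then select the Xfce session from the login screen.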
Another possibility is a problem with the video drivers, as in the other cases you mentioned, even though the CPU is not overloaded in your case. This could be verified by running a debug library. If you are willing to test this library, please contact us at forum[at]nomachine[dot]com. We will send you the library and instructions.
fra81
Moderator
Would you help us debug this issue by running a debug library? If so, please contact us at forum[at]nomachine[dot]com so we can send you the library and instructions.
fra81
Moderator
Hi!
Unfortunately we weren’t able to reproduce your issue in our labs with the same desktop environment. Are you sure that there are no NoMachine connections left at the moment of the black screen (e.g. you had connected from home and forgot to close the connection before going to work)?
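To double check, you can list the sessions currently open on that server from a terminal (assuming the default installation path):
sudo /usr/NX/bin/nxserver --list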
More info about the hardware, mainly the graphics card and drivers, could help us match your environment more closely.
fra81
Moderator
Hi Ventec,
what does the ‘top’ command report about the CPU usage on the CentOS 7 box?
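If it is easier, you can paste a one-shot snapshot taken in batch mode, for example:
top -b -n 1 | head -n 20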
You may want to try a lightweight desktop environment, e.g. Xfce. That should also make your virtual sessions (with the built-in X server) work.
fra81
Moderator
Hi Eric,
I’m happy that you found a way around the problem. Be aware that this case was already investigated in the past: we tested all the methods (at least 5) provided by X for getting the screen content, but none of them proved to be any faster than the method currently used by NoMachine.
That said, we are already working on a totally new method for screen capture that won’t require any interaction with the X server and that should solve this kind of slowness. It is currently in the testing phase and will be included in one of the next releases. It should be noted, anyway, that pulling data from the GPU shouldn’t be so slow, so this can’t be considered a NoMachine bug in any way.
fra81
Moderator
You could say that this X.org bug is triggered by NoMachine, but only in the sense that the applications you mention usually perform drawing operations to the X server (in which case data is sent to the video buffer) and very rarely ask for the screen content back (in which case data is read from the video buffer).
NoMachine already takes care to perform the minimum number of operations needed to get the screen content, and this has proven not to overload the X server significantly on the great majority of systems.
fra81
Moderator
I am not aware of any problem with Radeon or Nvidia cards (for example, we tested CentOS 6 with a Radeon HD 6570).
You may try installing the proprietary drivers from AMD (or removing them if you already did).
fra81
Moderator
I am sorry to say that this is an issue which needs fixing in Ubuntu (and the xorg version they are using) and is unfortunately out of our hands.
fra81
Moderator
It was a design choice that most of the session configuration happens at runtime, based on information provided by the server after authentication has taken place. Rather than putting everything in the hands of the user, who with v3 had to know a priori what to configure and what was (or wasn’t) available, it is now the server that tells the client what the user can and cannot do. So really, there is less training needed for new users of the latest version.
Specifically, all connections to a physical desktop happen in “shadow” mode, so that no choice by the user is necessary.
Also, all connections to a virtual desktop owned by a different user can only happen in shadow mode.
So I assume that your problem is with connecting to a virtual desktop owned by yourself. By default the session is “migrated” to the new client, while you probably want the previously connected client not to be disconnected. This can be achieved by changing the ‘automigrate’ option to ‘0’ in the ConnectPolicy key (see the sketch at the end of this reply). Hiding this complication from the user was a precise design choice. The main reason is that session sharing makes sense between different users: allowing users to connect to their own session in shadow mode would more or less mean “sharing the session with yourself”.
But in general, it should be easy for end-users to share sessions… ideally an invite/respond function, but even a method that you can use from within an already connected session, i.e. “show running sessions”.
A system based on invitations will be part of the new NoMachine Network Service (or NoMachine Anywhere, we are still debating which name is best), currently in its final stages of implementation.
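For reference, a sketch of the change mentioned above (the file, typically server.cfg, and the values of the other options depend on your installation):
ConnectPolicy autocreate=1,autoconnect=1,automigrate=0,desktop=0,dialog=0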
fra81
Moderator
In the past a delay was observed due to the initialization of the hardware decoder with specific graphics cards on Windows. So one thing to try is disabling hardware decoding by changing the following key to ‘false’:
<option key="Enable hardware accelerated decoding" value="true" />
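After the change the line should read:
<option key="Enable hardware accelerated decoding" value="false" />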
The configuration key is in the ‘C:/Users/<username>/.nx/player.cfg’ file. If you are using NoMachine 5, the same setting can be changed via the ‘Disable client side hardware decoding’ checkbox in the Display settings GUI.
Should the problem persist, we would need your logs for further investigations. You can gather them as explained in https://www.nomachine.com/DT07M00098 and send them to forum[at]nomachine[dot]com.
fra81
Moderator
Hello Eric,
this actually looks like a problem with your drivers. What NoMachine does is query the screen content a few times per second. This is not a heavy workload, and the X server normally handles it without any problem on the wide range of systems we have tested.
Can you provide more info on your graphics card and the drivers? Is it possible to upgrade the drivers?
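If it helps, the output of commands like these usually tells us what we need (assuming the pciutils and mesa-utils/glx-utils packages are installed):
lspci | grep -i vga
glxinfo | grep -i "opengl renderer\|opengl version"
grep -i driver /var/log/Xorg.0.log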
fra81
Moderator
Hi Sean,
are you connecting with the same user that owns the remote desktop? That option disallows connections from different users unless the desktop owner explicitly accepts them. But if you authenticated to the server using the same credentials as the desktop owner, no further confirmation is required.
November 2, 2015 at 10:24 in reply to: Can I connect two Linux computers both behind NAT firewalls? #8916
fra81
Moderator
So what you ideally need is the NoMachine Network service mentioned above. It will be perfect for users who either don’t know their IP or are not able to configure their router for whatever reason.
NoMachine Anywhere is in its final stages of development as part of the version 5 roadmap.