H.264 codec at high resolution
September 8, 2017 at 17:13 #15751 · antle (Participant)
Hello
I’m evaluating your Workstation version.
Basically, I have the same problem described in this topic https://www.nomachine.com/forums/topic/using-h-264-at-high-resolutions: I can’t use the H.264 codec at resolutions higher than 1920×1080. When I go over this resolution, the client falls back to VP8.
My configuration:
server: NoMachine 5.3.10 on Ubuntu 16.04.1 (hardware: board based on a Xeon E3-1505M v5).
client: NoMachine 5.2.11 on Windows 7 (hardware: ZBook 17 laptop with an i7-4600M and an NVIDIA Quadro K3100M video card).
What kind of logs do you need to examine the problem? Both sides, or the client side only?
A few other questions:
According to my tests, it seems that VP8 quality is lower than H.264 quality (all other things being equal); can you confirm that?
My remote desktop should fluidly show a 2000×1000 waterfall spectrogram refreshed at least every 100 ms (preferably every 30 ms). I made a small application based on these parameters, and it looks like NoMachine is close to its limit (H.264 at 10 Hz could be the only suitable option). Could I get significantly better performance with a dedicated video card? Do you have any advice?
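As a rough sanity check on those numbers, here is a back-of-envelope pixel-throughput comparison; this is illustrative arithmetic only, not a NoMachine measurement.

```python
# Back-of-envelope pixel throughput for the 2000x1000 spectrogram above;
# illustrative arithmetic only, not a NoMachine benchmark.
WIDTH, HEIGHT = 2000, 1000

for hz in (10, 30):  # the 100 ms and ~30 ms refresh targets
    print(f"{hz} Hz refresh -> {WIDTH * HEIGHT * hz / 1e6:.0f} Mpx/s")

# For comparison, a full 1080p desktop at 30 fps:
print(f"1080p30 -> {1920 * 1080 * 30 / 1e6:.0f} Mpx/s")
```

At 30 Hz the spectrogram alone is roughly the pixel rate of 1080p at 30 fps, so the 10 Hz target is considerably more forgiving for the encoder.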
Thank you
September 18, 2017 at 12:12 #15810 · antle (Participant)
Another question: is it possible to disable the 1920×1080 limit for H.264 hardware encoding?
Thanks
September 18, 2017 at 14:56 #15814 · Britgirl (Keymaster)
Try changing the key ‘Switch to software decoding on Windows if resolution is bigger than 1920×1080’ to false in the player.cfg file.
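For reference, the corresponding entry in player.cfg might look something like the sketch below. NoMachine configuration files use option/key lines, but treat the exact syntax here as an assumption and check your own player.cfg for the key quoted above.

```
<option key="Switch to software decoding on Windows if resolution is bigger than 1920x1080" value="false" />
```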
September 19, 2017 at 08:16 #15821 · antle (Participant)
Thank you, it works.
Another question about the use of the H.264 codec:
I notice that when I use the Xeon-based board (and its integrated GPU) as the server, there is always an nxcodec.bin process running (even at resolutions <= 1920×1080). This never happens when I use the ZBook laptop as the server. It looks like there is no H.264 hardware acceleration available on the Xeon E3-1505M. Can anyone confirm or deny this?
Or do I have to enable hardware acceleration when compiling libx264?
Thanks again
September 20, 2017 at 15:33 #15828 · antle (Participant)
Let me put it another way: is there a way to see when NoMachine is using hardware-accelerated H.264 encoding and when it is using software H.264 encoding?
Thanks
September 21, 2017 at 08:44 #15833 · antle (Participant)
I think I’ve figured it out…
At the moment, NoMachine is not able to exploit the H.264 hardware accelerator provided by the Xeon E3’s GPU, since the latter offers a Quick Sync hardware accelerator that is not currently supported (as described in https://www.nomachine.com/FR03M02905). Only the libx264 software library is available.
Am I right?
September 21, 2017 at 09:00 #15836 · Britgirl (Keymaster)
NoMachine supports H.264 hardware encoding provided by graphics cards with the NVIDIA Kepler microarchitecture onward, and will add support for Intel Quick Sync Video: https://www.nomachine.com/FR03M02905
Have you read the documentation here? https://www.nomachine.com/AR10K00706
To go back to an earlier question you asked: you can see which codec is being used during the session by opening the menu (Ctrl-Alt-0), going to Display -> Change Settings, and looking at the bottom of the panel.
You can also see what’s being used in your session by looking in the C-*/session logs. There should be a line such as: ‘Info: Using H.264 software encoder’ (or decoder, etc.).
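If you want to check several sessions at once, a minimal sketch along these lines could scan the logs for that message. The log path below is an assumption; adjust it to wherever your C-* session directories actually live.

```python
# Minimal sketch: scan NoMachine session logs for the encoder/decoder line.
# The glob pattern is an assumption about the log location; adjust as needed.
import glob

for path in glob.glob("/usr/NX/var/log/node/C-*/session"):
    with open(path, errors="ignore") as log:
        for line in log:
            if "Using H.264" in line:
                print(f"{path}: {line.strip()}")
```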
September 22, 2017 at 08:02 #15842 · antle (Participant)
Hello, and thank you for your time.
Yes, I read the documentation and installed the libraries properly. I can see that NoMachine is using H.264 encoding (by using Ctrl-Alt-0 on the client and by checking the session logs as well). The H.264 encoding is available even at resolutions greater than 1920×1080 because, as you suggested, I set the ‘Switch to software decoding…’ option to false.
What I meant to say is: since I’m using as the NoMachine server a board based on a Xeon E3, with no GPU other than the HD 530 integrated in the Xeon, NoMachine can exploit only H.264 software encoding. To make hardware-accelerated H.264 encoding available to the NoMachine server, I have to add to my system an NVIDIA video card of at least the Kepler architecture.
My last question.
It happens that my ZBook is based on an NVIDIA Quadro of the Kepler family, so I was able to run some tests. Kepler H.264 encoding is fast enough for my needs, but the rendering is not perfect (because of the well-known artifacts problem). Since in my target system I can put a GTX 1060 video card (Pascal microarchitecture, two generations ahead), my last question is: do you know if the NVIDIA Pascal microarchitecture’s H.264 encoder performs better with regard to the artifacts problem?
Thanks again
September 25, 2017 at 07:37 #15857 · antle (Participant)
I’m afraid I wrote something totally wrong about the ‘Switch to software decoding…’ option in the first part of my last post.
I meant that when I set ‘Switch to software decoding…’ = false on my notebook with the NVIDIA video card, I can see that NoMachine exploits H.264 hardware acceleration even at resolutions higher than 1920×1080.
I’m sorry, but I was so worried about writing correct English that I didn’t pay attention to what I was writing.
September 25, 2017 at 10:10 #15875 · fra81 (Moderator)
Hello,
you may want to take a look at the recently updated:
https://www.nomachine.com/FR03M02905
Hardware encoding with Quick Sync is now supported out of the box on Windows, while on Linux further configuration is needed, as explained in the referenced article.
I’m not sure I understand what you mean by “artifacts problem”. Are you referring to some known problem of the NVIDIA encoder, or a problem with NoMachine? A screenshot of those artifacts could help clear things up.
September 26, 2017 at 11:46 #15884 · antle (Participant)
Hello
Thank you, I already knew about it because I signed up for updates on FR03M02905. I hope today I’ll have enough time to run some tests and collect some data/images to show you.
By the way, let me elaborate a bit more on the artifacts problem:
1) When I use H.264 hardware acceleration on the server machine (with the NVIDIA Quadro K3100M video card), if there are multiple windows open on the desktop, a little bit of the background windows sporadically appears over the foreground window (for example, if the background window has tabs, one of the tabs sporadically appears over the foreground window). I can eliminate this by closing all the background windows.
2) On the NoMachine client, I see that my test application encoded with the H.264 hardware encoder (NVIDIA Quadro K3100M video card) has perceptibly worse quality than with the H.264 software encoder (libx264). Basically, the waterfall seems more fine-grained when the NoMachine server uses the software encoder (but I can’t use the software encoder because it’s not fast enough for my needs). Do you know if Pascal-series NVIDIA video cards have a better H.264 encoder?
September 26, 2017 at 18:01 #15886 · fra81 (Moderator)
1) Some screenshots, or even better a video, would be great.
2) Yes, it is definitely possible that the achieved quality is not exactly the same with the hardware encoder, but at the moment I’m not able to say how much a Pascal card would improve things. Maybe you can try to compensate by increasing the display quality in the Display settings panel?
October 2, 2017 at 09:00 #15916 · antle (Participant)
Hello.
Sorry, I had to switch to another activity and had no time to run tests. I’ll work on it this week.
Actually, I set the display quality to the maximum value in both cases via the NoMachine display settings window. Is there any other way to modify the quality in addition to the display settings window?
This week I should receive a GTX 1060 video card, so I should be able to run some tests with the Pascal microarchitecture as well.
Bye
October 4, 2017 at 17:58 #15935 · fra81 (Moderator)
“Is there any other way to modify the quality in addition to the display settings window?”
No other way, besides checking ‘Disable multi-pass display encoding’ to disable “progressive encoding”. But this will consume much more bandwidth.
This topic was marked as solved.