CyberLink Community Forum
What % GPU use for hardware encoding?
cdb
Newbie Joined: Aug 05, 2011 16:29 Messages: 19
I bought PD14 Ultra last night and fitted a second GTX460 in SLI in anticipation of doing some 4K editing.

I tried hardware encoding a file today, and it showed only about 20% GPU usage on one of the GPUs, 0% on the other, and 23% CPU usage.

Does anyone know what sort of %s are normal?
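If anyone wants to log the numbers rather than eyeball a monitoring tool, here is a rough Python sketch that polls per-GPU load during an encode. It assumes nvidia-smi (installed with the NVIDIA driver) is on the PATH; some GeForce cards report [Not Supported] for these counters, so treat it as a sketch:

[code]
# Poll per-GPU utilization once a second while an encode runs.
# Assumes nvidia-smi is on the PATH; the sampling interval is arbitrary.
import subprocess
import time

def gpu_utilization():
    """Return one utilization reading per installed GPU.

    Values are kept as strings so cards that report [Not Supported]
    don't crash the loop.
    """
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return [line.strip() for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    for _ in range(60):           # sample for one minute
        print(gpu_utilization())  # e.g. ['20', '0'] for two cards
        time.sleep(1)
[/code]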

JL_JL
Senior Contributor Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 6091
Quote: I bought PD14 Ultra last night and fitted a second GTX460 in SLI in anticipation of doing some 4K editing.

I tried hardware encoding a file today, and it showed only about 20% GPU usage on one of the GPUs, 0% on the other, and 23% CPU usage.

Does anyone know what sort of %s are normal?


From what I know, PD does not support SLI, so 0% on the second GPU would appear normal based on my experience. Single-GPU load for hardware encoding is rather dependent on the particular encoding format and box.

Jeff
cdb
Newbie Joined: Aug 05, 2011 16:29 Messages: 19
Thank you.

I might try taking the second card out then, although it has helped with Sony's PMB on the editing side, but I don't need to use that anymore.

"encoding and box" what do you mean by box?
JL_JL
Senior Contributor Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 6091
Quote: Thank you.

I might try taking the second card out then, although it has helped with Sony's PMB on the editing side, but I don't need to use that anymore.

"encoding and box" what do you mean by box?


Sorry, "box" = PC; the most significant side contributor is the CPU.

Jeff
cdb
Newbie Joined: Aug 05, 2011 16:29 Messages: 19
Ah, OK. I've got an i7 860 running at 3.8 GHz. Whilst it's an old CPU now, it's still plenty fast enough, although I am looking at upgrading soon.

kingsmeadow
Senior Member Location: Cambridge, UK Joined: Dec 06, 2011 11:52 Messages: 179
Quote:
Quote: I bought PD14 Ultra last night and fitted a second GTX460 in SLI in anticipation of doing some 4K editing.

I tried hardware encoding a file today, and it showed only about 20% GPU usage on one of the GPUs, 0% on the other, and 23% CPU usage.

Does anyone know what sort of %s are normal?


From what I know, PD does not support SLI, so 0% on the second GPU would appear normal based on my experience. Single-GPU load for hardware encoding is rather dependent on the particular encoding format and box.

Jeff




I have two 480s, but they don't show up as being available for any HA. I did have them connected in SLI, but I disconnected them and they still don't show. Any suggestions?

Intel Core i7 3770K 3.6 GHz, GTX 680, 2 x BenQ23 3D monitors,
6 GB DDR3, Win 7 64, Win 10 (Insider) 64,
PCIe SSD, Intel SATA SSD, 2 x 500 GByte Seagate,
Minoru 3D WebCam, NVIDIA 3D Vision-Ready
cdb
Newbie Joined: Aug 05, 2011 16:29 Messages: 19
What driver are you using? I saw somewhere that you might have to roll back to around 337, so I did.
JL_JL
Senior Contributor Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 6091
Quote: I have two 480s, but they don't show up as being available for any HA. I did have them connected in SLI, but I disconnected them and they still don't show. Any suggestions?

I assume that by "don't show up as being available" you mean HA for encoding of supported profiles? Since the GTX480 is older Fermi technology, i.e. pre-Kepler, are you compliant with the CL specs (http://www.cyberlink.com/products/powerdirector-ultimate-suite/spec_en_US.html), namely:

PLEASE NOTE: For users of NVIDIA cards using pre-Kepler architecture who have updated to graphics driver 340.43 or later, the CUDA hardware video encoder feature in PowerDirector is no longer available. To re-enable hardware acceleration, please download and install an earlier driver.

I have not tested my GTX580 recently, which is similar to your GTX480, but HA encoding was functional with the older Nvidia drivers, as stated above by CL.
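If it helps anyone check where they stand, here is a small sketch of my own (Python, assuming nvidia-smi is on the PATH) that reads the installed driver version and compares it against that 340.43 cutoff:

[code]
# Check whether the installed NVIDIA driver predates 340.43, the cutoff
# CyberLink cites for the CUDA encoder on pre-Kepler cards.
import subprocess

CUTOFF = (340, 43)

def driver_version():
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        text=True,
    )
    # e.g. "337.88" -> (337, 88); take the first GPU's line
    major, minor = out.strip().splitlines()[0].split(".")[:2]
    return int(major), int(minor)

ver = driver_version()
if ver < CUTOFF:
    print(f"Driver {ver[0]}.{ver[1]}: pre-Kepler CUDA encoding should still work")
else:
    print(f"Driver {ver[0]}.{ver[1]}: too new; roll back to 337.88 or earlier")
[/code]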

Jeff
kingsmeadow
Senior Member Location: Cambridge, UK Joined: Dec 06, 2011 11:52 Messages: 179
Quote: What driver are you using? I saw somewhere that you might have to roll back to around 337, so I did.

I went back to an earlier driver; however, that didn't work, so I went back to the one you suggested and that works!

Thanks for the tip.
JL_JL
Senior Contributor Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 6091
Quote:
Quote: What driver are you using? I saw somewhere that you might have to roll back to around 337, so I did.

I went back to an earlier driver; however, that didn't work, so I went back to the one you suggested and that works!

Thanks for the tip.


The version prior to 340.43 is 337.88. To my knowledge this is the latest driver that will work. What earlier driver did you try that didn't work?

Jeff
kingsmeadow
Senior Member Location: Cambridge, UK Joined: Dec 06, 2011 11:52 Messages: 179
Quote:
Quote:
Quote: What driver are you using? I saw somewhere that you might have to roll back to around 337, so I did.

I went back to an earlier driver; however, that didn't work, so I went back to the one you suggested and that works!

Thanks for the tip.

The version prior to 340.43 is 337.88. To my knowledge this is the latest driver that will work. What earlier driver did you try that didn't work?

Jeff




Sorry, I didn't make it clear. The earlier driver was 347, which someone in another thread said worked; that was before I found this thread. As a follow-up, I can also add that Windows 10 doesn't like the old version and immediately tries to update to the latest. I have to figure out how to stop that. At the moment I have cloned the drive onto another disk and put a stop on all upgrades; not sure even then that it will be enough. Time will tell.
JL_JL
Senior Contributor Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 6091
I had used this successfully, http://www.intowindows.com/enable-or-disable-automatic-driver-updates-on-windows-10/
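For reference, the toggle that article reaches through the Device installation settings dialog is, as far as I know, backed by a registry value. A hedged sketch (SearchOrderConfig is the value I have seen documented for Win 7-10; run it from an elevated prompt, back up the registry first, and Insider builds may differ):

[code]
# Tell Windows not to fetch drivers from Windows Update (0 = off, 1 = default).
# Windows-only; needs Administrator rights. Key/value names are my reading of
# the documented "Device installation settings" backing store; verify on your
# own build before relying on this.
import winreg

KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\DriverSearching"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                    winreg.KEY_SET_VALUE) as k:
    winreg.SetValueEx(k, "SearchOrderConfig", 0, winreg.REG_DWORD, 0)
print("Automatic driver downloads disabled; set the value back to 1 to undo.")
[/code]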

Jeff
kingsmeadow
Senior Member Location: Cambridge, UK Joined: Dec 06, 2011 11:52 Messages: 179

Interesting, I may give it a try. However, my Device installation settings are different: I have Yes, but my No option only has "(your device might not work as expected)". It might be because I am on the Insider programme and I get updates all the time.
kingsmeadow
Senior Member Location: Cambridge, UK Joined: Dec 06, 2011 11:52 Messages: 179
Quote: Interesting, I may give it a try. However, my Device installation settings are different: I have Yes, but my No option only has "(your device might not work as expected)". It might be because I am on the Insider programme and I get updates all the time.
Further info... I also have another PC with a GTX 680 and Win 10 with the latest Nvidia drivers, and it works perfectly with HA!

I then went back to my PC with the 480s and installed GPU-Z, and, much to my surprise, the HA is actually working. The check marks are not there in PD, but if I turn HA on and off, it responds accordingly. A bit of a puzzle. I compared timings of my 680 against the 480 with HA enabled and there is only a few minutes' difference between them, so I am convinced it is working. The puzzle remains why no check box is available.
JL_JL
Senior Contributor Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 6091
Quote: Further info... I also have another PC with a GTX 680 and Win 10 with the latest Nvidia drivers, and it works perfectly with HA!

I think this makes sense: the GTX680 is a Kepler-based GPU, and they work fine. I have a GTX650, also a Kepler-based GPU, and it has no problems either. I believe this issue with PD is only for pre-Kepler cards.

Quote: I then went back to my PC with the 480s and installed GPU-Z, and, much to my surprise, the HA is actually working. The check marks are not there in PD, but if I turn HA on and off, it responds accordingly. A bit of a puzzle. I compared timings of my 680 against the 480 with HA enabled and there is only a few minutes' difference between them, so I am convinced it is working. The puzzle remains why no check box is available.

That is odd to me. Maybe I'll try the GTX580 and see if it mimics your experience. Nvidia drivers past 337.88 removed the CUDA-based encoding, which is why these pre-Kepler cards lack HA. In my view there were also changes in PD14 relative to PD13 concerning this. A bit of a puzzle to unravel based on testing, as PD documentation of supported GPUs is very lacking.

If the check marks are not there, how are you turning HA on and off within PD? Can you provide a % difference between the two cards encoding the same source, to get a better feel for what you are experiencing? A few minutes out of a 4-minute encode is very different; a few minutes out of a 30-minute encode, not so different. I used to have a GTX470 and it was nothing close to the GTX650 in performance. Make sure in Pref > HA both items are unchecked, so one's not fooled by the GPU doing other activity that's still supported.
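To be concrete about the % difference I mean, something like this (the sample numbers are made up):

[code]
# Express the HA encode time as a percentage improvement over the CPU time.
def percent_faster(cpu_seconds: float, gpu_seconds: float) -> float:
    return (cpu_seconds - gpu_seconds) / cpu_seconds * 100.0

# Hypothetical example: a 30-minute CPU encode that drops to 27 minutes
# with HA is only a 10% gain; 4 minutes down to 2 would be a 50% gain.
print(f"{percent_faster(30 * 60, 27 * 60):.0f}%")  # -> 10%
print(f"{percent_faster(4 * 60, 2 * 60):.0f}%")    # -> 50%
[/code]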

Jeff
kingsmeadow
Senior Member Location: Cambridge, UK Joined: Dec 06, 2011 11:52 Messages: 179
Quote: That is odd to me. Maybe I'll try the GTX580 and see if it mimics your experience. Nvidia drivers past 337.88 removed the CUDA-based encoding, which is why these pre-Kepler cards lack HA. In my view there were also changes in PD14 relative to PD13 concerning this. A bit of a puzzle to unravel based on testing, as PD documentation of supported GPUs is very lacking.

If the check marks are not there, how are you turning HA on and off within PD? Can you provide a % difference between the two cards encoding the same source, to get a better feel for what you are experiencing? A few minutes out of a 4-minute encode is very different; a few minutes out of a 30-minute encode, not so different. I used to have a GTX470 and it was nothing close to the GTX650 in performance. Make sure in Pref > HA both items are unchecked, so one's not fooled by the GPU doing other activity that's still supported.

Jeff




I didn't make it very clear, sorry; I am making many assumptions...



I created a very short 3-minute video for testing purposes. I rendered an MP4 without HA checked under Prefs and it took 43 seconds to complete. I rendered the same video with HA checked under Prefs and it took 35 seconds to complete; my GPU-Z also showed about 10% GPU load with HA checked and no load with HA unchecked. I am convinced the HA is working. And it only works on one GPU, but that has always been the case; whether the newer Nvidia cards now cope, I do not know.
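Working that out as the % difference asked for above: (43 - 35) / 43 ≈ 19% faster with HA on this short clip.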
JL_JL
Senior Contributor Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 6091
Quote: I rendered an MP4 without HA checked under Prefs and it took 43 seconds to complete.

Oh, yes, that I understand. As I said, don't be fooled: the Pref HA options do not control HA encoding. They can affect it, but they do not control it, and as I said, that activity is still supported. HA encoding is only controlled on the "Produce" page, or in the "Create Disc" module when doing that function. There is a post that clearly discusses the difference; I will search for it.

Jeff
kingsmeadow
Senior Member Location: Cambridge, UK Joined: Dec 06, 2011 11:52 Messages: 179
Quote:
Quote: I rendered an MP4 without HA checked under Prefs and it took 43 seconds to complete.

Oh, yes, that I understand. As I said, don't be fooled: the Pref HA options do not control HA encoding. They can affect it, but they do not control it, and as I said, that activity is still supported. HA encoding is only controlled on the "Produce" page, or in the "Create Disc" module when doing that function. There is a post that clearly discusses the difference; I will search for it.

Jeff




I did another test with the 337 drivers and I am now scratching my head! The checkbox shows up under Produce, so that's OK. I ran the same 3-minute video with both HA in Prefs and the acceleration boxes in Produce checked: it took 35 seconds. I ran the same test with the boxes in Produce unchecked and it took 35 secs!! I then unchecked HA under Prefs, with the Produce boxes also unchecked, and it took 38 secs. There is a relationship between the Prefs and the Produce boxes that I clearly do not understand.
kingsmeadow
Senior Member Location: Cambridge, UK Joined: Dec 06, 2011 11:52 Messages: 179
I decided I would do the same test on my PC with the 680 card.

I used the same 3-minute video, selected HA in Prefs, and checked the boxes in Produce: it took 9 secs. I unchecked only the boxes in Produce and it took 9 secs!

I then unchecked HA in Prefs as well as the boxes in Produce and it took 17 secs.

I then left HA in Prefs unchecked and checked the boxes in Produce, and it took 9 secs!!!

I am now really confused about the settings options....
JL_JL
Senior Contributor Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 6091
Quote: I decided I would do the same test on my PC with the 680 card.

I used the same 3-minute video, selected HA in Prefs, and checked the boxes in Produce: it took 9 secs. I unchecked only the boxes in Produce and it took 9 secs!

I then unchecked HA in Prefs as well as the boxes in Produce and it took 17 secs.

I then left HA in Prefs unchecked and checked the boxes in Produce, and it took 9 secs!!!

I am now really confused about the settings options....


A very long response, but if you read through and mimic the suggested tests, you should understand the options that are currently not making much sense to you.

CL provided the details of the Pref > HA options and the Produce "Fast video rendering technology" option in this FAQ: http://www.cyberlink.com/support/product-faq-conte...&CategoryId=-1&keyword=effects

Basically:
Preference > Hardware acceleration > Enable OpenCL technology: This option determines what technology is used for the specialized effects (the ones with the GPU logo in the corner) when they are applied to the timeline. When activated, the GPU will be used during preview and render. So if "Enable hardware encoding" in "Produce" is selected as well as this option, any of these specialized effects used in the timeline will get the assistance of the GPU, and the video will be hardware encoded by the GPU. If this option is unselected but "Enable hardware encoding" is selected, the video will be hardware encoded by the GPU, but the CPU will do the effects task.
Likewise when previewing a video in the timeline that has one of these special effects applied: if this option is selected, the GPU will assist in the render to the playback window; if not selected, the CPU will do the effect task.

Preference > Hardware acceleration > Enable hardware decoding: Pretty self-explanatory; it only affects decoding and has nothing to do with encoding, so it does not affect the ability to "Enable hardware encoding" in "Produce".

Produce > Fast video rendering technology > Enable hardware video encoder: Again, pretty self-explanatory; if your GPU supports hardware encoding, you can enable this feature. When enabled, the encoding is done by the GPU; when unselected, encoding is done by the CPU. Whether that is faster or slower, or changes output quality, really depends on the hardware involved and the timeline contents.

From the above, Pref > HA > hardware decoding has nothing to do with encoding, so I will skip that feature in the demonstration below. Yes, for some timeline content it can affect encoding elapsed times, but it does not control encoding; it controls decoding.
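To condense my reading of the FAQ into one place, the division of labor works out roughly like this (purely illustrative; PD's actual logic is not public):

[code]
# Which unit handles what, given the three checkboxes discussed above.
def division_of_labor(opencl: bool, hw_decode: bool, hw_encode: bool) -> dict:
    return {
        "accelerated effects": "GPU" if opencl else "CPU",
        "source decoding":     "GPU" if hw_decode else "CPU",
        "final encoding":      "GPU" if hw_encode else "CPU",
    }

# kingsmeadow's GTX680 runs: with only the Produce box checked the encode
# still took 9 seconds, which matches "final encoding" staying on the GPU.
print(division_of_labor(opencl=False, hw_decode=False, hw_encode=True))
[/code]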

The following tests may help you understand whether HA encoding is working successfully on your GTX680 and what PD is actually doing. Disclaimer: the timeline used below is a test timeline; it does not reflect an actual user timeline, nor the performance one may get with a real timeline. Its purpose is purely educational.

For this series of tests I am using a GTX650 and Nvidia driver 347.88 with PD14 v2019. I'm assuming you have the Intel GPU deactivated on your i7-3770k and are using the GTX680 for HA encoding. Use of other drivers or other versions of PD14 will yield differing results. I've noticed that many recent Nvidia drivers and OpenCL (active in Pref > HA > OpenCL) have dire consequences in PD14 when an accelerated effect is applied.

TestA CPU encoding:
1) Place 5 copies of the default video “Kite Surfing.wmv” in the timeline
2) Pref > HA both options unchecked
3) Produce, set H.264, 1920x1080/60i 24Mbps, M2TS container
4) Produce, make sure "Fast video rendering technology" is unchecked
5) Produce, record elapsed time as TimeA (For your i7-3770k this is probably about 40 seconds)

TestB HA GPU encoding:
1) Just use "Previous" to go back to Produce functionality
2) Produce, make sure "Fast video rendering technology" with the "Hardware video encoder" is checked
3) Produce, record elapsed time as TimeB (For my GTX650 this was about 20 seconds; your GTX680 should be significantly faster)

For your i7-3770k and a GTX680, TimeB should be significantly faster than TimeA. You can play with the Pref > HA settings, but they will have no significant effect on the encode times of TestA or TestB: we have nothing in the timeline for Pref > HA > OpenCL to aid, and in this case, with a wmv file in the timeline, nothing to hardware decode.

TestC CPU encoding with an fx applied:
1) Go back to the basic timeline
2) Add the effect Bloom to every clip. (I picked Bloom for a very specific reason; this is an educational timeline!)
3) Produce, verify H.264, 1920x1080/60i 24Mbps, M2TS container is still selected
4) Produce, make sure "Fast video rendering technology" is unchecked
5) Produce, record elapsed time as TimeC (For your i7-3770k this is probably about 190 seconds)

TestD HA GPU encoding with a fx applied:
1) Just use "Previous" to go back to Produce functionality
2) Produce, make sure "Fast video rendering technology" with the "Hardware video encoder" is checked
3) Produce, record elapsed time as TimeD (For my GTX650 this was about 170 seconds, your GTX680 should be significantly faster)

For your i7-3770k and a GTX680, TimeC and TimeD are probably rather close. If monitoring with GPU-Z, you would have seen a little activity under Video Engine Load; basically you are CPU governed, with CPU load probably near 100%. TestD was not significantly faster, the way TestB was over TestA, because the Bloom effect requires the CPU to render it. It loads the CPU extensively, so it dominates encoding times. The GPU in TestD is doing the encoding, but all the prep work for the special effect applied to every clip is being done by the CPU.
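Putting rough numbers on that (TimeA/TimeC are my estimates for your i7-3770k, TimeB/TimeD are my GTX650 times; treat them as ballpark):

[code]
# Ballpark elapsed times (seconds) from TestA-TestD above.
times = {
    "TestA (CPU, no fx)": 40,
    "TestB (GPU, no fx)": 20,
    "TestC (CPU, Bloom)": 190,
    "TestD (GPU, Bloom)": 170,
}
no_fx = times["TestA (CPU, no fx)"] / times["TestB (GPU, no fx)"]
with_fx = times["TestC (CPU, Bloom)"] / times["TestD (GPU, Bloom)"]
print(f"HA speedup, no effect: {no_fx:.1f}x")            # 2.0x
print(f"HA speedup, CPU-bound effect: {with_fx:.2f}x")   # ~1.12x: CPU governed
[/code]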

TestE CPU encoding with an fx applied, utilizing OpenCL functionality with Pref > HA > OpenCL checked:
1) Just use "Previous" to go back to Produce functionality
2) Pref > HA make sure OpenCL is checked
3) Produce, make sure "Fast video rendering technology" is unchecked
4) Produce, record elapsed time as TimeE

TestF GPU encoding with an fx applied, utilizing OpenCL functionality with Pref > HA > OpenCL checked:
1) Just use "Previous" to go back to Produce functionality
2) Produce, make sure "Fast video rendering technology" with the "Hardware video encoder" is checked
3) Produce, record elapsed time as TimeF

For your i7-3770k and a GTX680, TimeE and TimeF are probably somewhat close; however, both of these times are significantly faster than TestC and TestD, which used the identical timeline. This occurs because the accelerated Bloom effect utilizes OpenCL (we activated that setting in Pref) to use the GPU cores to create the specialized effect applied to the timeline. For your particular box, the GTX680 is substantially faster than your CPU for encoding this particular educational timeline. If monitoring TestE or TestF with GPU-Z, you should have seen a significant load on the GPU Load sensor, because this shows the OpenCL usage from the GPU accelerating the effect.

Jeff