CyberLink Community Forum
where the experts meet
Here we go again... no option to enable NVidia high performance graphic card with Power Director 16
Reply to this topic
GGRussell [Avatar]
Senior Contributor Private Message Joined: Jan 08, 2012 11:38 Messages: 708 Offline
[Post New]
We shouldn't have to go through all this and I complained about this issue several years ago so Cyberlink should be aware. They need to fix their software. This can also be an issue on desktops, but most desktop mobos have a way to only use the dGPU. No clue why most laptops can't. Intel i7 4770k, 16GB, GTX1060 3GB, Two 240GB SSD, 4TB HD, Sony HDR-TD20V 3D camcorder, Sony SLT-A65VK for still images, Windows 10 Pro, 64bit
Gary Russell -- TN USA
Reply
Alevtina8606 [Avatar]
Newbie Private Message Joined: Oct 17, 2017 11:00 Messages: 12 Offline
[Post New]
Quote We shouldn't have to go through all this and I complained about this issue several years ago so Cyberlink should be aware. They need to fix their software. This can also be an issue on desktops, but most desktop mobos have a way to only use the dGPU. No clue why most laptops can't.




Hi guys,



Just a quick comment for now, as I'm quite busy at the moment.



It seems that PD uses the OpenCL interface to the hardware when available. You still need to trick it with an NVIDIA profile to enable Optimus.

Once enabled, you typically have several OpenCL devices available:

-CPU device

-Intel Integrated graphics

-NVidia high performance card



Every device connected through the OpenCL interface is referred to as an "OpenCL platform".

On top of that, there is a generic OpenCL interface (as I understand it) that connects to the hardware-specific platforms.

Each hardware platform is responsible for providing compute resources in a hardware-specific manner,

while the generic OpenCL interface controls them all by means of the unified OpenCL language.

It also provides control over the hardware OpenCL devices, which in turn report back their utilization and GFLOPS capabilities.

With this architecture, the generic OpenCL layer shares tasks between the hardware-specific OpenCL platforms, which in turn assign their part of the job to their own resources (cores and memory).

It also provides load balancing and profiling.
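
To make the task-sharing idea concrete, here is a toy Python sketch of a scheduler that splits work across platforms in proportion to their reported GFLOPS. The device names and numbers are made up for illustration; real code would query them through the OpenCL API, this just shows the principle:

```python
# Toy model of a generic OpenCL-style scheduler: each "platform" reports
# its compute capability, and the scheduler assigns a share of the total
# work proportional to that capability.

def split_work(total_units, platforms):
    """Return {platform_name: units} with work split by reported GFLOPS."""
    total_gflops = sum(platforms.values())
    shares = {}
    assigned = 0
    for name, gflops in platforms.items():
        units = total_units * gflops // total_gflops  # integer share
        shares[name] = units
        assigned += units
    # Hand any rounding remainder to the fastest device.
    fastest = max(platforms, key=platforms.get)
    shares[fastest] += total_units - assigned
    return shares

# Hypothetical capabilities for the three devices listed above.
devices = {"CPU": 500, "Intel HD": 400, "NVIDIA GTX": 2100}
print(split_work(1000, devices))
# -> {'CPU': 166, 'Intel HD': 133, 'NVIDIA GTX': 701}
```

The real scheduler would also re-balance based on the utilization the devices report back, but the proportional split is the core idea.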



That said, IMHO, TrueVelocity is just another name for that.

There is little Cyberlink can do about how OpenCL performs.



Note: this is not necessarily the best option. For example, manually disabling the software (CPU) OpenCL platform on my system makes it about 15% more productive, replacing the execution model with threaded C++ code on the main processor instead of OpenCL, while the two other platforms remain OpenCL-controlled.



With OpenCL enabled on all three devices, none of them will peak at 100%. For example, the CPU may stay at only 65-70%.

With the OpenCL platform disabled on the CPU, the CPU code peaks at 99-100%, and the other OpenCL devices are also more heavily loaded (by 15-20%) compared to the purely OpenCL configuration.

Finally, when you use OpenCL, real-time performance is poor anyway. Just imagine how many layers of abstraction, task splitting, control, balancing, asynchronous computing and profiling this baby adds. Hence, if you check the box, you may notice your preview is not continuous, degrades, or stops suddenly.

When you disable OpenCL, you benefit from Intel's real-time synchronous processing (especially with Intel-optimized effects) and smooth previews, but at the cost of high render times...

That is why I encourage you to sign the petition to CyberLink:



ADD SEPARATE OPENCL CHOICE FOR PREVIEW AND RENDER

So far CyberLink ignores this... and I have to repeatedly check and uncheck the box, switching between the modes.



My results:




  • CPU at 99%

  • Intel HD at 55%

  • NVidia card at 45%


That's when rendering from a 4K source to an FHD target with full color, a full effect set (Intel-optimized), some enhancing (levels), and packing it into an MKV container with the H.265 codec at high profile and quality, at 11 Mbps.
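
For scale, an 11 Mbps target fixes the output size almost independently of resolution. A quick back-of-envelope sketch (ignoring audio and container overhead):

```python
# Estimated output size for a constant-bitrate encode:
# size (bytes) ~ bitrate (bits/s) * duration (s) / 8.
def estimated_size_mb(bitrate_mbps, minutes):
    bits = bitrate_mbps * 1_000_000 * minutes * 60
    return bits / 8 / 1_000_000  # decimal megabytes

print(estimated_size_mb(11, 1))   # 82.5 -> about 82.5 MB per minute of video
print(estimated_size_mb(11, 30))  # 2475.0 -> roughly 2.5 GB for a 30-minute video
```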

So yes.. you can do that, if you want it badly.



P.S.: Build 2313 seems to be some 10% slower than 2258; I still have to check on that.



VOTE!!!!
Reply
Legacystar9 [Avatar]
Newbie Private Message Joined: Dec 04, 2017 14:00 Messages: 6 Offline
[Post New]
Quote It seems that PD uses the OpenCL interface to the hardware when available. You still need to trick it with an NVIDIA profile to enable Optimus. [...] ADD SEPARATE OPENCL CHOICE FOR PREVIEW AND RENDER [...] VOTE!!!!




where is the petition?



CUDA seems like a way better option if we can get them to properly address Optimus. I like PD16 a lot, but its performance against Premiere is lackluster during editing, scrubbing, and effects. I bought a $2000 laptop to have a really good mobile editing experience, and at this point the software is the bottleneck.
Reply
Alevtina8606 [Avatar]
Newbie Private Message Joined: Oct 17, 2017 11:00 Messages: 12 Offline
[Post New]
This thread itself is The Petition! )))



Just add a post: "I am with you!"



I feel your pain... though CUDA support will gradually be replaced with OpenCL support over time. In fact, yes, OpenCL uses the CUDA interface to work with NVIDIA hardware, so indeed it might be faster. But not obviously so: when I enable CUDA and use NVIDIA-optimized effects, my performance is 10-15% lower compared to OpenCL (two hardware platforms, Intel + NVIDIA) plus CPU C++ code.



Vote for hope!
Reply
Legacystar9 [Avatar]
Newbie Private Message Joined: Dec 04, 2017 14:00 Messages: 6 Offline
[Post New]
Quote This thread itself is The Petition! )))



Just add a post: "I am with you!"



I feel your pain... though CUDA support will gradually be replaced with OpenCL support over time. In fact, yes, OpenCL uses the CUDA interface to work with NVIDIA hardware, so indeed it might be faster. But not obviously so: when I enable CUDA and use NVIDIA-optimized effects, my performance is 10-15% lower compared to OpenCL (two hardware platforms, Intel + NVIDIA) plus CPU C++ code.



Vote for hope!




OpenCL has to work better on discrete GPUs than integrated ones, right? In my case, not even OpenCL is using my NVIDIA card on my Surface Book; it's always using GPU-0, which is the Intel HD 620. Premiere uses the NVIDIA card by default, and it's buttery smooth, but I don't want to pay $20 a month as a hobbyist.
Reply
GGRussell [Avatar]
Senior Contributor Private Message Joined: Jan 08, 2012 11:38 Messages: 708 Offline
[Post New]
I think this discussion is getting away from the original post -- which is more about MSHybrid mode and NVIDIA Optimus. I don't think that OpenCL has anything to do with that.

OpenCL may work well for games, but I can't see any advantage to it over custom code for each GPU technology. As pointed out in the previous post, OpenCL just adds layers of additional processing, slowing video rendering. CyberLink should go back to true NVENC/CUDA support direct to the GPU card. I've already started looking for an editor that does. Intel i7 4770k, 16GB, GTX1060 3GB, Two 240GB SSD, 4TB HD, Sony HDR-TD20V 3D camcorder, Sony SLT-A65VK for still images, Windows 10 Pro, 64bit
Gary Russell -- TN USA
Reply
Legacystar9 [Avatar]
Newbie Private Message Joined: Dec 04, 2017 14:00 Messages: 6 Offline
[Post New]
Quote I think this discussion is getting away from the original post -- which is more about MSHybrid mode and NVIDIA Optimus. I don't think that OpenCL has anything to do with that.

OpenCL may work well for games, but I can't see any advantage to it over custom code for each GPU technology. As pointed out in the previous post, OpenCL just adds layers of additional processing, slowing video rendering. CyberLink should go back to true NVENC/CUDA support direct to the GPU card. I've already started looking for an editor that does.




GGRussell, PowerDirector does support CUDA and the others, just not when paired with integrated graphics. Premiere does support these technologies on laptops with Optimus; I tested it on the trial version. I don't want to switch to Adobe because of the extreme cost, so I hope CyberLink can address this very soon.
Reply
browniee112 [Avatar]
Member Private Message Joined: Dec 28, 2017 03:57 Messages: 66 Offline
[Post New]
Hi

Is this defect resolved yet? I have come across quite a few incidents logged in various forums, YouTube, etc., but it still seems to be an open issue.

Registry edits are not recommended, and they absolutely shouldn't be a requirement to get your video editing software working! Surely there is a way to get this fixed?

A $2000 computer, and the software is holding it back from making the most of it!

Thanks
Reply
Legacystar9 [Avatar]
Newbie Private Message Joined: Dec 04, 2017 14:00 Messages: 6 Offline
[Post New]
I contacted customer service to see if I could get in touch with an engineer to resolve this issue, but it seems they think this is an NVIDIA problem....

here is my conversation (ATTACHED)
 Filename
clyberlink1.JPG
[Disk]
 Description
 Filesize
68 Kbytes
 Downloaded:
23 time(s)
 Filename
cyberlink2.JPG
[Disk]
 Description
 Filesize
102 Kbytes
 Downloaded:
16 time(s)
 Filename
cyberlink3.JPG
[Disk]
 Description
 Filesize
131 Kbytes
 Downloaded:
16 time(s)
Reply
GGRussell [Avatar]
Senior Contributor Private Message Joined: Jan 08, 2012 11:38 Messages: 708 Offline
[Post New]
Quote I contacted customer service to see if I could get in touch with an engineer to resolve this issue, but it seems they think this is an NVIDIA problem....
Typical pass-the-buck. This is definitely a CyberLink PD issue not recognizing the technology. Intel i7 4770k, 16GB, GTX1060 3GB, Two 240GB SSD, 4TB HD, Sony HDR-TD20V 3D camcorder, Sony SLT-A65VK for still images, Windows 10 Pro, 64bit
Gary Russell -- TN USA
Reply
Legacystar9 [Avatar]
Newbie Private Message Joined: Dec 04, 2017 14:00 Messages: 6 Offline
[Post New]
Quote
Quote I contacted customer service to see if I could get in touch with an engineer to resolve this issue, but it seems they think this is an NVIDIA problem....
Typical pass-the-buck. This is definitely a CyberLink PD issue not recognizing the technology.


we need to get a hold of an engineer.
Reply
PepsiMan
Senior Contributor Private Message Location: Clarksville, TN Joined: Dec 29, 2010 01:20 Messages: 912 Offline
[Post New]
Here's forum member browniee112's thread, "Instructions to get Nvidia cards to be used with Powerdirector 16 - Semi Solved". The thread browniee112 references, "PD14 will only use integrated graphics, will not use nVidia GPU on my system", contains forum member Suxsem's workaround...

happy happy joy joy

PepsiMan
'garbage in garbage out' 'no bridge too far'

Yashica Electro 8 LD-6 Super 8mm
Asrock 970 Extreme4, W7Pro 64, AMD FX 8370E, MSI GTX1060 6GB, Corsair 16GB/RAM
Dell XPS L702X i7-2860QM, W7P / W10P 64, Intel HD3000/nVidia GT 550M 1GB, Micron 16GB/RAM
Samsung Galaxy Note3/NX1
Reply
[Post New]
Hi guys,
I have the same problem; I am not able to choose to use my GeForce GTX960M when using CyberLink.
My laptop is a Notebook UX501VW; I actually bought it to have a better option when making videos.

Does anyone know if there is a workaround?

My version is PowerDirector 14 (bundled), and I got a lot of other programs with it.

I wanted to upgrade, but if I cannot utilize the program with my notebook, is there an alternative program with the same specifications that I should buy instead?

All the best from Denmark
Hans
Reply
blasiusxx [Avatar]
Contributor Private Message Joined: Mar 12, 2011 09:44 Messages: 330 Offline
[Post New]
Quote I contacted customer service to see if I could get in touch with an engineer to resolve this issue, but it seems they think this is an NVIDIA problem....

Here is my conversation (ATTACHED)



I have had exactly the same conversations, from PD14 up to now in PD16. That's how long this "bug" has existed.

Nothing, really nothing, changes.

On the contrary, the only thing CyberLink has come up with is to completely disable hardware video encoding for NVIDIA GPUs since PD15.

Even if it is allegedly an NVIDIA problem, CyberLink would finally have to communicate with NVIDIA; many users have already complained.

I do not believe that it is NVIDIA's bug, because other video software does not have such problems.
I think CL is just too "lazy" to adapt the rendering engine. And as I said, you can see it almost everywhere: the Intel (U)HD Graphics is clearly preferred, and with it there are few or no problems.

Well, I use both the integrated Intel UHD 630 Graphics and my GeForce GTX 1060, and switch to whichever I need right now.
If I had a Ryzen (without integrated graphics), I would have a big problem.

This message was edited 1 time. Last update was at Jan 19. 2018 08:47

Reply
browniee112 [Avatar]
Member Private Message Joined: Dec 28, 2017 03:57 Messages: 66 Offline
[Post New]
Quote Hi guys,
I have the same problem; I am not able to choose to use my GeForce GTX960M when using CyberLink.
My laptop is a Notebook UX501VW; I actually bought it to have a better option when making videos.

Does anyone know if there is a workaround?

My version is PowerDirector 14 (bundled), and I got a lot of other programs with it.

I wanted to upgrade, but if I cannot utilize the program with my notebook, is there an alternative program with the same specifications that I should buy instead?

All the best from Denmark
Hans


Hey mate,
Yes, I do know a workaround; I posted it on this forum. I can't seem to copy-paste the link here: "Instructions to enable graphics card with powerdirector - semi solved"

The issue is that, even after activating the card, PD only uses something like 2-12% of the GPU, which is a waste.
Reply
[Post New]
Quote I have had exactly the same conversations, from PD14 up to now in PD16. That's how long this "bug" has existed. [...] If I had a Ryzen (without integrated graphics), I would have a big problem.



Hi, thanks for the input. It does not look like any changes are coming soon.
Reply
Anders Bixbe [Avatar]
Newbie Private Message Location: Sweden Joined: Apr 05, 2012 16:55 Messages: 44 Offline
[Post New]
I have successfully done many, many projects with PD16 and HWA with my NVIDIA GTX 1060 and my desktop PC for 6 months now. The rendering time was 18 min for a 30-min video, which is 0.6x real time.
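
That speed figure checks out: render speed as a multiple of real time is just render duration divided by video duration, which a quick sketch confirms for both the before and after numbers in this post:

```python
# Render speed as a multiple of real time: below 1.0 means faster than real time.
def render_ratio(render_minutes, video_minutes):
    return render_minutes / video_minutes

print(render_ratio(18, 30))      # 0.6 -> the GTX 1060 + HWA case
print(render_ratio(6 * 60, 30))  # 12.0 -> over 6 hours for the same video, ~20x slower
```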


The NVIDIA driver is 390.65. Yesterday I updated PD16 to v.2816, and now hardware acceleration is greyed out for all HEVC H.265 settings. A 30-min render now takes over 6 hours! Is there a solution to this, and why did it happen? I've had similar issues with earlier versions of PD16, and they were fixed with PD upgrades.
Do I dare update the NVIDIA driver to the latest? Does that fix it? I have CUDA and HWA in PD16 enabled, of course.

Edit: I updated to the latest driver and the HWA came back and made me happy again.

This message was edited 1 time. Last update was at Jul 02. 2018 13:19

Corsair Vengeance C70, Asus Geforce GTX 1060
Asus Prime Z370-A, Corsair RMx 850, Samsung SSD 960 EVO M.2. Windows 10,
Corsair Vengeance LP 2X16Gb DDR4 RAM,Intel i7-8700K

Reply
[Post New]
People should just stop buying laptops for video editing.
The problem is a duct-tape solution between two GPUs, Intel and NVIDIA. They will both point fingers at each other, but eventually Intel wins because it has the underlying CPU. NVIDIA has to be "allowed" to pass through. There is little that CL can do about it, since the Optimus software targets the GPU cores, while video editing software mostly uses the NVENC core (a separate hardware component).

Just get a desktop that costs less than that "gaming" laptop and be happy.

This message was edited 1 time. Last update was at Jul 02. 2018 18:49

Reply
GGRussell [Avatar]
Senior Contributor Private Message Joined: Jan 08, 2012 11:38 Messages: 708 Offline
[Post New]
Quote People should just stop buying laptops for video editing.
You apparently didn't read his post, since he DOES have a desktop! CyberLink software seems to get broken with every other NVIDIA driver release. Intel i7 4770k, 16GB, GTX1060 3GB, Two 240GB SSD, 4TB HD, Sony HDR-TD20V 3D camcorder, Sony SLT-A65VK for still images, Windows 10 Pro, 64bit
Gary Russell -- TN USA
Reply
Anonymous [Avatar]
[Post New]
Quote People should just stop buying laptops for video editing.
The problem is a duct-tape solution between two GPUs, Intel and NVIDIA. They will both point fingers at each other, but eventually Intel wins because it has the underlying CPU. NVIDIA has to be "allowed" to pass through. There is little that CL can do about it, since the Optimus software targets the GPU cores, while video editing software mostly uses the NVENC core (a separate hardware component).

Just get a desktop that costs less than that "gaming" laptop and be happy.


It's broken with AMD switchable graphics as well.

The issue isn't the laptop. It's lazy developers.

Corel VideoStudio has the same problem.

I'm ready to go to Premiere Pro CC. I tried a trial -- amazing performance on my laptop, because they actually use the hardware. DaVinci Resolve also uses the dGPU for processing. Avid uses the dGPU. Magix Video Pro X and Movie Edit Pro use the dGPU. Vegas can run on that GPU.

This issue is not the norm. These two NLEs not working properly are anomalies.

This is really problematic because it also affects any plugin spawned by these NLEs: Boris, NewBlue Titler, etc. The render performance in them is terrible because they aren't allowed to run on a GPU different from the NLE's when spawned as a subprocess of it.

It's a major issue.

This message was edited 1 time. Last update was at Aug 28. 2018 05:12

Reply
Reply to this topic
Powered by JForum 2.1.8 © JForum Team