CyberLink Community Forum
where the experts meet
GPU selection?
ne0031
Newbie Joined: Sep 21, 2020 15:15 Messages: 8 Offline
[Post New]
I found some references saying that PD doesn't support SLI or CrossFire, but I don't use those. I do, however, have multiple GPUs in my system. Interestingly, PD chooses to use the oldest and slowest of them when encoding. Is there any way to set a preference for which GPU is used?

HW acceleration is enabled and fast rendering is selected with hardware encoding. I've tried no fast rendering, fast rendering with SVRT, and fast rendering with hardware encoding (which is the fastest). My GPU won't get above 20%, the CPU gets to 50%, and the storage is all but idle.

GTX 1070, GTX 960, i7-8700K (6 cores, 3.6 GHz), various NVMe drives, Windows 10 Enterprise.
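
If it helps to confirm which card PD is actually hitting, here is a minimal monitoring sketch (assuming Python and that the NVIDIA driver's nvidia-smi tool is on the PATH; the one-second interval is arbitrary). Run it in a console, start an export, and watch which GPU's utilization climbs:

import subprocess
import time

# Poll nvidia-smi once per second and print per-GPU load so you can watch
# which card is actually busy while an export is running. Ctrl+C to stop.
QUERY = "index,name,utilization.gpu,utilization.memory"

while True:
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=" + QUERY, "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())
    print("-" * 40)
    time.sleep(1)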
optodata
Senior Contributor Location: California, USA Joined: Sep 16, 2011 16:04 Messages: 8630 Offline
[Post New]
SVRT doesn't use hardware at all, and it's only available when minimal changes have been made to your source clips. If you've used LUTs, those will have to be handled by the CPU regardless of which GPUs are attached.

As far as getting PD to utilize a specific card, your best bet is to read this post and see which of your two cards Windows decides is "power saving" and which is "high performance." You should then be able to select the one you'd like PD to use.
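
For reference, the choice made on that Graphics settings page is stored as a per-application registry value, so it can also be inspected or set directly. A minimal sketch, assuming the key path Windows 10 uses for that page and an assumed PowerDirector install path (adjust the PDR.exe location for your version); the Settings dialog remains the supported way to change this:

import winreg

# Per-app GPU preference written by Settings > System > Display > Graphics settings.
# "GpuPreference=2;" = high performance, "1;" = power saving, "0;" = let Windows decide.
APP_PATH = r"C:\Program Files\CyberLink\PowerDirector19\PDR.exe"  # assumed install path
KEY_PATH = r"Software\Microsoft\DirectX\UserGpuPreferences"

with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    winreg.SetValueEx(key, APP_PATH, 0, winreg.REG_SZ, "GpuPreference=2;")
    print("High-performance GPU preference set for", APP_PATH)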
[Post New]
Quote
As far as getting PD to utilize a specific card, your best bet is to read this post and see which of your two cards Windows decides is "power saving" and which is "high performance." You should then be able to select the one you'd like PD to use.

That applies only to laptops (and Surface tablets). He has two discrete GPUs.

Maybe try this: in the NVIDIA Control Panel several CyberLink programs are listed; assign all of them to the faster GPU. It may not matter much, but at least he can experiment:

[Attachment: settings.png]

This message was edited 3 times. Last update was at Sep 23. 2020 21:27

ne0031
Newbie Joined: Sep 21, 2020 15:15 Messages: 8 Offline
[Post New]
Quote

That applies only to laptops (and Surface tablets). He has two discrete GPUs.

Maybe try this: in the NVIDIA Control Panel several CyberLink programs are listed; assign all of them to the faster GPU. It may not matter much, but at least he can experiment:



Wow, all the years I've used NVIDIA and I had no idea about the per-application controls. I've already adjusted some for specific applications and am now seeing the expected per-GPU behavior for those.

With PD, I've specified each GPU in turn and run the PD optimizer, yet the slower GPU is the only one that continues to be used. It doesn't appear that these settings have much, if any, impact.
optodata
Senior Contributor Location: California, USA Joined: Sep 16, 2011 16:04 Messages: 8630 Offline
[Post New]
Quote With PD, I've specified each GPU in turn and run the PD optimizer, yet the slower GPU is the only one that continues to be used. It doesn't appear that these settings have much, if any, impact.

PD doesn't use CUDA or 3D, so I'm afraid those settings won't make any difference.

Was sonic67 correct that the Graphics Settings option I suggested doesn't work with discrete GPU cards? I've never had more than one external card, but I assumed you would still have that option available. If it's only tied to which GPU is driving a specific monitor, then I can see why it wouldn't work here.

However, that may be your answer. Are you driving a monitor from the higher-performance GPU? Launch PD and place it on that monitor, then close it. See if it comes back up on that monitor after relaunching, then see if PD will use the GPU it's actually running on.
[Post New]
Sadly, the NVIDIA Control Panel doesn't let us choose which card does the video encoding (NVENC), only which one handles the 3D/CUDA work.
In my rig, the weaker Quadro P620's NVENC is used instead of the GTX 1080's. The same thing happens when recording games.
Those cards are the same generation, but the P620 has just one NVENC chip, while the GTX 1080 has two:
https://developer.nvidia.com/video-encode-decode-gpu-support-matrix
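
If you want to confirm which card's NVENC actually picks up the work during an export or a recording, here is a small sketch (assuming Python and nvidia-smi on the PATH; the encoder.stats fields report active NVENC sessions). Run it while an encode is in progress; the card whose session count is above zero is the one doing the encoding:

import subprocess

# Print the active NVENC session count and average encode FPS for each GPU.
result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=index,name,encoder.stats.sessionCount,encoder.stats.averageFps",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())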

This message was edited 1 time. Last update was at Sep 25. 2020 15:48
