CyberLink Community Forum
where the experts meet
Multi-monitor problems with multiple GPUs
optodata
Senior Contributor Location: California, USA Joined: Sep 16, 2011 16:04 Messages: 8630 Offline
[Post New]
I have 2 monitors connected through HDMI switches to my desktop, and I can run either monitor from my Intel UHD Graphics 630 GPU or my nVidia RTX 2070 GPU.

When both monitors are connected to the same GPU, PD behaves as expected: The logos on the About screen match the GPU and the hardware producing options for that GPU are available.

However, if I have one monitor connected to each GPU, PD cannot accurately determine which GPU to use and displays incorrect logo information when compared to the producing options.

In each of these screenshots, I show how Win10 lists my monitors and note which GPU is active. I combine that with PD's About screen and the available producing options. The monitor that PD is displayed on is shown in blue, and sometimes there is a 3rd (virtual) monitor that Windows created but that I've disabled.
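(Side note for anyone who wants to reproduce this monitor-to-GPU mapping without screenshots: a small Python sketch using the documented Win32 EnumDisplayDevicesW call lists each active adapter and the monitor attached to it. This only mirrors what Win10's Display settings shows - it's not a claim about how PD does its own detection.)

import ctypes
from ctypes import wintypes

class DISPLAY_DEVICEW(ctypes.Structure):
    _fields_ = [("cb", wintypes.DWORD),
                ("DeviceName", wintypes.WCHAR * 32),
                ("DeviceString", wintypes.WCHAR * 128),
                ("StateFlags", wintypes.DWORD),
                ("DeviceID", wintypes.WCHAR * 128),
                ("DeviceKey", wintypes.WCHAR * 128)]

ATTACHED_TO_DESKTOP = 0x1  # DISPLAY_DEVICE_ATTACHED_TO_DESKTOP
user32 = ctypes.windll.user32

i = 0
adapter = DISPLAY_DEVICEW(cb=ctypes.sizeof(DISPLAY_DEVICEW))
while user32.EnumDisplayDevicesW(None, i, ctypes.byref(adapter), 0):
    if adapter.StateFlags & ATTACHED_TO_DESKTOP:
        # A second call, keyed by the adapter name, returns the monitor on it
        monitor = DISPLAY_DEVICEW(cb=ctypes.sizeof(DISPLAY_DEVICEW))
        if user32.EnumDisplayDevicesW(adapter.DeviceName, 0,
                                      ctypes.byref(monitor), 0):
            print(adapter.DeviceName, adapter.DeviceString,
                  "->", monitor.DeviceString)
    i += 1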

The first image is when both monitors are connected to the Intel iGPU, and everything works as expected:



The second image is with both monitors connected to the nVidia GPU, and everything here is normal as well:



The problems start when the monitors are connected to different GPUs. Here PD detects the nVidia card but shows Quick Sync for producing. Win10 shows no extra virtual monitor:



The final configuration is the worst, with opposite information on the About and Produce windows and the hardware produce option completely unavailable. There is also a virtual monitor that requires an out-of-sequence monitor order:



Of the 4 combinations, 3 show nVidia hardware on the About screen but 3 show Quick Sync on the Produce screen (only 2 are viable).

All of these tests were done with the Win10 Graphics performance preference tool (which selects the high-power or low-power GPU per app) left at its default of no apps selected - everything above comes purely from which GPU ports the monitors are connected to.
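(For reference, the choices that tool makes are stored per app as string values under HKEY_CURRENT_USER\SOFTWARE\Microsoft\DirectX\UserGpuPreferences - "GpuPreference=0;" is the default, 1 is power saving, 2 is high performance. A minimal Python sketch of setting one by hand; the PD executable path below is a hypothetical example, so adjust it to your install:)

import winreg

APP = r"C:\Program Files\CyberLink\PowerDirector18\PDR.exe"  # hypothetical path
KEY = r"SOFTWARE\Microsoft\DirectX\UserGpuPreferences"

with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY) as key:
    # 2 = "High performance" (the dGPU); 1 = "Power saving"; 0 = default
    winreg.SetValueEx(key, APP, 0, winreg.REG_SZ, "GpuPreference=2;")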

YouTube/optodata


DS365 | Win11 Pro | Ryzen 9 3950X | RTX 4070 Ti | 32GB RAM | 10TB SSDs | 5K+4K HDR monitors

Canon Vixia GX10 (4K 60p) | HF G30 (HD 60p) | Yi Action+ 4K | 360Fly 4K 360°
optodata
Senior Contributor Location: California, USA Joined: Sep 16, 2011 16:04 Messages: 8630 Offline
[Post New]
All of the above images were made after switching the monitors with PD closed. I then opened PD to take the screenshots, but I did not reboot Windows.

I have reported the issue to tech support on ticket # CS002147826

JL_JL [Avatar]
Senior Contributor Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 6091 Offline
[Post New]
I see a similar issue with two Nvidia GPUs and two monitors. It's much harder to detect, but the behavior looks essentially the same.

I've simply used the both-monitors-connected-to-the-same-GPU approach via switches, and PD behaves as expected - a very simple, effective solution. Alternatively, one can use the Nvidia Control Panel's "Set up multiple displays" page to temporarily disable the monitor you don't want to use in PD; that also works for changing the hardware encoding GPU in PD, though you lose a monitor when one is driven by each GPU.

Jeff
[Post New]
Why would you still keep the Intel GPU active on a desktop with an RTX card inside? It just draws power and adds heat to the CPU die (possibly throttling the CPU earlier).
You have plenty of ports on that Nvidia card; use them.

Disable that Intel GPU either in BIOS or in Windows.

This message was edited 1 time. Last update was at Apr 27. 2020 06:16

tomasc [Avatar]
Senior Contributor Joined: Aug 25, 2011 12:33 Messages: 6464 Offline
[Post New]
Quote Why would you still keep the Intel GPU active on a desktop with an RTX card inside? It just draws power and adds heat to the CPU die (possibly throttling the CPU earlier).
You have plenty of ports on that Nvidia card; use them.

Disable that Intel GPU either in BIOS or in Windows.

The Intel iGPU is very useful and, in my opinion, should not be disabled in the BIOS by Nvidia card users.

WinMediaCenter recordings are in MPEG-2. One can use Intel Quick Sync to do the hardware encoding if desired; MPEG-2 hardware encoding is not available on Nvidia cards.

DVD and Blu-ray disc authoring and creation using MPEG-2 can likewise use Quick Sync.
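(For anyone who wants to see this outside of PD: ffmpeg exposes the Quick Sync MPEG-2 encoder as mpeg2_qsv, and there is no NVENC counterpart for MPEG-2. A minimal Python sketch, assuming an ffmpeg build with Quick Sync support on the PATH; the file names are placeholders:)

import subprocess

# DVD-style MPEG-2 encode on the iGPU; "input.mp4" is a placeholder
subprocess.run(["ffmpeg", "-i", "input.mp4",
                "-c:v", "mpeg2_qsv", "-b:v", "6M",
                "output.mpg"], check=True)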

Enabling Quick Sync when producing H.264 and H.265 lowers CPU utilization and the monitored CPU temperature while speeding up production. I haven't fully figured this one out yet, since the CPU and the iGPU are on the same chip, but the hardware monitor shows the CPU temperature going down, not up, so you should be safe from throttling. The CPU fan RPM also goes down, not up, which backs that up. Try it yourself...

Contributors may also want to keep the iGPU enabled so they can understand posts from users with such computers who report an issue, and duplicate the issue to find a solution.
optodata
Senior Contributor Location: California, USA Joined: Sep 16, 2011 16:04 Messages: 8630 Offline
[Post New]
Quote Contributors may also want to keep the iGPU enabled so they can understand posts from users with such computers who report an issue, and duplicate the issue to find a solution.

I haven't looked into the power/thermal consequences, and my focus is mostly on having both options available for troubleshooting and verification of various forum issues.

The HDMI switches allow me clean access to each ASIC, but there is clearly a conflict in how/when/where PD determines the system's hardware capability. This is even before we get into how the Graphics performance preferences from pmikep's thread come into play, so I'm trying to get a better picture of this situation from CL before adding another layer of complexity.

pmikep [Avatar]
Senior Member Joined: Nov 26, 2016 22:51 Messages: 285 Offline
[Post New]
Nice post. I especially like your last screenshot, where you show that PD is seeing the Nvidia (as evidenced by the icons) but not allowing users to select it in systems that have an Intel iGPU active.

I don't know what that "phantom" (virtual) monitor is that shows up in Windows Display in your last screenshot, but perhaps you stumbled onto something. Perhaps this is some kind of "enumeration problem" in Windows that confuses PD? (Since the Nvidia monitor is now #3, and #2 was kind of a black hole that perhaps stopped the count. Or maybe PD only counts up to two?)

This also brings up a question which I've alluded to in my thread: Is there ever a time when PD offers users three choices under Fast Rendering Video Technology: SVRT, Intel QuickSync, AND Hardware Acceleration?

Should it?

I'm guessing probably not, since the icon array never shows both the Intel and the Nvidia icons at the same time (that is, PD only sees/uses one GPU or the other). Although if PD saw both GPUs, it could show icons for all of them - which I suppose would mean two CL icons in some instances, one for Intel and one for Nvidia.

Which brings up a question for later: Should PD limit itself to using only one or the other GPU? Is it possible to take advantage of both for speed? (E.g., one used for calculating effects while the other is used for encoding.)
optodata
Senior Contributor Location: California, USA Joined: Sep 16, 2011 16:04 Messages: 8630 Offline
[Post New]
Quote Is there ever a time when PD offers users three choices under Fast Rendering Video Technology: SVRT, Intel QuickSync, AND Hardware Acceleration?

No. Besides Quick Sync and NVENC, there's a third one for AMD cards - but only one hardware option is ever available. I think you answered your own question as to why that is.
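As an aside, here's a quick way to check which hardware H.264 encoders a system actually exposes, independently of PD: test-encode a few synthetic frames with each of ffmpeg's hardware encoders. The encoder names are ffmpeg's (h264_qsv, h264_nvenc, h264_amf), this assumes ffmpeg is on the PATH, and it says nothing about how PD does its own detection:

import subprocess

for enc in ("h264_qsv", "h264_nvenc", "h264_amf"):
    # 30 synthetic frames, discarded output; success means the encoder works
    r = subprocess.run(["ffmpeg", "-v", "error",
                        "-f", "lavfi", "-i", "nullsrc=s=640x360,format=nv12",
                        "-frames:v", "30", "-c:v", enc, "-f", "null", "-"],
                       capture_output=True)
    print(enc, "OK" if r.returncode == 0 else "not available")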

Your comment about 3 monitors and PD "only counting to 2" may have some truth to it. In the 1st screen capture in my OP, the RTX 2070 is on a disabled 3rd monitor and it doesn't impact the iGPU, but what if that's because it's specifically on monitor #3 so PD doesn't even notice (and can't get conflicting info)?

Since monitor #3 is active in the last screenshot, I did two more tests where I combined monitors 1&2 and then 2&3 to see what would happen.

I can combine displays 1&2 (even though they're on separate graphics cards!) and leave monitor 3 unchanged. Somehow, this allows PD to utilize NVENC:



If instead I combine displays 2&3 (so they're both running off the nVidia card), I'm stuck with the original broken Quick Sync situation that occurs without combining displays:



There's certainly a clue here, and hopefully this will help uncover where and why PD is getting confused.

This message was edited 1 time. Last update was at Apr 28. 2020 00:15

optodata
Senior Contributor Location: California, USA Joined: Sep 16, 2011 16:04 Messages: 8630 Offline
[Post New]
OK, to be thorough, I took on the other 3-monitor case and found that combining displays 2&3 doesn't change anything compared to having display 3 disconnected:




However, if I combine displays 1&3, Quick Sync remains available but the nVidia logos are displayed:

This message was edited 1 time. Last update was at Apr 28. 2020 00:38



pmikep [Avatar]
Senior Member Joined: Nov 26, 2016 22:51 Messages: 285 Offline
[Post New]
Fantastic Detective Work! Is this the first time anyone has gotten the Nvidia HA option to show while an Intel GPU is active?

I definitely think you found a (big) clue.

This message was edited 2 times. Last update was at Apr 28. 2020 01:54

JL_JL [Avatar]
Senior Contributor Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 6091 Offline
[Post New]
Quote Which brings up a question for later: Should PD limit itself to using only one or the other GPU? Is it possible to take advantage of both for speed? (E.g., one used for calcualting effects, while the other is used for Encoding.)

PD at one time advertised that; they called it multi-GPGPU. You can see their promo of it in the attached pic from when PD11 was released. It created much discussion and controversy in the forums over what it really did. I made a test case at the time to show what I thought they were doing: under the right conditions, it could speed things up substantially, as their chart showed. However, those conditions didn't really represent what I'd call typical timeline content, so I took a fair amount of criticism for the test at the time.

Ironically, an esteemed MVPer was invited to CES in Las Vegas in Jan 2013 and offered this post-CES: "I did speak with Cyberlink representatives in person at the CES show in Las Vegas and tried to get some further clarification on this subject over dinner. Some of the increased speed advertised was a result of utilizing effects on the timeline that displayed the Nvidia or ATI logo on the effect in the effects room. The second GPU, if it was of the correct type, could render these effects more quickly, which would result in an overall shorter render time."

This was exactly my test case: multi-GPGPU was very effective at handling the OpenCL effects to offload the CPU and the encoding GPU. To my knowledge, this capability was significantly changed or dropped in PD16. I just ran the old test case in PD18 and it does not benefit one bit from multi-GPGPU; my old PD15 still does - 45% faster with 2 GPUs for that unique test case.

My guess is that for them to take advantage of any particular GPU or GPU configuration as you'd like, they would need to divorce the GUI from the encoding. I think whatever adapter is driving the GUI is also the encoder you have at your disposal. This appears clear-cut when any dual-monitor combination is driven by one GPU alone, and PD gets tripped up by strange configurations when any type of mixed crossover exists.
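(That hypothesis is easy to probe from the Windows side: the documented MonitorFromWindow/GetMonitorInfoW calls report which display device - and therefore which adapter - hosts a given window. A minimal Python sketch, using the foreground window as a stand-in for PD's main window:)

import ctypes
from ctypes import wintypes

MONITOR_DEFAULTTONEAREST = 2

class MONITORINFOEXW(ctypes.Structure):
    _fields_ = [("cbSize", wintypes.DWORD),
                ("rcMonitor", wintypes.RECT),
                ("rcWork", wintypes.RECT),
                ("dwFlags", wintypes.DWORD),
                ("szDevice", wintypes.WCHAR * 32)]

user32 = ctypes.windll.user32
hwnd = user32.GetForegroundWindow()            # stand-in for PD's window
hmon = user32.MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST)
info = MONITORINFOEXW(cbSize=ctypes.sizeof(MONITORINFOEXW))
user32.GetMonitorInfoW(hmon, ctypes.byref(info))
print("Window is on", info.szDevice)           # e.g. \\.\DISPLAY3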

Jeff
[Attachment: PD11_GPGPU.PNG - 185 KB]
pmikep [Avatar]
Senior Member Joined: Nov 26, 2016 22:51 Messages: 285 Offline
[Post New]
Interesting. I have PD15 on this Win10 box too, but stopped using it.

I assumed that PD18, being newer, would be faster. (Although I remember your performance test to see if 18 was any faster than 17. IIRC, you said it isn't.)

Also, PD15 can't use Nvidia drivers newer than 411.x, which is one reason I finally bought PD18 for this box.

So I would try goofing around with it, but I don't think it can see my Nvidia with the latest drivers.

Maybe the multi-GPU thing was easier to do in the CUDA days?

(Ironically, the guys in the Magix forum did some benchmarking recently and found that when "Calculate Video Effects on GPU" was enabled, encodes generally took longer. They said that in the old days of slower CPUs it helped, but nowadays the opposite is true. Maybe there are some parallel processes that result in wait-state latency with today's very fast processors.)

This message was edited 1 time. Last update was at Apr 29. 2020 01:14

[Post New]
Quote PD at one time advertised that, they called it multi-GPGPU.

Would that be the DirectX 12 feature?

https://developer.nvidia.com/explicit-multi-gpu-programming-directx-12
pmikep [Avatar]
Senior Member Joined: Nov 26, 2016 22:51 Messages: 285 Offline
[Post New]
The blog is dated 2017, and IIRC, DirectX 12 wasn't out in PD11 days. Having said that, it sounds like PD could be made to take advantage of this DX12 feature. (Applying the Murphy's Law Rule of Management that says "No job is impossible for the man who doesn't have to do it himself.")
JL_JL [Avatar]
Senior Contributor Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 6091 Offline
[Post New]

I don't think that was PD's implementation in this case. To my knowledge, that capability requires SLI or the equivalent, and PD has never been compatible with SLI. Additionally, PD still only uses DirectX 10 except for a few features. My view is that it was simply using the capability OpenCL provides, which PD started adopting in PD10. OpenCL is effective at executing across heterogeneous platforms, and that's what's in the CL comparison chart (i5-3550 CPU + HD4000 iGPU vs. i5-3550 CPU + HD4000 iGPU + GTX580 dGPU).
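(To illustrate that heterogeneous view: the third-party pyopencl package will list both an iGPU and a dGPU as OpenCL devices on a machine like the ones above. A minimal sketch, assuming pyopencl is installed:)

import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        # e.g. one entry for the Intel iGPU, one for the Nvidia dGPU
        print(platform.name, "->", device.name,
              f"({device.max_compute_units} compute units)")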

I'm curious to read the technical support response to the ticket submitted on dual monitors and GPUs.

Jeff
optodata
Senior Contributor Location: California, USA Joined: Sep 16, 2011 16:04 Messages: 8630 Offline
[Post New]
CL's response:
Thank you for contacting CyberLink Technical Support.

We understand that this is regarding your previous producing concern in PowerDirector 365 subscription software. We are more than willing to assist you.

With reference to your query, by pilot checking, we have reproduced similar conditions in our laboratory. This case has been escalated to the engineering team for further investigation.

We really appreciate your feedback and time.


No details on what they found or what they might be doing to address the issues, but still progress in bringing it to the attention of the appropriate people.

This message was edited 1 time. Last update was at Apr 30. 2020 19:40



pmikep [Avatar]
Senior Member Joined: Nov 26, 2016 22:51 Messages: 285 Offline
[Post New]
Cool! That's basically the same message they sent me too.

(The process went a lot faster for you than for me.)

I really like the Synergy here in the forum.
[Post New]
Quote
I don't think that was PD's implementation in this case. That capability to my knowledge requires SLI or the equivalent, PD has never been compatible with SLI.
Jeff


No, that is allegedly very different from SLI; it can work with different cards, even from different manufacturers. It's just supposed to create a "pool" of processing capabilities, transparent to the application.
At least, that's what the blog explains.

This message was edited 1 time. Last update was at May 01. 2020 18:25
