CyberLink Community Forum
Multi-GPGPU revisited - again
GGRussell
Senior Contributor Joined: Jan 08, 2012 11:38 Messages: 709
JL_JL -- what app were you using to show CPU and GPU simultaneously?

Intel i7 4770k, 16GB, GTX1060 3GB, Two 240GB SSD, 4TB HD, Sony HDR-TD20V 3D camcorder, Sony SLT-A65VK for still images, Windows 10 Pro, 64bit
Gary Russell -- TN USA
JL_JL
Senior Contributor Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 6091
Quote: JL_JL -- what app were you using to show CPU and GPU simultaneously?

All CPU and GPU % load values are plotted against computer time (system clock) on the x-axis, so the traces are properly synched. The plot was created in Excel from basic performance tabular data, acquired at roughly 1-second intervals during the encoding process.
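A rough sketch of one way to capture this kind of data, assuming Python with psutil and Nvidia's nvidia-smi utility (just an illustration of the sampling approach, not necessarily the exact tool used here):

# Samples CPU and Nvidia GPU load about once per second and writes a CSV
# that can be charted in Excel, similar to the plot described above.
import csv
import subprocess
import time

import psutil


def gpu_load_percent() -> float:
    """Read GPU utilization (%) from nvidia-smi; first GPU only."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return float(out.strip().splitlines()[0])


def log_loads(path: str, seconds: int = 300) -> None:
    """Sample CPU and GPU load roughly once per second for `seconds` seconds."""
    psutil.cpu_percent(interval=None)  # prime the counter; the first reading is meaningless
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time_s", "cpu_pct", "gpu_pct"])
        start = time.time()
        while time.time() - start < seconds:
            writer.writerow([round(time.time() - start, 1),
                             psutil.cpu_percent(interval=None),
                             gpu_load_percent()])
            time.sleep(1.0)


if __name__ == "__main__":
    # "encode_loads.csv" is a hypothetical file name; start this just before producing.
    log_loads("encode_loads.csv", seconds=300)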


I was asked separately if I could substantiate the claim I made below.
Quote: The OpenCL option in Preferences can even be used by those who prefer CPU encoding. If it is ticked, any accelerated effects used in the timeline will get the GPGPU features provided by OpenCL. So if the user has an AMD CPU and an Nvidia GTX470, the GTX470 will be used as a general-purpose processor (GPGPU) to help encode the accelerated effects in the timeline.

The attached picture shows it rather clearly. I used the same generic timeline as before; in this case all 4 runs use CPU encoding ("Fast video rendering technology" unselected in Produce).

Summary of chart:
Run 1: Basic CPU encoding. Preferences > Hardware Acceleration > Enable OpenCL technology... was unselected, so the ENTIRE timeline was CPU encoded.

Run 2: This run shows what happens when OpenCL is activated in Preferences > Hardware Acceleration > Enable OpenCL technology... and an accelerated effect is used in the timeline. During the encoding of the first and last 2-minute sections of the timeline I get reasonable participation from the GTX470 discrete video card. The timeline was still CPU encoded, yet the GTX470 participates; this is the result of using OpenCL. The discrete GPU was used as a general-purpose processor (GPGPU) to aid the CPU encoding of the timeline, which significantly reduced the encoding time. Will all timelines benefit? No, only timelines that make significant use of HA effects (shown in PD with the GPU logo in the lower-left corner of the effect in the effect room) will benefit. A rough sketch of what this kind of OpenCL offload looks like follows the run summary.

Run 3 and 4: For a non-accelerated effect in the timeline, the OpenCL preference has no effect; both runs match each other and the entire timeline is CPU encoded with no assist from the discrete GTX470.
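For anyone curious what "the GPU as a general processor" means in practice, here is a tiny illustrative sketch in Python with pyopencl (an assumption for illustration only; this is NOT PD's actual code). It runs a trivial brightness "effect" on whatever OpenCL device is available, which is the kind of work a discrete card can take over while the CPU does the encoding:

# A minimal GPGPU example: a brightness-scaling kernel executed on an
# OpenCL device (e.g. a discrete GTX470) instead of the CPU.
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void brighten(__global const float *src,
                       __global float *dst,
                       const float gain)
{
    int i = get_global_id(0);
    dst[i] = clamp(src[i] * gain, 0.0f, 1.0f);
}
"""


def brighten_on_gpu(frame: np.ndarray, gain: float = 1.2) -> np.ndarray:
    """Apply a brightness gain to a flattened float32 frame on the OpenCL device."""
    ctx = cl.create_some_context()      # picks an available OpenCL device
    queue = cl.CommandQueue(ctx)
    prog = cl.Program(ctx, KERNEL_SRC).build()

    mf = cl.mem_flags
    src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=frame)
    dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, frame.nbytes)

    # One work item per pixel value.
    prog.brighten(queue, (frame.size,), None, src_buf, dst_buf, np.float32(gain))

    out = np.empty_like(frame)
    cl.enqueue_copy(queue, out, dst_buf)
    return out


if __name__ == "__main__":
    fake_frame = np.random.rand(1920 * 1080).astype(np.float32)  # stand-in for a video frame
    print(brighten_on_gpu(fake_frame)[:5])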

Jeff
[Attachment: OpenCL_1090T.png, 96 KB]
GGRussell
Senior Contributor Joined: Jan 08, 2012 11:38 Messages: 709
Quote: Run 3 and 4: For a non-accelerated effect in the timeline, the OpenCL preference has no effect; both runs match each other and the entire timeline is CPU encoded with no assist from the discrete GTX470.
When I did my last test above, I used an HDV 1440x1080i (MPEG2) file from which I made cuts for a 5-minute project. No FX filters, no fades, etc. Just the clip with cuts.

I then produced to MP4 and AVC, and as you can see, both of my GPUs were used to transcode. I monitored GPU load using GPU-Z. What technology was used isn't really that important. It would seem to me that whether the GPU gets utilized just depends on what you are doing with the original file. I wouldn't think that SVRT would use the CPU or GPU except for the parts that need rendering.
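If you'd rather not watch the GPU-Z window live, GPU-Z can also log its sensors to a file, and something like the sketch below could summarize the load afterwards (the log file name and column name here are assumptions; check your own log's header, which varies by card and GPU-Z version):

# Summarize GPU load from a GPU-Z sensor log written with "Log to file".
import pandas as pd


def summarize_gpu_load(log_path: str, column: str = "GPU Load [%]") -> None:
    # GPU-Z logs are comma-separated with a header row; column names are
    # often padded with spaces, so strip them before selecting one.
    df = pd.read_csv(log_path)
    df.columns = [c.strip() for c in df.columns]
    load = pd.to_numeric(df[column], errors="coerce").dropna()
    print(f"samples: {len(load)}")
    print(f"average GPU load: {load.mean():.1f}%")
    print(f"peak GPU load:    {load.max():.1f}%")


if __name__ == "__main__":
    summarize_gpu_load("GPU-Z Sensor Log.txt")  # assumed default log name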


JL_JL
Senior Contributor Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 6091
Quote: What technology was used isn't really that important. It would seem to me that whether the GPU gets utilized just depends on what you are doing with the original file. I wouldn't think that SVRT would use the CPU or GPU except for the parts that need rendering.

The technology is important to some. Pure hardware encoding using the GPU selected in "Produce" or "Create Disc" (Nvidia or AMD) has led to artifacts for some users and inconsistent bitrate for others, which is why some users only like to use CPU encoding. SVRT is a whole different ballgame and has not been discussed here so far; none of the results I presented use SVRT.

Jeff


GGRussell
Senior Contributor Joined: Jan 08, 2012 11:38 Messages: 709
Guess I was lucky. I saw no artifacts with any of the files that I produced with the HD4600 or the ATI HD7870. Since I rarely use FX filters, I'm more interested in how fast PD12 can transcode to MP4. Once in a while I might output to MPEG2 HD or AVCHD to create a Blu-ray with other software.

This has been a very informative exercise.

Not sure what you consider 'non consistent', but variable bitrate and variable frame rate are both very common these days with the latest camcorders, tablets and phones. Video editors like PD need to keep up.


JL_JL
Senior Contributor Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 6091
Quote: Guess I was lucky. I saw no artifacts with any of the files that I produced with the HD4600 or the ATI HD7870. Since I rarely use FX filters, I'm more interested in how fast PD12 can transcode to MP4. Once in a while I might output to MPEG2 HD or AVCHD to create a Blu-ray with other software.

This has been a very informative exercise.

Not sure what you consider 'non consistent', but variable bitrate and variable frame rate are both very common these days with the latest camcorders, tablets and phones. Video editors like PD need to keep up.


Here is a classic example of "non consistent": http://forum.cyberlink.com/forum/posts/list/10697.page#47739. In the last post, the user selected H.264, 1920x1080 (24 Mbps) in 'Produce' with GPU, and he notes that he got around 10 Mbps in his output file. That's rather inconsistent; he didn't get close to the 24 Mbps he asked for. Also note the comment near the end: he found it ATI specific. In other posts users have got around 16 Mbps. My experience is that these "features" in PD tend to change release to release and GPU driver to driver, and always offer a learning curve to resolve yet again with each change. However, very little of this has ever been an issue with PD CPU encoding; that technology has been very robust.
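If you want to check your own output for this kind of mismatch, one rough way is to read the produced file's overall bitrate with ffprobe (from FFmpeg, an assumption here; any media-info tool would do) and compare it with the target:

# Compare the requested Produce bitrate against what actually ended up in the file.
import subprocess


def overall_bitrate_mbps(path: str) -> float:
    """Return the container-level bitrate of a file in Mbps, via ffprobe."""
    out = subprocess.check_output(
        ["ffprobe", "-v", "error",
         "-show_entries", "format=bit_rate",
         "-of", "default=noprint_wrappers=1:nokey=1",
         path],
        text=True,
    )
    return int(out.strip()) / 1_000_000


if __name__ == "__main__":
    target = 24.0  # Mbps requested in Produce
    actual = overall_bitrate_mbps("output.mp4")  # hypothetical output file name
    print(f"requested {target:.1f} Mbps, produced {actual:.1f} Mbps")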

For me, artifacts or blocky areas only happen periodically when certain editing effects have been added to the timeline in PD. I have not seen them in a simple timeline with just cuts when encoding with any of my Nvidia cards (GTX210, GTX470, GTX580) or the HD4000. Your HD4600 should be even better quality-wise, as Intel tightened some of the internal video quality specs for the H.264 Main and High profiles at the expense of speed; maybe that's not interesting for you since you want speed.

Quality is often in the eye of the operator as well. What some see as artifacts or blockiness, others are totally happy with.

Jeff