CyberLink Community Forum
where the experts meet
Hardware Acceleration
The Shadowman
Senior Contributor Location: UK Joined: Dec 15, 2014 13:06 Messages: 1831 Offline
[Post New]
Would somebody please give an explanation of the meaning of "Hardware Acceleration" and how it affects PD? In particular, I would like to know the difference between having it checked and not. What physically happens in PD when you check it, and why does it sometimes make things worse?

Thanks for any help



Robert Panny TM10, GH2, GH4,
JL_JL [Avatar]
Senior Contributor Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 6091 Offline
[Post New]
Quote Would somebody please give an explanation of the meaning of "Hardware Acceleration" and how it affects PD? In particular, I would like to know the difference between having it checked and not. What physically happens in PD when you check it, and why does it sometimes make things worse?


This has all been discussed in the forum many times; the HA controls themselves are fairly self-explanatory, so what really matters is the actual functionality within PD. Below are results gleaned from various evaluations of the CL implementations, primarily with the PD14/PD15 products and the current releases as of 1/18/2017. As I said, this is gleaned information with supporting links; if I’ve overlooked some aspect of CL PD capability, my apologies, show me the error with supporting data and I’ll own up to it. I’ve got no inside information or any CL internal leads to PD RD for insight on implementation, so the best reverse knowledge comes simply from controlled tests and hardware monitoring, something anyone can do.

CL provided some details in this FAQ, http://www.cyberlink.com/support/product-faq-content.do?id=12777&prodId=4&prodVerId=-1&CategoryId=-1&keyword=effects

Preference > Hardware Acceleration > Enable OpenCL technology....
This option determines what technology is used for the specialized effects (the ones with the GPU logo in the corner) when they are applied to the timeline. When selected, the GPU is used to render those specific effects. So *IF* "Enable hardware encoding" in "Produce" is selected as well as this option, any of these specialized effects used in the timeline will get the assistance of the GPU to handle the additional frame-by-frame workload needed to create, for instance, the Abstractionism effect. If this option is unselected but "Enable hardware encoding" is selected in the "Produce" section, the video will still be hardware encoded by the GPU, but the CPU will do the frame-by-frame effect work before handing each prepared frame off to the encoder.
The same applies when previewing a video in the timeline that has one of these special effects applied: if this option is selected, the GPU assists in rendering the effect to the playback window; if not, the CPU does the effect work. OpenCL runs on the GPU cores, and any modern mid-range GPU supports OpenCL, be it AMD, Intel, or Nvidia.
The link here http://forum.cyberlink.com/forum/posts/list/25/43784.page#post_box_226790 includes a sample PD13 project with boats.wmv that mimics a 3x3 video wall and highlights the benefits of OpenCL. It’s a great technology and can provide a significant encoding-time benefit when OpenCL is used for the PD13 fx’s; in that thread an ~3x reduction in encode time was shown.
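If you want an independent check that your GPU and drivers actually expose OpenCL (the prerequisite for this option doing anything), a quick device enumeration is enough. Below is a minimal sketch, not anything taken from PD itself, using the third-party pyopencl package; the platform and device names it prints depend entirely on your installed drivers.

# Minimal sketch: list the OpenCL platforms/devices the drivers expose.
# Assumes the third-party pyopencl package is installed (pip install pyopencl).
import pyopencl as cl

for platform in cl.get_platforms():  # e.g. "NVIDIA CUDA", "Intel(R) OpenCL", "AMD Accelerated Parallel Processing"
    print(f"Platform: {platform.name} ({platform.version})")
    for device in platform.get_devices():
        kind = cl.device_type.to_string(device.type)
        print(f"  Device: {device.name}  type={kind}  compute units={device.max_compute_units}")

If no GPU device shows up here, there is nothing for PD's OpenCL option to use.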

Preference > Hardware Acceleration > Enable hardware decoding
This one is pretty self-explanatory: it only affects decoding and has nothing to do with encoding, so it does not affect the ability to "Enable hardware encoding" in "Produce" or "Create Disc". Not all video formats are supported by the card for hardware decoding; it's GPU specific, and [AMD (UVD), Intel (Clear Video), Nvidia (PureVideo/NVDEC)] are the technologies under the hood. Additionally, full hardware decoding is not always supported in PD even if the option is selected and even if the GPU supports it. One recent discussion is in the PD15 beta thread, http://forum.cyberlink.com/forum/posts/list/50731.page#post_box_266429 , with CL RD still working the issue, so maybe good news soon. So buying or recommending a GPU because it supports H.264 decoding can be futile if the software does not fully support the feature, which has been the case all through PD14 and is still the case with the current release of PD15 (2309). Yes, the box is available to check; that does not mean it functions as one might think.
However, for video formats that are fully supported by both the software and the hardware, splitting up which device does the decoding task can be of significant benefit for edit-timeline playback and also for encoding. One such benefit is discussed and presented with performance data here: http://forum.cyberlink.com/forum/posts/list/15/49597.page#post_box_261126 , highlighting different device loads with different settings. Obviously, running a test like the one in that link with a WMV in the timeline would make no sense, as WMV is typically not supported for hardware GPU decoding. Differences in quality can also exist, since the CPU and GPU do not use the same decoding logic.
Basic high level links of decoder technology:
AMD: https://en.wikipedia.org/wiki/Unified_Video_Decoder
Intel: http://www.intel.com/products/chipsets/clear_video/prod_brief.pdf
Nvidia: https://en.wikipedia.org/wiki/Nvidia_PureVideo
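As a rough, PD-independent way to see whether your GPU and drivers can hardware-decode a given clip, you can lean on ffmpeg, which drives the same UVD / Clear Video / PureVideo-NVDEC blocks through interfaces like DXVA2, D3D11VA, and CUDA. This is only an illustrative sketch under those assumptions; PD's own decoder path may still behave differently, as noted above. It assumes ffmpeg is on the PATH and that clip.mp4 is a local test file.

# Illustration only: check hardware decoding outside of PD with ffmpeg.
import subprocess

# 1) List the hardware acceleration methods this ffmpeg build supports
#    (e.g. dxva2, d3d11va, cuda, qsv).
subprocess.run(["ffmpeg", "-hide_banner", "-hwaccels"], check=True)

# 2) Try to decode the clip with hardware acceleration and discard the frames.
#    If the codec/format isn't supported by the GPU, ffmpeg falls back to the CPU;
#    watch the console output and a GPU monitor to see which device did the work.
subprocess.run(
    ["ffmpeg", "-hide_banner", "-hwaccel", "auto", "-i", "clip.mp4", "-f", "null", "-"],
    check=True,
)

Pairing a run like this with a hardware monitor (Task Manager, GPU-Z, etc.) is essentially the same controlled-test-and-monitoring approach described at the top of this post.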

Produce > Fast video rendering technology > Hardware video encoder
Again, pretty self-explanatory: if your GPU supports hardware encoding, you can enable this feature. When enabled, the final encoding is done by the GPU; when unselected, encoding is done by the CPU, provided SVRT (another topic) is not selected. Whether it ends up faster or slower, or changes output quality, really depends on the hardware involved and the timeline contents. For recent PD versions and GPUs, hardware encoding uses the newer encoding engines [AMD (VCE), Intel (QS), Nvidia (NVENC)], which rely on the vendors' proprietary ASIC IP blocks and interface APIs to perform the video encoding. This dedicated integrated circuit on the GPU does the encoding, not, for instance, the CUDA cores of an Nvidia GPU. That’s why posting and comparing GPUs by quoting core capability, which is primarily suited to game physics calculations, is not really indicative of PD encode performance with a given GPU. Differences in quality can also exist, since the encoding algorithms used by PD's software (CPU) path and the hardware implementations are not the same.
With what's deployed in PD, not all formats, features, and controls that may be available in the specific GPU API are exposed. For instance, multi-pass encoding may be supported by the API calls and the encoder, but not implemented by the application (PD). Additionally, not all scanning methods are supported: hardware encoding of 60i may be supported at one frame size but only 60p at another. More often than not, PD appears to bias its support toward progressive-scan formats for the standard Profile name/Quality default menu selections. Why do these subtleties matter? Well, if one buys a new AMD R9 Fury, for instance, hoping to use hardware encoding to create some nice standard 1920x1080/60i H.264 BDs to share, you're out of luck; that is not a supported format in PD15, whereas an Nvidia GTX1070 will support the interlaced format. Additionally, each of these encoding technologies and its feature set constantly evolves through various revisions, as the linked sources below reveal. For instance, H.265 encoding is supported on AMD GPUs that are VCE 3.0 capable but is not currently functional in PD; CL RD advises it’s under development, so hopefully it will be available soon for those AMD users, http://forum.cyberlink.com/forum/posts/list/15/50731.page#post_box_267005 . That AMD technology was released ~early 2015. Conversely, H.265 encoding is available in PD15 on all GeForce 900 series GPUs and newer, also initially released ~early 2015.
Basic high level links of encoder technology:
AMD: https://en.wikipedia.org/wiki/Video_Coding_Engine
Intel: https://en.wikipedia.org/wiki/Intel_Quick_Sync_Video
Nvidia: https://en.wikipedia.org/wiki/Nvidia_NVENC
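To get a feel for the GPU-vs-CPU encoding trade-off described above, you can again experiment outside of PD with ffmpeg, whose h264_nvenc / h264_qsv / h264_amf encoders target the same NVENC / Quick Sync / VCE blocks. This is only a sketch under those assumptions, not how PD itself calls the hardware; encoder availability depends on your ffmpeg build and GPU, and clip.mp4 is an assumed local test file.

# Rough GPU-vs-CPU encode timing comparison outside of PD, using ffmpeg.
# libx264 is a pure CPU encoder; h264_nvenc targets the NVENC ASIC block
# (substitute h264_qsv for Intel Quick Sync or h264_amf for AMD VCE).
import subprocess
import time

def encode(encoder: str, outfile: str) -> float:
    """Encode clip.mp4 with the given encoder and return the elapsed seconds."""
    start = time.time()
    subprocess.run(
        ["ffmpeg", "-hide_banner", "-y", "-i", "clip.mp4",
         "-c:v", encoder, "-b:v", "8M", outfile],
        check=True,
    )
    return time.time() - start

print("CPU (libx264):    %.1f s" % encode("libx264", "out_cpu.mp4"))
print("GPU (h264_nvenc): %.1f s" % encode("h264_nvenc", "out_gpu.mp4"))

Comparing the two output files side by side also illustrates the quality point made above: the hardware and software encoders use different algorithms, so identical bitrates do not guarantee identical-looking results.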

As the descriptions above show, all 3 HA-type options, the two under Preferences > Hardware Acceleration and the hardware encoder (available in either the Produce or Create Disc section), each perform a unique function and can be used independently.

Jeff
CS2014
Senior Contributor Location: USA-Eastern Time Zone Joined: Sep 16, 2014 16:44 Messages: 629 Offline
[Post New]
Thank you Jeff for those explanations.

CS PD13 Ultimate - Build 3516, WIN 8.1, 64 Bit, 16G RAM, Intel Core i5 4460, CPU @ 3.2GHz, NVIDIA GeForce GT720, Graphics Memory(total avail.)-4093MB
LG WH14NS40 Blu-Ray Drive
The Shadowman
Senior Contributor Location: UK Joined: Dec 15, 2014 13:06 Messages: 1831 Offline
[Post New]
Thanks Jeff, for going to all the trouble and creating such a detailed article.

It's a lot to take in first time round, so I will read it again and again, each time seeing something new. What I can say is, even the first read taught me a lot.

Thanks very much again.

Robert Panny TM10, GH2, GH4,