CyberLink Community Forum
where the experts meet
PD20 Encode Quality
JL_JL
Senior Contributor | Location: Arizona, USA | Joined: Oct 01, 2006 20:01 | Messages: 6091
The attached pdf takes a look at PD20 encode quality for 3 sample clips at various bitrates. The quality comparison uses the standard VMAF metric and compares CPU, Nvidia (NVENC), and AMD (VCE) encoding quality vs the source. Some may find the results interesting.
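For anyone who wants to reproduce this kind of check, here's a minimal sketch of the scoring step, assuming an ffmpeg build with libvmaf available on PATH (the JSON log layout shown is from libvmaf v2.x and varies by version; the file names are placeholders, not my actual clips):

```python
# Minimal VMAF scoring sketch; assumes ffmpeg was built with libvmaf.
# Clips must match in resolution/frame rate (scale first if they don't).
import json
import subprocess

def vmaf_score(encoded: str, source: str, log_path: str = "vmaf.json") -> float:
    """Score an encoded clip against its source using ffmpeg's libvmaf filter."""
    subprocess.run(
        ["ffmpeg", "-y",
         "-i", encoded,   # first input: the distorted/encoded clip
         "-i", source,    # second input: the pristine reference
         "-lavfi", f"libvmaf=log_fmt=json:log_path={log_path}",
         "-f", "null", "-"],
        check=True, capture_output=True)
    with open(log_path) as f:
        return json.load(f)["pooled_metrics"]["vmaf"]["mean"]

print(vmaf_score("pd20_nvenc_16mbps.mp4", "source.mp4"))
```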

Hopefully the PD21 release, due in about a week, will provide some real improvements in the core engine vs just a plethora of templates like recent prior releases.

Jeff
Attachment: PD20_VMAF.pdf (PD20 Encode Quality with VMAF Metric), 307 Kbytes, downloaded 265 time(s)
Warry
Senior Contributor | Location: The Netherlands | Joined: Oct 13, 2014 11:42 | Messages: 853
This is a very impressive and interesting piece of work, Jeff!
Would it be possible, for the laymen amongst us on this forum (and maybe in the CyberLink development department), to draw some simple overall conclusions and recommendations beyond “Nvidia NVENC GPU encoding is the best route”? Is that always the case, or can you recommend which route to take within specific parameters for the best results? And maybe also recommendations for CL (should they read this).
I see that you don’t say too much about the speed of the various options. I think I understand why. But is it a factor at all?
I am also interested to learn what you (overall) have to say about SVRT encoding. I understand that, when it is applicable, it sometimes delivers a video faster, but also sometimes with good quality?
Good work, much appreciated!
Thanks again!
JOF
Newbie | Joined: Jan 19, 2019 01:38 | Messages: 31
YES!! They need to improve the core features of the program vs the Mickey Mouse templates and other fluff!

JL_JL
Senior Contributor | Location: Arizona, USA | Joined: Oct 01, 2006 20:01 | Messages: 6091
Quote Would it be possible, for the laymen amongst us on this forum (and maybe in the CyberLink development department), to draw some simple overall conclusions and recommendations beyond “Nvidia NVENC GPU encoding is the best route”? Is that always the case, or can you recommend which route to take within specific parameters for the best results? And maybe also recommendations for CL (should they read this).

That's kind of tough, as some things don't bother some viewers/creators yet create havoc for others. I typically only test my own source clip formats and workflows. Years ago CL had a list of probably 300 cameras they said PD was tested against, so I'm sure CL Dev has amassed thousands of clips from hundreds of cameras that they could, and should, baseline every pertinent code mod against. Automated regression testing.
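As a rough sketch of what that automated regression loop could look like (purely illustrative, reusing the ffmpeg/libvmaf tooling from my earlier snippet; the encoder, clip folder, baseline file, and tolerance are all made-up placeholders):

```python
# Hypothetical regression harness: re-encode every baseline clip and flag
# any VMAF drop vs a stored baseline score. All paths/values are assumptions.
import json
import pathlib
import subprocess

ENCODER = "h264_nvenc"   # encoder under test (placeholder)
TOLERANCE = 0.5          # allowed VMAF drop vs baseline (placeholder)

with open("vmaf_baseline.json") as f:
    baseline = json.load(f)   # assumed format: {clip filename: baseline score}

for clip in sorted(pathlib.Path("camera_clips").glob("*.mp4")):
    if clip.name not in baseline:
        continue
    out = f"encoded_{clip.stem}.mp4"
    # Encode the clip with the build/encoder being tested.
    subprocess.run(["ffmpeg", "-y", "-i", str(clip),
                    "-c:v", ENCODER, "-b:v", "16M", out],
                   check=True, capture_output=True)
    # Score the encode against its source.
    subprocess.run(["ffmpeg", "-y", "-i", out, "-i", str(clip),
                    "-lavfi", "libvmaf=log_fmt=json:log_path=run.json",
                    "-f", "null", "-"], check=True, capture_output=True)
    with open("run.json") as f:
        score = json.load(f)["pooled_metrics"]["vmaf"]["mean"]
    if score < baseline[clip.name] - TOLERANCE:
        print(f"REGRESSION {clip.name}: {score:.2f} vs baseline {baseline[clip.name]:.2f}")
```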

Sometimes transitions don't behave as expected with GPU decoding/encoding, so an option is to change the transition or try CPU encoding. Some use basic fades nearly 100% of the time, so they never see the issue. I've also seen timeline encodes just stop at some random point and not finish; turning GPU decoding/encoding off allowed the full timeline to encode.

Since PD transitioned to the GPU SIP core for encode/decode, Nvidia has long been the better solution over AMD GPUs for PD in my evaluations; I've stated that several times. This older post gave some comparisons: https://forum.cyberlink.com/forum/posts/list/65974.page#post_box_300990 AMD makes excellent GPUs but has lagged Nvidia in this specialty niche, the SIP-core hardware encoding block on the GPU. It also appears AMD GPUs are not well vetted in PD. Kind of like this post, https://forum.cyberlink.com/forum/posts/list/15/50731.page#post_box_267002 where a patch supposedly adds support, only for user comments to instantly reveal that it doesn't. That simply shouldn't happen. Not that Nvidia has been without issue in PD; it's gone through trying periods too.

I'm sure parts of CL Dev are well aware of many issues often mentioned in the forum posts; it's probably more a simple business decision on fixing them. For the most part, it appears the business side has identified that consumers swallow up "freebies" vs core code improvements. That's basically what has pushed their whole B2C business transformation to a SaaS/subscription model over the past 4 years vs perpetual licenses. It's a huge revenue generator; read some of their quarterly reports, where they document unprecedented growth as more consumers adopt because they get continued freebies (fonts, songs, sound FX, stickers, pics, templates) monthly, although many never use them. That's business.


Quote I see that you don’t say too much about the speed of the various options. I think I understand why. But is it a factor at all?

I have speeds for all, just too much information; this pdf was purely to try and look at quality. Speed is also a huge function of timeline content: do lots of color grading and a high-end GPU will do nothing to improve encode performance. These are very simplistic timelines so frame-by-frame quality can be compared back to the source. Yes, it does highlight speed differences between the CPU, Nvidia, and AMD encoding features, but it may offer little insight into someone's unique timeline bottlenecks.
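If someone wants to capture the speed side themselves, here's a minimal sketch (placeholder file names; the hardware encoders need the matching GPU and drivers installed, and this only times a bare encode, not a real edited timeline):

```python
# Rough encode-speed comparison across CPU and GPU encoder paths.
# h264_nvenc (Nvidia) and h264_amf (AMD) require the matching hardware.
import subprocess
import time

for enc in ("libx264", "h264_nvenc", "h264_amf"):
    t0 = time.perf_counter()
    subprocess.run(["ffmpeg", "-y", "-i", "source.mp4",
                    "-c:v", enc, "-b:v", "16M", f"out_{enc}.mp4"],
                   check=True, capture_output=True)
    print(f"{enc}: {time.perf_counter() - t0:.1f} s")
```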

Quote I am also interested to learn what you (overall) have to say about SVRT encoding. I understand that, when it is applicable, it sometimes delivers a video faster, but also sometimes with good quality?
Good work, much appreciated!

SVRT by design does not encode a clip; it simply passes it through, so SVRT is by far the best produce option when it works properly, as you get source-clip quality. It especially suits projects with few added timeline editing features, mostly just trimmed captured video. Again, what bothers some, others don't see/hear. SVRT has had issues with transition behavior, volume changes, dead audio spots, and clicks and sound loss at SVRT-to-CPU-encode transition regions. If it works for you, it will provide the best quality in non-edited regions, since any encoding always introduces distortion (it's often small/acceptable most of the time). However, with the camera encoding bitrates available these days, good downsampling is often needed for viable distribution of a produced video, and SVRT is not a means for that, so other encoding options with excellent downsampling need to be in the toolbox too.
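To make the pass-through idea concrete in ffmpeg terms (just an analogy, not what PD does internally): SVRT is roughly a stream copy, while a distribution produce is a full re-encode with scaling. File names and bitrate are placeholders:

```python
import subprocess

# Pass-through (SVRT-like): stream copy, zero re-encode, zero added distortion.
# Note: copy-mode cuts only land cleanly on keyframes.
subprocess.run(["ffmpeg", "-y", "-i", "source.mp4",
                "-c", "copy", "passthrough.mp4"], check=True)

# Full re-encode with downsampling for distribution (e.g. 4K -> 1080p).
subprocess.run(["ffmpeg", "-y", "-i", "source.mp4",
                "-vf", "scale=1920:-2", "-c:v", "libx264", "-b:v", "12M",
                "downsampled_1080p.mp4"], check=True)
```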

Jeff

Warry
Senior Contributor | Location: The Netherlands | Joined: Oct 13, 2014 11:42 | Messages: 853
Thanks Jeff, for taking the efforts to explain and share!
Much appreciated!
Warry
Treysvideo
Newbie | Joined: Apr 10, 2017 20:25 | Messages: 11
Thank you Jeff!

You answered a question I had. I was pulling my hair out as to why, when I did a lot of color grading or smoothing grainy video, the GPU did not seem to be utilized and it was taking forever. About 7 hours for a 1 hr 15 min clip. Your quote below helped me with that:

"I have speeds for all, just too much information, this pdf was purely to try and look at quality. Speed is also a huge function on timeline content, do lots of color grading, a high end GPU will do nothing to improve encode performance. These are very simplistic timelines so frame by frame quality can be compared back to source. Yes, it does highlight speed differences between CPU, Nvidia, or AMD encoding features but may offer little insight into someone's unique timeline bottlenecks." Trey
In His Grip! ><>