Andrew, the attached chart summarizes the thoughts I referenced earlier and is fairly self-explanatory. I tried to replicate your source and target formats in the conversion: 1920x1080 50p 28 Mbps M2TS to 1920x1080 25p 16 Mbps MP4, although I only used a 10 min duration for simplicity. It was also a pure transcode with no timeline edits. The realtime ratio below is simply (timeline duration)/(encode time), so a factor of 2.0 means the timeline can be encoded in half its playback duration, i.e. twice as fast as realtime.
Encoder    | Decoder    | Encode Time (s) | Realtime Ratio
CPU        | CPU        | 252             | 2.38
CPU        | GPU        | 147             | 4.08
GPU        | GPU        | 95              | 6.31
Andrew GPU | Andrew GPU |                 | 3.13
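If anyone wants to check the realtime ratio arithmetic, here is a minimal Python sketch. It assumes my 10 min test timeline is exactly 600 s; since the posted encode times are rounded to whole seconds, the computed ratios can differ from the table in the last digit.

```python
# Realtime ratio = (timeline duration) / (encode time).
# Assumption: the 10 min timeline is exactly 600 s; encode times are the posted values.
TIMELINE_S = 600

encode_times = {
    "CPU encode / CPU decode": 252,
    "CPU encode / GPU decode": 147,
    "GPU encode / GPU decode": 95,
}

for setup, seconds in encode_times.items():
    ratio = TIMELINE_S / seconds
    print(f"{setup}: {ratio:.2f}x realtime")
```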
As can be seen, offloading the decoding task from the CPU when one wants to use CPU encoding can be very beneficial: here it made the CPU encode nearly 2 times faster. The benefit depends on CPU and GPU capability, the timeline content, and the target format, so results will vary substantially from user to user. The issue of some formats not supporting timeline GPU decoding, highlighted here
http://forum.cyberlink.com//forum/posts/list/25/45503.page#236759 , has been PARTIALLY corrected and extended in PD15.
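For anyone who wants to see the same CPU-offload effect outside of PowerDirector, here is a small Python sketch that times a CPU-decode vs. GPU-assisted-decode transcode with ffmpeg. This is purely illustrative and is not how PD15 does it internally: it assumes ffmpeg is on the PATH, an NVIDIA GPU with an ffmpeg build that includes CUDA hwaccel support, and a local test clip named "clip.m2ts" (a hypothetical filename).

```python
# Minimal sketch: compare CPU-only decode vs GPU-assisted decode for a CPU (x264) encode.
# Assumptions (not from the post above): ffmpeg on PATH, NVIDIA GPU,
# ffmpeg built with CUDA hwaccel, and a local test file "clip.m2ts".
import subprocess
import time

SRC = "clip.m2ts"  # hypothetical 1920x1080 50p 28 Mbps source clip
COMMON = ["-c:v", "libx264", "-b:v", "16M", "-r", "25", "-y"]  # 25p 16 Mbps target

def timed_run(label, extra_input_args, out_name):
    start = time.time()
    subprocess.run(["ffmpeg", *extra_input_args, "-i", SRC, *COMMON, out_name],
                   check=True, capture_output=True)
    print(f"{label}: {time.time() - start:.1f} s")

timed_run("CPU decode + CPU encode", [], "out_cpu_dec.mp4")
timed_run("GPU decode + CPU encode", ["-hwaccel", "cuda"], "out_gpu_dec.mp4")
```

Comparing the two wall-clock times gives the same kind of relative assessment as the table above, even though the absolute numbers will differ from PowerDirector's.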
From your posted CPU loads, I'd assume you have Hyper-Threading enabled, which creates an artificially low perceived CPU load during encoding, though it is still a good relative assessment. Hyper-Threading does improve overall encode performance somewhat, so having it enabled is not a bad thing.
Jeff
[Attachment: PD15_Encode_Perf.png, 60 KB]