Jeff,
Quote:
To me, PD offers several decoding/encoding options and what works best is very platform dependent from both a speed and quality perspective. CL provides very little detail in the docs on what the different settings do, so one needs to use GPU monitors, CPU monitors, and the like to reverse engineer what the settings are doing.
Right. I really wish that they would provide more info.
Ideally, I think PD should run some sort of benchmark on each machine when it's installed, sort of like what Windows does for its "scoring" system. It could prompt users to rerun the benchmark whenever there is a relevant hardware change.
Then users might not have to worry so much about the settings.
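If CL ever exposed a way to script renders, that benchmark could be as simple as timing each settings combination on a short clip and ranking them. A minimal sketch in Python - the render call here is a fake stand-in, since as far as I know PD has no public scripting API:

```python
import time

def benchmark(render, configs, clip):
    """Time a render function under each settings combo and rank fastest-first.
    'render' is whatever actually produces the output file; here it is a dummy."""
    results = {}
    for name, cfg in configs.items():
        start = time.perf_counter()
        render(clip, **cfg)
        results[name] = time.perf_counter() - start
    return sorted(results.items(), key=lambda kv: kv[1])

# Dummy stand-in: pretend hardware encoding is faster on this machine.
def fake_render(clip, hw_decode, hw_encode):
    time.sleep(0.01 if hw_encode else 0.03)

configs = {
    "HW/HW": {"hw_decode": True,  "hw_encode": True},
    "HW/SW": {"hw_decode": True,  "hw_encode": False},
    "SW/SW": {"hw_decode": False, "hw_encode": False},
}
ranking = benchmark(fake_render, configs, "test_clip.mp4")
print("fastest:", ranking[0][0])
```

The installer would then just default the preferences to whatever combination won on that particular box.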
The commentary below is for your single SVRT compatible clip in the timeline with NO editing and your results table in the link.
To clarify, there is no editing done whatsoever in any of the tests. It is always the same project. I just change the target rendering options and program preferences for the individual tests.
column 1, SVRT is used so performance is really driven by platform I/O capability, columns 4, 7, 10 same settings
Right, that's my takeaway too. CUDA/hardware decoding/hardware encoding become irrelevant for those cases, which is great.
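Since SVRT mostly just copies the compressed stream to disk, a quick way to see the ceiling is to measure raw storage throughput. A rough sketch - the 64 MB probe size is arbitrary, and a meaningful number really needs a file far bigger than the OS cache:

```python
import os
import tempfile
import time

def measure_write_mb_s(path, size_mb=64):
    """Write size_mb of zeros sequentially and report MB/s.
    fsync forces the data to actually hit the disk before we stop the clock."""
    block = b"\0" * (1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    return size_mb / (time.perf_counter() - start)

with tempfile.TemporaryDirectory() as d:
    mb_s = measure_write_mb_s(os.path.join(d, "probe.bin"))
    print(f"sequential write: {mb_s:.0f} MB/s")
```

On striped SSDs like ours, the SVRT pass should sit well below that number anyway, so the copy itself is unlikely to be the bottleneck.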
column 2, GPU is used to decode video stream and GPU is used to encode video stream, so GPU/GPU. column 5 same settings
Well, that's the theory... But from what I observed, I don't think the hardware encoder is getting used at all on the AMD box.
Compare results from column 2 and 3 on the AMD - they are identical, both 145.
Same for column 5 and 6, 145 again. It would have to be an enormous coincidence for the hardware encoder to take the exact same amount of time as the software encoder.
So, my guess is that the hardware encoder is actually off for columns 2, 3, 5, 6 on the AMD - but it is on for the Intel for columns 2 and 5.
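That "identical times mean same code path" reasoning can be written down as a trivial check - the 2-unit noise threshold is my guess, not anything measured:

```python
def same_code_path(t_a, t_b, noise=2.0):
    """Heuristic: if the 'HW encode' and 'SW encode' runs finish within
    measurement noise of each other, suspect the HW encoder was silently
    ignored. 'noise' is an assumed tolerance in the table's time units."""
    return abs(t_a - t_b) <= noise

# AMD box, columns 2 vs 3 (times from my results table):
print(same_code_path(145, 145))  # identical -> HW encoder likely off
```

The Intel box showed clearly different times for the same column pairs, which is why I believe the hardware encoder really was engaged there.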
column 3, GPU is used to decode video stream and CPU is used to encode video stream, so GPU/CPU. column 6 same settings
Yes, the only difference between columns 3 and 6 is CUDA, but that has no effect in any of my tests, maybe because I don't use effects or live preview.
column 8, CPU is used to decode video stream and GPU is used to encode video stream, so CPU/GPU. column 11 same settings
Right, once again the only difference between columns 8 and 11 is CUDA / no CUDA, and it made no difference.
column 9, CPU is used to decode video stream and CPU is used to encode video stream, so CPU/CPU. column 12 same settings
Yes, again, 9 and 12 are identical parameters except for CUDA.
The times are slightly different, but since those tests are long, I didn't rerun them enough times to tell whether the difference is statistically significant - my guess is that it is not.
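For what it's worth, the check I'd do with repeat runs is something like this - not a proper t-test, just a quick filter against run-to-run jitter (the timing lists below are made up, since I only ran each long test once):

```python
import statistics

def differs_beyond_noise(runs_a, runs_b, k=2.0):
    """Crude check: treat the difference as real only if the gap between the
    means exceeds k times the combined standard deviation of the runs."""
    mean_a, mean_b = statistics.mean(runs_a), statistics.mean(runs_b)
    spread = statistics.stdev(runs_a) + statistics.stdev(runs_b)
    return abs(mean_a - mean_b) > k * spread

# Hypothetical repeat timings in seconds, CUDA on vs off:
cuda_on = [312, 318, 315]
cuda_off = [314, 317, 316]
print(differs_beyond_noise(cuda_on, cuda_off))
```

With numbers that close, the check comes back False, which matches my gut feeling that the CUDA toggle made no real difference.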
You should get the same results for same-setting columns provided no other Windows process played a role. What you don't provide is the setting of the preview window, and whether you indeed had it the same between non-SVRT tests.
I had OpenOffice running on the Intel box just to record the data. No other apps were running on either box.
Power settings were set to "high performance" on both systems.
I was not using the preview in PowerDirector.
SVRT is fast, but depending on what transitions/editing one has done, quality (jumps/jerks) can be an issue. Obviously, if/when they get the SVRT route working correctly, it could offer excellent results for both quality and speed.
I haven't seen problems with the SVRT but I have only done very basic things with it.
I have seen cases where just changing the audio triggered a new encode when it should not have.
But I just did a simple test with one audio track, muting the audio from the video footage, and the problem did not occur: SVRT ran at full speed without re-encoding the video, as I expected.
MSI X99A Raider
Intel i7-5820k @ 4.4 GHz
32GB DDR4 RAM
Gigabyte nVidia GTX 960 4GB
480 GB Patriot Ignite SSD (boot)
2 x 480 GB Sandisk Ultra II SSD (striped)
6 x 1 TB Samsung 860 SSD (striped)
2 x LG 32UD59-B 32" 4K
Asus PB238 23" HD (portrait)