CyberLink Community Forum
where the experts meet
Hardware questions - building a new pc
18tillidie [Avatar]
Newbie Private Message Location: Los Angeles, CA Joined: Sep 29, 2014 12:24 Messages: 46 Offline
Hoping the hardware gurus can provide some sage advice...

I'm about to build a new desktop... my son is studying industrial design, and most of the apps are Windows based, so his MacBook is becoming redundant. My wife (bless her little heart) said "get a new computer for yourself and give him your still pretty solid one..."

Since I'm building for me, not my son, I'm being a little more open-minded with budget. I don't play games, and the most resource-taxing things I do are video editing (PD14, multicam editing, sports mostly, with 4 cameras, slo-mo replays, titling, etc.) and some pretty heavy-duty data modeling in Excel (don't laugh; I have to work on some files at home to use my 64-bit version, as my work PC dies under the load when it recalculates some of the models). I should also think ahead to my son's future needs, so Rhino, SolidWorks, etc. (He's also working on me pretty hard to buy a 3D printer!)

Here are my two key questions:

CPU - the newer Skylake i7-6700K quad-core or the Haswell-E i7-5930K six-core?

The 6700K looks faster for single-core operations, but the 5930K is better for multi-core, which should be better for video editing, right? Also, the 5930K supports up to 64GB of RAM vs 32GB for the 6700K... I'm thinking the 6-core CPU with 64GB of memory is going to be better in many of these applications... how much would PD14 take advantage of that? (Would you add the extra memory if you had the chance?)

GPU - the GTX980Ti seems to be the best bang for the buck, but I see a lot of video and animation workstations using Quadro GPUs... the option would be the Quadro K4200. The Quadro is more expensive than the GTX, seems slower on paper, and uses an older chipset (both are NVIDIA). Is this just because most of the comparisons use gaming benchmarks? What's the real-world video editing and rendering (read: non-gaming) answer?

Hope this makes sense... I'm looking to build a pretty future-proof system, and the build is currently running around $3,500. I can trim it back a little if I need to, but I'd rather stretch a little now than skimp and regret it later.

Thanks in advance!
https://vimeo.com/timsanson
Carl312 [Avatar]
Senior Contributor Private Message Location: Texas, USA Joined: Mar 16, 2010 20:11 Messages: 9090 Offline
This advice may not help you to decide.

My idea is to buy the fastest CPU and fastest video card you can afford, plenty of RAM (about 8 GB or more on a 64-bit OS), and plenty of hard drive space (1TB and above). SSDs (250 GB and above) are a plus for speed, but not a good place to store your video files while working on them.

You need a power supply with plenty of power: 600 watts or more (700 watts makes a pretty good power supply).

It just happens that computers built for gaming also work pretty well for video editing.

Good luck on finding your perfect computer! Carl312: Windows 10 64-bit, 8 GB RAM, AMD Phenom II X4 965 3.4 GHz, ATI Radeon HD 5770 1GB, 240GB SSD, two 1TB HDs.

18tillidie [Avatar]
Newbie Private Message Location: Los Angeles, CA Joined: Sep 29, 2014 12:24 Messages: 46 Offline
Thanks Carl,

That's pretty sound advice... I upgraded the video card and power supply a while back to try to improve the performance of my current system, and it's already pretty high-end by the standards you describe. (i7 4770 quad-core processor, 16GB RAM, GTX960 4GB video, 850W PSU, 1TB Samsung SSD, 2TB internal HDD, and a 5TB external drive for archive storage. 64-bit Win 8 OS)

With the video editing that I do, I still get substantial hangs and crashes once I'm well into the edit... I'm finally figuring out that the type of editing I do is pretty taxing on hardware!

I am editing 4 camera angles at 1080p 60fps: a 75-minute hockey game with multiple clips per camera, synced and rough-edited in MultiCam, then brought to the main timeline. Each goal, penalty, or breakaway play then includes a replay from at least one and generally two camera angles, each replay cropped and zoomed, slowed down, speed-adjusted to fit the available time to the next face-off, and with transitions in and out to mark them as replays. Replay and scoreboard titling and replay music are added. Performance is fine for 80% of most edits, but when I have a high-scoring game, or lots of penalties, etc., I start to run into serious performance issues during the edit process and have to save the project after almost every single edit. The fact that this only occurs after I have 20 or 30 replay clips done tells me that it's just struggling to keep up.

Hence my desire to upgrade to what some might consider an over the top spec. I am always under time pressure to get the game out before mid week practice, so any system performance issues drive me crazy!

Cheers.
https://vimeo.com/timsanson
Carl312 [Avatar]
Senior Contributor Private Message Location: Texas, USA Joined: Mar 16, 2010 20:11 Messages: 9090 Offline
The computer you have has a very good spec. The type of editing you are doing may be why you have slow response.

The more enhancing you do, the slower the preview will be.

PowerDirector 14 gets slower the more it has to do on the timeline. Pre-rendering can help, but then you have to wait for the render.

One method is to do the editing in smaller chunks, then combine those bits and pieces into one whole. That is the way Hollywood creates movies. Carl312: Windows 10 64-bit, 8 GB RAM, AMD Phenom II X4 965 3.4 GHz, ATI Radeon HD 5770 1GB, 240GB SSD, two 1TB HDs.

18tillidie [Avatar]
Newbie Private Message Location: Los Angeles, CA Joined: Sep 29, 2014 12:24 Messages: 46 Offline
Thanks Carl,

I totally agree that there are additional workflow steps I could add to break the project down and manage it within the resources I have. My challenge is time...

My current workflow takes 2-3 hours to load all the raw files from the cameras into a project folder and create the PD14 project.

Another hour to load them into MultiCam editor, run audio sync, then realign the clips that it couldn't place.

The multicam edit has to be done in real time and requires focus, so it's difficult to do in one straight sitting; I tend to do it in 15-20 minute bites, with a break in between.

With pauses to get clip transitions in the right spot, and ensuring I insert a small clip from any camera I later plan to use for replays, it takes around 3 hours to process 90 minutes of raw camera files.

I can shuffle through the timeline sequentially, adding titles and picking up replay locations from the clip fragments I inserted during multicam editing. Each replay takes maybe 15 minutes, sometimes more, to align the clip and trim it to the section of play, crop and zoom the clip, and move it to its starting point on a new track (then the same process if there's a second camera angle to replay). Then slowing the replays to fit the space between the stop of play and the next faceoff (I try to keep the game to its original duration), adding transitions into, between, and out of the replays, adding replay music, adjusting the music clip to fit, and adding replay titling and scoreboard titling after each replay... each replay event takes 20-30 minutes.

A game with 10 goals and 10 or so penalties and other key events can take 8-10 hours of edit time. Then producing the final output takes 2-3 hours, and finally uploading to Vimeo and waiting for it to process, so I can add the final info and add it to the team channel, is probably 6 hours depending on how quickly the Vimeo conversion processes.

So all of that is around 18 to 20 hours of process time. I do this on Sunday afternoon, then after work on Monday and Tuesday nights, plus as much time as I can grab in the morning before work, so I can upload by Wednesday. During hockey season, that's my life every week... and then there are tournament weekends! I hit a wall when I get busy at work!

So adding more process steps vs increasing processing capacity is my dilemma. If I were making a movie, I'd think differently, but sports and news events are time sensitive.

Cheers.
https://vimeo.com/timsanson
SoNic67 [Avatar]
Senior Contributor Private Message Joined: Sep 27, 2014 14:14 Messages: 1286 Offline
Quote: I should also think ahead to my son's future needs, so Rhino, SolidWorks, etc. (He's also working on me pretty hard to buy a 3D printer!)


You are absolutely correct. Lots of design products need the fastest single-core CPU possible and a Quadro video card. Some rendering operations can use multiple cores, but most of the time, when you are not rendering, it's just one core.

Also, I work with Autodesk products (AutoCAD, Revit), and I have tried using GeForce video cards. They just don't work well (I don't know why, or whose fault it is), but a Quadro card is absolutely necessary.

Now, maybe not really to the tune of a Quadro K4200. I would really recommend the K2200: it costs half the price, is based on the newer Maxwell architecture, and has the newer NVENC encoder, useful for video editing. And as a plus, it has the same amount of memory.

If you are really looking at spending $800 for a video card, definitely get the Quadro M4000: newer generation, 8GB memory, and good for video encoding too.

This message was edited 3 times. Last update was at Mar 29. 2016 20:17

18tillidie [Avatar]
Newbie Private Message Location: Los Angeles, CA Joined: Sep 29, 2014 12:24 Messages: 46 Offline
Thanks SoNic67,

Appreciate your comments. That's pretty much what I would have thought; however, the thing that confuses me is that a lot of the discussion on the animation and 3D modeling forums says the Quadro cards are overhyped and that the GTX980Ti is way better...

The Quadro cards are certainly certified for a lot of the pro applications, and they do have more floating-point capability unlocked, but if the pro users on those platforms are having the same debate, I'm wondering if it's just a Tier 1 vs Tier 2 issue...

The Quadro is Tier 1: guaranteed to work with key applications, certified and tested, higher precision, and more reliable due to premium component selection, but actual performance is lower than the high-end consumer card... If I were responsible for IT in a medical imaging environment, or designing automotive brake components, I'd certainly opt for the Tier 1 hardware... but if no one's life depends on the precision of my video editing, I'd like the extra performance!

All the benchmarks I've seen are saying the same thing, so I'm just bewildered!
https://vimeo.com/timsanson
SoNic67 [Avatar]
Senior Contributor Private Message Joined: Sep 27, 2014 14:14 Messages: 1286 Offline
I have run benchmarks for AutoCAD and Revit (last time in 2015), and the Quadro won easily over a GeForce with double the cores and higher frequency. We had Quadro K600 and K2000 cards (1 SMX and 2 SMX) at work at that point, and I brought in my GTX640 (3 SMX, Kepler version). The K600 was on par with the GTX640.

Earlier I even tested a modded GT450 (firmware mod) that reported itself as a Quadro 2000 everywhere: in the BIOS, the drivers, and the Nvidia Control Panel. Everyone on the net said it's the same hardware, just with a different driver. Well... it still sucked with the Quadro drivers.
I even went all the way to a GTX480 modded like a Quadro 6000 (with less memory, of course). Although it unlocked all the exclusive features of the Q6000, the benchmarks didn't really change. For later families the modding got harder (Nvidia added hardware locks too: resistors on the board encoding the card type, besides the firmware registers), so I stopped doing it.

https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#Quadro_Kxxx_Series

I don't know if other apps behave the same; benchmark tests for professional apps are hard to find, and the gaming tests don't really apply.
I had to find specific tests for those professional apps:

http://www.cadalyst.com/benchmark-test (this one requires AutoCAD to be installed on the workstation).

https://www.spec.org/gwpg/gpc.static/vp12info.html

This is the best IMO, and free for home use. Attached are my results; my GTX960 is comparable with a K620.

Others: https://www.spec.org/benchmarks.html#gpc

[Attachment: Specviewperf12_results_20160401T0934_r1.pdf]

This message was edited 7 times. Last update was at Apr 01. 2016 09:44

JL_JL [Avatar]
Senior Contributor Private Message Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 4963 Offline
18tillidie, in an attempt to show the difference between a GeForce-series and a Quadro-series card in relation to PD14, I've tested a GeForce GTX650 against a Quadro K2000. Both cards have the same GK107 chipset, the first generation of the NVENC encoder, very similar core clock speeds, and the same 384 CUDA cores. Yes, the Quadro has more memory, as is typical of the product line.

Two PD14 performance evaluation tests were done. The first was a simple encode of 10 Kite Surfing.wmv to the 4K H.264 M2TS 50Mbps default profile, to load and evaluate the NVENC encode engine. A side-by-side comparison is shown in PD14_K2000_GTX650_NVENC.png. Basically no difference in NVENC encode performance.

The second PD14 test used the same timeline but with an added fx, to load the GPU CUDA cores via OpenCL rather than just the NVENC encode engine of test 1. A side-by-side comparison is shown in PD14_K2000_GTX650_CUDA.png. The GTX650 is a little faster, ~12%.

I also scrubbed around the timeline in PD14 with edits, multicam, enhancements, and effects; I noticed no significant difference in editing fluidity. Both cards behaved the same.

The last test was the SPECviewperf 12 evaluation SoNic67 provided; a side-by-side comparison is shown in PD14_GTX650_K2000.png. The Quadro K2000 is hands down the better option for the graphics loads present in CAD/CAM applications.

I didn't have a Quadro K2200 or K620 available, which would be more in line with the popular GTX950 or 960 used by several editors, but I would expect similar relative comparisons.

Jeff
[Attachment: PD14_K2000_GTX650_NVENC.png]
[Attachment: PD14_GTX650_K2000.png]
[Attachment: PD14_K2000_GTX650_CUDA.png]
Theolilou [Avatar]
Member Private Message Location: France Joined: Feb 07, 2016 11:27 Messages: 107 Offline
Some tests with Nvidia cards were performed in the topic below, using 10 Kite Surfing.wmv:
http://forum.cyberlink.com/forum/posts/list/46836.page

SoNic67 [Avatar]
Senior Contributor Private Message Joined: Sep 27, 2014 14:14 Messages: 1286 Offline
As always, I could count on Jeff for some testing.
As I was expecting, the NVENC module performs similarly. The limitation for GeForce is that it can encode only 2 concurrent streams, while the Quadro has no limit; this limitation will not be seen in PD14 because it renders only one stream anyway.

As for the SPECviewperf 12 results for the K2000 (384 cores @ 954MHz, 732 GFLOPS SP, 30 GFLOPS DP): they are close to what my GTX960 can do with three times the single-precision processing power (1024 cores @ 1127MHz, 2300 GFLOPS SP, 72 GFLOPS DP).
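For reference, the single-precision figures quoted above follow directly from cores × clock × 2 FLOPs per cycle (one fused multiply-add per CUDA core per clock); a quick check:

```python
def gflops_sp(cuda_cores, clock_mhz):
    # Peak single-precision throughput: each CUDA core can retire one
    # fused multiply-add (2 FLOPs) per clock cycle.
    return cuda_cores * clock_mhz * 2 / 1000.0

print(f"K2000:  {gflops_sp(384, 954):.0f} GFLOPS SP")    # ~733
print(f"GTX960: {gflops_sp(1024, 1127):.0f} GFLOPS SP")  # ~2308
```

The double-precision numbers can't be derived from core counts alone; they reflect per-architecture FP64 caps (consumer Maxwell runs DP at 1/32 of SP rate, small Kepler parts at 1/24).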

In OpenGL apps the Quadro is absolutely crushing, but it looks like in some DirectX apps the results are the other way around (Autodesk Maya can use DirectX; I don't know if that is actually used in this test, but the results point in that direction).



[Attachment: PD14_GTX960_K2000.png]
[Attachment: Compare.png (GTX 960 vs K2200)]

This message was edited 8 times. Last update was at Apr 08. 2016 20:15

AlS [Avatar]
Senior Member Private Message Location: South Africa Joined: Sep 23, 2014 18:07 Messages: 290 Offline
Quote: Hoping the hardware gurus can provide some sage advice...

CPU - the newer Skylake i7-6700K quad-core or the Haswell-E i7-5930K six-core?

The 6700K looks faster for single-core operations, but the 5930K is better for multi-core, which should be better for video editing, right? Also, the 5930K supports up to 64GB of RAM vs 32GB for the 6700K... I'm thinking the 6-core CPU with 64GB of memory is going to be better in many of these applications... how much would PD14 take advantage of that? (Would you add the extra memory if you had the chance?)


Thanks in advance!


Hi 18tillidie

I've been having the same upgrade debate. I like the new 6- (and 8-) core CPUs like the i7-5930K, but am confused about two things:

1) The 5930K has more cores but a slower clock speed (3.5-3.7 GHz) than the 6700K (4.0-4.2 GHz), and the 6700K can still be overclocked. I see more cores as more CPUs, which assumes a degree of parallel processing. I'm not sure how 6 cores and 12 threads will affect PDR14 vs the 6700K's 4 cores and 8 threads at a faster speed.

2) Last time I checked, the 6700K has the latest Intel HD Graphics 530 engine with 4K and DirectX 12 support, and the 5930K does not. Intel Quick Sync (part of Intel HD Graphics) has had a significant positive effect on PDR14 H.264 render times.

So I'm leaning toward the faster 4-core 6700K with Quick Sync vs 6 slower cores and no Quick Sync.
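One way to frame the cores-vs-clock question is Amdahl's law: extra cores only pay off in proportion to how much of the workload actually runs in parallel. The parallel fractions below are made-up assumptions (PDR14's real figure isn't published), so this is a sketch of the reasoning, not a benchmark:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Overall speedup from `cores` workers when only `parallel_fraction`
    of the workload can run in parallel (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

def relative_throughput(clock_ghz, cores, parallel_fraction):
    # Throughput relative to a hypothetical 1 GHz single core.
    return clock_ghz * amdahl_speedup(parallel_fraction, cores)

for p in (0.5, 0.8, 0.95):  # hypothetical parallel fractions
    quad = relative_throughput(4.0, 4, p)  # i7-6700K: 4 cores @ 4.0 GHz
    hexa = relative_throughput(3.5, 6, p)  # i7-5930K: 6 cores @ 3.5 GHz
    print(f"p={p:.2f}: 6700K {quad:.1f} vs 5930K {hexa:.1f}")
```

On these assumptions the faster quad-core wins until the workload is heavily parallel (the crossover falls somewhere between p = 0.5 and p = 0.8 here), matching the intuition that a mostly serial timeline favors clock speed while a render that saturates all cores favors the 5930K. Hyper-threading and Quick Sync are ignored in this sketch.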

I'm hoping Jeff can shed some light.

Al

This message was edited 3 times. Last update was at Apr 04. 2016 06:00

Power Director 13&14 Ultimate, Photo Director 6, Audio Dir, Pwr2Go 10
Win 10 64, Intel MB DH87MC, Intel i5-4670 CPU @ 3.40GHz, 16Gb DDR3 1600, 128Gb SSD, 2x1Tb WDBlue 7200rpmSATA6, Intel 4600 GPU, Gigabyte G1 GTX960 4GB, LG BluRay Writer
JL_JL [Avatar]
Senior Contributor Private Message Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 4963 Offline
Quote: In OpenGL apps the Quadro is absolutely crushing, but it looks like in some DirectX apps the results are the other way around (Autodesk Maya can use DirectX; I don't know if that is actually used in this test, but the results point in that direction).

SoNic67, it looks like 18tillidie has what he needs, since he hasn't returned, but since I had some K620 and K2200 results, I thought I'd finish this thread and post them.

No surprise: per the SPECviewperf 12 results shown in the PD14_K620_K2200.png pic for these two GM107 GPUs, the Quadro product line is hands down the better option for the graphics loads present in CAD/CAM applications. Last I looked, all the DirectX API calls in PD are DirectX9 on my system, so I'd doubt any significant benefit from newer DirectX support.

As would be expected, for PD hardware encoding the K620 and K2200 performed as one would expect of any second-generation NVENC encoder. No significant encode-time difference between the two GPUs for the PD NVENC test case, as shown in the pic PD14_K620_K2200_NVENC.png.

The K2200 did perform better in the second PD test case, which utilizes the GPU for OpenCL effects. This would be expected due to its ~1.7X CUDA core count vs the K620 (640 vs 384), as shown in the PD14_K620_K2200_CUDA.png pic, with a ~22% reduction in encode time (5:41 vs 7:17).
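As a quick sanity check on those quoted encode times:

```python
def to_seconds(mm_ss):
    # Convert an "m:ss" time string to seconds.
    minutes, seconds = mm_ss.split(":")
    return int(minutes) * 60 + int(seconds)

k620 = to_seconds("7:17")    # 437 s
k2200 = to_seconds("5:41")   # 341 s
reduction = (k620 - k2200) / k620
print(f"{reduction:.0%} shorter encode on the K2200")  # prints "22% shorter encode on the K2200"
```

Note the ~1.7x CUDA-core advantage doesn't translate into a ~1.7x speedup, since only the OpenCL effect stage scales with core count; the NVENC encode stage performs the same on both cards.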

Jeff
[Attachment: PD14_K620_K2200_CUDA.png]
[Attachment: PD14_K620_K2200.png]
[Attachment: PD14_K620_K2200_NVENC.png]

This message was edited 1 time. Last update was at Apr 09. 2016 01:11

AlS [Avatar]
Senior Member Private Message Location: South Africa Joined: Sep 23, 2014 18:07 Messages: 290 Offline
Thanks Jeff,

What is your feeling regarding his CPU choice for PDR14?

Will a six-core i7-5930K running at 3.5 GHz be better than an i7-6700K running at 4.0 GHz?

What about Intel 530 Graphics with Quick Sync on the 6700K vs none on the 5930K?

Al

Power Director 13&14 Ultimate, Photo Director 6, Audio Dir, Pwr2Go 10
Win 10 64, Intel MB DH87MC, Intel i5-4670 CPU @ 3.40GHz, 16Gb DDR3 1600, 128Gb SSD, 2x1Tb WDBlue 7200rpmSATA6, Intel 4600 GPU, Gigabyte G1 GTX960 4GB, LG BluRay Writer
SoNic67 [Avatar]
Senior Contributor Private Message Joined: Sep 27, 2014 14:14 Messages: 1286 Offline
Jeff,

I will continue this discussion because it is interesting in general, even if the OP is not interested anymore.

The Quadro cards perform better in OpenCL situations (effects, transitions) and the same for NVENC encoding. What you have posted there in the PD encoding shows something I have griped about for a long time: the GPU is not fully utilized. Basically, if you get a faster GPU, it will be used at a lower percentage, while at the same time the CPU and memory are not maxed out either (you don't have that posted, but that was my experience).

It's almost like there are some latencies that break the flow. I wonder if the much-discussed Async Compute feature would be able to help with this, of course only if PD supported those DX12 features.



LE: This is a cool analogy of what I think happens here: https://forum.beyond3d.com/threads/dx12-performance-discussion-and-analysis-thread.57188/page-18#post-1869621

This message was edited 4 times. Last update was at Apr 09. 2016 14:30

JL_JL [Avatar]
Senior Contributor Private Message Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 4963 Offline
Quote: The Quadro cards perform better in OpenCL situations (effects, transitions) and the same for NVENC encoding. What you have posted there in the PD encoding shows something I have griped about for a long time: the GPU is not fully utilized. Basically, if you get a faster GPU, it will be used at a lower percentage, while at the same time the CPU and memory are not maxed out either (you don't have that posted, but that was my experience).

Not sure on that; a Quadro will perform no better with PD than its similar GeForce cousin, it will just drain the pocketbook, be it encoding, playback, or scrubbing a timeline. This was shown earlier with the first-generation NVENC comparisons of the very similar cousin cards K2000 and GTX650. I've attached yet another PD chart as verification: a second-generation NVENC GPU comparison of a Ti750 on PD test 2. As anticipated, it essentially matches the Quadro K2200, its cousin. There is some minor base clock difference between the two GPUs, but overall very comparable encode times and utilization, ~5:40 vs ~5:20. The Ti750's SPECviewperf 12 score is a small fraction of the K2200's capability, however.

I'm not familiar with any PD transitions using OpenCL and the GPU for acceleration; which ones?

Since PD started supporting the new Nvidia NVENC encoder, the GPU CUDA cores are no longer needed for the CUDA-based encoder, hence they will be nearly idle most of the time. Currently the timeline needs some PD editing feature to utilize them, and the only items I see that use GPU CUDA for hardware acceleration with OpenCL in PD are the accelerated effects (properly annotated in PD) and, to name a few others, PhotoDirector export of pics with some common adjustments like WB, tone, etc. Likewise the same for some adjustments within ColorDirector. Lots of neat tech stuff is always on the horizon; I guess one can only wait and see what CL developers decide to take advantage of and provide to video editing enthusiasts. Or, as a few have written, one shouldn't have to be a rocket engineer to edit some video.

A Quadro model is great for any CAD/CAM-type application, and it's the only model we utilize for that: great primarily for speed gains with anti-aliased lines and points done in hardware, and an OpenGL pipeline for the many intense screen overlays used in these apps. None of these attributes matter for PD, so no benefit that I see. If a home video enthusiast also runs engineering-type apps, as simulated in the SPECviewperf 12 tests, a Quadro is definitely worth consideration for that unique use case.

Jeff
[Attachment: PD14_Ti750_K2200_CUDA.png]
SoNic67 [Avatar]
Senior Contributor Private Message Joined: Sep 27, 2014 14:14 Messages: 1286 Offline
Yeah, this thread was about a dual-purpose PC, combining video editing with workstation apps; that's why it diverged toward Quadro cards.
Of course, a second-generation Maxwell will be the best investment for purely video editing in PD.

In your pic from above, I see that the GPU usage was 60-70% while video encoding (NVENC) was at 40%.
I thought PD uses OpenCL for effects, not CUDA anymore (and also for AMD/Intel cards); at least that's what the settings suggest.

However, PD still didn't manage to maximize the GPU usage.
If the CPU was at less than 99% usage, it means those latencies in the processing really are a bottleneck.
[Attachment: OpenCL.PNG]

This message was edited 3 times. Last update was at Apr 09. 2016 20:14

JL_JL [Avatar]
Senior Contributor Private Message Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 4963 Offline
Quote: In your pic from above, I see that the GPU usage was 60-70% while video encoding (NVENC) was at 40%.
I thought PD uses OpenCL for effects, not CUDA anymore (and also for AMD/Intel cards); at least that's what the settings suggest.

Probably just terminology: OpenCL is an API that, for Nvidia, runs on CUDA-powered GPUs. Not all Nvidia GPUs are CUDA-capable, but virtually any modern Nvidia GPU is, and apps can take advantage of OpenCL acceleration. Yes, there is OpenCL capability on AMD/Intel GPUs as well.

The settings terminology is correct: certain effects will use OpenCL technology to speed up render/preview. If unchecked, the CPU does the effect's frame-preparation task; if checked, the GPU does it.

Quote: However, PD still didn't manage to maximize the GPU usage.
If the CPU was at less than 99% usage, it means those latencies in the processing really are a bottleneck.

The GPU is only doing the frame preparation for the effect that was accelerated via OpenCL, when the preference option is checked. The GPU load depends on the complexity of the effect; a very complicated effect would be required for the GPU to be 100% loaded. If you let the CPU do this task (preference unchecked), GPU load will be zero, with encoding still done via the NVENC IP block (Video Engine load) when the hardware video encoder is selected.

Obviously, that's why two discrete tests were discussed above to evaluate GPU performance with PD14: one to show the NVENC encoder performance alone, and one to show how the GPU performed with a PD timeline carrying a significant GPU OpenCL load. My view: one can't evaluate a GPU properly without putting together a proper timeline to exercise its features. How often you use those features is a different matter.

Jeff
Powered by JForum 2.1.8 © JForum Team