CyberLink Community Forum
where the experts meet
CUDA encoder vs nVidia GTX 750Ti - 10 times slower hardware encode at 1920x1080 24 Mbps
Julien Pierre
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
I just purchased an nVidia GTX 750 Ti (Maxwell) at Fry's on Wednesday.
http://www.frys.com/product/8075754?site=sr:SEARCH:MAIN_RSLT_PG

In PD9 through PD12, only the CUDA encoder can be used, via NVCUVENC from nVidia drivers <= 337.88.

I am running into a strange problem:

When encoding H.264/AVC as M2TS 1920x1080 24 Mbps with the hardware encoder, encodes take an enormous amount of time.

My 67-second test clip, which is itself 1920x1080 24 Mbps M2TS source material, takes over 6 minutes to encode with the CUDA hardware encoder on the 750 Ti. It is always 6+ minutes, regardless of whether I use PD9 through PD12.

By comparison, a software encode only takes between 42 and 61 seconds on the same machine, depending on the version of PD I use.

With the previous GTX 560 Ti GPU, still on the same machine, the hardware CUDA encode only took between 25 and 29 seconds depending on the version of PD. Now it is 360+ seconds with the 750 Ti!

The really strange thing is, if I switch the profile to 1920x1080 28 Mbps instead of 24 Mbps, the CUDA hardware encode proceeds at the expected speed - the encoding time goes down from 360+ seconds to under 30 seconds.
Same thing if I encode at 16 Mbps. The issue only seems to occur when the profile matches the source material.

Now, I know that the 750 Ti has the NVENC chip. NVENC works OK in PD13 if I upgrade to the latest build and latest nVidia drivers. But when using PD9-PD12, the CUDA encoder is the only accelerated choice.
I feel like it should not be 10 times slower.
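For reference, here is a quick way to put those timings on a common scale. This is just a throwaway Python sketch of mine (the figures are the approximate numbers quoted above; the labels and variable names are mine, nothing here comes from PowerDirector itself): it computes the realtime factor, i.e. source duration divided by encode time.

```python
# Realtime factor = source duration / encode time.
# Figures are the approximate timings quoted above; names are mine.

CLIP_SECONDS = 67  # duration of the 1920x1080 24 Mbps M2TS test clip

runs = {
    "750 Ti CUDA, 24 Mbps (matches source)": 360,  # "6 minutes +"
    "750 Ti CUDA, 28 Mbps profile": 30,            # "under 30 seconds"
    "560 Ti CUDA, 24 Mbps": 27,                    # 25-29 s depending on PD version
    "Software encode, 24 Mbps": 50,                # 42-61 s depending on PD version
}

for label, encode_seconds in runs.items():
    factor = CLIP_SECONDS / encode_seconds
    print(f"{label:40s} {encode_seconds:4d} s  ->  {factor:.1f}x realtime")
```

By that measure the 750 Ti CUDA encode at 24 Mbps runs at roughly 0.2x realtime, while the old 560 Ti managed about 2.5x - more than a 10x gap.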

I will publish some data since I have done so much testing with various GPUs in the last 2 days. I will also attach the test clip.
Attachment: Benchmark2.MTS (185,076 KB)


MSI X99A Raider
Intel i7-5820k @ 4.4 GHz
32GB DDR4 RAM
Gigabyte nVidia GTX 960 4GB
480 GB Patriot Ignite SSD (boot)
2 x 480 GB Sandisk Ultra II SSD (striped)
6 x 1 TB Samsung 860 SSD (striped)

2 x LG 32UD59-B 32" 4K
Asus PB238 23" HD (portrait)
[Post New]
CUDA sucks on Maxwell. They dialed down the double-precision floating point capability (relative to single precision) from 1/8 in Fermi, to 1/24 in Kepler, to 1/32 in Maxwell (and in those, the shader clock was also halved), just to be able to cram more cores in and keep the temperature in check.
Install the latest PD12 patch. Temporarily delete the CUDA files and try it.
It was reported that CL added NVENC support in PD13 (but that was with a Maxwell Gen2 card, a GTX 970; yours is Maxwell Gen1); nobody has reported back for PD12.
Maybe your issue is also the SVRT? It might be 'broken' and slow down the rendering when the source matches the output...

PS: I have only Fermi cards. You can test your CUDA speed with http://cuda-z.sourceforge.net/
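To put those ratios into rough numbers, here is a small illustrative Python sketch of my own. The card specs are approximate public reference figures (not from this thread), and peak FLOPS is estimated as 2 ops per core per clock (FMA); FP64 peak is just that value times the FP64:FP32 ratio. Double-check against your actual card.

```python
# Rough peak-throughput illustration of the FP64:FP32 ratios above.
# Peak FLOPS ~= 2 (FMA) * cores * shader clock; FP64 peak is that value
# times the ratio. Specs below are approximate reference figures.

cards = [
    # name,                  cores, clock (GHz), FP64:FP32 ratio
    ("GTX 480 (Fermi)",        480, 1.401, 1 / 8),
    ("GTX 650 (Kepler)",       384, 1.058, 1 / 24),
    ("GTX 750 Ti (Maxwell)",   640, 1.020, 1 / 32),
]

for name, cores, clock_ghz, fp64_ratio in cards:
    fp32 = 2 * cores * clock_ghz   # GFLOPS, single precision
    fp64 = fp32 * fp64_ratio       # GFLOPS, double precision
    print(f"{name:22s} FP32 ~{fp32:6.0f} GFLOPS   FP64 ~{fp64:5.0f} GFLOPS")
```

So even though raw single-precision throughput stayed roughly flat across these particular cards, the usable double-precision rate dropped from around 170 GFLOPS on a GeForce Fermi to around 40 GFLOPS on the 750 Ti.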


Julien Pierre
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
Quote: CUDA sucks on Maxwell. They dialed down the double-precision floating point capability (relative to single precision) from 1/8 in Fermi, to 1/24 in Kepler, to 1/32 in Maxwell (and in those, the shader clock was also halved), just to be able to cram more cores in and keep the temperature in check.
Install the latest PD12 patch. Temporarily delete the CUDA files and try it.


The latest PD12 patch with Maxwell only works on Intel CPUs - I have an AMD FX-8350 in my primary machine (and a few other machines, all with various AMD CPUs, none Intel).

Anyway, I find it strange that the CUDA encoder works fine for H.264 at 16 Mbps and 28 Mbps, but is so slow at 24 Mbps.


It was reported that CL added NVENC support in PD13 (but that was with a Maxwell Gen2 card, a GTX 970; yours is Maxwell Gen1); nobody has reported back for PD12.


Yes, I didn't want to pay the premium for Maxwell Gen2.


Maybe your issue is also the SVRT? It might be 'broken' and slow down the rendering when sources matched the output...

PS: I have only Fermi cards. You can test your CUDA speed with http://cuda-z.sourceforge.net/


SVRT works fine - encoding time is 6-10 seconds; it seems to be more I/O bound than anything. SSD write time tends to vary.

I have a bunch of nVidia cards. My oldest is an 8600 GT, which is 7 years old.
The encode time with that card was 3 minutes, about the same with all versions of PD.
The Maxwell is doing 6 minutes. Something is seriously wrong in the software stack, IMO.
Of course, nVidia has dropped support for NVCUVENC in the newest drivers, so I'm not sure if it will get fixed...
JL_JL
Senior Contributor Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 6091 Offline
[Post New]
Quote: The latest patch in PD12 and Maxwell only works for Intel CPU - I have an AMD FX-8350 as my primary machine (and a few other machines, all with various AMD CPUs, none Intel).

Not my experience; it works on my AMD platform with PD12. HA support with Maxwell is limited to progressive formats though, still a CL issue I believe. The PD12 patch was released 10/28, and they didn't even have i/p support in PD13 at that time; it was in beta and noted here http://forum.cyberlink.com/forum/posts/list/40892.page#211198 in the release notes on 10/28. It became official in the 11/10 patch update to PD13, so I surely wouldn't think the same support was in the PD12 10/28 patch release.

The progressive-only (p) support is all that existed in PD13 prior to the beta release or full patch update as well. My guess is that a PD12 update for Maxwell HA i/p support just isn't available yet, if CL even plans to release one.

Jeff
[Post New]
Quote: Of course, nVidia has dropped support for NVCUVENC in the newest drivers so I'm not sure if it will get fixed ...

Not likely; my guess is that nVidia wants us to buy the newer cards. Forced technical obsolescence.
Other developers might step in.
If MainConcept hadn't been sold to DivX, then DivX to Sonic Solutions, and then Sonic Solutions to Rovi Corporation, maybe we would have something better by now (they stopped developing at the Fermi generation of cards; Dec 2010 was the last build).

BTW, what is the CUDA-Z score for Maxwell's double precision? I am tempted to add one to my system (I have two Fermi cards now, one modded GTX480 and one real Quadro 600).

Attachments: Q600.PNG (26 KB), Q6000.PNG (27 KB)


Julien Pierre
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
Quote:
Quote: The latest patch in PD12 and Maxwell only works for Intel CPU - I have an AMD FX-8350 as my primary machine (and a few other machines, all with various AMD CPUs, none Intel).

Not my experience, it works on my AMD platform with PD12. HA support with Maxwell is limited to progressive formats though, still a CL issue I believe. The PD12 patch was released 10/28 and they didn't even have i/p support in PD13 yet at that time, it was beta and noted here http://forum.cyberlink.com/forum/posts/list/40892.page#211198 in the release notes on 10/28.


OK, that's one case I still have to try. I was on the 344.65 drivers with PD13 when I verified the Maxwell support on my 750 Ti.
After that, I went back to the 337.88 drivers and checked CUDA perf with PD9-PD13. My system is still in that state. Later tonight I will upgrade the drivers again and check NVENC support with PD12.

With the latest PD13 patch, the CUDA hardware encoder no longer works with Maxwell on the 337.88 drivers - but it does on the original PD13 release bits, though with the perf problem I am reporting here.


It became official on the 11/10 patch update to PD13 so I surely wouldn't think the same support was in the PD12 10/28 patch release.


Well, the PD12 build 3403 release notes claim the hardware encoder only works with Intel+nVidia hybrid platforms with driver 340+.
This would imply it is NVENC on Intel CPU machines.
PD13 build 2307 has NVENC working regardless of CPU with driver 340+. Dafydd confirmed that, and that case works for me.

I have done some tests with the following nVidia cards that I currently own (in different systems):
8600 GT. This old GPU is in my 2nd desktop to drive monitor #1.
9800 GT. This old GPU is also in my 2nd desktop to drive monitors #2-3.
GTX 560 Ti (Fermi). These were in my primary desktop and were driving 3 monitors. I have pulled them out now for a single 750 Ti, which drives all 3 monitors.
GT 630/384 cores (Kepler). I have 2 of these as well, in 2 different HTPCs. Very nice low-power card, PCIE x8.
GTX 750 Ti (Maxwell). Now in my main video editing box.

The last 2 cards have NVENC and I still have to do more testing with those.

All my CPUs are AMD - one Phenom II x4 945, one Phenom II x6 1055T, one FX-8120, and one FX-8350.
The FX-8350 box is my primary machine. I only installed PD on the others to benchmark the different GPUs.

Hopefully these results will be helpful to others when I publish them.

I can already say, however, that the NVENC encoder on the 750 Ti isn't any faster than the CUDA encoder on the 560 Ti.
And the encodes from NVENC appear to be lower quality. So, NVENC isn't really a big step forward.

Also, NVENC on Maxwell does allow 4K encodes, and my LG G3 cell phone is now a 4K video camera, so this interests me.
It also records HD 1080p60.

To me, the main attraction of Maxwell is that I can drive my 3 monitors with one card, which I couldn't do with the 560 Ti - that's why I had to have 2 of them. That means half the GPU fans. It also uses much less power, so my home office won't get as hot.

I haven't checked yet if NVENC on Kepler will allow 4K encodes as well.
Julien Pierre
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
Quote:
Quote: Of course, nVidia has dropped support for NVCUVENC in the newest drivers so I'm not sure if it will get fixed ...

Not likely; my guess is that nVidia wants us to buy the newer cards. Forced technical obsolescence.


Yes, clearly they do. But I don't think they can just expect everyone to upgrade. Also, there are a lot of devices in which the cards simply can't be upgraded, such as laptops.

NVENC just isn't very compelling. The chip may be capable of 8x realtime HD encoding, but in the current implementation with PowerDirector, it is closer to 2.3x, which is the same as the CUDA encoder on Fermi (560 Ti).


If MainConcept wasn't sold to DivX and then DivX to Sonic Solutions and then Sonic Solutions sold to Rovi Corporation, maybe we would have something better by now (they stopped developing at Fermi generation cards, Dec 2010 was last build).


Ah, I didn't know the history. That's too bad. But the MainConcept CUDA encoder really doesn't perform well anyway.


BTW, what is the CUDA-Z score for Maxwell's double-precision? I am tempted to add one in my system (have two Fermi now, one modded GTX480 and one real Quadro600).




I wasn't aware of this program. I will download it and get you the data for the 750 Ti and my other GPUs - at least the ones that are still plugged in.
JL_JL
Senior Contributor Location: Arizona, USA Joined: Oct 01, 2006 20:01 Messages: 6091 Offline
[Post New]
Quote: Well, the PD12 build 3403 release notes claim the hardware encoder is only with Intel+nVidia hybrid platforms with driver 340+.
This would imply this is NVENC with Intel CPU machines.

My guess is that the hybrid platform note CL refers to is laptops with dual Intel+NVIDIA support (for battery life/performance). To my knowledge, this was a PD issue depending on the laptop manufacturer and what type of logic it used for hybrid video support. Some manufacturers permanently disable Intel HD support, others used Optimus, others control it in the BIOS, and so on... my guess is this note was possibly meant to resolve some issues there, and not intended for desktops with discrete graphics.

Again, just a guess; I find these release notes are often rather cryptic. I am reminded of a release note that read:
"5. Resolves program crash issue when deleting specific symbols in the Menu Designer."

When I asked for clarification, this was the real intent:
"5. Resolves program crash issue when deleting a character after inputting more than 255 characters in the Menu Designer."

That's some serious reading between the lines to decipher from the initial release note.

Jeff
Julien Pierre
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
Quote:
That's some serious reading between the lines to decipher from initial release note.


Yes. To be fair, most software makers don't provide complete fix lists, much less detailed test cases, which often don't fit on a single line but can run to multiple pages. Race condition bugs are the worst.

Going back to the original issue in this thread - both the CPU and the GPU usage are under 10% during this very slow encode - this is what leads me to believe that it is a software problem, not a hardware problem with Maxwell.
When I use the other bitrates to encode, the CPU and GPU usage are much higher.


Julien Pierre
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
Quote:
Not my experience, it works on my AMD platform with PD12. HA support with Maxwell is limited to progressive formats though, still a CL issue I believe.


Thanks!

The way I test is to first use "Intelligent SVRT" to detect the profile, and then try to use the hardware encoder.

When I do that in PD12 with the clip I attached near the top of the thread, it selects "Top field first" with the profile AVC 1920x1080/60i (24 Mbps).

And at that point, the hardware encoding option is greyed out in PD12 with the 344.65 drivers on the 750 Ti.
If I edit the profile to switch it to Progressive, it does work, indeed.

However, the resulting progressive video clip doesn't look good compared to the 1080i encode.
And of course, I can't compare the encode time with any of the other encodes that I did previously, which were all 1080i.
I don't have the courage to start benchmarking all over again.

In PD13 build 2307, both interlaced and progressive encodes work with the 750 Ti and drivers 344.65, as do 2K/4K encodes. But I haven't started benchmarking those.
Julien Pierre
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
Looks like the "10x" slower case has to do with interlaced vs progressive encodes.

Even a 720x480x60i CUDA encode took 71 seconds on the 750 Ti, i.e. less than 1x realtime, since the clip is 63 seconds.
A software encode takes 20 seconds for the same case.
A 720x480x24p CUDA encode takes 8 seconds on the 750 Ti.

SoNic67
Senior Contributor Joined: Sep 27, 2014 14:14 Messages: 1308 Offline
[Post New]
SVRT works by not re-encoding material whose characteristics already match the output; it should re-encode just 'around' cuts and effects.
In theory it should be the fastest processing... but it is not truly an encode (not useful for testing purposes).


Julien Pierre
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
Here is the data.

I highlighted in red the variable changes and the surprising results.

Notable findings:
- CUDA on the 750 Ti (Maxwell) is very slow for interlaced encodes with PD9 through PD13. This is in cells K12 through K16.
- CUDA on the 750 Ti (Maxwell) is still very slow for progressive with PD9 through PD11. This is in cells K19 through K22.
- CUDA on the 750 Ti (Maxwell) is fast for progressive with PD12/PD13. Cells K22 - K23.

- NVENC on the 750 Ti (Maxwell) doesn't work for interlaced in PD9 - PD11 - see cells K25 - K28.
- NVENC on the 750 Ti (Maxwell) works with PD13. See cell K29. Unfortunately, it's essentially the same speed as the 560 Ti (Fermi) hardware encoder - refer to cells K6 - K10.

- CUDA on the GT 630/384 cores (Kepler) is 50% slower in PD12/PD13 than in PD10/PD11. See cells K32 through K35.
- NVENC on the GT 630/384 cores (Kepler) is 20% slower in PD13 than CUDA on the same card in PD10/PD11. See cell K42 vs K32/K33.

- CUDA on the 9800 GT doesn't provide any acceleration over software encoding on my Phenom X6 machine, in PD10 through PD13. See cells K45 - K48 vs L45 - L48.
- CUDA on the 8600 GT is a 3x decelerator vs the CPU on that same box. See cells K50 - K54 vs L44 - L48. This is not that surprising for a 7-year-old GPU going against a 4-year-old CPU.

- The software encoder got a significant boost in performance between PD9 and PD10 for all the CPUs I tested.
- The software encoder performs fairly close between PD10 - PD12 on all CPUs.
The one exception is the AMD FX-8120. On that CPU, the software encoder speed varies greatly between PD versions.
It is much slower in PD12/PD13 - the speed got cut in half vs PD11. See cells L31 - L36.



Attachment: pd.png (139 KB)


Julien Pierre
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
Quote:
Going back to the original issue in this thread - both the CPU and the GPU usage are under 10% during this very slow encode - this is what leads me to believe that it is a software problem, not a hardware problem with Maxwell.
When I use the other bitrates to encode, the CPU and GPU usage are much higher.


I was relying on GPU-Z updating only every second, but the GPU usage reading was not consistent enough. When I changed it to 0.1 sec, I could see that the GPU usage was 99%. So the 750 Ti Maxwell card really does seem to be choking on those interlaced encodes with CUDA.
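For anyone who wants to reproduce that kind of fast polling without GPU-Z, here is a minimal sketch of mine using the NVML Python bindings. It assumes the nvidia-ml-py (pynvml) package is installed; GPU-Z itself does its own thing internally, this is just the same idea.

```python
# Sample GPU utilization every 100 ms during an encode. A 1-second
# polling interval can easily miss or average away short bursts;
# 0.1 s gives a much truer picture of whether the GPU is pegged.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    while True:
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        print(f"GPU {util.gpu:3d}%   memory controller {util.memory:3d}%")
        time.sleep(0.1)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```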
Julien Pierre
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
Quote: BTW, what is the CUDA-Z score for Maxwell's double-precision? I am tempted to add one in my system (have two Fermi now, one modded GTX480 and one real Quadro600).


Here you go.

Looks like for double precision, it's a quarter of the speed of one of your cards (presumably the genuine Quadro), and a little over double the speed of the other, modded one.

I'm not doing heavy computations, though - mainly just using the card to drive 3 monitors for 2D, plus video encoding/decoding.

Attachment: 750ti.png (45 KB)


SoNic67
Senior Contributor Joined: Sep 27, 2014 14:14 Messages: 1308 Offline
[Post New]
Thanks for the tests; that probably took a while.

And no, my fast one is a GTX480 modded into a Quadro 6000.
The slow one is a low-end Quadro 600 that I use mostly as a spare, since it doesn't require an auxiliary power connector.
Why do I care about double precision? Because I think that's what the CUDA encoder uses.


Julien Pierre
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
Quote: Thanks for the tests, probably took a while.


Yes, it did, but it's much easier now that all 5 versions of PD can coexist.


And no, my fast one is a GTX480 modded into a Quadro 6000.
The slow one is a low-end Quadro 600 that I use mostly as a spare, since it doesn't require an auxiliary power connector.


Ah, I see. FYI, most 750 Ti cards do not need a power connector either. The Gigabyte version I got does, for no good reason - it doesn't really need the extra power; they just designed it to pull power externally instead of from the PCI-e slot. Strange. Apparently their engineers didn't trust the motherboard makers to provide enough power.


Why do I care about double precision? Because I think that's what the CUDA encoder uses.


It's an interesting theory, but according to CUDA-Z, neither my 8600 GT nor my 9800 GT even supports double-precision floating point, and the CUDA H.264 encoder works fine on them.
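That matches the hardware: the 8600 GT and 9800 GT are compute capability 1.1 parts, and CUDA only gained hardware double precision at compute capability 1.3. Here is a small sketch of mine to check this on any box (it assumes the pycuda package is installed; it is just a probe, not anything PowerDirector itself does):

```python
# Report each CUDA device's compute capability and whether it has
# hardware double precision (introduced with compute capability 1.3).
import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    has_fp64 = (major, minor) >= (1, 3)
    print(f"{dev.name()}: compute capability {major}.{minor}, "
          f"double precision: {'yes' if has_fp64 else 'no'}")
```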

See these screenshots from CUDA-Z. These GPUs are both in the same machine, with a Phenom II x6 CPU overclocked to 3.6 GHz.





I would say that the very slow performance of the CUDA encoder on Maxwell is likely a bug.
But given that nVidia has NVENC on that chip and wants to push that new API, I don't know how we get that bug fixed.
And it could still partly be a Cyberlink bug too. The 750Ti has very slow progressive CUDA encodes in the older versions of PowerDirector, but they work fine in PD12-PD13.
Attachments: 9800gt.png (9800 GT, 32 KB), 8600gt.png (8600 GT, 30 KB)


SoNic67
Senior Contributor Joined: Sep 27, 2014 14:14 Messages: 1308 Offline
[Post New]
Cool, I didn't think to install my old 9500GT to see whether double precision is used or not in the CUDA encoder - that's for the nVidia engineers to answer. It would be good for the newer cards, since they all have good single-precision performance.
Wonder if it makes a difference in encoding quality...

I was planning to replace the Quadro 600 with a 750 Ti, but then I found out that its NVENC is a weirdo - between Kepler and Maxwell Gen2. So I am not sure how well it will play in the future (buggy?) because of this in-between generation.
Julien Pierre
Contributor Joined: Apr 14, 2011 01:34 Messages: 476 Offline
[Post New]
Quote: Cool, I didn't think to install my old 9500GT to see whether double precision is used or not in the CUDA encoder - that's for the nVidia engineers to answer. It would be good for the newer cards, since they all have good single-precision performance.
Wonder if it makes a difference in encoding quality...


While my field of work is not graphics, it doesn't make much sense to me that a codec would be using the FPU. Floating point makes more sense for things like 3D rendering.


I was planning to replace the Quadro 600 with a 750 Ti, but then I found out that its NVENC is a weirdo - between Kepler and Maxwell Gen2. So I am not sure how well it will play in the future (buggy?) because of this in-between generation.


I think whenever there is a new piece of software/hardware, it is guaranteed to be pretty buggy. This is just the reality these days with consumer-level stuff. If you buy hardware and software from 12-18 months ago, they tend to have worked out the bugs, and you save a bunch that way as well. There is a big premium to pay for the latest and "greatest", and it's not just money.

The Maxwell 2 looks good on paper. I want the HDMI 2.0 in particular, for 4K progressive display support.
The Gigabyte GTX 750 Ti I got supports 4K progressive with dual HDMI - but very few monitors actually support that.
New displays are using HDMI 2.0.

But $350 - $400 for a GTX 970 is too high a price to pay to be a beta tester. I already feel I paid too much to test the GTX 750 Ti.
When there is a sub-$200 card, I may consider it. Maybe there will be a decent and affordable 4K monitor to go with it too.
I also want a 4K projector for my home theater that doesn't cost $10k. Or even $4k. Guess I'll wait a few more years for that.
PepsiMan
Senior Contributor Location: Clarksville, TN Joined: Dec 29, 2010 01:20 Messages: 1054 Offline
[Post New]
Quote: When there is a sub-$200 card, I may consider it. Maybe there will be a decent and affordable 4K monitor to go with it too.
I also want a 4K projector for my home theater that doesn't cost $10k. Or even $4k. Guess I'll wait a few more years for that.


Samsung 55" LED TV UN55HU8550FXZA 4K Smart TV under $2K! On sale NOW! (sounds like a salesman...)

Whenever you're working with 4K videos from your LG G3 in PD13, would you post 'em?
I'd love to sit with my popcorn and watch...
Thanks a million.
P.S.
Meanwhile I'm eyeing an ASUS ROG G751JT with a GTX 970M for $1499...

Yes, I do believe in Santa and Mrs. Claus!!!


'no bridge too far'

Yashica Electro 8 LD-6 Super 8mm
Asrock TaiChi X470, AMD R7 2700X, W7P 64, MSI GTX1060 6GB, Corsair 16GB/RAM
Dell XPS L702X i7-2860QM, W7P / W10P 64, Intel HD3000/nVidia GT 550M 1GB, Micron 16GB/RAM
Samsung Galaxy Note3/NX1