CyberLink Community Forum
where the experts meet
Here we go again... no option to enable NVidia high performance graphic card with Power Director 16
Alevtina8606 [Avatar]
Newbie Joined: Oct 17, 2017 11:00 Messages: 12 Offline
[Post New]
Hi guys..

If you have the same problem, please add your reports and comments. Here comes the description, and then my tears...

[Prelude]

Since the introduction of the newer generations of Intel processors, it has become common for laptop systems to contain two graphics solutions: the internal Intel HD Graphics and the discrete NVidia high-performance graphics. These are installed independently, normally with Intel as the primary option, but the NVidia driver allows setting the NVidia card as the default one.

This switch may be set to "Auto" or to a "Preferred" setting. One can also launch a given app with a selected graphics adapter (via the right-click context menu on the desktop). The NVidia control panel also allows setting the preferred choice on a per-application basis, or system-wide.
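For reference, recent Windows 10 builds expose a third, OS-level switch for this: a per-application GPU preference (Settings > Display > Graphics settings), which is backed by a registry key. A sketch; the PowerDirector install path below is an assumption, adjust it to your system:

```shell
:: Request the high-performance GPU for a specific executable.
:: GpuPreference=2 = high performance, 1 = power saving, 0 = let Windows decide.
:: The PowerDirector path here is a guess - point it at your actual exe.
reg add "HKCU\SOFTWARE\Microsoft\DirectX\UserGpuPreferences" ^
    /v "C:\Program Files\CyberLink\PowerDirector16\PDR.exe" ^
    /t REG_SZ /d "GpuPreference=2;" /f
```

This OS-level preference sits above both the NVidia control panel and the context-menu choice, so it is worth checking when the other two switches seem to be ignored.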

With this said, I wanted to underline the fact that this configuration is quite common and becoming progressively more common today. So it would only be natural for us consumers to expect developers (e.g. CyberLink) to account for this fact, and NOT find themselves surprised when a user reports a "strange configuration". It is not uncommon anymore!



[Where problem appears..]

Here is what I have on my new laptop, which uses this kind of configuration: a brand new PowerDirector 16 and Windows 10 with all patches and updates correctly installed and configured, all the side-by-side codecs, Intel/NVidia drivers etc., etc., including all the runtimes like .NET, VB, Microsoft C++, Java, Flash, as well as DirectX, Vulkan, OpenGL and who knows what else, all running smoothly and bug-free,

Lots of RAM, a fast SSD, a fast processor, and a powerful NVidia card,

And, what is important, CUDA-enabled apps from other manufacturers run perfectly fine - Adobe, MediaCoder, WinX etc., etc. All of them can be configured to start with either the Intel or the NVidia adapter. And they really do - I use monitoring to know whether my CUDA app really loads my GPU and how. So I don't just assume - I ran many tests before I wrote this.
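For anyone wanting to repeat this kind of monitoring, the NVidia driver ships a command-line tool; a minimal sketch, assuming nvidia-smi is on the PATH (note that on some Optimus laptops of this era it reports only limited data):

```shell
:: Poll GPU utilization, SM clock and core temperature every 2 seconds.
nvidia-smi --query-gpu=utilization.gpu,clocks.sm,temperature.gpu --format=csv -l 2

:: One-shot list of compute (CUDA) processes currently on the NVidia GPU.
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv
```

If an app truly offloads to the discrete card, its process shows up in the second query and utilization rises in the first.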



[NOW. THE PROBLEM]

PowerDirector does not work with my NVidia card. Only once did it launch with the "non-OpenCL" option, when the hardware-accelerated effects, render and preview option was greyed out, i.e. not switchable on anyway. After that, no matter what I tried, it only shows the "Enable OpenCL" option. Enabling it applies Intel optimizations, which may speed up the supported effects by some 10-15% and employs Intel hardware H.264/H.265 encoding, but that is all!

I was never able to detect any NVidia GPU activity, even when rendering, even when launching PowerDirector 16 from the right-click context menu and explicitly selecting "Launch with High-performance NVidia adapter".



[What I tried..]

PowerDirector also ignores the system-wide setting that gives priority to the NVidia graphics card. It always launches with Intel HD Graphics.

I also cannot set the NVidia solution as the default on a per-application basis in the NVidia control panel, because, guess what - while all the other apps listed there have the choice between Intel HD and NVidia High-Performance graphics - ...

[The outcome..]

PowerDirector defaults to the "Integrated Graphics Intel solution", and this is not even clickable - it is the only option!



[Why is that?]

My guess is that PowerDirector 16 reports to the switching driver that it is not compatible with the NVidia card driver, and hence no choice is possible. Always Intel HD Graphics.



[Tears and Pain..]

Now, you can imagine my total disappointment after all these investments... My pain is almost cosmic!

Why, why, why, CyberLink, did you do this to me? Why not include a list of supported cards, like Adobe did, so one could at least manually add a missing card? The CUDA interface is pretty standard and all the other apps work just fine, so why does PowerDirector 16 fail? It even prevents me from trying, with its disabled selection!



[Tech details]

My GPU core is a GK107. It should be supported.



[What may calm me down..]

I am requesting Cyberlink to fix this, because I cannot change the card!

I would also like to see some official note on this issue here. C'mon, guys from Cyberlink, tell me: why this failure? Why does it happen only with PowerDirector? What is the use of this TrueVelocity if everything is so slowwwww...



[How can the community help?]



Share your thoughts and experience! If you have the same configuration and the same type of problem, please submit your pain too.

Maybe then Cyberlink will discover that not everything is okay.



[My thanks..]



To all readers and participants!



Alya (me)





[Update 1]

Just installed the 2127 beta patch.. nothing changed. Again, on the first launch the first option changed from "OpenCL" to "Hardware supported effects, preview and rendering.." but again it was greyed out. On subsequent launches it changed back to "OpenCL".. Nothing changed in the NVidia Control Panel - Intel HD Graphics is the default, no choice possible.



[Update 2]

Browsed the system registry for NVidia interface keys.. and found that there actually is OpenCL support for my NVidia card - OpenCL64.dll is registered. Thus, hypothetically, PowerDirector could use OpenCL for hardware acceleration through the NVidia solution. Indeed, why not?
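For others who want to repeat this check, the registered OpenCL drivers can be listed directly; a sketch, using the standard Khronos ICD registry location:

```shell
:: Each value name under this key is the path of a vendor OpenCL ICD DLL
:: (an NVidia entry such as OpenCL64.dll should appear here if it is registered).
reg query "HKLM\SOFTWARE\Khronos\OpenCL\Vendors"

:: 32-bit ICDs on 64-bit Windows live under the WOW6432Node branch.
reg query "HKLM\SOFTWARE\WOW6432Node\Khronos\OpenCL\Vendors"
```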

But OpenCL interfaces with the Intel HD Graphics only. Because there are only "Intel Optimized" labels on the effects, and, more importantly, there is no GPU activity (lowest clocks, zero load) while rendering these effects or encoding. Sorry guys..



[Update 3]

Found that the hybrid video solution (2 switched graphics adapters) is governed by NVidia Optimus technology. This one is known to have problems with proper detection of video modes by an application. The technology is supported on Windows 7 and up, so my Windows 10 should support it. But here might be a hint for developers - where to look. After the introduction of this technology, some games are known to have stopped being able to use the NVidia solution and defaulted to the internal Intel HD card..



[Update 3.1]

Regarding the Optimus technology: I discovered that, at least officially, my laptop manufacturer does not support it. Therefore, I cannot choose in my BIOS which card I am using. It is always the Integrated Solution by default there. Also, I only have 3 display ports (in total, including the eDP laptop panel, HDMI and old-style VGA), not the 6 I would expect from Optimus.

I checked the Registry and found that NVidia Optimus is installed, but is it actually used?

So maybe the problem is really related to the Optimus technology - more specifically, PowerDirector 16 might expect it to be there, and the NVidia driver says it might be there, but it might not be! Hence the mess. If anybody could provide me with a more detailed method of detecting this Optimus beast for sure, I would test it more specifically.

After all, other developers somehow resolved this situation - their software works with both graphics adapters; PowerDirector 16 does not.

This message was edited 6 times. Last update was at Oct 19. 2017 02:38

[Post New]
Hi,

I've got an Asus ROG G-series laptop w/ Optimus and it's very easy to bypass this limitation.

Just have a look here:

Nvidia w/ Optimus

It works like a charm.

The only thing is that you have to modify the option each time you update the Nvidia driver.
Alevtina8606 [Avatar]
Newbie Joined: Oct 17, 2017 11:00 Messages: 12 Offline
[Post New]
Quote Hi,

I've got an Asus ROG G-series laptop w/ Optimus and it's very easy to bypass this limitation.

Just have a look here:

Nvidia w/ Optimus

It works like a charm.

The only thing is that you have to modify the option each time you update the Nvidia driver.






Thank you BVR!!!!



However, there are only two screenshots, one for demo, another for what to change, but none for how (((

Thus, I cannot recognize the software in which the driver was modified, and I don't know the procedure or which files to change.

I did a quick search on Internet, but the only page I found is 404 now..



I would welcome a private message with detail.



Thank you!

Alya



[Update]



Okay, I fixed it.. The software is called NVidiaProfileInspector, which is a standalone part of NVidiaInspector - a pretty useful utility in which you can tweak clocks, hardware flags and software parameters of the driver, either system-wide or per application.



I changed the setting to "enabled" and finally I was able to launch the software with the NVidia graphics adapter. Awesome!

Now all the effects are labelled with the NVidia logo instead of the Intel logo. Good.



But(t)!!!



Still, I have only the OpenCL option available (or a grayed-out "Enable hardware support for effects, preview and render" in its place, ONLY on the first launch of PowerDirector after an application/NVidia driver update).



And still, with this OpenCL checked, NVidia card remains idle and cold.



So we tricked PowerDirector into believing that there is a compliant card from NVidia. But we did not make the driver believe that PowerDirector is eligible to work with it )))



Guess1:



[WhatIf] PowerDirector configures itself once it detects a change in its version or a freshly installed driver, and once it is configured it locks in these settings.

Since the "ENABLED" option was not checked in the profile at the moment of the update, it just switched off until the next update.



If so, the question is how to make PowerDirector re-run this self-configuration right after the NVidia driver setting has been altered manually, so that it locks in the right state..

[End guess]

This message was edited 1 time. Last update was at Oct 19. 2017 06:30

[Post New]
PM sent
Alevtina8606 [Avatar]
Newbie Joined: Oct 17, 2017 11:00 Messages: 12 Offline
[Post New]
Quote PM sent




Thank you BVR and those who are reading and sharing,

Yes, I performed the actions in the PM. But still no good result.

Yes, the NVidia label appeared instead of the Intel label on the accelerated effects.

Yes, when I enable an accelerated effect on a video track in the preview, I can see the "pwr.exe" executable among the NVidia GPU tasks. But is it actually running?

No! It tries to start something: judging from the monitor in the Inspector app, the clocks rise for a short while and the GPU load even goes up to 30% momentarily... even the temperature of the GPU core rises by 5-7 °C; but very soon (ten seconds or so), everything returns to idle.. It looks like it tries to run the task on the cores, but then something goes wrong and it switches back to software rendering. Slow, and even slower than with the Intel optimizations.

And again, there is only "OpenCL" option shown in the first place on the Hardware acceleration tab of the Settings window;

No "Hardware support for NVidia/Intel/AMD..."...



Can I explicitly disable OpenCL for PowerDirector via the NVidia panel or the Inspector?



Any other suggestions?
[Post New]
Hi,

I did different tests and it works: with Nvidia it took 54 sec to render, with integrated Intel 1 min 16 sec.
[Thumb - nvidia on.jpg] Filename: nvidia on.jpg | Description: activation nvidia | Filesize: 411 KB | Downloaded: 166 time(s)
[Post New]
So PowerDirector uses OpenCL on the Nvidia graphics card; CUDA isn't implemented.

When I apply an effect from NewBlue (which doesn't use OpenCL), the GPU load is only 1 or 2%.

I've got a second PC w/ an AMD Radeon graphics card and it's the same: PowerDirector uses OpenCL.

And I prefer to use a dedicated graphics card over an integrated one because they have their own memory.
GGRussell [Avatar]
Senior Contributor Joined: Jan 08, 2012 11:38 Messages: 709 Offline
[Post New]
Quote Hi guys.. After all, other developers somehow resolved this situation - their software works with both graphics adapters; PowerDirector 16 does not.
PowerDirector has never been compatible with Nvidia Optimus or MSHybrid mode. I went through all this over a year ago when I purchased a high-end MSI laptop specifically to replace my desktop. I wanted something powerful enough to edit video. I tried every suggestion and hack I could find and nothing worked. Users shouldn't have to figure out workarounds for software. I ended up just returning it and letting the manufacturer know why.

I later purchased a Sager laptop (Clevo from Taiwan) which has an option in BIOS to disable MSHybrid mode. I have been very happy with it. I just boot with the nVidia GTX1060 and PD is happy with it.
Intel i7 4770k, 16GB, GTX1060 3GB, Two 240GB SSD, 4TB HD, Sony HDR-TD20V 3D camcorder, Sony SLT-A65VK for still images, Windows 10 Pro, 64bit
Gary Russell -- TN USA
Alevtina8606 [Avatar]
Newbie Joined: Oct 17, 2017 11:00 Messages: 12 Offline
[Post New]
Quote
Quote Hi guys.. After all, other developers somehow resolved this situation - their software works with both graphics adapters; PowerDirector 16 does not.
PowerDirector has never been compatible with Nvidia Optimus or MSHybrid mode. I went through all this over a year ago when I purchased a high-end MSI laptop specifically to replace my desktop. I wanted something powerful enough to edit video. I tried every suggestion and hack I could find and nothing worked. Users shouldn't have to figure out workarounds for software. I ended up just returning it and letting the manufacturer know why.

I later purchased a Sager laptop (Clevo from Taiwan) which has an option in BIOS to disable MSHybrid mode. I have been very happy with it. I just boot with the nVidia GTX1060 and PD is happy with it.




Well, good for you!



Meanwhile, over the days of testing I somehow found a configuration where PD16 starts with the integrated Intel, but then uses OpenCL with NVidia to render and preview.. I dunno how this is possible.

This is a very interesting question for the developers! Maybe OpenCL on the Intel platform is considered inferior to the NVidia platform, or was just damaged by all these uninstalls/reinstalls?



I have a weird experience with such a configuration: H.264, Windows Media, MPEG-2 and AVI hardware acceleration support for FHD is missing; I now have H.265 encoding faster than H.264, because it is optimized (Intel Quick Sync) and H.264 is not.

Sometimes something happens and it switches to the Intel HD Graphics, which seems to be faster (more effective).

Consider:

for the same job (H.264, FHD from 4K, 24 fps, 7 effects, all Intel-optimized):


  • CPU - 14 min

  • CPU with OpenCL through NVidia: 11 min (mean GPU load is 50-60%)

  • CPU with Open CL through Intel GPU: 8 min


The problem is that the last configuration cannot be reached deliberately, i.e. by launching PD16 with the Integrated Graphics option!

It starts with Intel, but then again uses NVidia OpenCL. This sucks.

(Yes, I have tried reinstalling PD16, codec packs, the Intel and NVidia drivers)

Another weird thing is that when it runs like this, real-time performance is poor. It starts playback smoothly, but within several seconds it goes laggy and then turns into a slide show. The NVidia monitor shows that it tries to sync to real-time, then fails, gives up and clocks down..

Interestingly, when I uncheck OpenCL, I have both smooth playback (with effects) and faster rendering!!!!

GUYS FROM CYBERLINK!!! PLEASE!!!! HEAR MY VOICE!

If you cannot do a reliable autodetect of the enumerated hardware accelerators, and quickly and reliably rate them (obviously the case), do us all a favour and allow NORMAL MANUAL SELECTION!

We all would welcome a Performance Window where there is a link: Manual Setup!

Where a user could set explicitly which hardware, through which interface, will be used, separately for Preview and Render.

This is such an obvious thing to have at hand in every good video rendering software that I simply cannot imagine WHY it is not there yet!!!



VOTE FOR MANUAL SETUP and screw the TrueVelocity which fails!!!



Submit your votes here if you like ))



Thank you everybody!
Warry [Avatar]
Senior Contributor Location: The Netherlands Joined: Oct 13, 2014 11:42 Messages: 853 Offline
[Post New]
An option to manually select the hardware is of course really nice, especially for the techies. But I assume that most users just want to focus on the video editing without having to worry about the settings too much. It should simply work as fast as the available hardware allows.

I would think that the Cyberlink developers, who are capable of offering such fine video editing functionality, should see this as a challenge and get the Cyberlink software to utilise the available hardware as optimally as possible. "Cyberlink will get the most out of your hardware" could be the next slogan.

A statement that Cyberlink software does not support any fancy hardware will also stop the discussion of course.
Alevtina8606 [Avatar]
Newbie Joined: Oct 17, 2017 11:00 Messages: 12 Offline
[Post New]
Quote An option to manually select the hardware is of course really nice, especially for the techies. But I assume that most users just want to focus on the video editing without having to worry about the settings too much. It should simply work as fast as the available hardware allows.

I would think that the Cyberlink developers, who are capable of offering such fine video editing functionality, should see this as a challenge and get the Cyberlink software to utilise the available hardware as optimally as possible. "Cyberlink will get the most out of your hardware" could be the next slogan.

A statement that Cyberlink software does not support any fancy hardware will also stop the discussion of course.




Dear Warry,



I agree that the overall concept of "keep it as simple as possible" is key for a large number of users and is beneficial in every sense.

I agree that for those needing professional features there are professional instruments for a fair price.

And I agree that if the resources of the computer are properly enumerated, rated and used optimally, there is simply no need for any manual adjusting.

But if they are not?

Look.. there are only 2 checkboxes for settings. Basically, these are “Enable hardware support for encoding” and “Enable hardware support for decoding”.

Nothing simpler can be imagined.



But then, imagine how COMPLEX and time-consuming the tweaking and diagnosis of your system (hardware/codecs/interfaces/software) becomes if there is no other means of adjustment:


  • decoding can be performed on Intel or NVidia or CPU

  • encoding can be performed on Intel, or NVidia or CPU

  • processing can be performed on CPU, Intel, Nvidia and any combination of them all.

  • availability of file formats/codecs depends on their support on the particular hardware.


With 3 choices each for decoding and encoding, and 7 possible device combinations (any non-empty subset of the three) for processing, that is 3 × 3 × 7 = 63 possible computation configurations.
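As a sanity check on the size of that search space, a short script can enumerate the combinations, assuming decoding and encoding each run on exactly one of the three devices while processing may use any non-empty combination of them:

```python
from itertools import combinations

devices = ["CPU", "Intel GPU", "NVidia GPU"]

# Processing may run on any non-empty subset of the three devices: 7 subsets.
processing_combos = [
    combo for r in range(1, len(devices) + 1)
    for combo in combinations(devices, r)
]

# Decoding and encoding each use exactly one device.
scenarios = [
    (dec, enc, proc)
    for dec in devices
    for enc in devices
    for proc in processing_combos
]

print(len(scenarios))  # 3 * 3 * 7 = 63
```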

Personally, I would welcome it if TrueVelocity rated them all and used the most efficient scenario. However, it DOES NOT.

Consider my example system:

(what I say here comes from extensive tests and monitoring, not just my assumptions),

figures for rendering 7 Intel-optimized effects applied to a 4K H.264 source and encoding to an FHD target:

With OpenCL enabled,


  • Effects are calculated on the CPU engine with some offload to the NVidia GPU through OpenCL, with typical loads near 100% on the CPU and 50-60% on the NVidia GPU - giving a non-real-time experience - not suitable for previews.

  • Decoding is performed on the Intel GPU with negligible loads of a few percent.

  • Encoding is performed on the Intel GPU, with loads from 5% on XAVC, 10% on H.264, and up to 40-50% on H.265.


When OpenCL is disabled,


  • effects are calculated on the streamed CPU engine, which gives a real-time experience, ideal for previews but not as good for rendering (slower).

  • encoding and decoding are performed on the Intel GPU


One can immediately see that there are two preferred configurations for my system (in fact, for every dual-graphics-card system - not a "fancy" system, but common nowadays, especially with laptops!):


  • one for preview/editing, real-time

  • another for rendering, not real-time


Thus, the least I would expect from Cyberlink is to implement separate settings for Preview and Rendering. Otherwise I have to constantly switch between the two modes, which is not really convenient.

This feature would not be any overhead, even for the "stupid user". But it would add value to the product for any advanced user.

Speaking of the “Advanced...” link to the page where the resources could be assigned manually, well...

We all (even stupid users) become progressively more advanced as we use the product. In the very beginning, yes, we welcome simplicity above all. Then it becomes a limiting factor. Finally, it is the lack of flexibility that pushes us towards more professional products (and away from Cyberlink products). Why let this happen to already loyal, existing customers?

In my case, I can see that configuring computation the way it is done on my system leaves about 70% of the computational power untouched - in total, across the two GPUs, on average with H.264:


  • A fair amount of computation could be put on the Intel GPU in the XAVC and H.264 cases, and even in the H.265 case there are still 50% of resources available. OpenCL allows for load balancing between all platforms, so why not use both?

  • Decoding and encoding could be done on different cards: source H.264 could be decoded by NVidia, while destination H.265 could be encoded by Intel.


So, Warry and the other adepts of the "keep it simple" paradigm: you are right and I am with you. Keep it simple, BUT


  • allow for ADVANCED )))




SUMMARY:

Vote here for the ability to separately enable OpenCL for "Preview/Edits" AND for "Render".

You will then have smooth, real-time editing and preview AND low rendering times!



Thank you everybody!

This message was edited 1 time. Last update was at Oct 23. 2017 06:05

Warry [Avatar]
Senior Contributor Location: The Netherlands Joined: Oct 13, 2014 11:42 Messages: 853 Offline
[Post New]
I fully agree.

You have my vote!
Tomas G77
Member Location: Ayrshire Joined: Jun 13, 2008 08:54 Messages: 100 Offline
[Post New]
Just to let you know, there's a new Nvidia driver out today, 24/10/2017

Check out their website

Tom G Scotland Thomas G
I'm a 33yr old trapped in a 69yr old body
Alevtina8606 [Avatar]
Newbie Joined: Oct 17, 2017 11:00 Messages: 12 Offline
[Post New]
Quote Just to let you know, there's a new Nvidia driver out today, 24/10/2017

Check out their website

Tom G Scotland




Thank you Tom!

However, I believe this update only adds new driver sections for new cards; nothing changes in depth..

What about your vote?

Are you with me?
Tomas G77
Member Location: Ayrshire Joined: Jun 13, 2008 08:54 Messages: 100 Offline
[Post New]
Quote
Quote Just to let you know, there's a new Nvidia driver out today, 24/10/2017

Check out their website

Tom G Scotland




Thank you Tom!

However, I believe this update only adds new driver sections for new cards; nothing changes in depth..

What about your vote?

Are you with me?




yes I'm with you Thomas G
I'm a 33yr old trapped in a 69yr old body
PepsiMan
Senior Contributor Location: Clarksville, TN Joined: Dec 29, 2010 01:20 Messages: 1054 Offline
[Post New]
Come on guys, stop chasing your own tails... Correction 27Oct2017: CL still supports GPGPU. CyberLink Technology Hardware Acceleration - but it's broken...

read again what GGRussell said...



happy happy joy joy

PepsiMan

'garbage in garbage out'

This message was edited 2 times. Last update was at Oct 27. 2017 13:00

'no bridge too far'

Yashica Electro 8 LD-6 Super 8mm
Asrock TaiChi X470, AMD R7 2700X, W7P 64, MSI GTX1060 6GB, Corsair 16GB/RAM
Dell XPS L702X i7-2860QM, W7P / W10P 64, Intel HD3000/nVidia GT 550M 1GB, Micron 16GB/RAM
Samsung Galaxy Note3/NX1
[Post New]
Intel's GPU just has a higher priority in PowerDirector because it is more stable on a hybrid (Optimus) platform.

My platform with an Intel Xeon (w/o integrated GPU) can use the NV GPU for fx, video decode and encode acceleration correctly.

This message was edited 1 time. Last update was at Oct 26. 2017 03:28

------------------------------------------------------------------------------------------------
PowerDirector 365
GGRussell [Avatar]
Senior Contributor Joined: Jan 08, 2012 11:38 Messages: 709 Offline
[Post New]
Quote Intel's GPU just has a higher priority in PowerDirector because it is more stable on a hybrid (Optimus) platform.
And WHERE are you getting this information? If this is just your opinion, maybe you should state that. I certainly doubt that statement is true.

This message was edited 1 time. Last update was at Oct 26. 2017 11:21

Intel i7 4770k, 16GB, GTX1060 3GB, Two 240GB SSD, 4TB HD, Sony HDR-TD20V 3D camcorder, Sony SLT-A65VK for still images, Windows 10 Pro, 64bit
Gary Russell -- TN USA
Legacystar9 [Avatar]
Newbie Joined: Dec 04, 2017 14:00 Messages: 6 Offline
[Post New]
hello everyone,



I recently got into PowerDirector and have been using it on my Surface Pro 4 and a desktop which has an Nvidia GTX 1060. As my desktop is AMD Ryzen, it has no integrated GPU, so it uses the Nvidia card and allows me to choose CUDA. The Surface Pro 4 only allows Intel Quick Sync or whatever it is. Since I like editing at my local coffee shop and brewery, I thought let's get something more powerful.



Enter the Surface Book i7 w/ GTX 1050.

I think, wow, this will be great, all the power I need on the go. Turns out, as this thread discovered, PD will not use the dGPU if there is an Intel one available. I do not have the option to turn off the Intel graphics, so I need to find a way to trick PD into using the Nvidia card.



We know PD is compatible with Nvidia; we just need to trick it. So I started looking through the registry and found something called "GPUUtilityEx", which links to GPUUtilityEx.exe inside the CyberLink program folder. This utility seems to scan the computer and dictate the GPU in use. I tried altering the registry to match what my Nvidia desktop showed in those fields, but whenever PD16 is loaded, those values get reset. So I removed GPUUtilityEx.exe from the folder, altered the registry again, and loaded PD16, and got some interesting results. Under hardware acceleration options, it used to say OpenCL and I could select it; now it says "enable Intel Effect Accelerations/NVIDIA CUDA/AMD Accelerated Parallel Processing technology to speed up video effect preview/render", but the option is greyed out. So I looked in the registry again: some of the values I changed remained, others went back to Intel.

https://photos.app.goo.gl/LSsUBW6QcalHFKyl2

https://photos.app.goo.gl/gAx9sSsN50uUClO03



So, we need to try to find the mechanism that PD16 uses to select the GPU. I was also able to delete the Nvidia driver profile for PowerDirector and replace it with one that would allow me to select the GPU in the Nvidia settings, but that doesn't seem to work; PowerDirector is selecting its own GPU, not being told where to go.
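One hedged piece of advice for anyone repeating these experiments: export the relevant registry branches first, so every change is reversible. A sketch (the CyberLink key path below is an assumption about where PD16 keeps its settings):

```shell
:: Back up the branches before editing them; /y overwrites an existing backup file.
:: HKCU\Software\CyberLink is an assumed location for PD16's settings - verify it
:: in regedit before relying on this backup.
reg export "HKCU\Software\CyberLink" cyberlink-backup.reg /y
reg export "HKLM\SOFTWARE\NVIDIA Corporation" nvidia-backup.reg /y
```

Double-clicking a .reg backup (or `reg import <file>`) restores the saved values.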

here are the registry entries from the Nvidia desktop

https://photos.app.goo.gl/w5b8opX7ubgaJY262

https://photos.app.goo.gl/BKljD94Iv257zA422
GGRussell [Avatar]
Senior Contributor Joined: Jan 08, 2012 11:38 Messages: 709 Offline
[Post New]
We shouldn't have to go through all this. I complained about this issue several years ago, so Cyberlink should be aware. They need to fix their software. This can also be an issue on desktops, but most desktop mobos have a way to use only the dGPU. No clue why most laptops can't.
Intel i7 4770k, 16GB, GTX1060 3GB, Two 240GB SSD, 4TB HD, Sony HDR-TD20V 3D camcorder, Sony SLT-A65VK for still images, Windows 10 Pro, 64bit
Gary Russell -- TN USA