CyberLink Community Forum
where the experts meet
Is there any way to perfectly stabilize a time lapse from a drone?

The motion tracker has the logic, but is there a way to keep a target "centered"?

The anti-shake dampens movement, but I want to lock the movement.
So I select burn disc with smart size; it goes through the process and burns the disc "successfully".
It won't read in Windows or any DVD player. I tried it twice.

Windows doesn't even think there is a disc in the drive.
Quote: How do you do transitions with multi-cam?
If I place them on the clip after I created the multi-cam track, it reduces the length of the video. I want to be able to fade from one cam to the other, without clipping any of the time.

Test this...
Right-click the transition and change its behavior to CROSS-FADE. That mode does not shorten the clip.
Set the default transition mode in EDIT|PREFERENCES|EDITING.



Thanks!
How do you do transitions with multi-cam?
If I place them on the clip after I created the multi-cam track, it reduces the length of the video. I want to be able to fade from one cam to the other, without clipping any of the time.
Not when there is video on an above track
More like this
I put a fade-out followed by a fade-in; however, it's forcing a fade-out followed by another fade-out.

How do you tell it which way to fade?


Just take a screen capture of the background video before you walk in front of it. Have a blanket that is one color on the inside (the color people will see) and green on the outside.

I didn't have a green blanket, so I went with red which is more problematic as it matches some skin tones. The lighting was also not uniform, so the red edges show through, and the couch has a shadow.

In PowerDirector, just put the snapshot on video track 1 and the movie on video track 2. Extend the duration of the snapshot for the whole scene. Then select Modify on video track 2 and enable chroma key. Use the eyedropper tool to pick the green (red in my case) backdrop color. A tripod is required, of course.

PDR11 supports 4K resolution... or use ffmpeg or another product to do the first encode (JPG to MOV) at something higher than 1080p.

It would be nice if the crop tool worked on PDR slideshows in time-lapse mode.
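As a sketch of that ffmpeg route (the file names, frame rate, and output size here are placeholders; adjust them to your own image sequence):

```shell
# Turn a numbered JPG sequence (img0001.jpg, img0002.jpg, ...) into a
# 4K time-lapse clip. Cropping and panning can then be done on the
# full-resolution file instead of a 1080p render.
ffmpeg -framerate 24 -i img%04d.jpg \
       -s 3840x2160 -c:v libx264 -pix_fmt yuv420p \
       timelapse_4k.mov
```

The -pix_fmt yuv420p flag keeps the output playable in most players; without it, libx264 may pick a pixel format some decoders reject.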
It installs the same way you installed it on the other computer.
You're not licensed to run it on more than one computer at a time however.
Whether it is actually installed or not is arguable.
The HD4000 (Ivy Bridge) uses a new version of Quick Sync that is almost twice as fast as the previous Sandy Bridge version (HD3000).

I have not done a pixel level comparison, but the render speed boost is worth any minor quality trade-off.
After all, this is PowerDirector we are talking about, not Sony Vegas.

PDR10 is great, but it is not a high quality encoding engine. It is a good quality one that is fast.

I ran a few benchmark scenarios.


What's weird is that enabling Quick Sync turns off video preview; however, it does not prevent you from turning video preview back on.

It does make a significant performance difference, but even with video preview on, Quick Sync 2.0 is smoking fast, even on an ultrabook.

Since embedded videos don't show video notes, here they are:
Two-minute sample clip shot from a Nikon D800: a 1920x1080p MOV file transcoded to a 1080p MP4 file.
Software: Cyberlink PowerDirector 10 Ultra
Asus UX31A, 1.7 GHz 2-core with Intel HD4000 on-board graphics. Best speed: 40 seconds (3x real-time)
Dell Latitude 6410, 2.7 GHz 2-core with NVIDIA graphics. Best speed: 3:50 (about 0.5x real-time)
Desktop Core i7-950, 3.07 GHz 4-core with NVIDIA GT470. Best speed: 1:40 (about 1.2x real-time)
You can convert images to a time lapse now in version 10, which is nice, but you cannot do anything to the time lapse until after it's rendered, which defeats the purpose. If I crop a 12 MP image, I can zoom in quite a ways / pan around without losing any optical resolution.

By rendering it to 1080, then editing it after the fact, any cropping will diminish the optical quality of the time lapse.
Use Google Chrome as a web browser. It's faster, more secure, and has spell check by default. Then get an ad-block plugin.

As for my layman's explanation:
A GPU is a super-efficient calculator. GPUs were designed for handling triangle math, but it turns out the same kind of math can be applied to other things — physics, video rendering, raw math — freeing up the main CPU for other work. Then manufacturers found they could pack hundreds of cheap, math-hungry processors onto one card. On NVIDIA these are CUDA cores; on ATI they're STREAM processors.

If you can find a way to tackle your math problem in a way that hundreds of different parts of it can be worked on at the same time, then your project is perfect for CUDA/STREAM.

Unfortunately, not every math problem can be broken up into tiny pieces (some are inherently single-threaded), or nobody has done the work to figure out how yet. That is why some codecs have hardware acceleration and some do not.

Recent video cards and intel processors have purpose-built capability to process video files.

In the case of Intel Quick Sync, the program you're using just sends the raw video to Quick Sync and tells it what you want done. Quick Sync processes the video the way Intel programmed it to and returns the result.

This means that whatever program you're using to encode video, no matter who makes it, if it uses Quick Sync the results are going to be identical, because the video processing is being handled by Intel's transcoding hardware.

When the program uses CUDA, they still have to write the program to convert the video. The difference is they have access to a very fast calculator that can run hundreds of operations at the same time. But it's THEIR program and math they're running.

It means a company like CyberLink can process a video three different ways:
1.) Regular old CPU. They write a program that uses 1 to 8 CPU cores at the same time to process video using their own math to do it.
2.) Using CUDA/STREAM on the GPU: They write the program to parse out the job into hundreds of tiny jobs that are all tackled at the same time. This requires processing video completely differently than the CPU approach. But it's fast.
3.) Offloading it completely, like Intel Quick Sync or ATI's AVIVO. You have no control over what happens to the video. They just send it to the Quick Sync processor and say "convert this to MP4 at this quality setting" and it does it. You rely on Intel to have made good design decisions in their math.

Writing the brains of your program three times (actually five, if you want to support CPU/CUDA/STREAM/AVIVO/QuickSync) is tedious and time-consuming. That's why hardly any video editing company supports them all in the same product.
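You can see the CPU path and the offload path side by side in a tool like ffmpeg (a sketch only: the h264_qsv encoder exists just in ffmpeg builds compiled with Intel Quick Sync support, and the file names and bitrate are placeholders):

```shell
# Path 1: the application's own math on the CPU (software x264 encoder).
ffmpeg -i input.mov -c:v libx264 -b:v 22000k cpu_encode.mp4

# Path 3: hand the frames to Intel's fixed-function Quick Sync hardware.
# The quality decisions are baked into the silicon, not the application.
ffmpeg -i input.mov -c:v h264_qsv -b:v 22000k qsv_encode.mp4
```

Run both on the same clip and compare: the hardware path is typically much faster, while the software path gives you finer control over quality settings.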
I can understand things like AVIVO and Intel Quick Sync having an impact on video quality, as they have their own rendering math. However, using CUDA/STREAM should not have as much of an impact, as you are offloading your own math and not using some other company's.

http://tech.slashdot.org/story/12/05/08/2217236/the-wretched-state-of-gpu-transcoding

Have you noticed using GPU acceleration produces different quality output than cpu only, as suggested by this article?
Here's the command line parameters I use to deinterlace 60i to 60p.

If you use PD10 to slow down a 60i clip, it throws away half the fields, so you effectively only slow down 30 fps source footage.

ffmpeg -i myinterlacedfile.m2ts -vf yadif=1:0 -vcodec libx264 -vpre hq -acodec copy -copyts -threads 0 -b 22000k "myprogressivefile.mp4"

The tricky part with ffmpeg is getting the presets to work.

You can download a bunch of them here:
http://www.mediasoftpro.com/aspnet-x264-presets.html

The presets must be in %home%\.ffmpeg

C:\ffmpeg\presets\.ffmpeg>dir
Volume in drive C has no label.
Volume Serial Number is 902B-83FA

Directory of C:\ffmpeg\presets\.ffmpeg

03/31/2012 06:59 PM <DIR> .
03/31/2012 06:59 PM <DIR> ..
05/18/2011 03:37 AM 43 libx264-baseline.ffpreset
03/19/2010 02:05 PM 304 libx264-default.ffpreset
03/19/2010 02:05 PM 313 libx264-fast.ffpreset
03/19/2010 02:05 PM 313 libx264-faster.ffpreset
03/19/2010 02:05 PM 313 libx264-faster_firstpass.ffpreset
03/19/2010 02:05 PM 322 libx264-fastfirstpass.ffpreset
03/19/2010 02:05 PM 313 libx264-fast_firstpass.ffpreset
34 File(s) 9,427 bytes
2 Dir(s) 5,368,791,040 bytes free

C:\ffmpeg\presets\.ffmpeg>echo %home%
c:\ffmpeg\presets

Assuming you save your presets in the same spot I did:

You can set the HOME variable using the following command line function:
setx home c:\ffmpeg\presets
(this takes effect for new command-line windows only, not the one you're currently in, so close it and open a new one)

Whatever folder you specify as HOME, the presets must be inside a folder named .ffmpeg within it.
It took me a long time to figure this out.
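Condensed, the setup above looks like this (Windows cmd syntax; paths assume you unpack the presets where I did):

```shell
rem Create the folder ffmpeg searches: %home%\.ffmpeg
mkdir c:\ffmpeg\presets\.ffmpeg

rem Copy the downloaded *.ffpreset files into that folder, then
rem point HOME at its parent so ffmpeg can find them.
setx home c:\ffmpeg\presets

rem Open a NEW command window for %home% to take effect before
rem running ffmpeg with -vpre.
```

Note that setx changes the HOME variable for your whole user account, so if another program relies on HOME you may want to set it per-session with `set home=c:\ffmpeg\presets` instead.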

Attached are the source 60i footage, and a low bitrate 60p version (low bitrate to save download time)

The yadif filter does quite well at deinterlacing and "doubling" the frame rate. Technically, 60i is really 30 full frames per second (two fields per frame).

Download both clips, slow them down in PowerDirector to 10%, and you'll see what I mean.

Curious if you have any other video applications/tools installed that may have stepped on the default. Nero / Codec Packs etc
1080 at 50 fps breaks YouTube. Even if you upload 720p at 60 fps, which is supported, it is re-encoded by YouTube to 30 fps MP4.
So there is no point in uploading 50/60 fps, and it breaks the video at 1080 anyway.

Produce to MPEG-4 "Best"; it will be 1080 30p, which is perfect for YouTube.
If the screen is too bright, and the subject is too close to the screen, the screen itself will cast diffuse green light onto the subject which would be very difficult to deal with. This is also true if the subject is not in perfect focus.