
Green Screen Keying

This page provides technical and user-side information about the enhanced green screen keying techniques developed in Blender to deal with footage from the Mango Open Movie project.

Issues with current nodes

For really saturated green screens, the keying situation in Blender isn't so dramatic. But such screens lead to lots of spill on the foreground, so they're not commonly used in real production. Even with these screens, there are a couple of ways the current nodes could be improved:

  • Keying operations are too atomic. This is good in some ways, but in practice it leads to a real spaghetti mess. For example, just to apply black/white clipping one needs to create a Color Ramp node, tweak it as needed, add a Set Alpha node and replace the alpha of the keying result. Another example is chroma pre-blurring the image, which ends up requiring 4 extra nodes whose blur size values have to be kept in sync manually, which is quite annoying.
  • Keying nodes assume a single screen color, but in practice gradients across the screen are pretty common.
  • Dilate/Erode works at pixel precision, which means dilating/eroding the matte is quite aggressive.
  • There are currently no techniques to remove noise from the matte. For real 4K footage from a camera this would be nice (because of all that grain).
  • Handling of semi-transparent objects, heavily spilled foregrounds and edges could also be improved.

Research

Unfortunately, not much research has been published in this area. There are some published results, but almost all of them are either not good enough or not actually better than what is currently implemented in Blender.

Another problem: some papers don't contain implementation details, so only their general ideas could be used and, with some creativity, made to work. That's the approach taken for the new keying nodes.

Implementation

Here's a code- and algorithm-side description of the new nodes.

Screen color

As mentioned above, all current keying nodes in Blender assume a single screen color. A gradient can actually be supplied by plugging an image into the color input of a keying node, but this can be improved a lot by generating the gradient automatically using additional data such as motion tracking.

A node called Keying Screen was implemented, which produces a gradient image that can be used as the screen color input. Here's a description of how it works.

Currently it uses a given Movie Clip as the input for gradient colors. To be more exact, it uses the motion tracks of the given Movie Clip and Tracking Object to generate the gradient. Each track defines the coordinate of a point in image space and its color, which are later used to produce the gradient.

Actually, it's not the single color the track points at that is used, but the average color of the track's pattern area. This is because film grain can be really noisy, which makes it difficult to place markers at the exact position of the color one wants to use for a point.

These motion tracks can be "animated", i.e. tracked along the movie, and the gradient is updated automatically to match the updated track coordinates and colors.

The gradient is created using ideas from building a Voronoi diagram and triangulating it. Basically, the algorithm works in the following way:

  • A Voronoi diagram is built for the given motion tracking tracks.
  • This diagram is triangulated by connecting sites with the edges that define each site's area.
  • The color of a site is assumed to be the average color of the motion track's pattern; colors of non-site points are assumed to be the average color of the two neighboring sites (this could actually be done a bit smarter, but even this approach seems to work pretty well).
  • Triangles are filled with a gradient, using barycentric weights to define the color of each pixel. Say we've got a triangle with colors Color_1, Color_2 and Color_3, and the barycentric coordinates of the current pixel inside this triangle are Weight_1, Weight_2 and Weight_3; the color of the resulting pixel is then (see the sketch after this list):
 
 PixelColor = Color_1 * Weight_1 + Color_2 * Weight_2 + Color_3 * Weight_3
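
A minimal Python sketch of this interpolation step (barycentric_weights and triangle_pixel_color are illustrative names, not the actual implementation):

 def barycentric_weights(p, a, b, c):
     # Barycentric coordinates of point p inside triangle (a, b, c),
     # computed as ratios of signed triangle areas.
     def signed_area(p0, p1, p2):
         return ((p1[0] - p0[0]) * (p2[1] - p0[1]) -
                 (p2[0] - p0[0]) * (p1[1] - p0[1]))

     total = signed_area(a, b, c)
     return (signed_area(p, b, c) / total,
             signed_area(a, p, c) / total,
             signed_area(a, b, p) / total)

 def triangle_pixel_color(p, verts, colors):
     # PixelColor = Color_1 * Weight_1 + Color_2 * Weight_2 + Color_3 * Weight_3
     w1, w2, w3 = barycentric_weights(p, *verts)
     return tuple(w1 * c1 + w2 * c2 + w3 * c3
                  for c1, c2, c3 in zip(*colors))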
 

New Keying Node

A new keying node was also created, which basically encapsulates all the basic operations needed to control a key:

  • Chroma pre-blur
  • Color despill factor
  • Black and white clipping
  • Matte dilate/erode
  • Matte blur

Most of the operations were re-used from Blender's current nodes, but some experiments were done with despilling and matte creation.

Blur

For blur, a simple and pretty fast technique often used in other keyers is employed. It's based on replacing the value of a pixel with the average value of the pixels around it. The size of this area is defined by the blur size value: a square area of size (2 * blur_size + 1) around the current pixel is used to calculate the average.

Applied as a chroma pre-blur, this works like simple de-noising and helps eliminate grain noise from the footage.
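
A minimal, unoptimized Python sketch of this averaging blur for a single pixel (the real operation is COM_KeyingBlurOperation.cpp; this version just follows the description above):

 def keying_blur_pixel(image, x, y, blur_size):
     # Average all pixels in a (2 * blur_size + 1) square window
     # around (x, y), clamped to the image borders.
     height, width = len(image), len(image[0])
     total, count = 0.0, 0
     for j in range(max(0, y - blur_size), min(height, y + blur_size + 1)):
         for i in range(max(0, x - blur_size), min(width, x + blur_size + 1)):
             total += image[j][i]
             count += 1
     return total / count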

Color Despilling

The currently used formula is based on the one used by the Color Spill node when it's set to Average mode. Say we've got pixel_color from the image being processed, and screen_color is the current screen color. We can then find the primary color channel of the screen color; call it screen_primary_channel.

 
 output_color = pixel_color
 average_value = (pixel_color[0] + pixel_color[1] + pixel_color[2] - pixel_color[screen_primary_channel]) / 2
 despill_amount = pixel_color[screen_primary_channel] - average_value

 if (despill_factor * despill_amount > 0) {
    output_color[screen_primary_channel] = pixel_color[screen_primary_channel] - despill_factor * despill_amount
 }
  

Keying

After lots of research and testing, no existing algorithm was found that works nicely for all cases, so we started playing around with different ideas and formulas. Here's the currently used approach.

The keying node currently in development is actually a color difference keyer which works on every pixel individually. Every pixel is handled in the following way (see the sketch after this list):

  • Calculate the saturation of the screen color and of the pixel in the input image. Saturation is the difference between the primary channel and a weighted blend of the two remaining channels (currently a balance of 0.5 is used):
 
 /* the two channels other than the primary channel */
 other_1 = (primary_channel + 1) % 3
 other_2 = (primary_channel + 2) % 3

 min = MIN2(pixelColor[other_1], pixelColor[other_2])
 max = MAX2(pixelColor[other_1], pixelColor[other_2])
 val = screen_balance * min + (1.0f - screen_balance) * max

 saturation = (pixelColor[primary_channel] - val) * fabsf(1.0f - val)
  
  • If the primary channels of the image pixel and the screen differ, the pixel is assumed to be foreground. Alpha for such a pixel is 1.0 and the pixel gets despilled.
  • If the saturation of the pixel is higher than or equal to the saturation of the screen, it's a background pixel. Alpha for this pixel is 0.0.
  • Otherwise the pixel is an edge pixel: it needs to be despilled in some way, and part of the value goes to its alpha. Currently the alpha for an edge pixel is calculated as
 
 alpha = 1 - image_pixel_saturation / screen_pixel_saturation
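
A minimal Python sketch of this per-pixel logic, assuming colors are 3-tuples of floats (illustrative only; the actual implementation lives in COM_KeyingOperation.cpp):

 def key_pixel(pixel, screen, screen_balance=0.5):
     # Index of a color's strongest (primary) channel.
     def primary_channel(color):
         return max(range(3), key=lambda i: color[i])

     # Saturation as defined above: primary channel minus a weighted
     # blend of the two remaining channels.
     def saturation(color, channel):
         other_1 = (channel + 1) % 3
         other_2 = (channel + 2) % 3
         lo = min(color[other_1], color[other_2])
         hi = max(color[other_1], color[other_2])
         val = screen_balance * lo + (1.0 - screen_balance) * hi
         return (color[channel] - val) * abs(1.0 - val)

     screen_channel = primary_channel(screen)
     if primary_channel(pixel) != screen_channel:
         return 1.0  # foreground: primary channels differ
     pixel_saturation = saturation(pixel, screen_channel)
     screen_saturation = saturation(screen, screen_channel)
     if pixel_saturation >= screen_saturation:
         return 0.0  # background: at least as saturated as the screen
     return 1.0 - pixel_saturation / screen_saturation  # edge pixel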
 

Edge detection

White and black clipping should happen only on the foreground and background, not on edges: this makes the foreground/background really clean while keeping nice gradients on the edges.

Edge detection is based on counting foreground and background pixels within a given area around the pixel. The size of this area is controlled by the Edge Kernel Radius slider, which defines a square area of size (2 * kernel_radius + 1) around the pixel.

Every pixel within this area is checked against the value of the center pixel; if the difference between the two pixels is larger than the Edge Kernel Tolerance, that pixel is treated as belonging to a different plane (foreground or background, depending on the value of the current pixel). The algorithm counts the number of such pixels, and if more than 10% of the neighborhood differs, the current pixel is treated as an edge and clipping doesn't happen (see the sketch below).
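
A minimal Python sketch of this check (following the description above; the actual thresholds live in COM_KeyingClipOperation.cpp):

 def is_edge(matte, x, y, kernel_radius, tolerance):
     # Count neighbors in a (2 * kernel_radius + 1) window whose value
     # differs from the center pixel by more than the tolerance.
     height, width = len(matte), len(matte[0])
     center = matte[y][x]
     differing, total = 0, 0
     for j in range(max(0, y - kernel_radius), min(height, y + kernel_radius + 1)):
         for i in range(max(0, x - kernel_radius), min(width, x + kernel_radius + 1)):
             total += 1
             if abs(matte[j][i] - center) > tolerance:
                 differing += 1
     # More than 10% differing neighbors means the pixel sits on a
     # foreground/background boundary.
     return differing > 0.1 * total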

Black and White Clipping

For black and white clipping a simple formula is used:

  
 if color < clip_black:
     color = 0.0
 elif color >= clip_white:
     color = 1.0
 else:
     color = (color - clip_black) / (clip_white - clip_black)
 

But there's also an attempt to make it preserve gradients on edges, so it clips noisy pixels in the foreground and background without affecting gradients.

Currently a very simple approach is used for this: look at a kernel of the given size around the current pixel and count the pixels that are within the given threshold of the current pixel. If most pixels fit this threshold, the pixel is assumed to be non-edge and clipping happens (see the sketch below).
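
Combining this with the clipping formula above gives a sketch like the following (reusing the is_edge() sketch from the edge detection section):

 def clip_matte_pixel(matte, x, y, clip_black, clip_white,
                      kernel_radius, tolerance):
     # Edge pixels keep their gradient; everything else is clipped.
     value = matte[y][x]
     if is_edge(matte, x, y, kernel_radius, tolerance):
         return value
     if value < clip_black:
         return 0.0
     if value >= clip_white:
         return 1.0
     return (value - clip_black) / (clip_white - clip_black)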

Garbage and Core Mattes

There are two extra inputs for the keying node.

Garbage Matte defines areas which are definitely background. Areas where the garbage matte is white are forced to be treated as background, regardless of the keying algorithm's result.

Core Matte defines areas which are definitely foreground. In areas where the core matte is white, the resulting matte is white as well.
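
A rough per-pixel sketch of how these two inputs could override the keyed alpha (the exact order of operations inside the node is an assumption here):

 def apply_extra_mattes(alpha, garbage, core):
     # Garbage matte forces background: where it is white (1.0),
     # the keyed alpha is suppressed to 0.
     alpha *= 1.0 - garbage
     # Core matte forces foreground: where it is white, the
     # resulting matte is white as well.
     return max(alpha, core)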

Code Layout

This keying work mostly happens in the tile compositor and isn't really ported to the old compositor system (only what was implemented before the tile merge is present in the new keying node's exec callbacks for the old compositor).

Keying Screen

A pretty straightforward node which consists of one node file and one operation: COM_KeyingScreenNode.cpp and COM_KeyingScreenOperation.cpp.

Keying

This node consists of one node definition file, COM_KeyingNode.cpp, which creates a chain of operations depending on which settings are used. In general the chain is:

  • Chroma pre-blur sequence, which consists of several steps (sketched after this list). The pre-blurred image is used for keying only; despilling uses the original footage. This step entirely reuses existing operations:
    • Convert RGB to YCrCb
    • Split channels
    • Apply blur on Cr and Cb; the blur is defined in COM_KeyingBlurOperation.cpp
    • Merge channels back
    • Convert YCrCb back to RGB
  • Matte operation, defined in COM_KeyingOperation.cpp. This operation takes the pre-blurred image and generates a matte for it.
  • Clip black/white on the matte, defined in COM_KeyingClipOperation.cpp.
  • Apply dilate/erode on the clipped matte using existing operations.
  • Apply a Gaussian blur on the dilated/eroded matte. At this point work on the matte is finished.
  • Despill the original footage using the COM_KeyingDespillOperation.cpp operation.
  • Set the alpha of the despilled image based on the final matte.
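
The chroma pre-blur chain from the list above can be sketched as follows (rgb_to_ycc, ycc_to_rgb and blur_channel are hypothetical stand-ins for the existing operations):

 def chroma_preblur(image, blur_size):
     # Blur only the chroma channels so luma detail survives for keying.
     y, cr, cb = rgb_to_ycc(image)        # convert and split channels
     cr = blur_channel(cr, blur_size)     # COM_KeyingBlurOperation.cpp
     cb = blur_channel(cb, blur_size)
     return ycc_to_rgb(y, cr, cb)         # merge and convert back to RGB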

Usage

Keying Screen Node


Currently the Keying Screen node supports gradient creation based on a Movie Clip and motion tracking data, so the first step is to open the clip in the Movie Clip Editor.

Once it's open, you'll probably want to create a new tracking object, because tracks used for the gradient can't really be used for camera/object tracking. After this, tracks can be placed where the gradient colors should be defined. These tracks can be tracked or moved manually, so the gradient updates automatically along the movie. Tracks can have an offset for easier tracking of featureless screen areas.

When that's done, go to the compositor and use the Add » Matte » Keying Screen menu; you'll see a node like the one displayed in the picture. As soon as the Movie Clip and tracking object are set, the gradient screen appears on the output socket of this node.

Keying node


This node is actually pretty straightforward, but here's a small description of its usage.

First, connect the input image you want to key. Then either connect a Keying Screen as the screen color source, or choose a single color manually or with the color picker.

After this you'll already have some kind of key, which can be tweaked in several ways:

  • Pre-blur can be used when the input image has a lot of color grain. It reduces the grain by applying a chroma blur of the given size. This affects the matte calculation only, not the resulting image.
  • Despill controls how much color is despilled from the input image: 0 means no despilling happens, 1 means all possible spill is removed.
  • Edge Kernel Radius defines the radius of the kernel whose pixels are used to determine whether a pixel is on an edge. The actual kernel size is 2 * kernel_radius + 1.
  • Edge Kernel Tolerance defines the threshold used to check whether pixels in the kernel match the current pixel: if the difference between pixel colors is higher than this threshold, the point will likely be considered an edge.
  • Clip Black and Clip White increase matte contrast, turning almost-background pixels into background and almost-foreground pixels into foreground.
  • Dilate/Erode can be used to dilate/erode the matte.
  • Feather Falloff and Feather Distance control feathering around the matte. In some circumstances this gives more accurate results than dilate + blur would.
  • Post-blur is applied to the matte to make it less sharp.


Tests

Here's a quick test made with the new nodes using images from the Mango Open Movie project.

The original image:

Original greenscreen image

The matte:

Resulting matte

And here's the composite result for this image:

Composited image using new nodes