User:AlexK/Gsoc2013/Canvas


Sequencer Canvas.

The main idea is to render only the pixels that are actually needed. To do this, we propagate backwards through the strips to calculate the resolution and size that are needed.

Final Output image

The output image parameters are:

  • Pixel dimensions (integers)
  • Unit square expressed in number of pixels. (e.g. one horizontal unit is 457.3 pixels)
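
As a rough illustration, these output parameters could be grouped like this (a minimal Python sketch; the names are hypothetical and not actual Blender code):

  from dataclasses import dataclass

  @dataclass
  class OutputImage:
      width: int         # pixel dimensions (integers)
      height: int
      unit_px_x: float   # one horizontal unit expressed in pixels, e.g. 457.3
      unit_px_y: float   # one vertical unit expressed in pixels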


Transformation data

Each strip has positional information in floats applied in the following order:

  • Scaling in X and Y
  • Rotation
  • Position
  • (Do we need an affine translation component?)
  • An output-only strip (movie, image) will also store the picture size of the image, expressed relative to the current unit square.
  • Cropping (relative to size units) for images (and maybe for effect strips, with an on/off switch)

All of those parameters are relative to the strip above.
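
A minimal Python sketch of the per-strip transformation data and the order in which it is applied (scale, then rotation, then position); all names here are illustrative assumptions, not the actual data structures:

  import math
  from dataclasses import dataclass

  @dataclass
  class StripTransform:
      scale_x: float = 1.0    # scaling in X and Y, applied first
      scale_y: float = 1.0
      rotation: float = 0.0   # rotation in radians, applied second
      pos_x: float = 0.0      # position relative to the strip above, applied last
      pos_y: float = 0.0

  def to_matrix(t: StripTransform):
      """Compose scale -> rotation -> position into a 3x3 affine matrix."""
      c, s = math.cos(t.rotation), math.sin(t.rotation)
      return [[c * t.scale_x, -s * t.scale_y, t.pos_x],
              [s * t.scale_x,  c * t.scale_y, t.pos_y],
              [0.0,            0.0,           1.0]]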


Storage

This representation doesn't correspond to physical storage. Storage is calculated based on the current image size. As a first step, we multiply the output unit square by all the accumulated scaling (taking rotation into account) to get the pixel dimensions of the image in floats. We round the dimensions up and render the image into a buffer. We then apply the transformation and store the new image in a buffer with the newly calculated size. The information about its relationship to the actual position is passed on.
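
A sketch of that first step, with assumed parameter names: the image size in units is multiplied by the output unit square and the accumulated scaling, widened by the rotated rectangle's bounding box, and rounded up to get the buffer dimensions:

  import math

  def buffer_size(unit_px_x, unit_px_y, size_x_units, size_y_units,
                  scale_x, scale_y, rotation):
      # pixel dimensions of the scaled image, still as floats
      w = size_x_units * unit_px_x * scale_x
      h = size_y_units * unit_px_y * scale_y
      # axis-aligned extent of the rotated rectangle (takes rotation into account)
      c, s = abs(math.cos(rotation)), abs(math.sin(rotation))
      bw, bh = w * c + h * s, w * s + h * c
      # round up to whole pixels for the render buffer
      return math.ceil(bw), math.ceil(bh)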

Blend

Blend effects are special. They take two arbitrarily positioned and sized buffers and merge them, assuming that the rest of each image is transparent. If the scaling of an input image differs from what was expected, the image is resized on the fly while copying it from the other buffer into internal memory. The result of a blend effect is an image with a previously defined resolution, but a previously unknown size. The new size and position information (physical and relative) is passed on.
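
For example, the previously unknown size of the blend result could be derived as the union of the two input extents (a sketch under assumed names; rectangles are (x, y, width, height) in output pixels):

  def blend_extent(rect_a, rect_b):
      """Union of two buffer rectangles; everything outside each input
      is treated as transparent, so the result covers both of them."""
      ax, ay, aw, ah = rect_a
      bx, by, bw, bh = rect_b
      x0, y0 = min(ax, bx), min(ay, by)
      x1, y1 = max(ax + aw, bx + bw), max(ay + ah, by + bh)
      return (x0, y0, x1 - x0, y1 - y0)

For example, blend_extent((0, 0, 1920, 1080), (100, 50, 640, 360)) gives (0, 0, 1920, 1080), since the smaller buffer lies entirely inside the larger one.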

Flexibility

The canvas should have the ability to move some transformations to earlier stages. That way, for example, we can downsample a movie and then apply the blend, instead of calculating the blend on the original movie and then downsampling. The canvas of a blend just passes the scaling on to its input images, setting its own scaling to 1 but keeping the resolution. However, non-proportional scaling and rotation are incompatible with this operation.
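
A sketch of that hand-off, with assumed attribute names: the blend strip's own scaling is pushed down to its inputs and reset to 1, unless non-proportional scaling or rotation makes that unsafe:

  def push_scale_to_inputs(blend):
      # non-proportional scaling or rotation cannot be pushed through a blend
      if blend.scale_x != blend.scale_y or blend.rotation != 0.0:
          return False
      # render the inputs smaller instead of blending large and downsampling
      for strip in blend.inputs:
          strip.scale_x *= blend.scale_x
          strip.scale_y *= blend.scale_y
      # the blend itself keeps its resolution but no longer scales
      blend.scale_x = blend.scale_y = 1.0
      return True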


View

We can also edit a transformation in a view. When we select a pixel in the view, we calculate which source strip is hit. Users can also select a specific strip directly. Each strip also has an anchor point, and the code applies the appropriate position correction when rotating and scaling around that anchor. When moving, we take the appropriate strips above into account and calculate the relative displacement, so the user is presented with the illusion that the strip's image is moving with the cursor.
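
A sketch of the pixel-to-strip hit test, using the attribute names assumed above and ignoring the anchor point for brevity (in the real chain the transforms relative to the strips above would be composed first): the view point is mapped through the inverse of each strip's transform, topmost strip first:

  import math

  def pick_strip(strips, px, py):
      for strip in reversed(strips):                    # topmost strip first
          # undo position, then rotation, then scaling
          dx, dy = px - strip.pos_x, py - strip.pos_y
          c, s = math.cos(-strip.rotation), math.sin(-strip.rotation)
          rx, ry = dx * c - dy * s, dx * s + dy * c
          lx, ly = rx / strip.scale_x, ry / strip.scale_y
          # hit if the point falls inside the strip's image rectangle
          if 0.0 <= lx <= strip.width and 0.0 <= ly <= strip.height:
              return strip
      return None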


Pros and Cons

Cons of this approach:

  • If we upsample an original image, it results in more pixels to work with. But this usually shouldn't be the case (except for proxies, and then we can just decrease the final output resolution).
  • Strip reuse. If one strip is used multiple times as an input, the engine might ask the strip for different resolutions. A simple way to resolve this is to pick the highest requested resolution, but that is not ideal because of downsampling error. For better quality, we can force the strip to render twice at the different resolutions, which is better for the final quality. The first method can still be used for multiple views with different resolutions and scopes (see the sketch after this list).
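
The simple resolution-sharing policy from the last point could look like this (a sketch; requests are assumed to be (width, height) tuples from the different consumers):

  def shared_strip_resolution(requests):
      # render once at the highest requested resolution; consumers that
      # asked for less then downsample, accepting some downsampling error
      return max(requests, key=lambda wh: wh[0] * wh[1])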


Pros:

  • Only real pixels are rendered.
  • Blend and effect strips usually won't have transformations (unless they are used as an adjustment layer). We can even disable transformations on them for greater optimization.


Open question: should the transformation be applied before any other modifier?