User:AlexK/Gsoc2013/vse
Scheduling
- We will use OpenCL kernels and CPU threads
- So, there is a CPU memory space and one memory space per GPU platform
- So, we can share images between GPUs in the same platform
- We would have a queue for each memory space.
- A bucket is a list of instructions assigned to a single device (see the sketch after this list)
- A bucket is dependent on outputs of other buckets
- Later we can extend the bucket system to work on tiles instead of whole images
- By making tiles smaller, we can have fewer frames in processing at once, which makes the sequencer more interactive
- Frame ID
- When we update a strip parameter, we drop the frame associated with it (if we have time)
- The composition starts all over again
- Scene Strips
- For now we would make a copy to avoid time-jumping problems (we would process them in the main thread anyway)
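A minimal sketch of the bucket/queue idea above, in C. All struct and field names (Bucket, BucketQueue, frame_id, ...) are hypothetical; only the relationships come from the notes.

```c
/* Hypothetical data layout for the scheduler: a bucket is a list of
 * instructions bound to one device, depends on other buckets' outputs,
 * and is tagged with a frame ID so a whole frame can be dropped when a
 * strip parameter changes. One queue exists per memory space. */

typedef enum MemorySpace {
	MEM_SPACE_CPU,           /* host memory, processed by CPU threads */
	MEM_SPACE_GPU_PLATFORM,  /* per OpenCL platform; GPUs in the same
	                          * platform can share images */
} MemorySpace;

typedef struct Bucket {
	int frame_id;             /* frame this bucket contributes to */
	MemorySpace space;        /* which memory space / queue it runs in */
	int device_index;         /* concrete device inside that space */

	void **instructions;      /* opaque list of instructions (kernels, jobs) */
	int num_instructions;

	struct Bucket **deps;     /* buckets whose outputs this bucket consumes */
	int num_deps;
	int deps_remaining;       /* decremented as dependencies finish */
} Bucket;

typedef struct BucketQueue {
	MemorySpace space;        /* one queue per memory space */
	Bucket **ready;           /* buckets with deps_remaining == 0 */
	int num_ready;
} BucketQueue;
```

Dropping a frame after a parameter edit would then simply mean discarding every bucket with that frame_id and restarting the composition.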
Sequencer
- Images of a strip are stored in the level 2 cache
- They are associated with the clip and stored in array-like structures (see the sketch after this list)
- We can prefetch frames in advance (videos)
- We make a copy of a scene to avoid update conflicts
- Copying is also done for OpenGL preview rendering
- Later, we can make partial copies and cut corners for better performance
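A minimal sketch of the array-like level 2 cache and prefetching described above; StripCacheL2 and request_frame are made-up names, not existing Blender code.

```c
#include <stddef.h>

/* Images of a strip live in a plain array indexed by frame offset, so
 * lookup is O(1) and frames ahead of the playhead can be requested in
 * advance for video strips. */
typedef struct StripCacheL2 {
	void **frames;     /* cached image buffers, index = frame offset */
	int num_frames;    /* length of the strip in frames */
} StripCacheL2;

/* Ask for the next `ahead` frames to be decoded; request_frame() would
 * hand the work to the scheduling queues sketched above. */
static void strip_cache_prefetch(const StripCacheL2 *cache, int current,
                                 int ahead, void (*request_frame)(int frame))
{
	for (int f = current + 1; f <= current + ahead && f < cache->num_frames; f++) {
		if (cache->frames[f] == NULL) {
			request_frame(f);
		}
	}
}
```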
Plugins
- Plugins will be in OpenCL
- With a Python API (a kernel sketch follows this list)
- It can be used for sound
- Using Python for the math can be too slow
- For C we don't have a compiler available, plus we would need to handle SIGSEGV
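A minimal sketch of what an OpenCL plugin could look like: the per-pixel math lives in an OpenCL kernel (here embedded as a C string), while the Python API would only declare parameters and hand the source to the engine. Nothing here is an existing Blender API.

```c
/* A trivial "invert" effect as an OpenCL kernel; the kernel source is the
 * plugin. Names are hypothetical. */
static const char *invert_plugin_source =
	"__kernel void invert(__global const float4 *src,\n"
	"                     __global float4 *dst,\n"
	"                     const int size)\n"
	"{\n"
	"    int i = get_global_id(0);\n"
	"    if (i < size) {\n"
	"        float4 p = src[i];\n"
	"        dst[i] = (float4)(1.0f - p.x, 1.0f - p.y, 1.0f - p.z, p.w);\n"
	"    }\n"
	"}\n";
```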
Strips
- Size
- Each strip will have position, scaling, rotation
- Internally, they are all modifiers
- Moreover, users will be able to manipulate the position in a view
- Selecting a strip by selecting an object in the view should be possible, but it is not a priority
- The cache is used for storing the images. The level 2 cache is associated with a strip and stored in it. An image in the cache will have an "in use" flag (see the sketch after this list).
- Strips will have a time offset
- The current implementation of movie clips is fine
- Closer integration may be possible later
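A minimal sketch of the level 2 cache entry and strip timing described above; all names are hypothetical.

```c
/* Each cached image carries an "in use" flag so the cache never frees a
 * frame the engine is still reading; the strip stores its own time offset
 * so cache indices map back to timeline frames. */
typedef struct StripCacheImage {
	int frame;     /* frame index within the strip */
	void *ibuf;    /* the cached image buffer */
	int in_use;    /* non-zero while the engine is reading this image */
} StripCacheImage;

typedef struct StripTiming {
	int start_frame;  /* where the strip starts on the timeline */
	int offset;       /* time offset into the source footage */
} StripTiming;
```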
Effects
- One input effects
- These effects can be used as modifiers for a strip
- Conversely, the current modifiers can be used as effect strips
- The effects won't be restricted to the original clip (for that we will have modifiers)
- Also, effects will have a (default) option to use the sequence below as an input
- Other effects
- Are mainly blends
- They can all be used as a blend mode for a strip
- Internally, modifiers/effects/blends will be handled the same way by the engine (see the sketch after this list)
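A minimal sketch of that uniform handling: the engine only sees an operation with a kernel and one or two image inputs. The names are hypothetical.

```c
typedef enum EffectInputCount {
	EFFECT_ONE_INPUT,    /* usable as a modifier on a strip */
	EFFECT_TWO_INPUTS,   /* blend-style, usable as a strip's blend mode */
} EffectInputCount;

typedef struct EffectOperation {
	const char *kernel_name;  /* OpenCL kernel implementing the effect */
	EffectInputCount inputs;
	void *input_a;            /* the strip's own image */
	void *input_b;            /* the sequence below (for blends), or NULL */
} EffectOperation;
```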
Views
- By design, the engine will support multiple views
- The engine will calculate the target resolution for outputs
- The outputs can be different types: original strip, left/right, waveform, histogram
- The engine will keep a list of all target outputs for optimization (see the sketch after this list)
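A minimal sketch of that output-target list; the enum values mirror the output types named above, everything else is a made-up name.

```c
typedef enum OutputKind {
	OUTPUT_ORIGINAL_STRIP,
	OUTPUT_LEFT_RIGHT,
	OUTPUT_WAVEFORM,
	OUTPUT_HISTOGRAM,
} OutputKind;

typedef struct OutputTarget {
	OutputKind kind;
	int width, height;      /* target resolution calculated by the engine */
} OutputTarget;

typedef struct OutputTargetList {
	OutputTarget *targets;  /* every view the engine must produce this frame */
	int num_targets;        /* walked once per frame to skip unneeded work */
} OutputTargetList;
```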
Quality
- The quality level can be selected
- Filters can be turned off or switched to draft quality
- For draft quality, we will first pre-scale the images to the target output size.
- Each kernel will be compiled for char, half (GPU only) and float pixel types (see the sketch after this list). The quality settings apply to the whole sequencer.
- The whole track can be muted (audio and video)
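A minimal sketch of compiling each kernel per quality level; the PIXEL_TYPE macro and the SeqQuality enum are assumptions, while the -D build options and clBuildProgram are standard OpenCL.

```c
#include <stddef.h>
#include <CL/cl.h>

typedef enum SeqQuality {
	SEQ_QUALITY_CHAR,   /* 8-bit pixels, fastest */
	SEQ_QUALITY_HALF,   /* GPU only, requires the cl_khr_fp16 extension */
	SEQ_QUALITY_FLOAT,  /* full quality */
} SeqQuality;

/* Build the same kernel source three ways by redefining the pixel type. */
static cl_int build_for_quality(cl_program program, cl_device_id device,
                                SeqQuality quality)
{
	const char *options =
		(quality == SEQ_QUALITY_CHAR) ? "-D PIXEL_TYPE=uchar4" :
		(quality == SEQ_QUALITY_HALF) ? "-D PIXEL_TYPE=half4" :
		                                "-D PIXEL_TYPE=float4";
	return clBuildProgram(program, 1, &device, options, NULL, NULL);
}
```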
Colorspaces
- Blender will convert images into an RGB-type color space with OCIO (a sketch of the order follows this list)
- All operations will be done in internal color space
- The output image will be converted to display colorspace with OCIO
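A minimal sketch of the order of operations only; the conversion callbacks are hypothetical stand-ins, since Blender talks to OCIO through its own wrapper rather than a raw C API.

```c
/* Only the order of the three steps comes from the notes above. */
static void sequencer_render_frame(void *image,
                                   void (*ocio_to_internal)(void *image),
                                   void (*ocio_to_display)(void *image),
                                   void (*run_effects)(void *image))
{
	ocio_to_internal(image);  /* 1. convert the source into the internal RGB space */
	run_effects(image);       /* 2. all effects/blends run in the internal space */
	ocio_to_display(image);   /* 3. convert the result to the display colorspace */
}
```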
DNA
- Pretty much will stay the same
- Will be extended for new modifiers/effects
- Position/size might be moved to the modifier stack (see the sketch below)
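A minimal sketch of how the DNA could grow: a new modifier struct following the usual header-plus-data pattern. All names and fields are hypothetical, not existing Blender DNA.

```c
/* Hypothetical DNA extension: position/scale/rotation stored as one more
 * modifier in the strip's stack instead of as new fields on the strip. */
typedef struct SeqModifierHeader {          /* stand-in for the shared header */
	struct SeqModifierHeader *next, *prev;  /* modifier stack is a linked list */
	int type;
	int flag;
} SeqModifierHeader;

typedef struct SeqTransformModifier {
	SeqModifierHeader header;   /* lets it live in the existing modifier stack */
	float offset_x, offset_y;   /* position */
	float scale_x, scale_y;     /* scaling */
	float rotation;             /* rotation, in radians */
	char _pad[4];               /* keep the struct size 8-byte aligned for DNA */
} SeqTransformModifier;
```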