Dev:Ref/Requests/Internal Refactor/Data Storage


Data System

There will be an importer from the current file format. Data should not be lost; however, the new data files will not be backward compatible if we go with a new data storage system.

I would propose that this system be one of the first developed, since it could be utilized in current Blender development.


data format requirements

  • a binary format that is quick to load
  • the binary format can be converted to a human-readable format, which is useful for debugging
  • handle packed files in a standard way (e.g. tar, gzip)
  • allow custom data to be added easily
  • stored properties (i.e. struct members) must be order-invariant, which helps preserve compatibility (see the sketch after this list)
  • avoid redundancy
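
A minimal sketch of what order-invariant properties could look like on disk, assuming a simple name-tagged field encoding (the layout and field names here are illustrative, not a committed design): each property is written as a (name, payload) record, so a reader matches fields by name rather than by position in the struct.

  import struct

  def write_field(buf, name, fmt, value):
      """Append one tagged field: name length, name, payload length, payload."""
      name_b = name.encode("utf-8")
      payload = struct.pack(fmt, value)
      buf += struct.pack("<H", len(name_b)) + name_b
      buf += struct.pack("<H", len(payload)) + payload
      return buf

  def read_fields(data):
      """Return raw payloads keyed by field name, whatever order they were stored in."""
      fields, offset = {}, 0
      while offset < len(data):
          (name_len,) = struct.unpack_from("<H", data, offset); offset += 2
          name = data[offset:offset + name_len].decode("utf-8"); offset += name_len
          (size,) = struct.unpack_from("<H", data, offset); offset += 2
          fields[name] = data[offset:offset + size]; offset += size
      return fields

  buf = write_field(bytearray(), "radius", "<f", 1.5)
  buf = write_field(buf, "segments", "<i", 32)
  print(read_fields(bytes(buf)))  # member order in the file no longer matters

Under a scheme like this, reordering struct members between versions no longer breaks old files, and a field missing from an older file can simply fall back to a default.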

mesh data requirements

Another Idea

Having the data stored in a binary chunk system, where the header of each chunk identifies the type of the chunk as well as its version. The header of the file also has a system chunk that contains all of the reader/writer/accessor functions (in Lua or another scripting language) for each of the data types stored in the file. This would mean that data files are able to load themselves (since the loading code is in the file) across different versions of the application kernel without problems.
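
A hedged sketch of that chunk layout, assuming (purely for illustration) a four-byte type tag, a version number, and a payload size in each header:

  import struct

  CHUNK_HEADER = struct.Struct("<4sII")  # type tag, version, payload size

  def write_chunk(fh, type_tag, version, payload):
      """Write one self-describing chunk: fixed header followed by the raw payload."""
      fh.write(CHUNK_HEADER.pack(type_tag, version, len(payload)))
      fh.write(payload)

  def read_chunks(fh):
      """Yield (type_tag, version, payload) triples until end of file."""
      while True:
          header = fh.read(CHUNK_HEADER.size)
          if len(header) < CHUNK_HEADER.size:
              break
          type_tag, version, size = CHUNK_HEADER.unpack(header)
          yield type_tag, version, fh.read(size)

The system chunk carrying the loader scripts would be just another chunk type under this scheme, read before any of the data chunks.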

The code to load files originally lives in the module spaces, so there may be a version conflict there; resolving it will require more research (into the SDNA system).

This means that unknown data chunks (i.e. where the module that creates the space to handle the data is not loaded) are not lost: they may still be loaded and saved, unlike in the current Blender system. They would be loaded as an unknown module, and possibly edited in a raw data editing space (since the data description and mutator code is also loaded from the file). This feature may be useful when working on very large projects with a group of people, where members of the team use very specific modules for their tasks.
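
A small sketch of that round-trip behaviour, assuming a hypothetical handler registry (all names here are illustrative): chunks without a registered handler are kept as raw bytes and written back out verbatim on save.

  known_handlers = {}  # filled in by whichever data modules are loaded

  def load_document(chunks):
      """Partition chunks into decoded data and preserved raw unknowns."""
      document, unknown = {}, []
      for type_tag, version, payload in chunks:
          handler = known_handlers.get(type_tag)
          if handler is None:
              # No loaded module claims this type: keep the raw bytes so
              # the chunk can be written back unchanged when saving.
              unknown.append((type_tag, version, payload))
          else:
              document[type_tag] = handler(version, payload)
      return document, unknown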

The process flow

  • Open the file
  • Verify the file type
  • Register data components with the data subsystem (load the Lua data manipulation functions)
  • Run the Lua loaders that were just registered for each sub-chunk, loading it into the data system (a sketch of the whole flow follows this list)
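
An end-to-end sketch of that flow, with Python standing in for the Lua loaders described above; the magic value, the system chunk tag, and compile_loaders are all hypothetical names used only for illustration (read_chunks is from the earlier chunk sketch):

  MAGIC = b"DATA"  # assumed file signature

  def open_data_file(path):
      with open(path, "rb") as fh:
          # 1-2. Open the file and verify its type via the leading magic.
          if fh.read(4) != MAGIC:
              raise ValueError("not a recognized data file")
          chunks = list(read_chunks(fh))
      # 3. Register data components: the system chunk carries the loader
      #    code for every data type stored in this file.
      loaders = {}
      for type_tag, version, payload in chunks:
          if type_tag == b"SYS\x00":
              loaders.update(compile_loaders(payload))  # hypothetical helper
      # 4. Run the registered loader on each remaining sub-chunk.
      return {tag: loaders[tag](version, payload)
              for tag, version, payload in chunks
              if tag != b"SYS\x00" and tag in loaders}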

This allows greater flexibility in the data formats. It also allows data sources to be remote (network linking, as in Verse; local linking, as in the current append system) or procedural (fractal-generated data, or paged/tiled data).
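
One way to sketch that flexibility, under the assumption of a common source interface (class and method names are invented for illustration): the loader only asks a source for bytes, so local files, remote links, and procedural generators become interchangeable.

  from abc import ABC, abstractmethod

  class DataSource(ABC):
      @abstractmethod
      def fetch(self, chunk_id: str) -> bytes:
          """Return the raw bytes for one chunk, wherever they live."""

  class LocalFileSource(DataSource):
      def __init__(self, path):
          self.path = path

      def fetch(self, chunk_id):
          with open(self.path, "rb") as fh:
              return fh.read()  # a real source would seek to the chunk

  class ProceduralSource(DataSource):
      def __init__(self, generator):
          self.generator = generator

      def fetch(self, chunk_id):
          return self.generator(chunk_id)  # e.g. fractal or tiled data

A remote (Verse-style) source would implement the same fetch interface over the network.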

This will require the data system to have a manager that runs in its own thread, is aware of the application's needs, provides data for what is being worked on, and frees the memory of data that is not in use; this should allow for more complex projects.
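
A minimal sketch of such a manager, assuming a simple reference-count policy for deciding what is "not in use" (the real eviction strategy is an open question):

  import threading
  import time

  class DataManager:
      def __init__(self):
          self._lock = threading.Lock()
          self._cache = {}  # chunk_id -> (data, refcount)
          threading.Thread(target=self._reaper, daemon=True).start()

      def acquire(self, chunk_id, loader):
          """Hand out data, loading it on first use."""
          with self._lock:
              data, refs = self._cache.get(chunk_id, (None, 0))
              if data is None:
                  data = loader(chunk_id)
              self._cache[chunk_id] = (data, refs + 1)
              return data

      def release(self, chunk_id):
          with self._lock:
              data, refs = self._cache[chunk_id]
              self._cache[chunk_id] = (data, refs - 1)

      def _reaper(self):
          # Runs in the manager's own thread: drop data nothing references.
          while True:
              time.sleep(1.0)
              with self._lock:
                  self._cache = {k: v for k, v in self._cache.items()
                                 if v[1] > 0}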