
Blender Test Suite

The Blender Test Suite is a series of tests written primarily in Python (both 2.x and 3.x) using the Blender Python API, PIL, and PyUnit. It uses CMake/CTest to aggregate the tests, and CDash to aid in viewing the test results.

1. Installation

2. Running the tests

3. Reading the Results

4. Getting more detailed results/Reading the Dashboard

5. The different types of tests

1. Installation

The test suite currently requires building with CMake (other build systems, such as SCons, are not supported at this time). To add the tests to your build directory, first create the directory and run CMake on it as you normally would. Next, set the WITH_TESTS and WITH_INSTALL options to ON with a tool such as CCMake. Finally, build the tests in the usual manner.
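
For example, the same setup can be done entirely from the command line (the directory names below are only placeholders; the -D flags set the same options CCMake would):

$ mkdir build && cd build
$ cmake -DWITH_TESTS=ON -DWITH_INSTALL=ON ../blender
$ make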

2. Running the tests

Once built, the tests can be run by simply invoking CTest. On a system where ctest is already in your path, this can be accomplished with:

$ cd ${BUILD_DIRECTORY}
$ ctest

Optionally, a build target called test can be used:

$ make test

To submit the test results to CDash upon completion, use the -D flag:

$ ctest -D Experimental

Optionally, a build target called report can be used:

$ make report

Once reported, the results can be found here:

http://my.cdash.org/index.php?project=Leif-Blender

In the future, this should be located at a domain such as cdash.blender.org or tests.blender.org.

Many of these tests will open a Blender window. If the window stays open long enough for you to take action, don't; doing so may cause the test to produce unexpected results. The window opens due to limitations of the Blender API: many of the needed API calls don't work when Blender is in background mode, so a window has to be opened. If for some reason a test crashes, CTest is set to automatically time it out, which should happen in less than a minute for these tests.

3. Reading the Results

CTest will run every test and tell you whether it passed or failed. Because most of the tests are run using PyUnit, CTest uses regular expressions on the test output to determine whether a test passed or failed.

If you see a result like this:

PY_Armature_example2.py.blend ........   Passed    1.07 sec

The test passed, and all is well.

PY_BGL_example.py.blend ..............***Failed  Required regular expression not found.Regex=[OK]  3.57 sec

The test failed. It was a PyUnit-based test, and the output of PyUnit did not contain the text OK (which would indicate that it passed).

RE_displace.blend ........................***Failed  Required regular expression not found.Regex=[All tests passed]  2.14 sec

This was an image comparison test, and the image (or animation) was determined to be sufficiently different from the version generated by a known good version of Blender.

AO_setup.py ......................***Exception: SegFault  0.97 sec

There was likely a bug in Blender that prevented the script from even finishing.

4. Getting more detailed results/Reading the Dashboard

If more detailed results are needed for a test (specifically, seeing the test's output), the -v flag can be used with ctest:

$ ctest -v
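
To limit verbose output to a single test, CTest's standard -R option can be added to select tests by name (the pattern here is only an example):

$ ctest -R Armature -v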

Alternatively, these results can be found on the CDash dashboard.

The CDash dashboard will show you the results of all past runs of the test suite that have been reported, as well as other metadata, such as how long each test took.

The tests in the dashboard are currently split into five categories; the only one in use is Experimental Builds, although hopefully the other categories will be populated soon.

5. The different types of tests

Currently, there are three main types of tests in this suite, each with several subtypes. The three types are:

a. Render Comparison Tests

b. PyUnit/Blender Python API Based Tests

c. Manual Tests

a. Render Comparison Tests

These tests are made with Python 2.x and PIL. They assume that you either have a known-good version of Blender in your path or copies of known-good images stored in the needed locations. By default, the suite looks for the good version of Blender in your path to render the reference images. Because of the time taken to render pictures and animations, the timeout for these tests was raised to ten minutes. In addition to working in aggregate with the larger test suite, these tests also work as a standalone package.

These tests can be found in tests/render
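
As a rough sketch of the comparison idea (this is not the suite's actual code; the function name and threshold comment are invented for illustration), a per-pixel diff with PIL might look like:

    # Illustrative only: score a test render against a known-good image.
    from PIL import Image, ImageChops

    def percent_different(good_path, test_path):
        good = Image.open(good_path).convert('RGB')
        test = Image.open(test_path).convert('RGB')
        diff = ImageChops.difference(good, test)
        changed = sum(1 for px in diff.getdata() if px != (0, 0, 0))
        return 100.0 * changed / (diff.size[0] * diff.size[1])

A test could then fail when this percentage rises above some small threshold; the exact threshold the suite uses is not documented here.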

b. PyUnit/Blender Python API Based Tests

These tests can differ quite a bit, but they are generally built around the same framework: each test starts up PyUnit, which tests Blender in the specified manner and then reports back to CTest whether it passed or failed via the regular expression OK.

Most of these tests use a hash code to determine whether the contents of Blender, after opening the blend file and performing the required actions, match the desired data. The hash file can be found in the tests directory of the source tree, or in the bin folder of the build directory. Once read in, the data is stored in the tests Python module, which is separate from other Blender modules.
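
As a hypothetical sketch of the idea (the suite's real hashing code is not reproduced here; the function and the fields it hashes are only an illustration), such a hash could be built from the scene contents like this:

    # Illustrative only: hash a stable, sorted view of the scene's data.
    import hashlib
    import bpy  # Blender 2.5x Python API; only available inside Blender

    def scene_hash():
        h = hashlib.md5()
        for obj in sorted(bpy.data.objects, key=lambda o: o.name):
            h.update(obj.name.encode('utf-8'))
            if obj.type == 'MESH':
                for v in obj.data.vertices:
                    h.update(('%.6f %.6f %.6f' % tuple(v.co)).encode('utf-8'))
        return h.hexdigest()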

Each folder contains slightly different tests, based on what's appropriate for its subject. More details for each test can be found in each folder's README file.

1. addons - This is where you, as a developer, can place any additional tests you want to write and easily add to the suite without needing to modify the suite at all.

2. data - These attempt to test raw data manipulation in Blender directly, without the use of many operators. (These tests are currently disabled due to bugs in the Python API.)

3. export_import_testing - These test the results of exporting data from and importing data into Blender.

4. gameengine - These test the Blender game engine. (Currently, these tests are not automated and need to be run manually.)

5. mesh_modeling - These tests check the results of making modifications to meshes. (Note that while these tests are automated, they are not thoroughly tested automatically.)

6. physics - These test the results of Bullet. In a few cases render tests are made, but the majority do a hash compare to ensure the data is correct.

7. python - These test the Python API. Note that a few tests (in particular one dealing with the Blender GUI) still use the 2.4x API.

8. sequence_editing - These tests exercise the sequence editor. (Note that while these tests are automated, they are not thoroughly tested automatically.)


c. Manual Tests

These are the tests that are not well suited for automation. They are generally tests of the user interface, to ensure that it works as expected. Simply load each blend file and follow the on-screen instructions.

These tests can be found in tests/manual


Blender Test Suite Addons

tests/addon

This is the location where any extra python regression tests may be placed. Once they are put into this folder, they will be added to the CTest suite automatically.

To build:

1. In the source directory, place python files (and all other needed files) in this folder.

2. Create a binary directory.

3. Build with CMake (e.g. cmake ../blender && make).

4. Run CTest and note that your tests have been added to the suite.


NOTE:

In order to pass, your tests must do two things:

1. Complete the entire test within 30 seconds, otherwise the test will time out.

2. Print the text "OK" somewhere in the output; CTest will take that to mean the test has passed.

These restrictions can be avoided by adding the test to the suite manually, by creating your own CMake command in the CMakeLists.txt file.
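
As a minimal sketch (the test case and its contents are purely illustrative), an addon test that satisfies both conditions could look like this; PyUnit prints "OK" when every test passes, which is what CTest looks for:

    import unittest

    class SimpleAddonTest(unittest.TestCase):
        # A trivial check; a real test would exercise Blender through bpy.
        def test_basic_math(self):
            self.assertEqual(1 + 1, 2)

    if __name__ == '__main__':
        # TextTestRunner prints "OK" on success without exiting the interpreter.
        suite = unittest.TestLoader().loadTestsFromTestCase(SimpleAddonTest)
        unittest.TextTestRunner(verbosity=2).run(suite)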

Blender Test Suite Game Engine

tests/gameengine

At this time, the Blender game engine tests are not automated. To run them, a tester must open each file by hand and press P to run the test, or follow the on-screen instructions.

Blender Test Suite Python Tests

tests/python

This is the location for Python tests. To add a test to this suite, the following are required:

1. A blend file with the starting data for the python script: <Script Name>.py.blend

2. A Python file with the script to be run; the main code for the script needs to be put in a function called func: <Script Name>.py

3. A start and end hashcode placed in the global hashfile.txt file (located in the tests/ directory): <Test Name>.py.blend_start and <Test Name>.py.blend_end

Once added to the source directory, rebuild the binary directory with CMake, and the tests will be automatically added to the suite.

These tests are run via the hash_compare.py script. For every test in the folder, the script compares the start hash with the one in the hashfile.txt file. It then loads the script and calls the script's func() method, after which it compares the ending hash with the one in the hashfile.txt file. If the proper hashes are not in the hashfile.txt file, or the script does not have a func() method or cannot be located, the test will fail; if the .py.blend file is not in the directory, the test will not be run.
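
For example, a minimal <Script Name>.py could be as simple as the following sketch (the operator call is only illustrative; any action whose result is captured by the end hash would do):

    import bpy

    def func():
        # The action under test; hash_compare.py hashes the data after this runs.
        bpy.ops.mesh.primitive_cube_add()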

Note that all tests use the Blender 2.5x API, with these exceptions: BGL_example.py.blend, Radio_example.py.blend, Sound_example.py.blend, Draw_example.py.blend, and Timeline.py.blend; in addition, mandel-py.blend is not entirely formed.

It is expected that these five tests will fail (and that mandel-py.blend will not be run) until they are ported to the 2.5x Python API.


Blender Test Suite Render Tests

tests/render

The Blender Test Suite render tests work as a standalone suite or in aggregate with the rest of the test suite. They require Python 2.x, PIL, and either a good copy of Blender or good renders made by a good copy of Blender.

Usage: run.py [options]

Options:
  -h, --help            show this help message and exit
  -b BLENDER_BIN, --blender-bin=BLENDER_BIN
                        Location of the blender binary if not in your path.
  -g GOOD_BLENDER_BIN, --good-blender-bin=GOOD_BLENDER_BIN
                        Location of the good blender binary (for building
                        tests) if not in your path.
  -i, --image           Run an image test outside of the current folder
  -a, --animation       Run an animation test outside of the current folder
  --with-animations     Run the animation tests as well as the image ones.
  -v, -V, --verbose     Sets the output to be verbose
  -p, --pre-built       Use pre-built images for this test

Blender Render Regression Test Tool

The Blender Render Regression Test Tool helps busy developers quickly run render-based regression tests on Blender, and it aids in the creation of more generic unit-testing tools where render tests are needed.


Requirements:

To run these tests you need:

- Python 2.x and all of the associated libraries
- PIL (Python Imaging Library) for whatever version of Python you have


Quickstart:

The images required to compare the image tests are included in the package; simply go into the test directory and run:

python run.py -p

If Blender is not in your path, run:

python run.py -p -b <Location of your testing blender binary>

Running the animation tests:

The animation tests require much more space and take much longer to run. If you do want to run them, use the --with-animations flag.

Because of the size of these tests, the images needed to run them are not included in this package. To get them, download the animation_images package and extract it to the same location where you placed this folder in your filesystem. If you don't want to download them, the suite will automatically build them for you.

python run.py --with-animations

Or, if the good Blender binary is not in your path, run:

python run.py --with-animations --good-blender-bin=<Your good blender binary>

If you did download the images, run:

python run.py --pre-built --with-animations

Viewing the test output:

The tests report whether or not they passed while they are running, but more data can be viewed in an HTML file the tests generate. This file gives a list of the pixels that can be considered different, as well as the percentage of difference for each image. It also shows the good image alongside the most recent test image, and a diff of the two images. Finally, if the test was an animation, the name of the test becomes a link to another HTML file containing this data for every frame in the animation.

Note that this feature is still a bit buggy; while the page is very usable in most browsers, many will not render it with a completely clean theme.


Running additional images and animations:

The suite is also capable of running additional animations and images outside of the ones in the folder. To use this mode, pass the -i or -a flag (but not both) for an image or an animation respectively, then give the location of the blend file you would like to compare.

Note that a specific file structure is still required for these tests to work properly. For images, the suite expects a folder named 'render' in the same directory as the blend file; inside it, it expects a PNG file with the same name as the blend file, except with .blend replaced by _0001.png. For animations, it requires a folder with the same name as the blend file (without the .blend extension), containing the PNG output of the blend file. Note that you need to use the default settings for these PNG files for the tests to work properly.
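
For example, for a hypothetical scene.blend (image test) and walk.blend (animation test), the expected layout would look roughly like this:

    scene.blend
    render/
        scene_0001.png

    walk.blend
    walk/
        (PNG frames rendered with Blender's default output settings)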


Odds and ends:

The -v flag shows the output of Blender as it renders single images, and -h shows a list of all the parameters.


Final notes

This suite was made by Leif Andersen for the Blender Foundation as a GSoC (Google Summer of Code) project. For more information, or to contact me, you can go to:

- http://wiki.blender.org/index.php/User:LeifAndersen/GSoC2010

and if that's offline for some reason, you can probably also find me at:

- http://leifandersen.net

I am also on #blendercoders and many other Freenode channels; my nick is Leif.
