User:Ideasman42



Peach System Setup

Moved to http://wiki.blender.org/index.php?title=Bf-institute


Development Tools

Scripts

I used to have various scripts here; now they're mostly hosted at:

https://gitlab.com/ideasman42/dotfiles/tree/master/bin

svn_log_author.py

Example use:

svn log --verbose --xml -r4752:HEAD | svn_log_author.py --author=campbellbarton --wiki

If the --author argument is omitted you get commits from everyone (good for release logs).
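
The script itself isn't included here, but the core idea is simple enough to sketch. A minimal, hypothetical version (the real script also handles the --wiki formatting and the --author flag; here the author is just a positional argument):

#!/usr/bin/env python3
# Hypothetical sketch: filter 'svn log --verbose --xml' output by author.
# Reads the XML log from stdin and prints revision, author and message.
import sys
import xml.etree.ElementTree as ET

author_filter = sys.argv[1] if len(sys.argv) > 1 else None

root = ET.fromstring(sys.stdin.read())
for entry in root.findall("logentry"):
    author = entry.findtext("author", default="")
    if author_filter and author != author_filter:
        continue
    msg = (entry.findtext("msg") or "").strip().replace("\n", " ")
    print("r%s (%s): %s" % (entry.get("revision"), author, msg))

Example use:

svn log --verbose --xml -r4752:HEAD | python3 svn_log_filter_sketch.py campbellbarton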

script on github:

This is a shell script I use to get logs of work in the previous week:

Example output:


Shell script to collect work since last week:

Example use:

svn_work_since_last_revs.sh 59868 4740

touchdiff

Sometimes you want to know if any of the files you edited caused errors. A rebuild may have missed some of them, so touching all modified files and then rebuilding is useful for this.

Add this to your ~/.bashrc file.

alias touchdiff='touch $(svn st | grep "^M" | cut -f4-8 -d" ")'
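
For example, after editing some files, touch everything that changed and rebuild (assuming a make based build):

touchdiff
make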

crep

Search *.c* and *.h* files for some text and print numbered results, so you can type in a number to open a text editor at that line.

crep "Some words"
crep some_func
https://gitlab.com/ideasman42/dotfiles/blob/master/bin/cr

Example output

ideasman42@cambo:/b$ crep BPY_call_importloader
1	./source/blender/python/BPY_interface.c:2694:int BPY_call_importloader( char *name )
2	./source/blender/python/BPY_interface.c:2696:	printf( "In BPY_call_importloader(name=%s)\n", name );
3	./source/blender/blenkernel/intern/exotic.c:2430:					if (BPY_call_importloader(name)) {
4	./source/blender/blenkernel/bad_level_call_stubs/stubs.c:245:int BPY_call_importloader(char *name)
5	./source/blender/python/BPY_extern.h:121:	int BPY_call_importloader( char *name );
6	./source/blender/blenkernel/BKE_bad_level_calls.h:149:int BPY_call_importloader(char *name);
enter a number to edit >
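
The real crep is in the dotfiles repository linked above; a rough, hypothetical sketch of the same idea looks like this (gedit is only used here to match the cedit example below):

#!/usr/bin/env python3
# Hypothetical sketch of a crep-style helper: number the grep matches,
# then open the chosen file at the matching line.
import subprocess
import sys

out = subprocess.run(
    ["grep", "-rn", "--include=*.c*", "--include=*.h*", sys.argv[1], "."],
    capture_output=True, text=True).stdout
matches = [m for m in out.splitlines() if m]
for i, m in enumerate(matches, 1):
    print("%d\t%s" % (i, m))

n = int(input("enter a number to edit > "))
filename, lineno = matches[n - 1].split(":", 2)[:2]
subprocess.call(["gedit", filename, "+%s" % lineno])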

prep

A variation of crep that searches python scripts instead:
https://gitlab.com/ideasman42/dotfiles/blob/master/bin/prep

cedit

Search for a file from the current directory and open it at a line number; especially nice for copy/pasting from compiler errors. Example use:

cedit somefile.c:255
#!/bin/bash
FILE=$(echo $1 | cut -d":" -f1)
NUM=$(echo $1 | cut -d":" -f2)
if ! [ -f $FILE ]
then
        FILE=`find -name "$FILE" | head -n1`
fi

# scite $FILE -goto:$NUM
# vim $FILE +$NUM
gedit $FILE +$NUM & disown


pyrename

Do you ever want to rename many files without having to use GUI apps? Pyrename is a simple script that takes your file names and puts them in a text file. You can then search/replace, use regexes, or do whatever you want to the list of names. Save and close the text editor and the files will be renamed.

Just cd into the directory where you want to rename files and type:

pyrename .
http://code.google.com/p/ideasman42-dev-scripts/source/browse/trunk/bin/i42_batch_rename.py
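
The link above is the real script; a minimal, hypothetical sketch of the same idea (write the names to a temporary file, let you edit them in $EDITOR, then apply the edits as renames) could look like this:

#!/usr/bin/env python3
# Hypothetical sketch of the pyrename idea, not the actual script.
# Write file names into a temp file, open it in $EDITOR, then apply the edits as renames.
import os
import subprocess
import sys
import tempfile

directory = sys.argv[1] if len(sys.argv) > 1 else "."
names = sorted(os.listdir(directory))

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("\n".join(names))
    tmp = f.name

subprocess.call([os.environ.get("EDITOR", "vi"), tmp])

with open(tmp) as f:
    new_names = f.read().splitlines()
os.remove(tmp)

if len(new_names) != len(names):
    sys.exit("line count changed, aborting")

for old, new in zip(names, new_names):
    if old != new:
        os.rename(os.path.join(directory, old), os.path.join(directory, new))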

include_lint.py

This script rebuilds Blender, commenting out one include at a time; any change in the error output will revert the change. It's a simple way to get rid of unneeded includes left over from copy/pasting. It works best when used with 'sparse', which complains about undeclared functions; this way C files won't remove their own includes.

http://code.google.com/p/ideasman42-dev-scripts/source/browse/trunk/bin/i42_include_lint.py
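
The linked script is the real one; the core loop is roughly this (a hypothetical sketch, assuming a make based rebuild and that errors can be collected by grepping the build output):

#!/usr/bin/env python3
# Hypothetical sketch of the include-lint loop (the real script is linked above).
# For each #include in a file: comment it out, rebuild, and keep the change only
# if the build output is unchanged, otherwise restore the original line.
import re
import subprocess
import sys

BUILD_CMD = "make -j8 2>&1 | grep -i error"  # assumption: how errors are collected

def build_errors():
    p = subprocess.run(BUILD_CMD, shell=True, capture_output=True, text=True)
    return p.stdout

filename = sys.argv[1]
reference = build_errors()

with open(filename) as f:
    lines = f.readlines()

for i, line in enumerate(lines):
    if not re.match(r'\s*#\s*include\b', line):
        continue
    original = line
    lines[i] = "// " + line
    with open(filename, "w") as f:
        f.writelines(lines)
    if build_errors() != reference:
        lines[i] = original  # the include was needed, restore it
        with open(filename, "w") as f:
            f.writelines(lines)
    else:
        print("removed:", original.strip())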

bftag

Another one from Theeth

#! /bin/sh
if [ $# -ne 2 ]; then
	echo 1>&2 Usage: $0 tag log
	exit 127
fi

if (svn ls https://svn.blender.org/svnroot/bf-blender/tags/$1 1> /dev/null 2> /dev/null); then
	svn del https://svn.blender.org/svnroot/bf-blender/tags/$1 -m "removing tag for retag"
fi

svn copy https://svn.blender.org/svnroot/bf-blender/trunk/blender https://svn.blender.org/svnroot/bf-blender/tags/$1 -m "$2"

Program Execution Tracing

Sometimes it's useful to know the flow of an application without having to step through each line manually. In cases where similar input makes Blender act differently, it can be useful to find out in what part of the code it branches into different areas.

This can be done by running gdb, executing commands from a file and redirecting the output to a file.

gdb_flow.txt

# so we're not prompted for input to scroll long output
set pagination off
# so any breakpoints we add for libraries are applied when dynamically loaded
set breakpoint pending on
break main
r --background --disable-crash-handler /home/ideasman42/test.blend -a
while 1
  step
end
  • Insert your own breakpoint at the point before Blender starts acting unpredictably.
  • Run Blender with whatever args you want.
gdb ./blender --command gdb_flow.txt > ./log_1.txt

Now run the program twice with different input and run diff on the output; I use meld, a visual diffing tool. The memory locations will be different but you can still see where large parts of the flow change.

Note: after writing this I found a similar method, though I could not get it working: http://www.brain-dump.org/blog/entry/101/comments


Hint: the hex values can get in the way of diffing the output; the following command strips hex values so they are either '0xNULL' or '0xSET'. It would be nicer to give each an ID based on when it's first seen, but this is more involved.


cat ./log_1.txt | sed -e 's/0x0\([^0-9a-f]\)/0xNULL\1/g' -e 's/0x[0-9a-f][0-9a-f]*/0xSET/g' > ./log_1_strip.txt
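
A hypothetical sketch of that more involved version, replacing each pointer with an ID assigned in order of first appearance:

#!/usr/bin/env python3
# Hypothetical sketch: replace each pointer value with an ID assigned in order
# of first appearance, so two logs can be diffed even when addresses differ.
import re
import sys

addr_ids = {}
pattern = re.compile(r"0x[0-9a-f]+")

def replace(match):
    addr = match.group(0)
    if addr == "0x0":
        return "0xNULL"
    if addr not in addr_ids:
        addr_ids[addr] = len(addr_ids)
    return "0xID%d" % addr_ids[addr]

for line in sys.stdin:
    sys.stdout.write(pattern.sub(replace, line))

Example use:

python3 strip_hex_ids.py < log_1.txt > log_1_strip.txt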

define lint

This script finds defines in headers and prints them out in order of how many lines use them. Warning: this script is not that nicely written and only runs on a Unix system.

import os
import sys

BLENDER = '/src/blender'

# for a single header file
# headers = ['%s/source/blender/include/blendef.h' % BLENDER]

import re
def_regex = re.compile(r"^\s*#\s*define\s+([a-zA-Z_][a-zA-Z_0-9]*)")

headers = os.popen(r'find %s/source -name "*.h" -print | grep -v "\.git" | grep -v "~"' % BLENDER).readlines()
headers.sort()

for head in headers:
    head = head.strip()
    if not head or not os.path.exists(head):
        print('\tnot found', head)
        continue
    
    header_text = open(head, 'r', encoding="utf8").read()
    
    defs = []
    for i, l in enumerate(header_text.split('\n')):
        ret = def_regex.match(l)
        if ret is not None:
            d = ret.group(1)
            if not d.endswith('_H__'):
                defs.append((d, i, l))

    defs = list(defs)
    defs_count = []

    for i, define in enumerate(defs):
        used_count = eval(os.popen(
                'find %s/  \\( '
                '    -name "*.c"   '
                ' -o -name "*.h"   '
                ' -o -name "*.m"   '
                ' -o -name "*.cxx" '
                ' -o -name "*.cpp" '
                ' -o -name "*.cc"  '
                ' -o -name "*.hxx" '
                ' -o -name "*.hpp" \\) '
                ' -print | grep -v "\.svn" | grep -v "~" | xargs grep -n --color=never -H "%s" | wc -l' %
                (BLENDER, define[0])).read().strip())
                    
        used_count-=1 # exclude the define as a user
        
        # We may want to ignore case statements?
        defs_count.append((used_count, define))
    
    # defs_count.sort()
    output = []
    for users, define in defs_count:
        if users == 0: # may also want to check for 1 user
            d, line, fulldef = define
            # print head, users, d, line
            text = '%s:%s\t%s' % (head, line, fulldef)
            output.append(text)
    
    for t in output:
        print(t)


Screencasting from Linux

Start Blender at a lower resolution

./blender -p 0 0 1920 1080

Run recordmydesktop; it will create "out.ogv" using a very high bit-rate so re-compressing doesn't look bad.

This argument is specific to my setup:

-device plughw:1,0
recordmydesktop \
	-v_bitrate 2000000 \
	--v_quality 63 \
	--s_quality 10 \
	--full-shots \
	-fps 10 \
	--overwrite \
	-device plughw:1,0 \
	--channels 1 \
	--no-frame \
	-windowid $(xwininfo | grep "Window id:" | sed -e "s/xwininfo\:\ Window id:\ // ;s/\ .*//")

For fullscreen just do...

recordmydesktop -v_bitrate 2000000 --full-shots -fps 10 --overwrite -device hw:1,0

Press Ctrl+C in the terminal when you're done.

For some reason my USB soundcard only records stereo but the right channel is silent, so remux as mono.

ffmpeg -i out.ogv -c:v copy -c:a libvorbis -ac 1 -q:a 1 out_mono.ogv

Re-encode for YouTube upload. Note: I had many issues with mencoder this time (dropped frames and out-of-sync audio); ffmpeg on its own worked though.
You may want to try bitrates between 500k and 2000k.
YouTube now accepts Ogg (Theora/Vorbis) video.

ffmpeg -i out.ogv -b:v 1000k -vcodec libxvid -g 300 -acodec libmp3lame -ac 1 out.avi

Editing the Video in Blender's Sequencer

If all you need is the video unedited you can ignore this part.

Export for editing in Blender; even though Blender's ffmpeg can read many formats, this one streams best.

mencoder out.ogv -oac pcm -ovc lavc -lavcopts vcodec=mjpeg:vbitrate=10000 -af resample=44100:channels=1 -o out.avi

Render to a high quality (100) AVI-JPEG from Blender and MIXDOWN the audio from the sequencer panel.

You may want to edit the audio; Audacity is good for noise reduction, compression and normalizing.

Mux these with...

mencoder export.avi -ovc copy -oac copy -audiofile export.wav -o export_mux.avi

Compress any video to ogg for the web

ffmpeg2theora --optimize --channels 1 --samplerate 22050 --audioquality 5 out.ogg -o out_web.ogg

Batch Converting OGG to WAV

for f in *.ogg ; do mplayer -quiet -vo null -vc dummy -ao pcm:waveheader:file="$f.wav" "$f"  ; done

Development Tools Wishlist

  • Valgrind frontend that can filter output based on library names (could be written in python), since suppression files get out of date.
  • SVN Time Warp tool - Imagine SVN Blame (annotation) with a slider to step to different revisions.
  • Python module to control GCC - so one could write very comprehensive conditions for offline debugging. (Tried to write this but GCC's input is NOT easy to automate)
  • Utility to change an svn checkout's user and/or URL, for now this is ok...
    perl -pi -e 's/oldbob/newjim/g' $(find . | grep "\.svn/entries")

BGE Logic Bricks I'd like to write/modify

  • DONE! - Constrain Velocity (local or global, could be used to fake anisotropic friction)
  • DONE! - Angular velocity for the add object actuator (add get/setAngularVelocity to the py api)
  • DONE! - Property actuator: add a toggle option to toggle values; currently doing this requires a few logic bricks.
  • State actuator could have 3-state buttons so some states could be ignored and not set for the copy actuator.
  • Track actuator: the Time setting doesn't keep the camera Z axis up.
  • Set local loc/size/rot actuator (can be done with ipo's but that's a bit of a hack).
  • State sensor, so you could sense if some state was enabled.

BGE Nodal Logic

From discussions with Benoit

Some background information first. In the current implementation, sensors, controllers and actuators are executed separately: first the active sensors, which eventually generate pulses (positive or negative) to controllers. Then the triggered controllers are executed, which eventually generate pulses (positive or negative) to actuators. Then the active actuators are executed and the whole cycle restarts. Actuators are similar to controllers except that they can stay active and execute each frame even without receiving a trigger.

This architecture is easy to program but not easy to use. Normally it should be the opposite, and that's why we want to introduce nodal logic. A logic graph allows you to visualize a whole logic setup much better than logic bricks.

This proposal is an attempt to bring nodal logic with minimal change in the BGE code. In particular we want to keep the existing C++ sensor and actuator code because they execute complex operations at C++ speed.

A nodal logic setup is made of interconnected nodes. All nodes have inputs and outputs. There are 2 types of input: logic and data. Logic inputs are used to receive pulses from other nodes. There can be two types of logic inputs: start and stop. Receiving a pulse on start will cause the node to execute. Receiving a stop will cause the node to stop executing. The stop input is only present on nodes that have persistent actions (actuators). The data inputs will be of type vector, integer, boolean, string, object reference, etc, depending on the type of node. The data inputs are only used when the node is activated.

There are also 2 types of outputs: logic and data. Logic outputs are used to emit pulses. There are two cases where a node can emit a pulse. If the node is a sensor, it can create a pulse on one of its two outputs: a positive output when the internal state becomes true and a negative output when the internal state becomes false. Other types of node have only one logic output and they emit a pulse only when they finish their execution. Data outputs can be of type vector, integer, string, reference, etc, depending on the node.

A graph of nodes can be grouped and become a single node for a higher level of logic.

We will define 3 types of nodes:

  • sensor nodes.

These are the only nodes that can produce pulses. They have 2 logic inputs to explicitly start and stop them. Starting a sensor node means that it will execute on each frame and produce pulses if the internal state changes. It's important to be able to stop sensors to save CPU when they are not needed anymore. Sensor nodes have 2 logic outputs as explained above. They are executed only once per frame before any other type of node.

  • atomic action node

These nodes are never active at the start of a frame. They can only be active if they receive a pulse from a sensor node or from another action node. Atomic action nodes have only one logic input and one logic output. If they receive a pulse on the input, they execute immediately and release the pulse on the output. Action nodes can be chained: the chain of nodes is executed immediately in the same frame until the chain is broken or until it reaches a persistent action node or a sensor node. Beware that an infinite loop of atomic action nodes can be created, which will hang the game engine.

  • persistent action node

These nodes are similar to atomic action nodes except that they execute only once per frame. Execution of persistent action nodes is delayed until after all atomic action nodes. A persistent action node has 2 logic inputs (for start/stop) and one logic output. The logic output produces a pulse when the persistent action is stopped. The pulse can be passed to another node and is handled on the next frame.

Implementation

These 3 types of node closely match the sensor, controller and actuator bricks that we have today, so little change will be necessary to implement them.

Starting/stopping sensors is currently done based on the number of links to controllers. An explicit method must be implemented.

C++ controllers will be removed and replaced by a variety of python nodes that implement the same functions: AND, OR, etc. All these nodes will be atomic action nodes. A lot of different python nodes can be predefined: for loop, condition, etc. A generic python node with configurable data inputs and outputs should be implemented to allow custom atomic nodes. This will replace the python controller. The advantage of having configurable inputs is that it's no longer necessary to retrieve references in the script: the inputs will be set prior to calling the node and will be readily available as variables in the script. It seems appropriate to implement the python node as a method of a class that has data members corresponding to the inputs and outputs of the node: this way the outputs are available to other nodes even after the script has executed.

C++ actuators remain the same. The actuators that are non persistent by nature will be defined as atomic action nodes, while the persistent actuators will be defined as persistent action nodes.

The execution of the graph can be done by converting it to a python script. Python is well suited to implement data connections between nodes: data inputs and outputs will simply be attributes of sensor/action nodes and easily accessible to python. Execution of sensors and persistent nodes will be left to C++; python will only be used to execute the atomic actions. When the pulse gets to a persistent action, python will simply register the action for execution at the end of the logic frame. Question: how can a complex tree of actions be converted into a python script?
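
A rough, hypothetical sketch of what such a generated script might look like. None of these names exist in the BGE; the node classes, attributes and register_persistent() are invented purely to illustrate the idea of data inputs/outputs as attributes and persistent actions being registered for the end of the frame:

# Hypothetical sketch only: what a script generated from a small logic graph
# might look like. The classes and register_persistent() are not an existing API.

class KeyboardSensorNode:        # evaluated by C++; exposes its data as attributes
    positive = True
    key = 'a'

class PrintActionNode:           # an atomic action node, executed in python
    def __init__(self):
        self.text = ""           # data input
    def execute(self):
        print(self.text)

class MotionActuatorNode:        # a persistent action node, executed by C++
    def __init__(self):
        self.velocity = (0.0, 0.0, 0.0)   # data input

def run_graph(sensor, register_persistent):
    # A pulse from the sensor's positive output drives a chain of atomic actions,
    # which ends in a persistent action being registered for this frame.
    if sensor.positive:
        action = PrintActionNode()
        action.text = "key %r pressed" % sensor.key      # data connection
        action.execute()

        motion = MotionActuatorNode()
        motion.velocity = (0.0, 1.0, 0.0)                # data connection
        register_persistent(motion)                      # executed at end of logic frame

# purely illustrative: drive the graph once with a dummy registration function
run_graph(KeyboardSensorNode(), lambda node: print("registered", type(node).__name__))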

Since the logic bricks are attached to objects, nodes will also be attached to objects. A graph mixing nodes of different objects should be possible. When a group of objects is linked in a logic graph and this group is instantiated, the nodes must be instantiated too and the script corresponding to the graph must use references to the instance objects. Question: how will the script acquire references to the instance objects?

BGE Python Logic

Here's an idea I had about how developers who prefer to bypass logic bricks could do so...

Currently we have event managers (timer, joystick, keyboard, collision). Each event manager has a list of active sensors.

My proposal to bypass logic brick sensors is to define a new type of sensor class that is only accessed from python with no need to have a logic brick in the UI or be connected to a controller.

Event Queue?

From the EventManager it will look like a normal sensor in the list; however, rather than triggering a controller, the sensor will collect a list of events in a typical event queue. This can be a python list containing any relevant info like the key pressed, the location of a ray hit, or the collision points when 2 objects touch.

This sensor would need to be enabled (probably via python); otherwise it would be a waste to have it on when not used.

GameLogic.EventLogEnable('keyboard')
...
events = GameLogic.Events # a normal python list
while events:
    event = events.pop()
    if event.type == 'keyboard' and event.key == 'a':
        print("blah blah")

Event Callbacks?

Rather than logging an event queue, we could alternatively register python functions to be called on specific event types, so you could have a function called on any collision, keypress, etc. This has the advantage that you don't need to run python every logic tick to peek into the event queue.

def MyFunc(event):
    if event.type == 'keyboard' and event.key == 'a':
        print("blah blah")

GameLogic.EventRegister('keyboard', MyFunc)

You could of course register multiple callback functions for each event type, or even use the same function for collision and timer callbacks, for example.

It's also a bit nicer that you only register the callbacks rather than telling python to start logging an event queue.

User documentation editing

  • Menu item for editing docs
  • Built in operator for description editing, (can be python)
    • XML/RPC submission from python (Mindrones)
  • Blog voting system, acceptance etc... (mindrones)
  • Automatic extraction of accepted descriptions (Mindrones)
  • Apply this as a patch to C/Python code + commit (me)


Graphing Memory usage

import os
import time

while 1:
    os.system("ps -eo pid,ppid,rss,vsize,pcpu,pmem,cmd -ww >> ~/monitor.txt")
    time.sleep(1)
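
To actually graph the result, something along these lines could work (a hypothetical sketch, assuming matplotlib is installed; 'blender' is just an example process name to filter on):

#!/usr/bin/env python3
# Hypothetical sketch: plot the RSS of one process over time from monitor.txt.
import os
import matplotlib.pyplot as plt

rss = []
with open(os.path.expanduser("~/monitor.txt")) as f:
    for line in f:
        fields = line.split()
        if len(fields) > 6 and "blender" in fields[6]:
            rss.append(int(fields[2]) / 1024.0)  # rss column, KiB -> MiB

plt.plot(rss)
plt.xlabel("sample (seconds)")
plt.ylabel("RSS (MiB)")
plt.show()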

Random Post Durian Discussion with Brecht

Speed

Cam suggests restructuring packages so not ALL modules are loaded at startup: "spaces" and "properties" can be packages with all registered classes; each draw function would do its own import.

Brecht suggests we could have all classes autogenerated into a stub which does some tricky class-swapping on access, swapping in the heavy classes.

We could do an async read on all py files on startup so they don't slow down load times when python waits on access for module load.

Python caches the string hash value; we could make use of this with RNA hashes so we don't need to do "unicode->ascii->ghash".

Plugins

  • Use python as a base, extend with C/C++ modules if needed.
  • Use C++ for plugins rather than C? This means we don't need to expose BLI functions since C on its own is quite limited. Doesn't need to be forced but can be recommended.
  • Need to check on MSVC's visibility settings so internal C funcs can be exposed like in gcc.
  • Use a general system for all plugins (fill in callback members), similar to modifiers.
  • For DNA - use string lookups which resolve to a plugin on load to avoid the unique int ID problem.
  • Use derived-mesh arrays for mesh access.
  • Improve the RNA api by using inlined rna accessor functions so rna access can be as fast as direct access.


What I'm Working On

Page Moved Here: http://wiki.blender.org/index.php/User:Ideasman42/WhatImWorkingOn

TODO

  • ui_hsvcircle_vals_from_pos - avoid sqrt
  • use #pragma GCC warnings for DNA padding macros.
    _Pragma("GCC diagnostic push") _Pragma("GCC diagnostic error \"-Wpadded\"")
  • PADUP - make a static function.
  • use py identifier for re-used strings - _Py_IDENTIFIER(__import__);
  • auto-document ID property API.
  • add generic way to install presets.
  • add access to active scene layer via RNA
  • add note in docs that image pixel access is very slow. copy list first.
  • add support to export BVH with axis conversion - once done mail sybren _at_ stuvel.eu
  • Make render output path an operator property so testrenders can be written to arbitrary paths without setting scene data.
  • Write up patch so python can reference ID's across undo's & check with Ton if this is ok.
  • option for importers to re-use existing materials (OBJ specifically)
    see: http://blenderartists.org/forum/showthread.php?234521-Import-OBJ-but-using-existing-material
  • help(mesh.update) # and similar RNA functions should show keywords.
  • Option to show enum menu as a search box instead.
  • Run newly linked appended scripts when their 'Register' setting is enabled.
  • Enable operator presets for 3d view, currently they only work with the file selector.
  • Add window relative X/Y coordinates to RNA regions and areas.
  • Cleanup todo http://wiki.blender.org/index.php/Dev:2.5/Source/Development/Todo/Tools
  • Document windowing concepts in blender and how they relate to operators and context. (new section in quickstart)
  • obj.shape_key_add(name="whatever", from_mix=True) - look into a way to remove a shape key.
  • Test operator property comparison for better menu key shortcuts. [#27239]
  • Check mathutils/blf/bgl modules work with blenderplayer.
  • Report error on file load if the engine used is not found.
  • Preset Menu: Currently no nice way to remove a preset.
  • a way for rna functions to pass and return collections to/from python, e.g. allow an rna function to receive a list of objects.

Code Refactoring

  • refactor BLI api's
    • specify a consistent naming convention; currently we mix get_object_foo, object_get_foo, object_foo_get
      Suggest using an RNA-like naming convention:
      BLI_object_foo_get(...) / BLI_object_foo_set(...)
  • Add a generic header for portability - like BLO_sys_types.h, but not associated with blender modules, so any C file can include it to ensure code compiles across different architectures. We may want to have more than one of these.
    • blender_compat_stdint.h - one for int types uintptr_t/size_t etc.
    • blender_compat_storage.h - another for storage/fs types
    • blender_compat_macros.h - another for common macros such as __line__, __func__
  • Other...
    • Look over all uses of dw->def_nr, many can be replaced with api functions.
    • Look over flip_side_name use, some of it can be improved.
    • replace inline def weight editing in ED_vgroup_nr_vert_remove

Python Integration

This section is to gather ideas about what's needed for better python integration:

  • callbacks for RNA properties
  • python listeners
  • python notifiers
  • depsgraph integration (knowing when object data changes, is re-calculated)

Use Cases

Feel free to add your requirements here; I'd like to collect them to see what's most important. (Just sign what you add.)


Notifications for custom property changes: When implementing an additional rendering backend in Blender, there will usually be a slew of custom material properties. In LuxRender and Mitsuba, these are reachable via bpy.data.materials[index].(lux/mitsuba)_material.*.* (i.e. with some nesting). For instance, the 'v' roughness parameter of the Ward model in Mitsuba would be stored at someMaterial.mitsuba_material.mitsuba_mat_ward.vroughness.

An important use case would be to register a callback that is invoked whenever one of these properties is changed so that the user interface can be updated (for instance to cause a redraw of the material preview). In the most general case, the callback would have a signature that provides information about what particular property was changed, and it would also provide the old and new value.

Ideally, this callback would be active across all material instances, without having to register it manually for each one. -- Wenzel 09:21, 30 March 2011 (CEST)


The use case put forward by Wenzel is indeed a common one, which I've encountered myself a lot, ranging from ID properties for rigging to custom rna properties for user interfaces.
One thing to keep in mind is when the callback is invoked: while the property is being changed (example: dragging a slider), or after the property has been changed.

Depsgraph integration would be very nice for keeping track of changing mesh topology. It would also be nice for finding out when fcurves and drivers are changed.

For python listeners it would be good if we had access at a low level. For instance: I've written an add-on to display keypresses, which would greatly benefit from access to python listeners at a low level - so across editor windows, and from within modal operators. Example: currently I can already catch the event that invokes the grab operator, but if a user then presses the X key to limit to the x-axis, I can't catch that.

One thing I'd like to stress is the importance of stability of the system. Currently we already have bpy_struct.callback_add but when a new file is loaded and the callback is added again, Blender crashes. The same for disabling an add-on while the modal handler and callback are still attached. This is inconvenient and makes python integration unpredictable. --Crouch 17:22, 30 March 2011 (CEST)

The use cases for LuxBlend25 (LuxRender engine integration) mostly relate to material settings. Before RNA-write restrictions came in, we were able to update the base blender diffuse color setting with an appropriate value from the LuxRender material panels, so that objects in the 3DView were displayed with approximately the correct color. We have a temporary work-around of a "Copy material color to blender" operator, but it would be better if the underlying blender color could be updated automatically (and in addition, also trigger a preview update).

Another use case would be for real-time rendering control, for image-adjustment in the LuxRender core. Ideally, when the render is in progress, we would have several custom panels visible in the Render View which can feed values into the LuxRender post-processing pipeline to adjust levels, tonemapping, light layers etc. --Dougal2

Rigamarule is a system to add intelligence to rigs (it operates in edit mode / rig build time, not animation). It can be likened to constraints, except not animator-visible ones; rather, edit mode constraints: things like aligning a bone with another, aligning a bone to a lattice, or keeping rolls perpendicular to an IK chain-plane, etc. Currently rigamarule implements its own dependency graph and calculates the rules when the user invokes the rigamarule operator. In the future, we'd like 'live' updating of rules based on transformation of joints. To do this we need to hook into some kind of event system that alerts python when certain events happen and allows executing a callback - or something of this nature. The benefit will be an extremely simple / vanishingly transparent UI for people wanting to customize rigs. --Bassamk