User:Dfelinto/Ideas
This document is to save ideas of potential features to implement.
Wireframe Modifier
The wireframe modifier has some issues when even thickness is used together with a very high thickness value: not infrequently the edges cross the faces.
See: https://developer.blender.org/T38767
Potential fixes:
1. New modifier to remove intersections.
2. Add a clamp system (in bmesh_wireframe.c):
http://www.pasteall.org/pic/show.php?id=67429
The modifier idea would also benefit 3D printing. The clamp idea is not that expensive to implement, but not trivial.
- Notes of my diving into the code
- The result we get from shell_angle_to_dist() (a clamped version is sketched after this list):
https://www.wolframalpha.com/input/?i=abs%281+%2F+cos%28%28pi+-+x%29+*+0.5%29%29+from+x%3D0+to+pi
- My idea was to create one vert for the negative and one for the positive new polygon, and connect each to its respective polygon.
-- discussed with Campbell on 25-2-2014
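
A minimal sketch of the clamp idea (an assumption, not the actual bmesh_wireframe.c code), built around the curve from the Wolfram Alpha link above:

#include <math.h>

#define WIRE_CLAMP_MAX 4.0f /* illustrative limit; would be a modifier setting */

/* Even-offset scale factor for an edge whose faces meet at 'angle' radians.
 * The raw factor blows up as the angle approaches zero (sharp creases),
 * which is what makes edges cross faces at high thickness, so cap it. */
static float shell_factor_clamped(float angle)
{
    const float pi = 3.14159265358979f;
    const float fac = fabsf(1.0f / cosf((pi - angle) * 0.5f));
    return (fac > WIRE_CLAMP_MAX) ? WIRE_CLAMP_MAX : fac;
}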
Triangulate Modifier
The Triangulate Modifier gives no control over the triangle split. However (especially for low-poly assets) the final split orientation is very important.
Similar to the baking options, we will let the user select how the quads and ngons will be split.
Quads
For quads we will have an enum, Quad Split Method: Fixed, Fixed Alternate, Beauty, Ugly (see the sketch after this list).
- Fixed: split along the 0-2 diagonal
- Fixed Alternate: split along the 1-3 diagonal
- Ugly: Shortest Edge
- Beauty: ? (blender internal beautify function)
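
A minimal sketch of how the diagonal choice per method could look (illustrative only, not the modifier's actual code):

#include <stdbool.h>

typedef struct { float co[3]; } QuadVert;

static float diag_len_squared(const float a[3], const float b[3])
{
    const float dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    return dx * dx + dy * dy + dz * dz;
}

/* Returns true to split the quad v[0..3] along the (0, 2) diagonal,
 * false for the (1, 3) diagonal. */
static bool quad_split_02(const QuadVert v[4], int method)
{
    switch (method) {
        case 0: /* Fixed */
            return true;
        case 1: /* Fixed Alternate */
            return false;
        case 2: /* Ugly: take the shortest diagonal (the shorter new edge) */
            return diag_len_squared(v[0].co, v[2].co) <= diag_len_squared(v[1].co, v[3].co);
        default: /* Beauty: would delegate to Blender's beautify code */
            return true;
    }
}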
Eventually we can add an option for the user to re-orient the face split (so it goes from A to B) for fine-grained control.
Ngons
For Ngons we will have another enum, Ngon Split Method:
- Default: Ear-Clipped
- Beauty: ? (blender internal beautify function)
Additional Benefit
On some rare occasions the split orientation can affect the baking quality. Using a Triangulate modifier with this feature will allow full control of the final baked result. (see The Quad Caveat)
Note: existing code should be able to handle this, see how beauty fill is used with the triangulate bmesh operator --Ideasman42 12:54, 17 September 2013 (CEST)
Implementation
Mostly this is just accessing existing BMesh tools, but there are some considerations.
- beautify is a bmesh operator, but to access it from a modifier it should really be made available as a standalone tool (as has been done with source/blender/bmesh/tools/bmesh_decimate_collapse.c).
- current ngon triangulation uses a kind of ear clipping method, but does so in an order that gives a fan-fill result. The order of divisions can be tweaked to give better results.
Investigate
- Not using bmesh, for faster execution, since the data we need to create is predictably sized.
- Threading could be done for the beauty option; calculating it can be slow, so there is some opportunity for a speedup since each ngon can be processed in isolation.
Navigation Modes (alternate between them with 1/2/3)
- Walk - WASD moves you around, always keeping the same Z level
- Fly - same as Walk, but W/S move you along the camera direction
- Terrain - same as Walk, but the Z level changes to follow the ground
The modes can be toggled from inside the operator. Extra: add 'Teleport' functionality to move you forward a few Blender Units in the camera direction.
- User Preferences
- Damping Value
- Initial Navigation Mode
- Teleport Displacement
- Teleport Implementation
The user presses the spacebar and we zoom along the current camera direction (sketched below). The animation can last one second, like smooth-view in Blender right now. To be decided: what to do in Walk and Terrain modes:
- Basically, what happens when the user is not looking at horizon level. For instance, in Terrain mode should we keep the original eye level, or define a new one?
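
A minimal sketch of the teleport interpolation (the smoothstep easing and all names are assumptions, not Blender's smooth-view code):

/* Move the view 'dist' Blender Units along the normalized view direction;
 * t runs from 0 to 1 over the one-second animation. */
static void teleport_step(const float start[3], const float view_dir[3],
                          float dist, float t, float r_pos[3])
{
    const float ease = t * t * (3.0f - 2.0f * t); /* smoothstep easing */
    for (int i = 0; i < 3; i++) {
        r_pos[i] = start[i] + view_dir[i] * dist * ease;
    }
}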
- Zeno Damping System
The mouse-look damping system is based on [Zeno's Paradox]. Basically, if you move your mouse 0.4 away from the center, Blender will rotate the camera by only a fraction of that value (e.g., 20% * 0.4 = 0.08), and then re-place the cursor at the remaining part of the offset (e.g., 80% * 0.4 = 0.32). On the next frame Blender repeats this, until the offset is close enough to 0.0 that it stops moving. That means that if you quickly move your mouse, Blender will make a smooth rotation of the camera.
The re-centering of the mouse cursor prevents the mouse from leaving the viewport during navigation. It's also a common trick used by BGE navigation scripts (mine included).
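
A minimal sketch of the damping loop described above (the 20% factor and all names are illustrative):

#include <math.h>
#include <stdio.h>

int main(void)
{
    const float damping = 0.2f; /* fraction of the offset applied per frame */
    float offset = 0.4f;        /* mouse offset from the viewport center */

    while (fabsf(offset) > 0.001f) {
        const float applied = offset * damping; /* rotate the camera by this much */
        offset -= applied;                      /* cursor re-placed at the remainder */
        printf("applied %.4f, remaining %.4f\n", applied, offset);
    }
    return 0;
}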
Unorganized Ideas
OLD
Take this [addon] as a reference for the functionality, along with the navigation.py I wrote for my book. [link]
As for code, it's better to build this as an option for fly mode, in built-in code, so the user can pick whichever mode she prefers.
- Get the code from fly-mode to allow for camera recording.
- Consider NDof.
Take view3d_fly.c and make view3d_fps.c
Multi-Layer Image Handling
Blender supports multi-layer formats such as OpenEXR and (to a limited extent) PSD, but it doesn't fully benefit from this in its pipeline. More in-depth support could include:
- Be able to use the same image to map different channels (e.g., the user would load a single char.exr and pick which layer to use for diffuse, specular, ...)
- Likely the layer mapping support would be read-only to begin with.
- Some utility/script to load a layered image and automatically assign each layer to the right channel in Cycles/Blender Internal.
- It may be worthwhile to consider using the RenderLayer structure (as EXR does) when reading multi-layer files (e.g., IMB_exr_multilayer_convert).
Blender currently imports PSD via OpenImageIO. A good starting point would be to support OpenEXR and PSD multi-layer capabilities. An extended goal (outside Blender's core, but possibly valid for a GSoC) is to implement support for OpenRaster in OpenImageIO. The applicant could implement it directly in Blender, but doing it via OpenImageIO would benefit other FLOSS projects as well. I don't see the need to support XCF (the GIMP format), but it could be implemented as well if time allows.
Photoshop PSD support
PSD (Photoshop Document) is supported in OpenImageIO. To be investigated: I don't know if OIIO provides a 'flat' buffer, or if it offers per-layer access.
Loading PSD files may be enough; saving them may not be necessary (though it would be convenient for baking). (I think) the Blender Game Engine doesn't support Blender's other formats (EXR) either, so it may be OK not to support PSD there.
Update: apparently the OIIO API allows converting different modes to RGB, but not flattening out the image.
To investigate
I think each layer should be available separately. This way the artist can keep a single file (my_mode.psd) and work on the diffuse, specular, ... layers together. This will only work if the game engines (Unity, UDK, Source, ...) can take individual layers.
Food for thought
Does it make sense to support Photoshop PSD if we don't support GIMP or Krita files?
Originally suggested in the Blender Gamedev Requests/Issues list.
Supporting Layered Images for Textures
- Once this is added, users will expect to be able to select layers (or layer groups?) from Blender to map spec/normal/diffuse into texture channels; I think it's worth some basic planning regarding this.
- Do we even want to support this?
- If this is supported, it should be done in a way which works with the existing EXR multi-layer support and extends to other layered image formats (OpenRaster, even XCF)
Implementation thoughts
I talked with Brecht today; these are unorganized points for my own records:
- Photoshop files store the composited image (probably as the first layer), so we can get it using the OIIO API
- First thing to do is to support Photoshop files as if they were a regular image.
- Reference code in Cycles: render/image.cpp - for OSL it goes through TextureSystem, and the code itself is in OIIO
- OIIO should be added to imbuf, with PSD being one of the 'frontends'. For the interface (RNA) we expose PSD, but internally it would be an OIIO image.
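
A minimal sketch of that frontend split (the function names are assumptions, not Blender's actual imbuf API; "8BPS" is the real PSD file signature):

#include <stddef.h>
#include <string.h>

struct ImBuf; /* Blender's image buffer type */

/* Stub standing in for the generic OIIO-backed loader. */
static struct ImBuf *imb_load_via_oiio(const unsigned char *mem, size_t size, int flags)
{
    (void)mem; (void)size; (void)flags;
    return NULL; /* real code would go through the OpenImageIO API */
}

/* PSD 'frontend': recognize the header, then hand off to the generic loader. */
struct ImBuf *imb_load_psd(const unsigned char *mem, size_t size, int flags)
{
    /* PSD files start with the 4-byte signature "8BPS". */
    if (size < 4 || memcmp(mem, "8BPS", 4) != 0) {
        return NULL;
    }
    return imb_load_via_oiio(mem, size, flags);
}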
Join UV maps by name
This is a small feature requested in the Blender Gamedev Requests/Issues list. If the implementation works ok we can add this for Vertex Colors as well.
Note: This can be a generic customdata function, perhaps match name order, then join, or layer merging can take an index-mapping array. This way all named customdata can take advantage of it --Ideasman42 12:51, 17 September 2013 (CEST)
Implementation Thoughts
I even looked at the code, but the data loop merge code is used for all the different data types (not only UV). I think it may be better to re-order the UV channels (first the channels that are common to the source and destination objects, then the others) before merging them. It may sound like a hack, but it may be fine (and we can eventually re-order the channels back to match the original order).
Campbell's suggestion: "Ideally this can be done by an index remapping arg and work for any layer type. Then you write a function to sort named lists and give back index values. Those index values will be passed as an argument to the new merging function."
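
A minimal sketch of that suggestion (the names and fixed-size name arrays are assumptions, not the actual CustomData API):

#include <string.h>

/* For each destination layer, find the source layer with the same name.
 * r_map[dst_index] = src_index, or -1 when there is no match; the merge
 * function would then take r_map as its index-remapping argument. */
static void layers_build_name_map(const char src_names[][64], int src_num,
                                  const char dst_names[][64], int dst_num,
                                  int *r_map)
{
    for (int d = 0; d < dst_num; d++) {
        r_map[d] = -1;
        for (int s = 0; s < src_num; s++) {
            if (strcmp(dst_names[d], src_names[s]) == 0) {
                r_map[d] = s;
                break;
            }
        }
    }
}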
Original Description
When you join two objects, both of which have more than one uv map, the maps will be joined by their position on the list. Because making game art often involves a lot of joining and splitting, this can lead to problems like your unique normal map uv set being joined with your non-unique diffuse uv set or your decal uv set, or your lightmap uv set, unless the objects have the uv maps in the same order. Either let the uv maps be joined by name, or allow the user to reorder the uv map list.
Baking Priorities
(as proposed by Brecht) The baking API to the renderer would be like:
struct BakePixel {
    int primitive_id;             /* face/triangle the pixel belongs to */
    float u, v;                   /* location on that primitive */
    float dudx, dudy, dvdx, dvdy; /* derivatives for texture filtering */
};

void bake(Object *object, BakePixel pixel_array[], int num_pixels, int passes_bit_flag, float result[]);
That would mean pulling half of the baking code out of Blender Internal to be shared with Cycles (and potentially other external renderers in the future). The rasterization, selected to active, cages, antialiasing, etc, would all be shared. The renderer would be asked to render a list of locations on the object; each location can be identified by the face id, uv, and some derivatives for texture filtering. This would probably be done in batches (e.g. 100k at a time), not too small, so CPU threads / GPU rendering stay efficient, but not too big, to avoid high memory usage.
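
A minimal sketch of that batching, using the bake() entry point above (the batch size and the 4-float RGBA result stride are assumptions):

#define BAKE_BATCH_SIZE (100 * 1024)

void bake_in_batches(Object *object, BakePixel pixel_array[], int num_pixels,
                     int passes_bit_flag, float result[])
{
    for (int start = 0; start < num_pixels; start += BAKE_BATCH_SIZE) {
        int count = num_pixels - start;
        if (count > BAKE_BATCH_SIZE) {
            count = BAKE_BATCH_SIZE;
        }
        /* Big enough to keep CPU threads / the GPU busy, small enough to
         * bound memory usage; assumes 4 floats (RGBA) per pixel result. */
        bake(object, &pixel_array[start], count, passes_bit_flag, &result[start * 4]);
    }
}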
The bake modes would be render passes (at least for Cycles). Some render passes mostly make sense for regular render and others for baking, so they don't need to be shared in the UI necessarily, but the implementation inside Cycles should be the same I think. There is some code already for baking background colors for importance sampling, and for evaluating displacement shaders, and this would be an extension of that.
I assume the looping over faces, image handling, rasterization and such would live somewhere in Blender, maybe in editors/object or editors/render? And then this would call into Cycles? If that's the case, that seems ok.
The basic baking code in bake.c is not actually so big and could be copied relatively easily. If I were you I would use it as a base, because it gives you a basic working structure and makes it easier to deduplicate code later on. You can approach this any way you want, but I wouldn't consider the code too daunting to copy and adapt.
- The BakeShade struct can be copied except for the ssamp, obi and vlr members.
- RE_bake_shade_all_selected and do_bake_thread set up threads and handle images. With some small tweaks these can be used.
- get_next_bake_face, shade_verts and shade_tface need some changes to use the regular Mesh data structures; it mostly involves replacing VlakRen by MFace I think.
- zbuf/zspan functions can be reused for rasterization; the API is quite simple and entirely decoupled from render data structures, and these are hard to implement well from scratch.
Then you end up at do_bake_shade, which is a function somewhat similar to the bake API call that we discussed. You put in an object pointer, face pointer, and uv coordinates, and it writes the shading result directly to the image.
(A) Some passes could be computed directly, without the need for the renderer:
- UV
- Object ID
- Material Index
(B) Passes that need to go through the render engine:
- Normal (if we want to bake normals from bump mapping we still need to go through Cycles)
- Diffuse (Direct/Indirect/Color)
- Glossy (Direct/Indirect/Color)
- Transmission (Direct/Indirect/Color)
- Subsurface (Direct/Indirect/Color)
- Emission
That's not to say we can't have all passes calculated from within Cycles; it may be simpler if we do. But we could eventually compute the (A) passes from the baking operator directly, without resorting to the render engine. That would also mean engines other than Cycles could more easily get baking for the (A) passes, even if they can't support the (B) ones.
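
For instance, an (A) pass like UV could be filled straight from the BakePixel data; a minimal sketch (the red = u, green = v convention is an assumption):

/* No render engine involved: the pass is just the interpolated (u, v). */
static void bake_pass_uv(const BakePixel *pixel_array, int num_pixels, float result[])
{
    for (int i = 0; i < num_pixels; i++) {
        float *col = &result[i * 4];
        col[0] = pixel_array[i].u; /* R = u */
        col[1] = pixel_array[i].v; /* G = v */
        col[2] = 0.0f;
        col[3] = 1.0f;
    }
}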
Cycles Baking
To be detailed. A good way to start is to see which maps make sense (or are possible) to bake in Cycles. I need to discuss with Brecht to get some overall guidance.
This can take precedence over the other baking requests. This way we make sure new features are added considering both engines.
Implementation thoughts
Maybe it's better if things like selected-to-active, cages, ... live outside the render engine. This way the render engine only takes care of evaluating the shader on the high-resolution surfaces. Ideally this code would in fact be shared with the internal renderer.
That doesn't stop Cycles from getting full render baking. As a proof of concept / hack-experiment we can (sketched after this list):
- map the render space to the UV layout of the active object's UV;
- convert the u,v coordinates from image space to x,y,z coordinates in the 3d space;
- use xyz + margin (a few B.U.) along the normal as origin of the rays;
- use the negative normal as the direction of the rays;
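
A minimal sketch of the last two steps (the types and names are illustrative):

typedef struct { float x, y, z; } Vec3;

/* p: surface point recovered from the (u, v) image coordinates;
 * n: unit surface normal at p; margin: offset in Blender Units. */
static void bake_ray_setup(Vec3 p, Vec3 n, float margin, Vec3 *r_origin, Vec3 *r_dir)
{
    /* Start a little outside the surface, along the normal... */
    r_origin->x = p.x + n.x * margin;
    r_origin->y = p.y + n.y * margin;
    r_origin->z = p.z + n.z * margin;
    /* ...and cast back at it, along the negative normal. */
    r_dir->x = -n.x;
    r_dir->y = -n.y;
    r_dir->z = -n.z;
}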
If that works, one can even get high-poly to low-poly baking by using the Cycles-baked texture as a shadeless layer in Blender Internal and baking from high to low. Not a very nice workflow, but it can work for early testing.
Additionally, there is more that can be done for baking, though it may work better as an addon. Basically it would be nice (even for Blender Internal) if we could associate a few maps with a mesh (e.g., diffuse, specular, ..., normal), pick the cage, high-poly object, ..., then select a few objects, press a 'Bake Objects' button, and have them all baked one at a time.
Last but not least, according to Brecht baking could be more like render passes, where you tick a few options and have multiple maps rendered at once (related to the above). Perhaps even with layer support (render passes, but as EXR or PSD layers).
Normal Bakery
One of the top hits in the aforementioned list. To be written in more detail, but to make a long story short:
- Preset system for normal baking so the user can set each axis direction and an invert flag (see the sketch after this list)
- Support ray blockers
- Support cages (either a custom cage mesh, or even a 'cage' modifier that simplifies the process and can be used by the baker without having to apply it)
- Review/Update/... the patch to expose Tangent Space data (Campbell is on top of that I think).
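
A minimal sketch of what such a preset could store (the struct and values are assumptions):

/* Per-axis source component and invert flag, as a preset would store them. */
typedef struct NormalSwizzle {
    int axis[3];   /* which source component feeds R, G, B (0 = X, 1 = Y, 2 = Z) */
    int invert[3]; /* flip the component, e.g. -Y for DirectX-style maps */
} NormalSwizzle;

static void normal_apply_swizzle(const float n_in[3], const NormalSwizzle *s,
                                 float n_out[3])
{
    for (int i = 0; i < 3; i++) {
        const float v = n_in[s->axis[i]];
        n_out[i] = s->invert[i] ? -v : v;
    }
}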
SMD Tools
Valve uses tools to 'compile' 3D models and textures for Source (Valve's engine). So far it seems their tools are Windows-only. I believe Valve may have plans to make them all cross-platform; it would be good to check with them to see which direction we take in that regard. (Re-doing part of smdtool as a Blender addon to run on Linux/OS X is overkill and out of our scope, in my opinion.)
Cube Map
To be investigated and discussed further. More on that in the bf-gamedev@blender.org mailing-list.
Python Manipulation Widgets
Just a thought: allow widgets to be created by addons with callback functions, with all the drawing + manipulation code handled by Blender internally.
Inspiration: Procedural Road Generator.