Dev:Source/UI/Principles Of UI Architecture

Following the release of 2.43, Blender's UI and event system will go through a refactor. This page is meant to share the experience I have from writing two full-featured GUI libraries in OpenGL, and to suggest a course of action if we decide to rewrite the UI code.

Introduction: UIs Tend to be Hierarchically OOP In Their Metaphors

A very important thing to remember when writing a GUI library is that a user will perceive a GUI as object-oriented and hierarchically organized, even if he doesn't realize it and regardless of whether the GUI code itself is OOP.

The basic principles of a GUI are things-you-input-with-mouse-and-keyboard and things-that-contain-those-other-things (note that a good GUI system will combine these two concepts). By the very nature of this concept, GUI code works much better with even semi-OOP code than with a purely procedural system.

Take the example of a text editor component: it consists of a text field, two scrollbars, and a container that holds all three of these elements.
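
As a rough illustration, that hierarchy might be expressed in semi-OOP C, the style Blender's own code leans toward; all struct and field names below are hypothetical, not part of any existing API.

  /* A minimal sketch of hierarchical widget composition in semi-OOP C.
   * All names here are illustrative, not Blender API. */

  typedef struct uiWidget {
      struct uiWidget *parent;          /* every widget knows its container */
      int x, y, w, h;                   /* layout rectangle */
      void (*draw)(struct uiWidget *);  /* "virtual" draw method */
  } uiWidget;

  typedef struct uiScrollbar {
      uiWidget widget;                  /* embedding the base struct first acts as inheritance */
      float position;                   /* 0.0 .. 1.0 */
  } uiScrollbar;

  typedef struct uiTextField {
      uiWidget widget;
      char *text;
      int cursor;
  } uiTextField;

  /* The text editor is itself just another widget, containing the other three. */
  typedef struct uiTextEditor {
      uiWidget widget;
      uiTextField field;
      uiScrollbar vscroll, hscroll;
  } uiTextEditor;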

State

GUIs should never be written stateless; this is in complete conflict with the metaphor of a GUI as a persistent viewer and manipulator of data. True, you could argue that a stateless GUI provides "views" that are "rendered" each frame, but this argument assumes that UIs are simple.

At any one time, a UI coder must be able to nest UI elements as deeply as they want through containers, and to store state data about that grouping. This is a fundamentally important concept; otherwise widget code and GUI code become semi-separate and very hard to maintain.

Take the example of a non-linear animation (NLA) mixer. With stateless code, an NLA mixer is forced to be written as a mix of GUI and hard-coded quasi-widget code that draws and processes the editor. Picture in your mind the basic needs of an NLA mixer: there must be support for multiple channels, an ability to hide/collapse channels, an ability to zoom in a sane fashion, and the user must be able to pan within the mixing field.

With a stateless UI, essentially nothing will use the GUI code. Panning and zooming will be implemented outside of the GUI code, as will the NLA strips, the channel drawing, and virtually the entire editor. This is extremely bad and extremely difficult to maintain.
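
To make the contrast concrete, here is a rough sketch in C of an NLA mixer container that owns its own view and per-channel state, so the editor is composed of widgets instead of being hard-coded beside them. The identifiers are hypothetical and not taken from Blender's sources.

  /* A stateful NLA mixer container: pan/zoom and per-channel collapse state
   * live in the widgets themselves.  All identifiers are hypothetical. */

  typedef struct uiNLAChannel {
      struct uiNLAChannel *next;
      char name[64];
      int collapsed;                 /* per-channel UI state stored in the widget */
  } uiNLAChannel;

  typedef struct uiNLAMixer {
      uiNLAChannel *channels;        /* nested child widgets */
      float pan_x, pan_y;            /* view state kept by the container... */
      float zoom;                    /* ...not recomputed or hard-coded each frame */
  } uiNLAMixer;

  /* Event handlers mutate the container's state; drawing simply reads it. */
  static void nla_mixer_pan(uiNLAMixer *mixer, float dx, float dy)
  {
      mixer->pan_x += dx / mixer->zoom;
      mixer->pan_y += dy / mixer->zoom;
  }

  static void nla_channel_toggle(uiNLAChannel *chan)
  {
      chan->collapsed = !chan->collapsed;
  }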

Low-Level Drawing Routines

It can be argued that the most important part of a UI is consistency. A user must see what he expects to see, what he's always expected to see. For this reason, widget code must not call the low-level drawing library directly.

Instead, widgets should use basic drawing routines, such as:

  • Draw Inset Box
  • Draw Outset Box
  • Draw Frame Inset Box
  • Draw Frame Outset Box
  • Draw Text
  • Draw Styled Line
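
As a sketch of what such a layer could look like, widgets would only ever call routines like the ones below, and only those routines would touch OpenGL. The function names are illustrative, not an existing Blender API.

  #include <GL/gl.h>

  /* Hypothetical theme-aware drawing layer: widgets call these, never raw GL,
   * so the look can be changed in one place and stays consistent everywhere. */

  void ui_draw_inset_box(int x, int y, int w, int h)
  {
      /* face */
      glColor3f(0.6f, 0.6f, 0.6f);
      glRecti(x, y, x + w, y + h);

      /* dark top/left edges, light bottom/right edges give the inset look */
      glBegin(GL_LINES);
      glColor3f(0.3f, 0.3f, 0.3f);
      glVertex2i(x, y + h); glVertex2i(x + w, y + h);
      glVertex2i(x, y);     glVertex2i(x, y + h);
      glColor3f(0.9f, 0.9f, 0.9f);
      glVertex2i(x, y);     glVertex2i(x + w, y);
      glVertex2i(x + w, y); glVertex2i(x + w, y + h);
      glEnd();
  }

  void ui_draw_text(int x, int y, const char *str);   /* implemented once, elsewhere */

  /* A widget's draw code is then composed only of the primitives above. */
  void ui_button_draw(int x, int y, int w, int h, const char *label)
  {
      ui_draw_inset_box(x, y, w, h);
      ui_draw_text(x + 4, y + 4, label);
  }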

2D UI Event Systems: Signals and Hierarchies

UI event systems work best when there is a clear path from a low-level event to a specific widget. The cleanest way to achieve this is to treat the GUI widgets and containers as a tree.

However, the actual custom callbacks for widgets should not be aware of this tree. A callback--actually, a signal--should be simple yet powerful to use.
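
A rough sketch of that routing, using hypothetical names: the event is pushed down the container tree until a widget under the mouse claims it, and the widget's own callback never sees the tree.

  /* Hypothetical event dispatch through a widget tree.  The root container
   * pushes the event toward the child under the mouse; the first widget to
   * consume it returns 1, and its callback never deals with the hierarchy. */

  typedef struct uiEvent {
      int type;                  /* e.g. mouse press, mouse move, key press */
      int mouse_x, mouse_y;
  } uiEvent;

  typedef struct uiWidget {
      struct uiWidget *first_child, *next;                 /* tree links */
      int x, y, w, h;
      int (*handle)(struct uiWidget *, const uiEvent *);   /* returns 1 if consumed */
  } uiWidget;

  static int widget_contains(const uiWidget *wd, int mx, int my)
  {
      return mx >= wd->x && mx < wd->x + wd->w &&
             my >= wd->y && my < wd->y + wd->h;
  }

  /* Depth-first dispatch: children get the first chance, then the widget itself. */
  int ui_dispatch_event(uiWidget *wd, const uiEvent *ev)
  {
      uiWidget *child;

      if (!widget_contains(wd, ev->mouse_x, ev->mouse_y))
          return 0;

      for (child = wd->first_child; child; child = child->next)
          if (ui_dispatch_event(child, ev))
              return 1;

      return wd->handle ? wd->handle(wd, ev) : 0;
  }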

Signals

Signals work by associating a callback mechanism with an event. This event can be a button being pushed, a mouse moving over a widget, etc.

Generic signal utilities exist in most good UI libraries, including GTK and Qt.
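
For illustration only (these names are not GTK's or Qt's API, just a sketch in C), a minimal signal can be a list of callbacks that a widget emits when its event occurs.

  #include <stdlib.h>

  /* A minimal signal: a list of callbacks attached to one event on a widget.
   * All identifiers here are hypothetical. */

  typedef void (*uiSignalFunc)(void *widget, void *user_data);

  typedef struct uiSignalSlot {
      struct uiSignalSlot *next;
      uiSignalFunc func;
      void *user_data;
  } uiSignalSlot;

  typedef struct uiSignal {
      uiSignalSlot *slots;
  } uiSignal;

  /* Connect a callback to the signal. */
  void ui_signal_connect(uiSignal *sig, uiSignalFunc func, void *user_data)
  {
      uiSignalSlot *slot = malloc(sizeof(*slot));
      if (!slot)
          return;
      slot->func = func;
      slot->user_data = user_data;
      slot->next = sig->slots;
      sig->slots = slot;
  }

  /* The widget emits the signal when the event happens (e.g. a button push);
   * the callbacks never need to know how the event reached the widget. */
  void ui_signal_emit(uiSignal *sig, void *widget)
  {
      uiSignalSlot *slot;
      for (slot = sig->slots; slot; slot = slot->next)
          slot->func(widget, slot->user_data);
  }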