Sensors, Controllers, Actuators architecture in the GameEngine

This page gives some general information on how the GE (Game Engine) is structured at the moment and how it could be extended for robotics applications.

Here is the current situation:

The sensor/controller/actuator model does not exactly match what you would expect in robotics. The sensors can do complex stuff, but eventually that boils down to a simple positive or negative pulse that activates the controller. If the sensor does not generate a pulse, the controller does not run. The sensor can provide additional data, but these data are not passed with the pulse. The only way to retrieve them is to use the Python controller (see the sketch below); hence you cannot handle sensor data at the C++ level.
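
As a minimal sketch, assuming the 2.49-era GameLogic API (earlier 2.4x versions used cont.getSensor("name") and sens.isPositive() instead, and the brick name "Radar" is only illustrative), this is how a Python controller retrieves the data a Radar sensor has gathered:

    import GameLogic

    cont = GameLogic.getCurrentController()
    radar = cont.sensors["Radar"]  # a Radar sensor wired to this controller

    # The pulse only says whether the sensor fired...
    if radar.positive:
        # ...any extra data must be fetched here, in Python; it never
        # travels with the pulse and is invisible at the C++ level.
        obj = radar.hitObject
        if obj is not None:
            print "Radar detected:", obj.name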

The task of the controller is to take a decision based on the sensor input and make punctual changes in the simulation. If it wants a persistent action, it must activate an actuator. Again, the link between the controller and the actuator is a simple activate/deactivate control. If the controller wants to change some actuator parameters, it must do so in Python before activating the actuator.
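
A hedged sketch of that pattern, again using 2.49-era names (the brick names "Near" and "Motion" and the linV attribute are illustrative; older 2.4x versions used GameLogic.addActiveActuator(act, True) instead of cont.activate(act)):

    import GameLogic

    cont = GameLogic.getCurrentController()
    near = cont.sensors["Near"]
    motion = cont.actuators["Motion"]

    if near.positive:
        # Parameters must be set before activation; the activation link
        # itself carries no data, only an activate/deactivate pulse.
        motion.linV = [0.0, 2.0, 0.0]  # requested linear velocity
        cont.activate(motion)
    else:
        cont.deactivate(motion)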

The actuator acts upon the simulation either punctually or persistently. The first type of actuator really exists to allow simple simulations without Python; this is possible by using the predefined logic controllers that perform simple operations on the sensor pulses (AND, OR, NAND, etc.). The second type is useful to perform certain persistent tasks at the C++ level, such as tracking an object, controlling the velocity, etc.

Currently no sensor will detect external activity. If you want to sense activity from a robotic scene, you must do it in the controller with Python API. Thus, any specialized application in the GE usually ends up in a large python controller that is activated on every frame with a fake sensor (Always in pulse mode). You will usually have no actuators because the action they perform do not match what you need.
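
A skeleton of that workaround, run every frame by the Always sensor; read_laser_scan() is a hypothetical placeholder for whatever external polling the application needs, and applyMovement()/owner are 2.49-era names:

    import GameLogic

    def read_laser_scan():
        # Hypothetical placeholder: a real setup would poll a socket or a
        # middleware binding here; the GE itself provides nothing for this.
        return []

    cont = GameLogic.getCurrentController()
    robot = cont.owner  # the game object owning this controller

    scan = read_laser_scan()  # "sensing" done by hand in Python
    if scan:
        # "actuation" done by hand as well: move the object directly
        # instead of going through an actuator brick.
        robot.applyMovement([0.0, 0.05, 0.0], True)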

Although this model has the merit of existing, it is neither very user-friendly nor efficient.

It would be much better if the sensor, controller and actuator functions could be customized to specialized needs. While adding new bricks in Blender is relatively easy and would serve that purpose, it is unlikely that such specialized bricks will go into trunk (the mainstream source code). Maintaining a branch is not a long-term solution either: it requires a lot of effort to stay in sync with trunk and fix merge issues.

Ideally a plugin model should be available, but Blender is not well designed for that. The main problem comes from the DNA system, which requires that all data stored in a .blend file be listed in headers that are compiled into the Blender executable and into the .blend file. Maybe the DNA can be extended to external libraries, but I don't know enough about it; I'll pass the question to the committers' mailing list. It is also possible that the 2.5 release will allow such extensions more easily.

For the time being, it would be acceptable to have new sensors in trunk provided that they are very few, very general, and do not create any dependency on middleware that would affect the whole of Blender. The same goes for specialized controllers. However, the current GE GUI is too limited to provide a good visual representation of a robotic process: the controllers have only one input, to which all the sensors are connected.

There is currently a project to completely refactor the GE GUI to allow a nodal representation of the logic. This is being done in connection with a commercial game development, but it would largely benefit the entire community. The project is still a bit vague, so I cannot give any precise info. It could be based on this old, but good, proposal: http://www.blender.org/documentation/logic_editing_proposal.pdf