Hi, my name is Stephan Zibner. I am a doctoral student at the Institut für Neuroinformatik at Ruhr-University Bochum.
I am a member of the Autonomous Robotics group. In this group, we evaluate neuro-dynamic architectures modeled in the DFT framework on robotic platforms. In doing so, we analyze the constraints that this embodied approach introduces for DFT models.
My personal research revolves around the topic of scene representation. I am interested in how a visual scene is perceived and memorized for later use. My focus lies on explaining how the nervous system is able to autonomously generate scanning sequences, create and update working memory, and re-instantiate memorized knowledge about the scene. The overarching theme of my efforts is integration, both in the sense of closing the loop between perception, cognition, and action, and in combining multiple smaller DFT-based architectures into larger, more comprehensive ones with emergent features such as on-line updating. DFT supports this theme through its ability to interface with sensory surfaces and movement generation, and through a common "language" in which modules influence each other. As architectures grow in size, and consequently in the number of behaviors they may perform, the time course of these behaviors becomes more and more complex. Behavioral organization is required to autonomously determine when behaviors may become active, both by adhering to sequentiality (e.g., first moving the hand to a cup, then closing the fingers around it to grasp) and to mutual exclusiveness (e.g., after grasping a cup, I cannot try to drink from it while simultaneously putting it back in a cupboard). Driven by the theme of integration, I have extended my research into the fields of arm movement generation and behavioral organization.
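To give a flavor of the building blocks such architectures are composed of, here is a minimal sketch of a single one-dimensional dynamic neural field of the kind used in DFT. It is not code from my own architectures; it only illustrates the standard Amari field dynamics (resting level, localized input, lateral interaction via a difference-of-Gaussians kernel) with parameter values chosen purely for demonstration.

```python
import numpy as np

def sigmoid(u, beta=4.0):
    """Sigmoidal output function thresholding field activation."""
    return 1.0 / (1.0 + np.exp(-beta * u))

def interaction_kernel(size, c_exc=1.0, sigma_exc=3.0, c_inh=0.5, sigma_inh=9.0):
    """Difference-of-Gaussians kernel: local excitation, broader inhibition.
    All parameter values are illustrative, not taken from a published model."""
    x = np.arange(size) - size // 2
    exc = c_exc * np.exp(-x**2 / (2 * sigma_exc**2))
    inh = c_inh * np.exp(-x**2 / (2 * sigma_inh**2))
    return exc - inh

def simulate_field(stimulus, steps=200, tau=10.0, h=-5.0):
    """Euler integration of the Amari field equation
    tau * du/dt = -u + h + stimulus + kernel * sigmoid(u)."""
    size = stimulus.shape[0]
    kernel = interaction_kernel(size)
    u = np.full(size, h)  # field starts at its resting level h
    for _ in range(steps):
        interaction = np.convolve(sigmoid(u), kernel, mode="same")
        u += (-u + h + stimulus + interaction) / tau
    return u

# A localized input drives the field through the detection instability:
# a self-stabilized peak of activation forms at the stimulated location.
stim = np.zeros(100)
stim[45:55] = 8.0
u = simulate_field(stim)
print(u.argmax(), u.max() > 0)
```

Peaks like this one are the units of representation in DFT; coupling the output of one field into the input of another is the common "language" mentioned above, and behavioral organization can be built from the same dynamics using nodes that gate such couplings.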