In the ICSPACE project we explore how to provide intelligent coaching for sports training, motor skill learning, and physical rehabilitation. The goal is to develop a VR-based "Intelligent Coaching Space" that enables novel forms of adaptive, online feedback in a closed-loop interaction and training system. Grounded in fundamental cognitive principles, perceptual processing, and movement understanding, ICSPACE combines state-of-the-art motion tracking and analysis, neuro-cognitive diagnosis, and multi-sensory feedback in various forms, including virtual mirrors, online verbal feedback, and instructions and demonstrations by a virtual coach.
ICSPACE: Intelligent Coaching Space
The project rests on two pillars:
- A deep grounding of intelligent coaching in fundamental cognitive principles, perceptual processing, and movement understanding;
- Adaptive support in an online, closed-loop sensorimotor learning and interaction system.
We aim to investigate the sensorimotor, cognitive, and social mechanisms underlying successful coaching, and to explore how cognitive interaction technology can provide coaching assistance that achieves, and perhaps even exceeds, the efficacy and acceptance of human coaching.
To advance the use of Cognitive Interaction Technology in effective coaching and deep assistance applications, we are creating the Intelligent Coaching Space, which supports human trainees in learning and practicing sensorimotor skills through adaptive and personalized instruction and tutoring in an online, closed-loop sensorimotor learning and interaction system.
- 01.10.2013: Official start of the ICSPACE project!
- 13.02.2015: First ICSPACE milestone workshop is held at CITEC.
- 07/2015: Our two-sided CAVE goes operational, featuring stereo back-projection on the front wall and floor.
- 12/2015: Our OptiTrack motion capture system arrives, enabling full-body motion tracking in the CAVE.
- 11.02.2016: Second ICSPACE milestone workshop is held at CITEC.
- 11.02.2016: ICSPACE is featured in a press release of Bielefeld University.
- 11.02.2016: ICSPACE is showcased in a research_tv video (also shown below).
- 12.02.2016: The ICSPACE project is featured in articles in Neue Westfälische and Westfalenblatt.
- 02.03.2016: The ICSPACE project is featured in a short TV spot of WDR Lokalzeit.
- 25.07.2016: The ICSPACE project is featured in an article in Wunderwelt Wissen.
- 26.07.2016: The ICSPACE project is featured in an article in E&T 11(7).
The ICSPACE project is structured into six work packages, each of which will address a main component of the targeted intelligent coaching environment:
- Motion: Addresses questions concerning the recording, representation, analysis and interpretation of human motions.
- Character: Creates different life-like virtual characters (e.g. from 3-D body scans) for the trainee avatar or the coach.
- Feedback: Focuses on aspects of sensorimotor learning and multi-sensory feedback.
- Dialogue: Concerns the verbal and nonverbal communication skills of the coach.
- Coaching: Develops capabilities for adaptive, continuous coaching.
- Integration and Evaluation: Provides an immersive virtual experiment environment and develops the software architecture.
The internal structure of the project is organized around four milestone demonstrators, into which the intermediate results of the work packages are integrated and interwoven:
- Virtual World: During the initial installation of ICSPACE, we focus on tracking human motion, devising multimodal feedback technologies and evaluating sensorimotor feedback strategies to lay the foundations for the project. In this setup, the trainee’s motion is displayed through a coarse template avatar, which replicates the motions of the trainee in real-time (Figure 1).
- Virtual Mirror: In the second demonstrator, we introduce two virtual characters: a virtual clone of the trainee based on an animated 3-D scan used to mirror the trainee’s performance, and an autonomous virtual coach that initially demonstrates exercises and gives feedback after the exercise has been performed by the trainee (Figure 2).
- Virtual Coach: In the third demonstrator, the focus shifts from direct multi-sensory feedback to communicative feedback by the virtual coach. We will also evaluate whether modifications to the coach’s visual appearance, movement style, and demonstrations based on the appearance and capabilities of the human trainee have an effect on the trainee’s performance.
- Personalized Intelligent Coach: In the final demonstrator, the coaching capabilities will include individualized planning of a long-term coaching program. At this stage, all levels of representation, from the biomechanical model up to mental structures assessed by means of neurocognitive diagnostics, are included in the coaching process. The intelligent coach will employ coaching strategies adapted to the capabilities and the training progress of the trainee, and establish long-term motivation through social feedback.
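The closed loop that runs through all four demonstrators, tracking the trainee, analyzing the current pose against a reference movement, and returning feedback in real time, can be sketched minimally as follows. This is an illustrative sketch only, not the project's actual software; the function names, the joint-position representation, and the error tolerance are assumptions made for the example.

```python
import math

def motion_error(frame, reference):
    """Mean Euclidean distance between tracked joint positions and the
    reference pose. Both arguments are dicts mapping a joint name to an
    (x, y, z) position, a simplified stand-in for full mocap data."""
    dists = [math.dist(frame[j], reference[j]) for j in reference]
    return sum(dists) / len(dists)

def coaching_feedback(frame, reference, tolerance=0.05):
    """One pass of the closed loop: analyze the current pose and return
    a (hypothetical) feedback message for the trainee."""
    if motion_error(frame, reference) <= tolerance:
        return "good"
    # Otherwise name the joint that deviates most from the reference.
    worst = max(reference, key=lambda j: math.dist(frame[j], reference[j]))
    return f"adjust {worst}"
```

In a real-time setting such a function would be called once per tracking frame, with its result routed to whichever feedback channel is active, e.g. the virtual mirror, verbal feedback, or the coach's demonstration.

```python
ref = {"hip": (0.0, 1.0, 0.0), "knee": (0.0, 0.5, 0.0)}
frame = {"hip": (0.0, 1.0, 0.0), "knee": (0.2, 0.5, 0.0)}
coaching_feedback(frame, ref)  # "adjust knee"
```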
Dialogue Structure of Coaching Sessions
Proceedings of SemDial Workshop on the Semantics and Pragmatics of Dialogue, 2014, pp. 167-169.
Perceptual-cognitive monitoring in real world and virtual reality
Proceedings of the 14th European Congress of Sport Psychology, 2014, pp. 128-129.
Analogous motion illusion in audition and vision
Poster presented at the 15th International Multisensory Research Forum, 2014.
Temporal frequency modulates auditory speed perception
Poster presented at the 16th International Multisensory Research Forum, 2015.
Demonstrating the Dialogue System of the Intelligent Coaching Space
Proceedings of SemDial Workshop on the Semantics and Pragmatics of Dialogue (goDIAL), 2015, pp. 168-169.
Timing and Grounding in Motor Skill Coaching Interaction: Consequences for the Information State
Proceedings of SemDial Workshop on the Semantics and Pragmatics of Dialogue, 2015, pp. 86-94.
Multi-Level Analysis of Motor Actions as a Basis for Effective Coaching in Virtual Reality
International Symposium on Computer Science in Sport, 2015.
Motor Performance and Interaction - Building Blocks in Brain and Technology
Invited Keynote Lecture at the International Symposium on Computer Science in Sport, 2015.
A Multimodal System for Real-Time Action Instruction in Motor Skill Learning
Proceedings of ACM International Conference on Multimodal Interaction, 2015.
Hearing in slow-motion: Humans underestimate the speed of moving sounds
Scientific Reports 5, 2015.
Accurate Face Reconstruction through Anisotropic Fitting and Eye Correction
Proceedings of Vision, Modeling and Visualization, 2015.
Realizing a Low-latency Virtual Reality Environment for Motor Learning
Proceedings of ACM Symposium on Virtual Reality Software and Technology (VRST), 2015.
Cornelia Frank, William Marshall Land, Thomas Schack
Frontiers in Psychology: Movement Science and Sport Psychology (6), 2016.