16. Participation, Conversation, Collaboration

Since the last enactment exploring navigation, I have been looking to implement performative interaction with virtual objects – the theatrical equivalent of props – in order to facilitate Dixon’s notions of participation, conversation and collaboration.

I envisaged implementing a system that would enable two performers to interact with virtual props imbued with real-world physical characteristics. This would give rise to a variety of interactive scenarios – a virtual character might, for instance, choose a hat and place it on the head of the other virtual character, pick up a glass and place it onto a shelf or table, drop the glass such that it breaks, or collaboratively build or knock down a construction of virtual boxes. These types of scenarios are common in computer gaming; the challenge here, however, would be to implement the human-computer interfacing necessary to support natural, unencumbered performative interaction.
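As a sketch of the kind of prop behaviour envisaged above, the following models a breakable virtual glass that shatters if dropped too hard. The class name, gravity integration and breakage threshold are my own illustrative assumptions, not part of any existing iMorphia code:

```python
# A minimal sketch of a breakable virtual prop: the glass gains speed
# while falling and shatters if it hits the floor too fast.
# All names and thresholds here are illustrative assumptions.

GRAVITY = 9.81          # m/s^2
BREAK_SPEED = 2.0       # impact speed (m/s) above which the glass shatters

class VirtualGlass:
    def __init__(self, height):
        self.height = height    # metres above the virtual floor
        self.velocity = 0.0     # downward velocity, m/s
        self.broken = False
        self.held = False       # True while a virtual performer holds it

    def drop(self):
        """Release the glass so gravity takes over."""
        self.held = False

    def update(self, dt):
        """Advance the simulation by dt seconds (simple Euler step)."""
        if self.held or self.broken:
            return
        self.velocity += GRAVITY * dt
        self.height -= self.velocity * dt
        if self.height <= 0.0:          # hit the floor
            self.height = 0.0
            if self.velocity > BREAK_SPEED:
                self.broken = True      # dropped too hard: it shatters
            self.velocity = 0.0

glass = VirtualGlass(height=1.2)   # dropped from roughly table height
glass.drop()
for _ in range(200):
    glass.update(0.01)             # simulate 2 seconds in 10 ms steps
print(glass.broken)
```

A glass nudged off a table edge from a centimetre up would land below the threshold and survive, giving the two behaviours (place safely vs. drop and break) the scenario calls for.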

This ambition raises a number of technical challenges, including what is likely to be non-trivial scripting and the requirement for fast, accurate body and gesture tracking, perhaps using the Kinect 1. There are also technical issues associated with the co-location of the performer and the virtual objects, and the need for 3D visual feedback to the performer. These problems were encountered during the improvisation enactment with a virtual ball and are discussed in the section “3. Depth and Interaction” of the intermedial workshop blog post.
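One pragmatic route around the tracking challenge: the Kinect 1 skeleton stream reports joint positions but no reliable finger state, so rather than detecting a closed fist, a prop "grab" could be inferred when the tracked hand dwells near the prop for several frames. The detector below is a hypothetical sketch along those lines; the class name, radius and dwell count are my assumptions:

```python
import math

GRAB_RADIUS = 0.15   # metres: how close the hand must be to the prop
DWELL_FRAMES = 10    # consecutive frames inside the radius before grabbing

class GrabDetector:
    """Proximity-dwell grab detection for one hand and one prop.

    The Kinect 1 gives joint positions but no finger data, so instead
    of recognising a closed fist we treat 'hand held near the prop for
    several consecutive frames' as a grab gesture.
    """
    def __init__(self):
        self.frames_inside = 0
        self.grabbed = False

    def update(self, hand_pos, prop_pos):
        """Feed one tracked frame; returns True while the prop is grabbed."""
        if math.dist(hand_pos, prop_pos) < GRAB_RADIUS:
            self.frames_inside += 1
            if self.frames_inside >= DWELL_FRAMES:
                self.grabbed = True
        else:
            self.frames_inside = 0
            self.grabbed = False
        return self.grabbed
```

At 30 frames per second a ten-frame dwell is about a third of a second – long enough to filter out a hand merely passing through the prop, short enough not to feel laggy to the performer.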

The challenges associated with implementing real-world interaction with virtual 3D objects are currently being addressed by Microsoft Research in their investigations of augmented reality, through prototype systems such as Mano-a-Mano and their latest project, the HoloLens.

“Mano-a-Mano is a unique spatial augmented reality system that combines dynamic projection mapping, multiple perspective views and device-less interaction to support face-to-face, or dyadic, interaction with 3D virtual objects.”

“Microsoft HoloLens understands your gestures, gaze, and voice, enabling you to interact in the most natural way possible.”

Reviews of the HoloLens suggest that natural interaction with the virtual using body, gesture and voice is problematic, with issues of lag and the misreading of gestures, similar to the problems I encountered during the navigation enactment.

“While voice controls worked, there was a lag between giving them and the hologram executing them. I had to say, “Let it roll!” to roll my spheres down the slides, and there was a one second or so pause before they took a tumble. It wasn’t major, but was enough to make me feel like I should repeat the command.

“Gesture control was the hardest to get right, even though my gesture control was limited to a one-fingered downward swipe.”

(TechRadar 6/10/2015)

During today’s supervision meeting it was suggested that, instead of trying to achieve the interactive fidelity I have been envisaging, which is likely to be technically challenging, I work around the problem and exploit the limitations of what is possible using the current iMorphia system.

One suggestion was to implement a moving virtual wall which the performer has to interact with or respond to. This raises the question of how the virtual wall responds to or affects the virtual performer, and in turn how the real performer responds. Is it a solid wall, or can it pass through the virtual performer? Other real-world physical characteristics, such as weight or lightness, might be imbued in the virtual prop, leading to further performative interactions between real performer, virtual performer and virtual object.
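The solid-versus-permeable question above could be sketched as a simple toggle in the wall's behaviour. The following one-dimensional sketch is my own illustration, not part of iMorphia: a solid wall pushes the virtual performer ahead of its leading edge (cueing the real performer to step aside or be swept along), while a non-solid wall simply passes through.

```python
import math

# Illustrative assumptions: a wall sliding across a 1D stage axis,
# with the performer modelled as a half-width around a centre point.
SOLID = True                  # False lets the wall pass through the performer
PERFORMER_HALF_WIDTH = 0.25   # metres

def step_wall(wall_x, wall_speed, performer_x, dt):
    """Advance the wall by one time step and resolve any overlap."""
    wall_x += wall_speed * dt
    if SOLID and abs(performer_x - wall_x) < PERFORMER_HALF_WIDTH:
        # Solid wall: displace the virtual performer to the wall's
        # leading edge, in the wall's direction of travel.
        performer_x = wall_x + math.copysign(PERFORMER_HALF_WIDTH, wall_speed)
    return wall_x, performer_x
```

Weight or lightness could be layered on the same scheme, for example by letting a "light" wall slow down when it contacts the performer rather than displacing them at full speed.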

Originally posted at http://kinectic.net/participation-conversation-collaboration/
