KINECTIC » visual feedback – Performative Interaction and Embodiment on an Augmented Stage (http://kinectic.net)

20. Colocation and the Oculus Rift (16 May 2016, http://kinectic.net/co-location-and-the-oculus-rift/)

In post 18. Interactive Props and Physics it was noted: "Colocation issues are the result of the difficulty in perceiving where the character is in three dimensional space due to the lack of depth perception."

In this enactment the Oculus Rift VR headset is used as a means of ascertaining whether the added depth perception of the stereoscopic rendering of the Unity scene might help a performer locate the virtual props in 3D space.

Three enactments were carried out: two with the viewpoint rendered from a camera positioned at the audience's perspective, and one from the first-person perspective typically used in VR and gaming.

The video below is a mobile phone recording of a computer monitor rendering the Unity scene in real time. The computer uses an i7 processor and a relatively powerful Nvidia GT 720 graphics card to deliver the stereoscopic rendering to the Oculus Rift. Though the system is able to support the new Kinect v2, the older Kinect was used in order to maintain continuity with previous enactments.

In the first enactment one of the previous performers and I carried out the task of knocking the book off the table. We both felt that the task was much easier to accomplish, the stereoscopic depth making it easy to judge the position of the avatar's hand in relation to the virtual book.

Kinect tracking errors made bending the arm and precise control of the hand a little problematic. Even so, the task felt much easier to achieve than in previous enactments using the monoscopic video camera perspective, as it was possible to see clearly where the virtual hand was, even when it was 'misbehaving'.

However, the added depth perception highlighted a new issue that had previously gone unnoticed: difficulty in telling front from back. When one moves one's hand forward it moves away from oneself, yet viewed from the camera perspective the hand moves nearer to the camera, the opposite of the direction one is used to. This effect parallels the left/right reversal of a mirror in comparison to the camera view. In both cases it is possible, through practice, to become accustomed to the depth reversal and the lack of mirror reversal, though at first one finds oneself moving in the opposite direction or using the opposite limb. It is technically possible to produce a mirror reversal, but a depth reversal was felt to be more problematic. A simpler solution, easily achievable using VR, was to give the performer the first-person perspective one is normally used to: seeing the scene from the viewpoint of the avatar. In the video, the third enactment, carried out by myself, demonstrates this perspective.
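As an aside, a horizontal mirror reversal of the kind mentioned above can be produced in Unity by flipping the rendering camera's projection matrix. The minimal sketch below illustrates the idea; it assumes a recent Unity version for GL.invertCulling and is not code from the iMorphia project itself.

    using UnityEngine;

    // Minimal sketch: horizontally mirror a camera's output so the performer
    // sees the familiar 'mirror image' feedback rather than the camera view.
    // Attach to the camera rendering the performer's feedback display.
    [RequireComponent(typeof(Camera))]
    public class MirrorView : MonoBehaviour
    {
        void OnPreCull()
        {
            Camera cam = GetComponent<Camera>();
            cam.ResetProjectionMatrix();
            // Negating the X axis of the projection flips the image left/right.
            cam.projectionMatrix = cam.projectionMatrix * Matrix4x4.Scale(new Vector3(-1f, 1f, 1f));
        }

        // The flip reverses triangle winding, so invert culling while this camera renders.
        void OnPreRender()  { GL.invertCulling = true;  }
        void OnPostRender() { GL.invertCulling = false; }
    }

A depth reversal has no equally simple counterpart, which is consistent with it being felt to be the more problematic of the two.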

Due to time constraints it was not possible to test this enactment with the external participant. However, despite the incredibly immersive qualities of the first-person perspective, I felt there were some serious problems resulting from this viewpoint.

Firstly, I felt a very strange out-of-body experience looking down at a virtual body that was not mine; in addition, the virtual limbs and height were completely different from my own, which produced a strong sense of disorientation. Perhaps a male body of similar height and dimensions to my own might have felt more familiar.

The task of knocking the book over felt extremely easy, as I could see my virtual hand in relation to the book from a familiar first-person perspective. Despite Kinect tracking issues it was possible to correct the position of the hand, and ultimately knocking the book over was easy to achieve. Both the depth reversal and the mirror reversal issues were removed by this perspective.

However walking and moving in the scene resulted in a strong degree of vertigo and dizziness. For the first time I experienced “VR motion sickness” and nearly fell over. It was extremely unpleasant!

Further, after taking the headset off, for some minutes I still felt disorientated, somewhat dizzy and a little out of touch with reality.
Although the first-person perspective should have felt the most natural, it also produced disturbing side effects which, if not rectified, would make the first-person VR perspective unusable, if not hazardous, in a live performance context.

The feelings of vertigo and motion sickness may well have been exacerbated by Kinect tracking issues, with the avatar body moving haphazardly and producing a disconnect between the viewpoint rendered from the avatar's perspective and where my real head thought it was.

Two further practical considerations are: i) the VR headset is tethered by two cables, making it difficult to move freely and safely, and ii) the enclosed headset felt somewhat hot after a short period of time. Light, 'breathable' wireless VR headsets may solve these problems, but the vertigo resulting from a first-person perspective whilst moving in 3D space and feeling as if one is in another body is perhaps more problematic.

The simplest solution, though still subject to the depth reversal issue, is to remove the VR head tracking and create a fixed virtual camera giving the audience perspective, paralleling the previous methodology of relaying the audience perspective through a video camera mounted on a tripod.

Before concluding that the VR first-person perspective is the sole cause of the motion sickness, a further test is planned using the more accurate Kinect v2 and a virtual body of proportions similar to my own. It is envisaged that the Kinect v2 would produce a more stable first-person perspective, with a viewpoint closer to the one I am used to with my natural body.

In addition, other gaming-like perspectives might also be tried: the third-person perspective, for instance, with a virtual camera located just above and behind the avatar.

A key realisation is that the performer's perspective need not necessarily be that of the audience: the iMorphia system might render two (or possibly more) perspectives, one for the audience (the projected scene) and one for the performer. The projected scene would be designed to produce the appropriate suspension of disbelief for the audience, whilst the performer's perspective would be designed to enable the performer to perform efficiently, such that the audience believes the performer to be immersed and present in the virtual scene.
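As an illustration of how such a dual view might be set up, the sketch below renders the same Unity scene through two cameras, one per attached display. The camera roles and the use of Unity's multi-display API (available only in later Unity versions than those used in these enactments) are assumptions, not the actual iMorphia implementation.

    using UnityEngine;

    // Sketch: render one scene from two viewpoints at once - a fixed 'audience'
    // camera for the projected stage image and a separate 'performer' camera
    // (first-person or over-the-shoulder) for the performer's feedback screen.
    // Assumes two physical displays are connected (projector plus monitor).
    public class DualPerspective : MonoBehaviour
    {
        public Camera audienceCamera;   // fixed, framing the stage as the audience sees it
        public Camera performerCamera;  // parented to the avatar's head or behind its shoulder

        void Start()
        {
            // Display 0 is always active; any further connected displays must be activated.
            for (int i = 1; i < Display.displays.Length; i++)
                Display.displays[i].Activate();

            audienceCamera.targetDisplay  = 0;  // projector output
            performerCamera.targetDisplay = 1;  // performer's feedback monitor
        }
    }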

 

19. Interaction Workshop (8 March 2016, http://kinectic.net/interaction-workshop/)

A workshop involving two performers was carried out in order to re-evaluate the performative notions of participation and navigation (Dixon 2007), described in post 15. Navigation.

Previously, a series of auto-ethnographic enactments (documented in posts from August to December 2015) provided some initial feedback on participation and navigation with iMorphia. It was interesting to observe the enactments as a witness rather than a participant, and to see whether the performers might experience similar problems and effects to those I had.

Participation
The first study was of participation, with the performer interacting with virtual props. The performer was given two tasks: first to try to knock the book off the table, then to knock over the virtual furniture, a table and a chair.

The first task, involving the book, proved extremely difficult, with both performers confirming the same problem I had encountered: knowing where the virtual character's hand was in relation to one's own real body. This is a result of a discrepancy in colocation between the real and the virtual body, compounded by a lack of three-dimensional or tactile feedback. One performer commented, "it makes me realise how much I depend on touch", underlining how important tactile feedback is when we reach for and grasp an object.

The second task of knocking over the furniture was accomplished easily by both performers and prompted gestures and exclamations of satisfaction and great pleasure!

In both cases, due to the lack of mirroring in the visual feedback, both performers initially tended either to reach out with the wrong arm or to move in the wrong direction when attempting to move towards or interact with a virtual prop. This left/right confusion has been noted in previous tests: we are so used to seeing ourselves in a mirror that we automatically compensate for the horizontal left/right reversal.

An experiment carried out in June 2015 confirmed that a mirror image of the video produces the familiar inversion we are used to seeing in a mirror, and performers did not experience the left/right confusion. It was observed that the mirroring problem appeared to become more acute when performers were given a task involving reaching out or moving towards a virtual object.

 

Navigation
The second study was of navigation through a large virtual set using voice commands and body orientation. The performer can look around by saying “Look” and then using their body orientation to rotate the viewpoint. “Forward” takes the viewpoint forward into the scene, whilst “Backward” makes the scene retreat as the character walks out of the scene towards the audience; control of the character's direction is again through body orientation. “Stop” makes the character stationary.
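A minimal sketch of this control scheme is given below. The speech recognition itself comes from the Kinect plugins used by iMorphia; here OnSpeechCommand is a hypothetical hook standing in for the recogniser's callback, and bodyYaw is assumed to be the performer's shoulder orientation derived from Kinect tracking.

    using UnityEngine;

    // Sketch of the voice + body-orientation navigation used in the workshop:
    // "look" rotates the viewpoint with the body, "forward"/"backward" move the
    // character along its facing, "stop" halts it. The speech hook and the body
    // yaw value are assumptions standing in for the Kinect plumbing.
    public class VoiceNavigation : MonoBehaviour
    {
        public Transform avatar;          // the projected character
        public float moveSpeed = 1.5f;    // metres per second
        public float bodyYaw;             // performer's shoulder yaw from Kinect, in degrees

        enum Mode { Idle, Look, Forward, Backward }
        Mode mode = Mode.Idle;

        // Hypothetical callback - wire this to the speech recogniser's result event.
        public void OnSpeechCommand(string phrase)
        {
            switch (phrase.ToLower())
            {
                case "look":     mode = Mode.Look;     break;
                case "forward":  mode = Mode.Forward;  break;
                case "backward": mode = Mode.Backward; break;
                case "stop":     mode = Mode.Idle;     break;
            }
        }

        void Update()
        {
            if (mode != Mode.Idle)
            {
                // Steer with the performer's body orientation.
                avatar.rotation = Quaternion.Euler(0f, bodyYaw, 0f);
            }
            if (mode == Mode.Forward)
                avatar.position += avatar.forward * moveSpeed * Time.deltaTime;
            else if (mode == Mode.Backward)
                avatar.position -= avatar.forward * moveSpeed * Time.deltaTime;
        }
    }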

Two tests were carried out, one with an added animation of the character walking when moving, the other without. Both performers remarked on how the additional animation made them feel more involved and embodied within the scene.

Embodiment became a topic of conversation, with both performers commenting on how landmarks became familiar after a short time and how this memory added to their sense of being there.

The notions of avatar/player relationship, embodiment, interaction, memory and visual appearance are discussed in depth in the game studies literature, an area I shall be drawing upon in a deeper written analysis in due course.

Finally we discussed how two people might be embodied and interact within the enactments of participation and navigation. Participation with props was felt to be easier, whilst navigation might prove problematic, as one person has to decide and control where to go.

A prototype two-performer participation scene comprising two large blocks was tested, but due to Unity problems and lack of time this was not fully realised. The idea was to enable two performers to work together to lift and place large cubes so as to construct a tower, rather like a children's wooden brick set.

Navigation with two performers is more problematic: even if additional performers are embodied as virtual characters, they would have to move collectively with the leader, the one controlling the navigation. However, this might be extended to allow characters to move around a virtual set once a goal is reached, or navigational control might be handed from one participant to another.

It was also observed that performers tended to lose a sense of which way they were facing during navigation. This is possibly due to two reasons: the focus on steering during navigation, which requires the body to rotate more, and the lack of clear visual feedback as to which way the character's body is facing, especially during moments of occlusion when the character moves through scenery such as undergrowth.

These issues of real space/virtual space colocation and of performer feedback on body location and orientation in real space would need to be addressed if iMorphia were to be used in a live performance.

15. Navigation (7 August 2015, http://kinectic.net/navigation/)

At the last workshop a number of participants expressed the desire to be able to enter into the virtual scene. This would be difficult in the 2D environment of PopUpPlay but entirely feasible with iMorphia, implemented in the 3D games engine Unity.

Frank Abbott, one of the participants, suggested the idea of architectural landscape navigation, with a guide acting as a storyteller, and that the short story “The Domain of Arnheim” by Edgar Allan Poe might be inspirational in developing navigation within iMorphia.

The discussion continued with recollections of the effectiveness of early narrative- and navigation-driven computer games such as “Myst”.

Steve Dixon in “Digital Performance” suggests four types of performative interaction with technology (Dixon, 2007, p. 563):

  1. Navigation
  2. Participation
  3. Conversation
  4. Collaboration.

The categories are ordered in terms of complexity and depth of interaction, 1 being the simplest and 4 the most complex. Navigation is where the performer steers through the content, whether spatially, as in a video game, or via hyperlinks. Participation is where the performer undergoes an exchange with the medium. Conversation is where the performer and the medium engage in a back-and-forth dialogue. Collaboration is where participants and media interact to produce surprising outcomes, as in improvisation.

It is with these ideas that I began investigating the possibility of realising performative navigation in iMorphia. First I added a three-dimensional landscape, 'Tropical Paradise', an asset supplied with an early version of Unity (v2.6, 2010).

[Image: the 'Tropical Paradise' island demo scene in Unity]

Some work was required to fix shaders and scripts in order to make the asset run with the later version of Unity (v4.2, 2013) I was using.

I then began implementing control scripts that would enable a performer to navigate the landscape, the intention being to make navigation feel natural, enabling the unencumbered performer to move seamlessly from a conversational mode to a navigational one. Using the Kinect Extras package I explored combinations of spatial location, body movement, gesture and voice.

The following three videos document these developments. The first demonstrates the use of gesture and spatial location; the second, body orientation combined with gesture and voice; and the third, voice and body orientation with additional animation to enhance the illusion that the character is walking rather than floating through the environment.

Video 1: Gesture Control

Gestures: left hand out = look left, right hand out = look right, hand away from body = move forwards, hand pulled in = move backwards, both hands down = stop.

Step left or right = pan left/right.
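The sketch below indicates the kind of joint comparisons such a mapping involves. The joint transforms are assumed to come from the Kinect-driven rig, the thresholds are illustrative rather than the values actually used, and movement is simplified to occur only while a gesture is held.

    using UnityEngine;

    // Sketch of the joint comparisons behind the Video 1 gesture scheme.
    // Joint transforms are assumed to be supplied by the Kinect-driven avatar;
    // thresholds and axes are illustrative, not the project's actual values.
    public class GestureNavigation : MonoBehaviour
    {
        public Transform leftHand, rightHand, torso;
        public Transform viewCamera;            // camera rendering the scene
        public float sideThreshold = 0.45f;     // metres out to the side of the torso
        public float forwardThreshold = 0.35f;  // metres in front of the torso
        public float turnSpeed = 40f;           // degrees per second
        public float moveSpeed = 1f;            // metres per second

        void Update()
        {
            // Express the hand positions relative to the torso.
            Vector3 l = torso.InverseTransformPoint(leftHand.position);
            Vector3 r = torso.InverseTransformPoint(rightHand.position);

            if (l.x < -sideThreshold)           // left hand held out: look left
                viewCamera.Rotate(0f, -turnSpeed * Time.deltaTime, 0f);
            else if (r.x > sideThreshold)       // right hand held out: look right
                viewCamera.Rotate(0f, turnSpeed * Time.deltaTime, 0f);
            else if (r.z > forwardThreshold)    // hand pushed away from body: move forwards
                viewCamera.position += viewCamera.forward * moveSpeed * Time.deltaTime;
            else if (r.z < -0.15f)              // hand pulled in: move backwards
                viewCamera.position -= viewCamera.forward * moveSpeed * Time.deltaTime;
            // Both hands down (no condition met): stop.
            // Stepping left/right to pan would compare the tracked root position
            // against its rest position; that comparison is omitted here.
        }
    }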

The use of gesture to control the navigation proved problematic: it was actually very difficult to follow a path in the 3D world, and gestures were sometimes incorrectly recognised (or performed), resulting in navigational difficulties where a view gesture acted as a movement command or vice versa.

In addition, the front view of the character did not marry well with the character moving into the landscape.

Further scripting, and upgrading the Kinect assets and Unity to v4.6, enabled the successful implementation of a combination of speech recognition, body and gesture control.

Video 2: Body Orientation, Gesture and Speech Control

Here the gesture of holding both hands out activates view control, with body orientation controlling the view. This was far more successful than the previous version, and following a path proved much easier.

Separating the movement control into voice activation (“forward”, “back”, “stop”) helped remove gestural confusion; however, voice recognition delays resulted in overshooting when one wanted to stop.

The rotation of the avatar to face the direction of movement produced a greater sense of believability that the character is moving through a landscape. The addition of a walking movement would enhance this further – this is demonstrated in the third video.

Video 3: Body orientation and Speech Control

The arms-out gesture felt a little contrived, and so in the third demonstration video I added the voice command “look” to activate the change of view.
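A minimal sketch of the avatar behaviour added for this third version, turning to face the direction of travel and blending in a walking animation while moving, might look like the following; the Animator parameter name is an assumption rather than one taken from the actual project.

    using UnityEngine;

    // Sketch: turn the avatar smoothly towards its direction of travel and
    // play a walking animation while it moves. The "Walking" Animator
    // parameter is an assumed name, not one from the actual project.
    [RequireComponent(typeof(Animator))]
    public class WalkAndTurn : MonoBehaviour
    {
        public float turnSpeed = 5f;
        Vector3 lastPosition;
        Animator animator;

        void Start()
        {
            animator = GetComponent<Animator>();
            lastPosition = transform.position;
        }

        void Update()
        {
            if (Time.deltaTime <= 0f) return;

            Vector3 velocity = (transform.position - lastPosition) / Time.deltaTime;
            lastPosition = transform.position;
            velocity.y = 0f;

            bool moving = velocity.magnitude > 0.05f;
            animator.SetBool("Walking", moving);

            if (moving)
            {
                // Face the direction of travel rather than sliding sideways or backwards.
                Quaternion target = Quaternion.LookRotation(velocity.normalized);
                transform.rotation = Quaternion.Slerp(transform.rotation, target, turnSpeed * Time.deltaTime);
            }
        }
    }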

Realising the demonstrations took a surprising amount of work, with much time spent scripting and dealing with setbacks and pitfalls due to Unity crashes and compatibility issues between differing versions of assets and Unity. The Unity Kinect SDK and Kinect Extras assets proved invaluable in realising these demonstrations, whilst the Unity forums provided insight, support and help when working with quaternions, transforms, cameras, animations, game objects and the sharing of scripting variables. At some point in the future I intend to document the techniques I used to create the demonstrations.

There is much room for improvement and creating the demonstrations has led to speculation as to what an ideal form of performative interaction might be for navigational control.

For instance, a more natural alternative to voice control would be to recognise the dynamic gestures that correspond to walking forwards and backwards. According to the literature this is technically feasible, using for instance neural networks or Dynamic Time Warping, but these complex techniques are felt to be well beyond the scope of this research.

The object here is not to produce fully working, robust solutions; rather, the process of producing the demonstrations acts as proof of concept and identifies the problems and issues associated with live performance, navigation and control. The enactment and performance to camera serve to test theory through practice and raise further questions and challenges.

Further Questions

How might navigation work with two performers?

Is the landscape too open and might it be better if constrained via fences, walls etc?

How might navigation differ between a large outside space and a smaller inside one, such as a room?

How might the landscape be used as a narrative device?

What are the differences between a gaming model for navigation  where the player(s) are generally seated looking at a screen using a mouse/keyboard/controller and a theatrical model with free movement of one or more unencumbered performers on a stage?

What are the resulting problems and issues associated with navigation and the perspective of performers and audience?

8. Evaluation Workshops (21 May 2014, http://kinectic.net/evaluation-workshop/)

In order to evaluate the effectiveness of 'iMorphia', the prototype performance system, and to gain critical feedback, fourteen performers took part in a series of workshops carried out between 14 and 18 April 2014 in the Mixed Reality Lab at Nottingham University.

One of the key observations was that content affects performative behaviour. This was originally posed as a research question in October 2013:

“Can the projected illusion affect the actor such that they feel embodied by the characteristics of the virtual character? ”

An interesting observation was the powerful and often liberating effect of changing the gender of male and female participants, producing comments such as “I feel quite powerful like this” (f->m), “I feel more sensual” (m->f).

All participants, when embodied in the opposite gender, expressed an awareness of stereotypes: males did not want to behave in what they perceived as a stereotypical fashion towards the female character, whilst females in a male character seemed to relish the idea of playing with male stereotypes. These reactions reflect a contemporary post-feminist society in which the stereotyping of women is politically charged. A number of males reported feeling that they had to respect the female character, as if it had an independent life.

One participant likened the effect of changing gender to the medieval 'Festival of Fools', where putting on the clothes of the opposite gender is a foolish thing to do and gives permission to play the fool and break rules, something once regarded as powerful and liberating. This sentiment was echoed by a number of participants: the system gave you freedom and permission to be other than one's normal everyday self, removed from people's expectations of how one is supposed to behave.

In summary the key observations resulting from the workshops were:

i) The effectiveness of body projection in creating an embodied character that is sufficiently convincing to produce a suspension of disbelief in both performer and audience.

ii) How system artefacts such as lag and glitches from tracking errors were exploited by performers to explore notions of the double and the uncanny.

iii) The affective response of the performer when in character compared to the objective response when viewing the projection as an audience member.

The video below contains short extracts from the four hours of recorded video, with text overlays of comments by the performers.

http://www.youtube.com/watch?v=of8s_GgzbUk

2. Unity 3D and Kinect tests (23 January 2014, http://kinectic.net/unity-3d-and-kinect-tests/)

Overview
It has been some time since the experimental performance MikuMorphia and the dubious delights of being transformed into a female Japanese anime character. Since then I have cogitated and ruminated on following up the experiment with new work, as well as reading texts by Sigmund Freud and Ernst Jentsch on the nature of the uncanny, with a view to writing a positional statement on how these ideas relate to my investigations in performance and technology.

In January I moved into a bay in the Mixed Reality Lab and began to develop a more user-friendly version of the original experimental performance, whereby other people could easily experience the transformation and its attendant sense of uncanniness without having to don a white skin-tight lycra suit. Additionally, I wanted to move away from the loaded and restrictive designs of the MikuMiku prefab anime characters. I investigated importing other anime characters and ran a few tests that included the projection of backdrops, but these experiments did not break any new ground. Further, the MikuMiku software was closed and offered no possibility of getting under the hood to alter its dynamics and interactive capabilities.

MikuMorphia as spectator
Rather than abandoning the MikuMiku experience altogether, I carried out some basic "user testing" with a few willing volunteers in the MR Lab. Rather than having to undress and squeeze into a tight lycra body suit, participants don a white boiler suit over their normal clothes. This does not produce an ideal body surface for projection, being a rather baggy outfit with creases and folds, but it enables people to try out the experience easily.
Observing participants trying out the MikuMiku transformation as a spectator rather than a performer made clear to me that watching the illusion and the behaviour of a participant is a very different experience from being immersed in it as a performer.
The subjective experience of seeing oneself as other is completely different from objectively watching a participant: the sense of the uncanny appears to be lost on the spectator.

Rachel Jacobs, an artist and performer, likened the experience to having the performer's internal vision of their performance character made visually explicit, rather than internalised and visualised "in the mind's eye". The concept of the performer's character visualisation being made explicit through the visual feedback of the projected image is one that deserves further investigation with other performers experienced in character visualisation.

Video of Rachel experiencing the MikuMiku effect:

http://www.youtube.com/watch?v=eBvkLihWlXw

Unity 3D
My first choice of an alternative to MikuMiku is the games engine Unity 3D, which enables bespoke coding, has plugins for the Kinect, and offers an asset store from which characters, demos and scripts can be downloaded and modded. In addition, the Unity community, with its forums and experts, provides a platform for problem solving and includes examples of a wide range of experimental work using the Kinect.

Over the last few days, with support from fellow MRL PhD student Dimitrios, I experimented with various Kinect interfaces and drivers of differing and incompatible versions. The original drivers that enabled MikuMiku to work with the Kinect used an old version of OpenNI (1.0.0.0) and NITE, with special non-Microsoft Kinect drivers. The Unity examples used later versions of the drivers and OpenNI that were incompatible with MikuMiku, which meant I had to abandon running MikuMiku on the same machine. I managed to get a Unity demo running using OpenNI 2.0, but in this version the T-pose I had used to calibrate the figure and the projection was no longer supported: calibration was automatic as soon as you entered the performance space, resulting in the projected figure not being colocated on the body.

Technical issues are tedious, frustrating, time consuming and an unavoidable element of using technology as a creative medium.

Yesterday I produced a number of new tests using Unity and the Microsoft Kinect SDK, which offers options in Unity to control the calibration: automatic, or activated by a specific pose.
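For illustration, a pose-activated calibration of the kind described could be triggered by a check such as the one sketched below: both hands held out roughly level with the shoulders. The joint transforms and tolerance are assumptions for the sake of the sketch, not the Kinect SDK asset's actual API.

    using UnityEngine;

    // Sketch of a T-pose check used to trigger calibration: both arms held out
    // horizontally, hands roughly at shoulder height. Joint transforms are
    // assumed to come from the tracked Kinect skeleton; values are illustrative.
    public class TPoseTrigger : MonoBehaviour
    {
        public Transform leftShoulder, rightShoulder, leftHand, rightHand;
        public float tolerance = 0.15f; // metres of allowed vertical deviation

        public bool IsTPose()
        {
            bool leftLevel  = Mathf.Abs(leftHand.position.y  - leftShoulder.position.y)  < tolerance;
            bool rightLevel = Mathf.Abs(rightHand.position.y - rightShoulder.position.y) < tolerance;

            // Hands should also be spread well outside the shoulders.
            float handSpan     = Vector3.Distance(leftHand.position, rightHand.position);
            float shoulderSpan = Vector3.Distance(leftShoulder.position, rightShoulder.position);
            return leftLevel && rightLevel && handSpan > shoulderSpan * 2.5f;
        }
    }

Calibration would then be started once IsTPose() has held true for a moment, avoiding accidental triggers.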

Below are three examples of these experiments, illustrating the somewhat more realistic, human-like avatars as opposed to the cartoon anime figures of MikuMiku:

Male Avatar:

http://www.youtube.com/watch?v=yFCFMVLG3X8

Female Avatar:

http://www.youtube.com/watch?v=u7ubiwRQWFw

Male Avatar, performer without head mask:

http://www.youtube.com/watch?v=Db53K6Z47FA

This last video exhibits a touch of the uncanny, where the human face of the performer alternately blends and dislocates with the face of the projected avatar, the human and the artificial other being simultaneously juxtaposed.
