1) a vJ station plus props, with sensors, joysticks, and whatever other apparatus is needed to translate the vJ's gestures into a navigation of virtual worlds of sounds and visuals.
2) a station from which the hypnosis is guided. Video cameras, microphones, and other sensors pick up the induction and the behaviors of the subject and feed them into the AV+V (audio/visual/virtual) mix.
3) a dJ station/pod, plus props, from which the dJ can create layers of sound and video control, to add live jamming capacity to the mix.
All three parties respond to the mood of the audience and to each other, creating a real live performance situation, with all the exciting dynamics that this involves. Cameras trained on all three stations feed screens around the house with live, mixed, and processed video.
The fourth partner in this is the audience itself. Sensors trained on the crowd monitor its activity level, rhythm, heat, loudness, etc., and represent it back to the audience as changes in the sound, video, and virtual world parameters, sometimes going along with it (if the energy is increasing, or if some recharging is needed), sometimes contradicting it (if the energy is slumping).
The virtual worlds contain sounds and visuals that the vJ navigates through; the 'hypnoguide' induces members of the audience into performative states of hypnosis (speaking in tongues, for instance), and the dJ adds a live counterpoint to the sounds and images emanating from the virtual worlds. Using streaming technologies, live, mixed, and internet-streamed audio and video will be mixed into the virtual worlds and drawn out into the materials available for remixing by the dJ, opening the performances to live input from the whole world, and from potential parallel performances in other spaces or cities.
Marcos Lutyens and Marcos Novak @ mindspring