May 31, 2012
Emily Roysdon’s I am a Helicopter, Camera, Queen @ BMW Tate Live

The third performance in the BMW Tate Live series at Tate Modern happened tonight with Emily Roysdon’s choreographed performance, I am a Helicopter, Camera, Queen. The performance used untrained performers who moved through the performance room and out into a larger space at the Tate for the finale. The following is the artist’s statement about the work, from her website:

Roysdon thinks of the helicopter, camera, and queen as representations of territory and seeing – regimes of viewing and ways of understanding space.

For this project Roysdon will be working with 100 volunteers who identify as queer and/or feminists. Taking the precise confines of the room itself as the score, and thinking about scale, resources and units of measure, the choreography will ‘make room,’ reconstituting and queering a previously defined space. The participants will attempt to be exactly in the space, to be audience and performer, to be in time and to create a stage within their collectivity. The room will be full and the participants will be guided by a room size score below their feet, that once the live action is complete will remain as a document of the event.

While the performance had its moments, overall it lacked conceptual clarity and attempted too much for what are scheduled as ten-minute performances. For me, the ‘site’ of the work as an online live performance, and the relationship of a global spectatorship to the white space of the room, did not seem to have been considered at all. This was a real shame, as these were clearly key factors in Pablo Bronstein’s performance, Constantinople Kaleidoscope, last month, and I had hoped they were a sign of well-considered selection by the Tate curators. This seems not to have been the case; credit for consideration of the online space lies solely with Pablo Bronstein, who took up that challenge, and it makes me wonder on what basis artists are being selected for these performances. All work with performance, but to date they seem to have little if any experience of networked performance, or indeed any interest in embracing the new technologies it represents. Why do we not see at least one artist from this area who, given the chance, would be able to explore the full potential and scope of the online?

Note that the video of the performance above seems to have been poorly trimmed and the performance itself starts 36 minutes 40 seconds into the video. The next Tate Live event will be on Thursday the 28th of June at 20:00 BST (GMT +1) by artist Harrell Fletcher.

Posted by: Garrett @ 10:34 pm
Comments Off
May 23, 2012
Social Firefly


Social Firefly by Jason McDermott, Liam Ryan and Frank Maguire is:

a community of friendly intelligent lights that influence one another. The fireflies are programmed to respond to light from their neighbours, popular fireflies become highly influential, whilst isolated fireflies must work harder to reach their friends. By shining lights on to the fireflies, audience members speak the same language and influence the interaction between community members.

Inspiration came from lateral and cellular communication systems such as those used by fireflies in synchronizing their rhythms and slime molds in movements through caves, which collided with network theories and cascading relationships between the parts and the whole.
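The flash-and-response behaviour described resembles pulse-coupled oscillator models of firefly synchronisation. As a rough illustration (my own sketch, not the artists’ code; the coupling strength, timestep and threshold are assumed values):

```python
# A toy model of pulse-coupled "fireflies": each light's phase advances
# steadily, and when one flashes it nudges every neighbour's phase
# forward. Fireflies whose phases are close enough are pulled across the
# flash threshold in the same avalanche and stay synchronised from then on.

def simulate(phases, coupling=0.05, dt=0.01, steps=500):
    phases = list(phases)
    n = len(phases)
    for _ in range(steps):
        for i in range(n):
            phases[i] += dt  # steady advance toward the flash at 1.0
        to_flash = {i for i in range(n) if phases[i] >= 1.0}
        fired = set()
        while to_flash:  # a flash can trigger further flashes (avalanche)
            i = to_flash.pop()
            fired.add(i)
            for j in range(n):
                if j not in fired and phases[j] < 1.0:
                    phases[j] += coupling
                    if phases[j] >= 1.0:
                        to_flash.add(j)
        for i in fired:
            phases[i] = 0.0  # everyone who flashed restarts together

    return phases

# Two fireflies starting close together lock into step; two far apart
# keep their offset, since a single nudge never carries either across
# the threshold.
locked = simulate([0.97, 0.99])
apart = simulate([0.0, 0.5])
```

With linear nudges like these, full synchronisation of a large group isn’t guaranteed; nearby fireflies merge into clusters, which is roughly the “popular fireflies become highly influential” effect the artists describe.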


Originally seen on the Creative Applications Network.

Posted by: Garrett @ 8:19 pm
Comments Off
May 7, 2012
Kissenger
Another device for distant non-verbal interaction (similar to Feel Me) is Kissenger by Dr. Hooman Samani. Created under the research umbrella of Lovotics (Love and Robotics) at the Keio-NUS CUTE Center, a collaborative artificial intelligence lab between the National University of Singapore (NUS) and Keio University of Japan, Kissenger is a pair of devices you use with a loved one to transfer a kiss over distance. Kissenger:

provides a physical interface enabling kiss communication for several applications facilitating intimate human tele-presence with the real and virtual worlds…With the aid of digital communication media and advanced robotic technology, the system takes the form of an artificial mouth that provides the convincing properties of the real kiss.

The system proposes and enables three modes of kiss interaction:

1. Human to Human tele-kiss through the device: bridges the physical gap between two intimately connected individuals. Kissenger plays the mediating role in the kiss interaction by imitating and recreating the lip movement of both users in real time using two digitally connected artificial lips.
2. Human to Robot kiss: enabling an intimate relationship with a robot, such technology provides a new facility for closer and more realistic interactions between humans and robots. In this scenario, one set of artificial lips is integrated in a humanoid robot.
3. Human to Virtual character physical/virtual kiss: provides a link between the virtual and real worlds. Here, humans can kiss virtual characters while playing games and receive physical kisses from their favorite virtual characters. Further, Kissenger can be integrated into modern communication devices to facilitate the interactive communication between natural and technologically mediated environments and enhance human tele-presence.

This is an interesting concept, but it essentially drifts into the area of teledildonics. It’s not unique; I’ve seen similar devices before, from the very serious to artistic parodies. However, I’m not sure what the thinking is behind making it look like a cute pig.

Originally seen on Valentina Tanni’s weblog.

Posted by: Garrett @ 4:20 pm
Comments Off
May 6, 2012
Feel Me


Feel Me by Marco Triverio is an iPhone app that attempts to connect people through a form of digitised touch and natural, intuitive gestures. Mobile phones prioritise language and sound; what if we could touch people through these technologies?

Based on the finding that communications with a special person are not about content going back and forth, but rather about perceiving the presence of the other person on the other side, Feel Me opens a real-time interactive channel.

When two people are both looking at the conversation they are having, touches on the screen of one side are shown on the other side as small dots. Touching the same spot triggers a small reaction, such as a vibration or a sound, acknowledging that both parts are there at the same time. Feel Me creates a playful link with the person on the other side, opening a channel for a non-verbal and interactive connection.
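The matching interaction described above is simple to model. A minimal sketch (my own guess at the logic, not Triverio’s code; the match radius is an assumed tolerance):

```python
import math

MATCH_RADIUS = 20.0  # screen points; an assumed tolerance, not the app's

def touches_match(touch_a, touch_b, radius=MATCH_RADIUS):
    """Return True when both users are currently touching (neither side
    is None) and their touch points fall within `radius` of each other,
    which is when the app would fire its vibration or sound cue."""
    if touch_a is None or touch_b is None:
        return False
    dx = touch_a[0] - touch_b[0]
    dy = touch_a[1] - touch_b[1]
    return math.hypot(dx, dy) <= radius

# One user touches (100, 200); the other touches almost the same spot,
# so the cue fires. A touch on only one side does nothing.
both_there = touches_match((100, 200), (110, 210))
one_absent = touches_match((100, 200), None)
```

In the real app each side would stream its touch coordinates to the other in real time; this only sketches the moment-of-contact test.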

The concept videos for the app are worth a look, Transmissions, reverberations, connections and movements.

Originally seen on the Creative Applications Network.

Posted by: Garrett @ 10:54 pm
Comments (1)
Creative Commons License
Except where otherwise noted, all works and documentation on the domain are copyright
Garrett Lynch 2018 and licensed under a Creative Commons Attribution-ShareAlike 3.0 License.