July 11, 2014
Remote Encounters: a report about networking practitioners

[image: digicult]

A short(ish) article on the Remote Encounters conference and the Liminalities journal special issue, titled Remote Encounters: a report about networking practitioners, has been published on Digicult.it.

Posted by: Garrett @ 4:08 pm
Comments Off
May 14, 2014
Liminalities 10.1 now online

[image: networkresearch-10-1]

Issue 10.1 of the journal of performance studies Liminalities, a special issue guest edited by Garrett Lynch (University of South Wales) and Rea Dennis (Deakin University), is now online. The contributions to this issue have been compiled from the outcomes of the international conference Remote Encounters: Connecting Bodies, Collapsing Spaces and Temporal Ubiquity in Networked Performance held at the University of South Wales on the 11th and 12th of April 2013. The conference brought together artists and scholars with a joint interest in using networks as a means to enhance or create a wide variety of performance arts. The direct URL to the issue is below; please do forward it to your networks, colleagues etc.

Liminalities issue 10.1 – http://liminalities.net/10-1/

Posted by: Garrett @ 12:11 pm
Comments Off
April 27, 2014
Please Switch On Your Mobile Phones

Tonight I attended what was listed as a Public Beta 1 of a network-controlled performance titled Please Switch On Your Mobile Phones at the Sherman Theatre in Cardiff. Below is a quick run-through of what happened.

The audience arrived in the theatre and were immediately asked to contribute a short memory to the performance from their mobile phones. This was done through a website on a local wifi network. Once submitted, you saw the message displayed below:

[image: mobile-phones-1]
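
Collecting short text submissions over the venue wifi needs little more than a small web endpoint. Below is a minimal sketch of that idea; Flask, the route name and the form field are purely my assumptions for illustration, not the production's actual stack.

```python
# Hypothetical sketch of collecting audience memories over a local wifi
# network. Flask and all names here are assumptions, not the production's
# actual implementation.
from flask import Flask, request

app = Flask(__name__)
memories: list[str] = []  # submissions kept in memory for the evening

@app.route("/submit", methods=["POST"])
def submit():
    # Store the submitted memory and confirm receipt to the phone.
    memories.append(request.form["memory"])
    return "Thank you, your memory has been received."

if __name__ == "__main__":
    # Serve on the local network so phones on the venue wifi can reach it.
    app.run(host="0.0.0.0", port=8080)
```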

You then waited patiently for your message to appear on screen (as seen below) behind the dancers, who were warming up/rehearsing in the space.

[image: mobile-phones-2]

Five memories were then chosen and each one assigned to a group of dancers, colour-coded as red, green, blue, yellow and orange. Below is the memory for the yellow group.

[image: mobile-phones-3]

Each message was then ‘transcoded’ into a series of movements based on the number of characters in each word. Below are the ‘transcoded’ movements for the yellow group. At this stage many in the audience started to have technical difficulties (you were instructed to raise your hand if this was the case), became impatient with what was happening and began to wander in and out to the toilets and bar.

[image: mobile-phones-4]
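
As described, the ‘transcoding’ maps the length of each word to a movement. A rough sketch of how such a mapping could work is below; the movement vocabulary is entirely invented, only the word-length idea comes from the performance.

```python
# Hypothetical sketch of the 'transcoding' idea: map each word's character
# count to a movement from a small vocabulary. Movement names are invented.
MOVES = ["step", "turn", "reach", "jump", "spin", "fall", "hold", "run"]

def transcode(memory: str) -> list[str]:
    """Return one movement per word, chosen by the word's length."""
    return [MOVES[len(word) % len(MOVES)] for word in memory.split()]

print(transcode("I remember the smell of the sea"))
# ['turn', 'step', 'jump', 'fall', 'reach', 'jump', 'jump']
```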

Next the audience were asked to pick a pictogram for each group of dancers. From here on I began to lose the link between what we, the audience, were doing, specifically choosing the pictograms, and what the dancers were doing in response.

[image: mobile-phones-5]

The choosing of the pictograms, three times per colour group, amounted to a voting contest, and a bar graph of the results was shown on screen (seen below).

[image: mobile-phones-6]
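
Behind a bar graph like this there is presumably little more than a running tally of the audience's choices. A minimal sketch of such a tally is below; the pictogram names and the text bar graph are my own inventions.

```python
# Minimal sketch of a vote tally for the pictogram choices.
# Pictogram names and the crude text bar graph are assumptions.
from collections import Counter

votes = ["star", "wave", "star", "spiral", "wave", "star"]  # one entry per phone
tally = Counter(votes)

for pictogram, count in tally.most_common():
    print(f"{pictogram:8} {'#' * count}")
# star     ###
# wave     ##
# spiral   #
```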

Finally, all chosen pictograms were shown.

[image: mobile-phones-7]

Throughout all of this the website on your mobile phone kept redirecting you to the interactions you were being asked to perform or showed you the choreography chosen for the dancers. Below is a menu for all the colour groups which, when accessed, showed each group's choreography.

[image: mobile-phones-8]

There were also some Kinects in the space being used to capture video, which was projected as an effect (seen below). The audience were now redirected on the website to controls for audio, video etc. which they could operate for a set period of time. This, however, suffered many technical difficulties, with many unable to access it in their limited time slot.

[image: mobile-phones-9]
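
The time-shared control could plausibly be organised as a simple schedule of fixed-length slots handed out to audience members in turn. The sketch below works under that assumption; the slot length, identifiers and scheduling are all guesses, not the production's actual implementation.

```python
# Sketch of a simple time-share scheme: each audience member gets one
# consecutive control slot of fixed length. All details are assumptions.
from datetime import datetime, timedelta

SLOT_LENGTH = timedelta(seconds=30)

def build_schedule(audience_ids, start):
    """Assign each audience member a consecutive control slot."""
    return {
        member: (start + i * SLOT_LENGTH, start + (i + 1) * SLOT_LENGTH)
        for i, member in enumerate(audience_ids)
    }

schedule = build_schedule(["phone-17", "phone-42", "phone-03"], datetime.now())
for member, (begin, end) in schedule.items():
    print(member, begin.strftime("%H:%M:%S"), "to", end.strftime("%H:%M:%S"))
```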

I attended this with, I guess, too many expectations, knowing full well what can be done with technologies in this context; however that wasn’t the issue. I knew the whole event was a test, an experiment, and I was quite happy for it to be rough around the edges, but instead it was very disjointed. The ‘transcoding’ of memories to movement was simple, clever, and could have been evolved further. The audience seemed most happy with this part of the performance as they were able to see their memories on screen, so they had visible evidence of contributing to the event. Everything about the pictograms was confused: what did the pictograms mean? How did they relate to the memories, or were they a completely different idea? What were the dancers doing in response to the pictograms? Was a voting system enough to give audience members a sense of actually contributing? The idea of controlling the environment, e.g. the audio, video etc., had lots of potential, but the way this control was provided, a time-share of sorts hindered by lots of technical difficulties, meant that most audience members seemed to give up.

All of this took over an hour and a half before finally there was a short performance of the choreography score created by the audience. I came away disappointed: disappointed at not understanding how I was contributing in many parts of the performance, and disappointed that it took such a long time to score a work whose performance was then quite short.

Posted by: Garrett @ 1:13 am
Comments Off
March 26, 2014
Netscapes exhibited as part of A-EYE

[image: a-eye]

Netscapes will be exhibited within the exhibition A-EYE: An exhibition of art and nature-inspired computation as part of the Artificial Intelligence and the Simulation of Behaviour (AISB) 50th annual convention at Goldsmiths, University of London from 01-04/04/14. Full details below.

————————–

A-EYE:
An exhibition of art and nature-inspired computation

This art exhibition is organised as part of a convention (AISB50) commemorating both 50 years since the founding of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour (the AISB) and sixty years since the death of Alan Turing, founding father of both Computer Science and Artificial Intelligence, which will be held at Goldsmiths, University of London, UK from the 1st to the 4th of April 2014.

The exhibition is the first of its kind at the AISB convention and incorporates various aspects of generating artworks using artificial intelligence techniques (swarm intelligence, evolutionary algorithms, artificial neural networks, multi-agent systems, artificial life and any other algorithm) or methods that derive from the natural world.

Exhibition Date:
1-4 April 2014

Private View:
Monday 31st March, 17:00-20:00
Harold Cohen will give the opening speech in the exhibition venue.

Venue:
Goldsmiths, University of London
New Academic Building

Posted by: Garrett @ 12:37 pm
Comments Off
December 17, 2013
Two orchestra-like works

Two orchestra-like projects: one highly polished, the other a work produced as part of a workshop but equally interesting.

[image: computerOrchestra_blue]

[image: ComputerOrchestra_interface_3]

The Computer Orchestra (images above, video below) by Simon de Diesbach, Jonas Lacôte and Laura Perrenoud, students at ECAL (Ecole cantonale d’art de Lausanne), is:

a crowdsourcing platform that allows users to create and conduct their own orchestra. They can choose to upload their own music or download samples to integrate into their formation. With a simple interface, they assign the chosen samples to each post. They can also arrange detection zones, that allow them to order the “musicians” to play, using various gestures. Once their orchestra is configured, they can direct it with the movements of their body.
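
The ‘detection zones’ described above suggest a straightforward spatial test: if the tracked position of the conductor's hand falls inside a zone, the corresponding machine or section is cued. Below is a toy sketch of that idea only; the coordinates, section names and tracking source are all invented and are not taken from the project itself.

```python
# Toy sketch of the 'detection zone' idea: cue a section's sample when the
# conductor's tracked hand position falls inside its zone. Zone coordinates,
# section names and the tracking source are invented for illustration.
ZONES = {
    "strings":    ((0.0, 0.0), (0.4, 0.5)),   # (x_min, y_min), (x_max, y_max)
    "percussion": ((0.6, 0.0), (1.0, 0.5)),
}

def cue_for(hand_x: float, hand_y: float) -> str | None:
    """Return the section whose zone contains the hand, if any."""
    for section, ((x0, y0), (x1, y1)) in ZONES.items():
        if x0 <= hand_x <= x1 and y0 <= hand_y <= y1:
            return section
    return None

print(cue_for(0.2, 0.3))  # strings
print(cue_for(0.5, 0.8))  # None
```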

[image: Screen Shot 2013-12-17 at 15.14.19]

Neo-Aula (image above, video below) is an interactive sequencer consisting of 25 networked computers and a web-based interface to interact with them. This work seems to have been the outcome of a workshop led by mobilitylab as part of a digital week at the University of Vic in Spain.
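
As a rough illustration of what a sequencer spread across 25 lab machines might involve, here is a minimal sketch; the hostnames, the trigger mechanism (a print stands in for a real network message) and the timing are assumptions rather than details of the actual work.

```python
# Minimal sketch of a step sequencer that walks across 25 networked machines.
# Hostnames, pattern and trigger mechanism are assumptions, not Neo-Aula's
# actual implementation.
import time

MACHINES = [f"lab-pc-{n:02d}.local" for n in range(1, 26)]  # 25 hypothetical hosts
pattern = [i % 4 == 0 for i in range(25)]                   # which machines play this pass

def step(bpm: float = 120.0) -> None:
    """Walk across the machines once, triggering the active ones in time."""
    beat = 60.0 / bpm
    for machine, active in zip(MACHINES, pattern):
        if active:
            print(f"trigger sample on {machine}")  # stand-in for a network message
        time.sleep(beat)

step()
```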

What’s interesting about both of these projects is seeing a lab of computers within a university as a source of inspiration to create a work. This is most obvious in Neo-Aula, which has been subtitled "hacking the classroom".

Posted by: Garrett @ 3:32 pm
Comments Off
This is a QR Code; it's a printed link to this webpage on Network Research!

Using a web-enabled mobile phone with a built-in camera and QR Code reader software, you can photograph this printed page to display the original webpage. For more information on how to do this please see the short article here:

http://www.asquare.org/networkresearch/resources/qrcode-help

and download a reader application for your mobile device.
Creative Commons License
Except where otherwise noted, all works and documentation on the domain asquare.org are copyright
Garrett Lynch 2014 and licensed under a Creative Commons Attribution-ShareAlike 3.0 License.