April 28, 2012
Descriptive Camera


The Descriptive Camera by Matthew Richardson at NYU’s Interactive Telecommunications Program is another camera to add to the growing list of network-enabled cameras I’ve been posting about (see the Lens-less Camera and Buttons). Using it is the same as using any other camera, simply point and click; however, the output produced is very different. Instead of producing a photographic representation of the space in front of the lens, the camera produces (via Amazon’s Mechanical Turk) a description of the scene, printed to paper. The rationale for the work is as follows:

Modern digital cameras capture gobs of parsable metadata about photos such as the camera’s settings, the location of the photo, the date, and time, but they don’t output any information about the content of the photo. The Descriptive Camera only outputs the metadata about the content.

As we amass an incredible amount of photos, it becomes increasingly difficult to manage our collections. Imagine if descriptive metadata about each photo could be appended to the image on the fly—information about who is in each photo, what they’re doing, and their environment could become incredibly useful in being able to search, filter, and cross-reference our photo collections. Of course, we don’t yet have the technology that makes this a practical proposition, but the Descriptive Camera explores these possibilities.

The camera utilises some technologies similar to those in the cameras posted about previously; particular to this one, however, is the human element, an amusing use of Amazon’s Mechanical Turk service:

The technology at the core of the Descriptive Camera is Amazon’s Mechanical Turk API. It allows a developer to submit Human Intelligence Tasks (HITs) for workers on the internet to complete. The developer sets the guidelines for each task and designs the interface for the worker to submit their results. The developer also sets the price they’re willing to pay for the successful completion of each task. An approval and reputation system ensures that workers are incented to deliver acceptable results.
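The workflow described above can be sketched in code. The following is a minimal, hypothetical illustration using the modern boto3 MTurk client rather than whatever the original project used; the image URL, reward amount, and timing values are invented placeholders, and the actual API call is only shown in a comment since it requires AWS credentials.

```python
# Sketch of submitting a Descriptive Camera-style HIT: a photo goes in,
# a worker's plain-text description comes back. All parameter values here
# are hypothetical placeholders, not the project's real settings.
import xml.sax.saxutils


def build_question_xml(image_url):
    """Wrap a 'describe this image' prompt in MTurk QuestionForm XML."""
    safe_url = xml.sax.saxutils.escape(image_url)
    return f"""<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>description</QuestionIdentifier>
    <QuestionContent>
      <Text>Describe the scene in this photo in a few sentences.</Text>
      <Binary>
        <MimeType><Type>image</Type><SubType>jpeg</SubType></MimeType>
        <DataURL>{safe_url}</DataURL>
        <AltText>camera capture</AltText>
      </Binary>
    </QuestionContent>
    <AnswerSpecification>
      <FreeTextAnswer/>
    </AnswerSpecification>
  </Question>
</QuestionForm>"""


hit_params = {
    "Title": "Describe the scene in a photo",
    "Description": "Write a short plain-text description of one image.",
    "Reward": "0.25",                  # USD, the price the developer is willing to pay
    "MaxAssignments": 1,               # one worker per photo
    "AssignmentDurationInSeconds": 600,
    "LifetimeInSeconds": 3600,
    "Question": build_question_xml("https://example.org/capture.jpg"),
}

# With AWS credentials configured, the HIT would be submitted like this:
#   import boto3
#   mturk = boto3.client("mturk", region_name="us-east-1")
#   hit = mturk.create_hit(**hit_params)
print(hit_params["Title"])
```

The approval and reputation system the quote mentions happens server-side: after a worker submits, the developer (or an auto-approval timeout) accepts or rejects the assignment, which feeds the worker’s approval rate.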

An example image, with the text description it produced as output, is shown below.

Originally seen on Today and Tomorrow.

Posted by: Garrett @ 9:10 pm
Comments Off
March 31, 2012
Networked works by Winnie Soon

The following is a selection of four networked works made by artist Winnie Soon over the last three years. The first two works employ mobile phones while the last three use Twitter, creating some shared concerns and methods of presentation.


5-stars’ identity (image above, video below) is an interactive installation which uses mobile phones as ready-made objects to create a connected work. It is the first of two works in which mobiles play an important part. The work’s purpose, research-led, is to:

express the notion of transmediation and examine the properties of a dynamic complex system in association with the readymade object. New aesthetic possibilities are explored through the inter-relationship of technology, media and objects, leading to a hybridisation in sensorial transformation.

The project starts by scanning various news websites and blogs; content related to Chinese identity is translated into different language versions and sent to the mobile devices. The five mobile phones perform different behaviours, subject to political and environmental events, constructing a continuous and dynamic autonomous system.


Jsut Code (image above, video below), a collaborative work with Helen Pritchard, is an interactive installation using QR codes, mobile phones and Twitter. It is the first of three technically related works which use live information from Twitter as their basis. The work prompts users to explore and browse online texts written by a combination of human and non-human writers.

Statements on life and death are gathered in real-time, from the social media site twitter and displayed as geometric images. Viewers encounter a continuously updating feed as the machine translates language to image and twitter message to QR code, each image “carries” a language of pattern and meaning, which is activated by the reader…We see code as a call to action, a call for execution. The playful activity of reading in ‘jsut code’ is a collaborative performance between human, machine and code. The installation explores a continuously evolving and mutating text which moves beyond and between language.
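The translation of message to machine-readable pattern that the quote describes can be illustrated with a small sketch. This is not the artists’ code and does not produce real QR codes; it simply derives a deterministic black-and-white grid from a tweet’s text (a real implementation would use a QR encoding library such as the Python `qrcode` package, fed by Twitter’s search or streaming API).

```python
# Illustrative stand-in for Jsut Code's message-to-pattern step: hash each
# incoming tweet and unpack the digest into a small binary grid, so that
# every distinct text "carries" its own visual pattern.
import hashlib


def tweet_to_grid(text, size=8):
    """Derive a deterministic size x size binary pattern from tweet text."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    bits = "".join(f"{byte:08b}" for byte in digest)[: size * size]
    return [[int(b) for b in bits[row * size:(row + 1) * size]]
            for row in range(size)]


def render(grid):
    """Draw the pattern with block characters, two per cell for square-ish output."""
    return "\n".join("".join("██" if cell else "  " for cell in row)
                     for row in grid)


print(render(tweet_to_grid("life is short")))
```

Because the grid is a hash, the same statement always yields the same pattern, while a one-character change produces an entirely different image, echoing the work’s continuously mutating text.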


Net.Portrait (image above, video below), a collaboration with Sam Norgard, also uses live information from Twitter as its basis. Net.Portrait is:

a live and network-based installation combined with fine-art painting, kinetic sculpture and collective network data. While you are watching the piece, the artwork is also dynamically watching you by having different emotive eyes painted on a collection of wall mounted cocktail umbrellas. The live happenings of happy and sad smiley faces from Twitter are being transformed from a text, static and virtual medium to a kinetic and physical sculpture. Every bit of spinning action amplifies the network behavior, resulting in a continuous and flowing net portrait.
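The sorting step the installation depends on, classifying live tweets as happy or sad by their emoticons, can be sketched minimally. The emoticon lists below and the idea of mapping each result to an umbrella spin are my assumptions; the artists’ actual detection rules are not documented in this post.

```python
# A minimal, assumed version of Net.Portrait's tweet classification:
# label a tweet 'happy' or 'sad' from its emoticons, or None if it is
# ambiguous or contains none. Each labelled tweet would then trigger a
# spin of the corresponding kinetic umbrella (hardware side not shown).
HAPPY = (":)", ":-)", ":D", "(:")
SAD = (":(", ":-(", "):")


def mood(tweet):
    """Return 'happy', 'sad', or None for a tweet's emoticon content."""
    has_happy = any(s in tweet for s in HAPPY)
    has_sad = any(s in tweet for s in SAD)
    if has_happy and not has_sad:
        return "happy"
    if has_sad and not has_happy:
        return "sad"
    return None


print(mood("sunny day :)"))
```

Tweets containing both kinds of emoticon fall through to `None`, a design choice to keep the sculpture from reacting to ambiguous messages.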


Datascape (image above, video below) is an interactive installation / performance created from the latest text and emoticons on Twitter.

Posted by: Garrett @ 11:53 am
April 13, 2011
AR and the invasion of public and private spaces

Late last year there was a call for works for a guerrilla art exhibition at the Museum of Modern Art, New York. The call was the brainchild of Sander Veenhof, an unofficial exhibition with no ties to MoMA. The exhibition of works was enabled by Layar, an augmented reality browser for mobile devices which, with the aid of GPS, positions and overlays artworks within live video of the space. The following is a selection of works by Sander employing augmented reality as a tactic to invade well-known spaces: the public, highly controlled spaces of the contemporary art elite, and the private, highly secured spaces of power.

Guerrilla exhibition, Museum of Modern Art, New York (images above and video below) is the exhibition mentioned above.

The show will test case Augmented Reality art within an appropriate critical context: the bastion of contemporary art. The organizers of the event…aim to address a contemporary issue caused by the rapid rise of Augmented Reality usage. What is the impact of AR on our public and private spaces? Is the distinction between the two fading, or are we approaching the contrary situation with an ever increasing fragmentation of realities all to be perceived individually? Being uninvited guest users of the MoMA space themselves, Veenhof and Skwarek call out any AR artist worldwide to place their artworks within the walls of the MoMA too on the 9th of October (Lat/lng: 40.761601, -73.977710). Since the exhibition happens in virtual space, there’s no reason not to host an endless amount of parallel virtual exhibitions.
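The coordinates in the open call are all an AR browser needs to anchor the show. A hedged sketch of the geofencing such a browser performs: decide whether a viewer’s GPS fix is close enough to the exhibition’s anchor point to render the virtual works. The 150 m radius is an assumed value, and Layar’s real mechanism (a developer web service returning points of interest) is simplified away here.

```python
# Sketch of AR-browser geofencing: compute the great-circle distance from
# the viewer to the exhibition's anchor coordinates and show the layer's
# artworks only within an assumed radius.
import math

MOMA = (40.761601, -73.977710)  # lat/lng from the open call


def haversine_m(a, b):
    """Great-circle distance between two (lat, lng) points in metres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))


def show_layer(viewer_latlng, radius_m=150):
    """True if the viewer is near enough to MoMA to see the virtual show."""
    return haversine_m(viewer_latlng, MOMA) <= radius_m


print(show_layer((40.7616, -73.9777)))  # a viewer standing at the museum
```

Since the test is purely geometric, nothing stops an “endless amount of parallel virtual exhibitions” from sharing the same coordinates, which is exactly the point the organisers make.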

Below is a video of Tamiko Thiel’s work Art Critic Matrix shown in the exhibition.

infiltr.AR (images above and video below) is a virtual infiltration into the White House and Pentagon in America.

two virtual (AR) Twitter balloons have been positioned inside the Oval Office and inside the Pentagon press room. The balloons can be seen ‘for real’ inside these two locations, but elsewhere in the world, an ‘artist impression’ can be viewed…The balloon displays the latest tweet containing either the hashtag ‘#pentagonchat’ or ‘#ovalofficechat’.

Turbine Hall 3D Controller (image below) is currently showing as part of the exhibition Gradually Melt the Sky at Devotion Gallery in Brooklyn, New York. The work is part physical device at the gallery and part augmented reality work ‘placed’ in the Turbine Hall at Tate Modern in London. Users at the exhibition can control the device, which causes the augmented reality part, a giant disco ball, to react in real time.

Posted by: Garrett @ 11:46 pm
March 2, 2011

Tension by Eva Schindling is a cloth held under tension by eight motor units which are connected to a number of RSS feeds on the internet.

The analysis of several news-feeds on ‘good’ and ‘bad’ in nowadays world [sic] provides the input for those motors that pull and release the hooked membrane. A constant interplay between opposing forces produces movement, deformation and destruction of the material.
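The feed-analysis loop the quote implies can be sketched as follows. The ‘good’ and ‘bad’ word lists and the mapping from score to motor pull are invented for illustration; Schindling’s actual criteria are not published in this post, and a real version would parse live RSS (e.g. with the `feedparser` package) rather than plain headline strings.

```python
# Hedged sketch of Tension's input pipeline: score headlines against
# assumed 'good' and 'bad' word lists, then map the net balance of a feed
# onto a pull value for one of the eight motor units.
GOOD = {"peace", "rescue", "growth", "recovery", "win"}
BAD = {"war", "crisis", "disaster", "collapse", "loss"}


def score(headline):
    """+1 per 'good' word, -1 per 'bad' word in a headline."""
    words = headline.lower().split()
    return sum(w in GOOD for w in words) - sum(w in BAD for w in words)


def motor_pull(headlines, max_pull=100):
    """Map a feed's net score onto 0..max_pull, with 50 as neutral tension."""
    net = sum(score(h) for h in headlines)
    return max(0, min(max_pull, 50 + net * 10))


print(motor_pull(["Economic recovery continues", "Flood disaster in region"]))
```

With eight motors each driven by a different feed, opposing pull values produce exactly the interplay of forces the artist describes, deforming the membrane as the news changes.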

To see more network related works by the same artist see Txt-Me-1st.

Posted by: Garrett @ 12:58 pm
November 17, 2010
Bruce Sterling & REFF RomaEuropa FakeFactory

Bruce Sterling who wrote the foreword for REFF RomaEuropa FakeFactory gets his hands on the publication for the first time and tests out the various tags:

It’s interesting to watch: Bruce wrote Shaping Things five years ago about these very same technologies and scenarios, and in a very real sense he is watching his predictions come true.

Posted by: Garrett @ 10:02 pm
Creative Commons License
Except where otherwise noted, all works and documentation on the domain asquare.org are copyright
Garrett Lynch 2016 and licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
asquare.org is powered by WordPress