October 31, 2010

More works with a textual/language theme. SMSlingshot by VR/Urban is

an autonom working device, equipped with an ultra-high frequency radio, hacked arduino board, laser and batteries. Text messages can be typed on a phone-sized wooden keypad which is integrated in the also [sic] wooden slingshot. After the message is finished, the user can aim on a media facade and send/shoot the message straight to the targeted point. It will then appear as a colored splash with the message written within. The text message will also be real-time twittered – just in case.

For similar work see The Media Cartridge, TXTual Healing, Light Attack and The Artvertiser.

Originally seen on Networked_Performance.

Posted by: Garrett @ 6:42 pm
Comments Off
October 21, 2010
Inside the feed

Continuing the textual/language theme of works that I’ve been posting recently, Inside the Feed by Samuel Huron is an experiment in visualising an RSS feed. It is more a research work in progress than a finished piece:

It presents the concept of the Htmlgramme, which is a mix between videogramme and HTML webpage. Htmlgramme is a plastical representation of web’s mutation throught the movement of document-based content to flux medium.

I’m curious about the use of the word Htmlgramme and would like to read more on the meaning behind it within the artist’s work, ideally in French, as I suspect partial meaning is being lost in the translation to English. In English, Htmlgramme sounds awkward (the l and g together sound incorrect). Taking meaning from telegram:

a message or communication sent by telegraph

I wonder what distinction an Htmlgramme, presumably a message sent by HTML (mixed with videogramme, which perhaps makes it more time-based?), has from HTML in general, which is both a form of communication and could be considered time-based, albeit with a different mapping of time through interaction, i.e. delayed, extended, exaggerated, emphasised etc.

More images of the work can be seen here.

Posted by: Garrett @ 9:14 pm
Comments Off
October 17, 2010
We Read, We Tweet

We Read, We Tweet by Justin Blinder, another language-based work, though this time not visualised as such, is a Twitter / Google Maps / New York Times mashup which:

geographically visualizes the dissemination of New York Times articles through Twitter. Each line connects the location of a tweet to the contextual location of the New York Times article it referenced. The lines are generated in a sequence based on the time in which a tweet occurs…The articles and tweets are constantly being aggregated and stored in a database, making use of the Twitter, Backtweets, Google Maps, and New York Times Articles API. Every 10 minutes, the Backtweets API is queried to find the most recent New York Times articles that have been tweeted about. For each article found, the New York Times Articles API is queried and if a contextual location is found, that location is then geocoded using the Google Maps API. Every tweet that mentions this article is also geocoded using the Google Maps API, and both the article and tweets are stored in a database.
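The linking step described above, connecting each tweet's location to the article's contextual location, can be sketched in outline. This is a speculative reconstruction, not Blinder's code: `Tweet`, `aggregate` and the injected `geocode` callable are invented stand-ins, and the real work queries the Backtweets, New York Times Articles and Google Maps APIs on a timer rather than operating on in-memory data:

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    place: str  # place name attached to the tweet

def aggregate(article_url, article_place, tweets, geocode):
    """Link each tweet's location to an article's contextual location.

    geocode is any callable mapping a place name to (lat, lon) or None,
    standing in for the Google Maps geocoding step; tweets whose place
    cannot be geocoded are skipped, mirroring the "if a contextual
    location is found" condition in the description above.
    Returns one line segment per geocodable tweet, in tweet order.
    """
    origin = geocode(article_place)
    if origin is None:
        return []
    lines = []
    for t in tweets:
        point = geocode(t.place)
        if point is not None:
            lines.append({"article": article_url,
                          "from": point, "to": origin,
                          "tweet": t.text})
    return lines
```

In the work itself this loop would run every 10 minutes against freshly aggregated data, with the results stored in a database rather than returned.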

Posted by: Garrett @ 8:14 pm
Comments Off
October 14, 2010
Delicious Poetry

Back to works which have a textual/language emphasis for the moment.

Delicious Poetry by Art is Open Source (xDxD.vs.xDxD / Salvatore Iaconesi and penelope.di.pixel / Oriana Persico) is a net.art work which assembles itself from popular links on Delicious to:

visually build a chaotic poem. An everchanging complex composition built on people’s wishes, desires, tastes and emotions…The generative poems composed by the work produce pages that are a dynamic assemblage of the things that internet users deem as being interesting at a certain time. This is why search engines and content aggregators seem to find these chaotic poems so interesting, finding them completely filled with the “hot” keywords of the moment. So much that they tent to spider, cache, index, rate and categorize them.
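The generative assembly the artists describe, a poem recomposed from whatever links are popular at a given moment, could be sketched as follows. This is a hypothetical illustration only, not the work's source: `compose_poem` and the bookmark titles are invented stand-ins for the popular Delicious links the piece pulls in:

```python
import random

def compose_poem(bookmark_titles, lines=4, seed=None):
    """Assemble a chaotic poem, one bookmark title per line.

    Sampling without replacement from the current crop of titles gives
    an ever-changing composition: rerun it later, against a different
    set of popular links, and the poem mutates with the collective
    interests of the moment.
    """
    rng = random.Random(seed)
    picks = rng.sample(bookmark_titles, min(lines, len(bookmark_titles)))
    return "\n".join(picks)
```

Because each line is a currently popular link title, the output is naturally dense with the "hot" keywords of the moment, which is presumably why crawlers find the real poems so appetising.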

Further information about the work can be seen here and here.

Posted by: Garrett @ 4:31 pm
Comments Off
October 13, 2010

An interesting tag/marker based work I’ve been following through various versions on the Network Research Group is Jeongho Park’s Boxes (although it seems to have previously been Rearwindow, with reference to the Hitchcock movie). Images posted here are of the initial prototype, while the video below is of a more advanced version. Video of the initial prototype can be seen here.

The work tracks TUIO tags on the rear of each box and projects the resulting assembled video onto the front of the boxes. The back and front views can be seen in the image above.
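Mapping a tag tracked by a rear-facing camera onto a front projection involves at least mirroring the horizontal axis, since camera and projector face the boxes from opposite sides. A minimal sketch of that one step, assuming simple rectangular camera and projector spaces (`marker_to_projection` is an invented name, not taken from Park's code):

```python
def marker_to_projection(marker_xy, cam_size, proj_size, mirror_x=True):
    """Map a tracked marker position (rear camera space) to a point in
    projector space (front of the boxes).

    The marker position is normalised to 0..1, the x axis is mirrored
    because camera and projector view the boxes from opposite sides,
    and the result is scaled to projector pixels.
    """
    cx, cy = marker_xy
    cam_w, cam_h = cam_size
    proj_w, proj_h = proj_size
    nx, ny = cx / cam_w, cy / cam_h
    if mirror_x:
        nx = 1.0 - nx
    return (nx * proj_w, ny * proj_h)
```

A real implementation would presumably use the TUIO protocol's own normalised coordinates and per-box calibration, but the left-right flip is the part that makes the back-tracked, front-projected arrangement line up.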

Posted by: Garrett @ 1:19 pm
Comments (1)
Creative Commons License
Except where otherwise noted, all works and documentation on the domain asquare.org are copyright
Garrett Lynch 2018 and licensed under a Creative Commons Attribution-ShareAlike 3.0 License.