Trying out Google SketchUp for our Mobile Sauna project
I am collaborating on a public art project with 3 other artists. It has been a struggle, but the goal is worth it: a mobile sauna that will keep us going through the winter. As another Syracuse winter circles the runway for a soon-to-be landing, we’re keeping our Kickstarter campaign going strong in order to make the deadline of January 2nd and reach our funding goal. But winter is coming, and we are forced to work on the sauna before we get our funding. Google SketchUp is helping us by letting us quickly try out different internal layouts.
Kickstarter’s all-or-nothing policy means that we won’t get any of our pledged funding if we don’t reach our full goal. Despite this, we have no choice but to hope for the best and work on the sauna anyway, because winter is coming! We’ve used up our modest grant from SU’s VPA and are going into personal debt in order to at least protect what we’ve done so far from the rapacious winter. We built the outer shell on top of our trailer and will soon be able to fire up our stove to keep warm during construction. In order to communicate to our supporters the imminent mobile sauna experience, we decided to create a 3D sketch of what the internal space will be like. Incidentally, the very successful PrintrBot Kickstarter campaign mentions Google SketchUp as an accessible modeling tool, so I decided to try it.
I installed the free version of the program and spent about an hour watching the 4 introductory how-to videos (here is part 1). Google SketchUp has a lot of really nice snapping behavior that lets me easily align the edges of objects. Google SketchUp is smart: in the middle of drawing an edge I can hover over other edges to find their mid-points, and Google SketchUp generates guide-lines on the fly. The program provides an easy interface to Google’s 3D Warehouse, which allowed me to download a variety of human models and play with different configurations of internal sauna seating. Trying out different seating arrangements was straightforward, and every team member’s ideas could be quickly tried out. In the end, we went with the most colorful example for our Kickstarter update.
The World’s First Aquaponics Toilet Overflows With Life
I am working on a project inspired by a type of farming that involves a closed recirculating water system. Aquaponics farming can end up using only 2% of the water used for conventional fish farming. This project is illustrative of the nitrogen cycle and the cycle of consumption.
I am working on an installation that will include this toilet. The installation will premiere next fall at The Other New York 2012 biennial in Syracuse, NY.
I harvested the first crop of basil to make pesto escargot, which turned out quite well. I brought them to a potluck dinner organized by Sam Van Aken in honor of Shimon Attie. I think Shimon thought the snails were alright!
Check out the video tour and meet the fish!
Onondaga Lake Remediation Machine Sculpture
This machine, which is based on a scale model of Onondaga Lake, is fully equipped to distill lake water and is used to demonstrate a spectrum of remediation solutions from the realistic to the Utopian.
Onondaga Lake became a Superfund site in 1994 due to the detrimental effects of industrial and municipal waste disposal over the last century. The future of the lake has become both a scientific and political issue. The creative specialists of the DS Institute bring a cultural perspective to the table. Through sculpture, poster exhibitions, lectures, and video, the DS Institute provides a variety of information and interpretations of the history, current dynamics, and planned future of the lake’s ecosystem. The DS Institute produced a custom-built sculptural model that it uses as a pedagogical tool during lectures and videos.
SoYummy Lecture Performance at McGill
On Wednesday, August 3rd, I gave a lecture performance at the robotics building on the McGill campus in Montreal. The audience was good in terms of numbers, energy, and inquisitiveness. I presented research and ideas stemming from my experimentation with computer vision software being developed at McGill. Much of the audience was already familiar with the research behind the software, so I had the pleasure of presenting ideas from the new perspectives of art theory and philosophy.
According to Charlie Hoban (an educational technology theorist), the lecture as performance is achievable by only 2% of lecturers and is worth digitizing for the future. Even though the bandwidth limitations he describes for disseminating such lectures will soon seem laughable, I agree that the question of bandwidth is central to this subject. Lecture as performance is an effective information dissemination vehicle because it is so high bandwidth: it combines the visual, aural, gestural, and sometimes tactile and even olfactory senses.
Last year’s curatorial inquiry on the subject at the Museum of Contemporary Art in Belgrade asks: “is contemporary art a product of fascination with aesthetic objects or a space of knowledge production?” The most successful art created these days falls into the latter category. It is art that engages the audience in confronting real changes happening in the current world. Contextual Art, as described by Jan Swidzinsky, is a particularly attractive approach when presenting to a scientifically minded audience because Contextual Art accepts both art and science as disciplines capable of responding to the rapid changes of our technologically accelerated world. From this angle, Lecture as Performance is a post-modern practice because of its emphasis on the communication of information through the melding of pedagogic and theatrical approaches (and sits in opposition to Modern Art as art for art’s sake).
Joseph Beuys, who is one of the originators of the Lecture as Performance form (stemming from his ‘Social Sculptures’), wrote that “The most important discussion is epistemological in character.” This is especially true now in our economy of abundance, where more media is being produced than anyone can possibly consume. When the information about our world comes to us in such a fragmented state, from so many different ‘news’ outlets, and filtered through so many layers of special interests, it’s hard to know what is true and what is bent. Epistemology helps us answer how we know what we know, and Lecture as Performance is an artistic approach to investigating these questions.
Walt Whitman, late in life, lamented that he didn’t tour and read his poetry to the masses to increase his audience. Plenty of examples of artists and scientists disseminating ideas through Lecture as Performance hang in our collective memory. From Tesla’s highly influential demonstrations of radio, to Nabokov’s lecture tours through the United States, to Laurie Santos’s insightful TED talk on the “monkey economy” and human irrationality, we learn of key ideas right from the horse’s mouth. Despite the continuously increasing complexity of the world around us, the old adage of ‘if you can explain it to a four-year-old, you really understand it’ is more true than ever. Lecture as Performance is that ultra-high bandwidth communication channel on which the most relevant truths of today can set sail, powered by nothing more than a person’s breath.
Through the combination of my creative practice and access to the latest computer vision research (in the form of Yogesh Girdhar’s software), I am able to devise artistic experiments and achieve insight. Through creative application of this research to the problems of our economy of abundance, we can open new engagements between Art and Science. The impact of Temporal Semantic Compression on culture will be determined with time and experimentation, and I’m excited to bring the latest insights to interested audiences. The poetic engagements with Art History and Philosophy are more illuminating to the scientific crowd, while explaining the science behind the software is more enlightening to the art audience. The presentation remains accessible even to the most general audience, which is often interested in these subjects and in the impact of new technology on their lives.
The lecture performance at McGill went really well, and I would like to thank Yogesh Girdhar and Gregory Dudek once again for inviting me there. One excellent criticism of the talk was that I didn’t take the opportunity to delve into Lecture as Performance itself: I stayed in character during the question/answer session and didn’t step up to the meta-level. From the Lecture as Performances that I’ve seen personally, it doesn’t seem common for the performer to break out of character during the lecture or even the question/answer portion (if there is one). One reason this boundary is unclear to me is that the lecture performance itself is not an act: it is an honest expression of subjective and objective truths. Thanks to everyone for coming out and being part of the event, and I look forward to the next one.
SoYummy hits Subtle Technologies 2011
I just caught up on the sleep I missed during this year’s Subtle Technologies conference in Toronto where Yogesh Girdhar and I did a poster presentation on the SoYummy project. I wanted to do a post about the conference, who I met there, and thoughts on going forward.
The third annual Subtle Technologies festival/conference continued in the Art+Science vein of previous years. There wasn’t a specific theme this year, but the talks and posters were related well enough to provide a consistent experience. Just because the conference was Art-oriented doesn’t mean that all the talks were exciting, though. Australian artist Mary Rosengren’s presentation about her collaboration with scientists was too general, at times aggravating, and never resolved into a coherent idea. Robyn Moody—who creates fascinating kinetic sculptures related to relevant social topics—was unable to speak about the work in a cohesive way and wasted ten whole minutes on an unrelated introduction about the woes of vaccine paranoia.
I experienced several stellar presentations. I wasn’t able to catch all of them, but the scientists Ben Schumacher of Kenyon College and the zany Stephen Morris gave illuminating and engaging talks ranging from the consequences of Einstein’s relativity to patterns in nature. The vibrant Ben Schumacher explained how traveling faster than the speed of light would also mean traveling back in time. Stephen Morris showed how the fascinating Giant’s Causeway formation in Ireland came to be and how the same uniform cracking can be repeated at home.
Impassioned presentations by Italian curator Marco Mancuso illuminated larger connections between scientific and artistic research. Two dance related presentations by artists and choreographers Gail Lotenberg and Carl Flink showed innovative collaborations with scientists which resulted in beautiful dance performances captured on video. Jenny Leary’s nuanced advances in magnetic craft techniques traced the trajectory of an artist/inventor.
While some presenters strategically left out specifics, a few did so to their detriment. Riccardo Castagna and Valentina Margaria gave a concise and engaging presentation about their multi-modal Biomatics Virus. The idea itself seemed a bit of a rehash of the Neal Stephenson novel Snow Crash. The music that they generated from a 3D model of the H1N1 virus sounded great, though. However, when asked by the audience about their process for converting virus structure data into music that sounds good, they answered that they did nothing to smooth the raw data, further obscuring the connections between their science and art. Alan Majer’s (of Goodrobot.com) presentation, on the other hand, was strategically vague in order to open up room for the imagination: how would our society change if we could physically connect our brains together?
Yogesh Girdhar (aka Yogi) and I got a lot of attention with our poster, especially because we ran a demo of the latest software on a laptop. The software uses open source computer vision libraries to quantify the incoming video. It then uses Yogi’s unique qualification algorithm to decide how surprising an incoming video frame is (compared to what was already seen). If the input is surprising, the still is added into the summary of 9 images. We presented the demo as a contest, encouraging people to try to surprise the computer and make it into the summary. This was mega fun, and here is the resulting summary of the winners:
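For the technically curious, here is a toy Python/OpenCV sketch of the surprise idea behind the demo. This is my own illustration rather than Yogi’s actual algorithm: the color-histogram feature and the threshold are assumptions chosen purely to show the mechanic of only admitting frames that differ enough from the current summary.

```python
# Toy sketch of the "surprise" mechanic -- NOT Yogi's actual algorithm.
# A frame is admitted to the summary when its coarse color histogram is far
# enough from every frame already in the summary. Threshold is an assumption.
import cv2
import numpy as np

SUMMARY_SIZE = 9
SURPRISE_THRESHOLD = 0.4  # assumed value; tune for your footage


def frame_signature(frame):
    """Coarse HSV color histogram used as a stand-in for real CV features."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()


def summarize(video_path):
    summary = []  # list of (signature, frame) pairs
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        sig = frame_signature(frame)
        if not summary:
            summary.append((sig, frame))
            continue
        # How far is this frame from the closest frame already in the summary?
        nearest = min(np.linalg.norm(sig - s) for s, _ in summary)
        if nearest > SURPRISE_THRESHOLD:
            summary.append((sig, frame))
            if len(summary) > SUMMARY_SIZE:
                summary.pop(0)  # keep only the 9 most recent surprises
    cap.release()
    return [frame for _, frame in summary]
```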
Socially, I was lucky to meet and hang out with some amazing artists. Ted Hiebert, an artist and educator from the University of Washington Bothell, blew our minds with his “plausible impossibilities” and descriptions of his telekinesis contests. Doug Jarvis, an artist-in-residence fellow at the Centre for Studies in Religion and Society at the University of Victoria, BC, described the implications of brain cells in our gastrointestinal tract. I was psyched to meet a key member of the Yes Men: the artist and animator Patrick Lichty. Patrick is a creative and generous artist and educator who is now working with Second Life and on cloning William S. Burroughs from a preserved turd. I also enjoyed conversations with Cara-Ann Simpson, Gail Kenning, Eva Kekou, Daryn Bond, Willie LeMaitre, and ginger coons (who gave me a copy of her amazing magazine produced using only free and open source software), among others. The social dynamics culminated with dancing at the party thrown by conference organizer and Director of Programs Jim Ruxton, where we set up yet another Surprise Contest. Here are the summary images of the winners:
In conclusion, I would like to thank Jennifer Dodd for helping with the logistics and poster curator Lorena Salome for her vital help with the poster. Thanks to Subtle Technologies for having us there. It seems that the talk slots were reserved for people who have taken their projects out into the world and can document those interactions with other people. I would like to get our project to that level and do a presentation at Subtle Tech next year.
Why did Foursquare delete my account‽
My Solace 2.0 project encountered some interesting snags. First, Bird Library employees pulled and threw away my brochures. When I was first figuring out the brochure design I kept thinking about how the installation had to conform to the library venue in unique ways. For example, due to a history of artwork theft, the installation had to be ensconced in the heavy wooden case. This limitation turned into a benefit when the case proved to have just enough space for the work. I thought that since the library is informing and encroaching on my artwork in an interesting way, I should reflect this reality and creatively encroach on the library—just to smooth out the borders between the art and the library.
The library brochure, called “Research Helper,” is actually a well-designed note-taking booklet with lots of blank pages inside. I decided to write my artist statement on the blank slate pages of the library’s Research Helper brochure. But this proved tedious, so I decided to scan the brochure, add my text, and print out copies. I modified the design and added more icons and information, while keeping the original design credit and the library logo. This worked well for about three weeks, after which the library became concerned about their logo being used in such an unofficial way. What they did was not censorship per se, but a legally motivated action that I can’t really argue with. In place of my brochure, Ann Skiold printed out the library’s blog post about the installation and stuck copies in the brochure holder, which was a positive gesture.
This was not the only snag, however. I noticed after about a month of running Solace 2.0 at the library that new points were not being created, while the old points were still being pruned normally. The face was starting to thin out and disappear. It appears that Foursquare didn’t like something about the project, and deleted my account without warning. I read all the terms and policies before I did the project, and wasn’t breaking any rules. I kept my API use below the request frequency limit and didn’t create multiple accounts. We will see what they say.
Email to Foursquare:
“Hello,
I noticed about a month ago that my Foursquare account was deleted without warning. I wasn’t breaking any of the terms of use, and was using the API within the API frequency use limit. I am an artist, and I am using Foursquare for an art project. Here is a brief description of the project. The second blog post down is an artist statement.
https://misharabinovich.com/blog/?cat=18
I was wondering why my account was deleted. I wasn’t doing anything illegal. I’m getting ready to release detailed documentation about the project and my findings. I think you would be interested in what I discovered. However, I wanted to find out what happened with my account before I did so. My user id was 3283208
Any information would be greatly appreciated!
Misha Rabinovich”
Solace 2.0: A Performance in Radiation installation is up!
Syracuse University’s libraries have extensive archives of both common and rare media. Their collection just got that much more unique with the inclusion of Solace 2.0: A Performance in Radiation, an installation/performance by yours truly, Misha Rabinovich. The project deals with making one’s mark on the world, manifest destiny, surveillance, and social networks.
This project is an installation because of its transmedia form encompassing books, movies, pictures, and video. The installation includes a computer which is hosting and running the Solace 2.0 Social Media Platform. This platform automatically engages social networks so the user (currently just myself) can sit back and focus on other things. The project is also a performance because the identity of the user is split up through the engagement with the platform into a separate online entity that travels on its own.
The installation features several images representing various attempts by different entities to “make a name for themselves” and to be “masters of their domain” ranging from the monumental and permanent to the feeble and ephemeral. Among these images stands a computer monitor, framed in glossy black and gothic red. The monitor shows a grey map with red map markers specifically placed to outline the face of the user. As the points disappear and reappear over time the face exhibits a shimmering quality.
The points represent the locations of actual real-world venues (restaurants, businesses, etc.) that have been registered in a geolocation game called Foursquare. To play such geolocation games, people ‘check in’ to locations they are currently at using their GPS-enabled phones. Solace 2.0 checks the user into locations automatically, without the user having to go anywhere. It also publishes the checked-in locations on Twitter.
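For context, here is a rough Python sketch of the kind of automation this involves; it is not the installation’s actual code. The venue IDs are hypothetical placeholders, and it assumes a Foursquare v2 OAuth token and the checkins/add endpoint that the v2 API offered at the time.

```python
# Rough sketch of automated check-ins -- not the actual Solace 2.0 code.
# Assumes a Foursquare v2 OAuth token; venue IDs below are placeholders.
import time
import requests

OAUTH_TOKEN = "YOUR_FOURSQUARE_OAUTH_TOKEN"   # placeholder
API_VERSION = "20110601"                      # v2 API "v" date parameter

# Hypothetical venue IDs whose map markers trace the outline of a face
FACE_VENUES = ["VENUE_ID_1", "VENUE_ID_2", "VENUE_ID_3"]


def check_in(venue_id):
    """Check the user in to a venue without them going anywhere."""
    response = requests.post(
        "https://api.foursquare.com/v2/checkins/add",
        params={
            "venueId": venue_id,
            "broadcast": "public",       # reportedly "twitter" could be added to cross-post
            "oauth_token": OAUTH_TOKEN,
            "v": API_VERSION,
        },
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    for venue in FACE_VENUES:
        check_in(venue)
        time.sleep(60)  # stay well under the API rate limit
```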
Everyone is encouraged to visit and experience the installation firsthand at the Bird Library and to pick up a free brochure. Please stay tuned for more information and updates on this project (you can even grab the RSS feed for your reader here). Thank you to Ann Skiold for opening the library for this work. Thanks to Holly Rodricks for proofreading and Caitlin Foley for exhibition consultation and general support. Thank you to Megan Foley for sponsoring this installation. Thank you to Matthew Williamson for pointing out how this falls in with “Griefing” (I will be writing about this soon).
Solace 2.0 Statement
To make a name for oneself, to be master of one’s domain, and to leave one’s mark on the map are goals shared by many. Solace 2.0 establishes and maintains one’s identity in today’s Internet-enabled economy of attention while preserving the user’s personal integrity. And it does it automatically, so the user can sit back and concentrate on what’s important.
Social networks seek to conform individual identities into their molds in order to monetize people. The fundamental bargain presented to users of Internet-based social networks is: if you publish private information about yourself, you will reap social rewards. Those who seek attention as capital accept this bargain. But the requirement to conform one’s identity into a social network’s profile is a farce. The Internet—with its ability to robustly connect people across great distance—doesn’t reflect our physical existence but copies, fractures, and multiplies our individual identities. The Internet’s commercial power necessitates the compression of our identities into tokens of trust so that we can buy and sell. These tokens of trust are examples of our newfound disembodied, autonomous, and powerful telekinesis. Our actions online persist in time, creating our data body, which is also a shadow sometimes appearing to dance of its own volition. Each of us is in many places at once.
“Griefing” means harassing online communities (often anonymously) to remind them not to take themselves too seriously. “Griefing” can result in online communities feeling grief. The opposite of grief is solace. “Griefers” feel solace while griefing, and solace supplants grief when one is consoled or relieved.
The Solace 2.0 platform checks you in and keeps your persona fresh.
Yummy Faces: Bringing the Synopsis Into the Synopticon
More and more of us find ourselves living in the panopticon where the few surveil the many. If we earn the better vision of the future according to David Brin [1], we will all have access to surveillance data. In this possible future, we will each be able to employ what David Lyon [2] and Thomas Mathiesen [3] called the ‘Synopticon’ where the many can watch the few. The first problem in surveillance is always the question of who has access to the data. Eventually, when everyone has the data, the issue of computational power becomes primary so that the data can be sorted/mined. Finally, as the manicured data is delivered, our own perception of the data becomes primary.
As I described previously (here and here), I’m using computer vision software called SoYummy to create synopses (summaries) of videos. The software dissects and categorizes visual data and creates a set of the most interesting still images. The word synopsis itself means ‘seeing together’ and the summaries provide us with the ability to see the whole field of data together in a smaller representation. The concept of ‘seeing together’ at the root of the Synopticon is also essential in balancing state and corporate power: the watchers are themselves watched.
But what are we looking for? Currently, surveillance is mostly used to discover some kind of deviancy. I am intrigued by the inverse: normalcy. Luckily, the distance matrix generating tool that is part of the SoYummy suite compares a set of images and can be used to find the most normal and the most anomalous one. Driving this work are such questions as:
• What rules define normalcy, and how relative is it?
• What can we learn if we look for the most normal and most unique person?
I decided to use a set of portraits that was already available. I remembered this summer’s performance in NYC by Marina Abramović and the resulting photo archive of audience portraits. What attracted me to this set is the active gaze of the participants. They are not ambivalent passersby caught on camera but are actively looking, even staring, at the artist. Marina herself, sitting almost motionless in the same seat in the gallery all day, every day for weeks, became a surveillance camera. In a way, she was only ingesting views of the visitors, as if they were life-giving sustenance. I harvested the portraits (taking out images of Marina herself) and strung them into a video.
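For reference, stringing a folder of stills into a video takes only a few lines. This is a generic Python/OpenCV sketch rather than my exact pipeline; the folder name, frame rate, and output size are assumptions.

```python
# Generic sketch: turn a folder of portrait stills into a video.
# Not my exact pipeline; folder name, frame rate, and size are assumed.
import glob
import cv2

FPS = 24                # fast enough for the persistence-of-vision effect
SIZE = (640, 480)       # assumed output size

writer = cv2.VideoWriter(
    "portraits.avi", cv2.VideoWriter_fourcc(*"MJPG"), FPS, SIZE
)
for path in sorted(glob.glob("portraits/*.jpg")):
    img = cv2.imread(path)
    if img is None:
        continue        # skip unreadable files
    writer.write(cv2.resize(img, SIZE))
writer.release()
```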
I generated a distance matrix of all the portraits. The result is a text file of space-separated floating point numbers. There are 1490 portraits in the archive. Each image got a distance score to every other image, resulting in 1490² ≈ 2.2 million values. My MacBook Pro (2.8 GHz Intel Core 2 Duo, 4 GB RAM) took days to complete this task. While the computer worked, I asked humans to give me their best guess of who the machine would pick out as the most normal and the most unique looking visitor. What I got was these two people:
Finally, my computer churned out the distance matrix. I took the average of each portrait’s distance from all the others. Here are the algorithmically picked most normal and most anomalous portraits:
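For anyone who wants to try something similar, here is a small Python/NumPy sketch of this last step. It assumes a whitespace-separated N×N distance matrix file like the one SoYummy produced; the file name is a placeholder.

```python
# Sketch of the analysis step: load an N x N distance matrix and pick the
# portrait with the lowest average distance (most "normal") and the highest
# (most anomalous). File name and square layout are assumptions.
import numpy as np

values = np.loadtxt("distance_matrix.txt")   # space-separated floats
n = int(round(np.sqrt(values.size)))
matrix = values.reshape(n, n)

avg_distance = matrix.mean(axis=1)           # mean distance of each portrait to all others
most_normal = int(np.argmin(avg_distance))
most_anomalous = int(np.argmax(avg_distance))
print(f"most normal: #{most_normal}, most anomalous: #{most_anomalous}")
```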
One person in my class to whom I showed the results immediately said, “Your software is racist.” However, it is important to realize that our starting data set skews the results. I had a studio visit with the satirical art-world institutional-critique painter William Powhida soon after this, and I showed him this project. According to him, this summary exposed the demographics of the MoMA audience, telling us more about that institution than about the individuals.
The next step was clearly to contact Marina and ask her who she remembered the most and compare it to these results. Until I hear from her, I can only imagine that this software gives us some sense about what she remembers from the performance, and perhaps what images permeated her dreams. I wondered if she dreams of one single representative audience member. I decided to generate this epitome of the audience by finding the most normal set of eyes, the most normal nose, and the most normal mouth across all the portraits. I roughly sliced up the images into regions and generated distance matrix comparisons on all the eyes, all the noses, and all the mouths. Here is my approximation of what Marina dreams about now:
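Here is a rough sketch of how such a composite could be assembled, assuming the portraits are already roughly aligned and using fixed crop boxes for the eyes, nose, and mouth. The coordinates and the histogram-based scoring are assumptions for illustration, not my exact process.

```python
# Rough sketch of assembling a "most normal" composite face from roughly
# aligned portraits. Crop boxes and histogram scoring are assumptions.
import glob
import cv2
import numpy as np

# Assumed crop boxes (x, y, w, h) within a normalized 400x400 portrait
REGIONS = {
    "eyes": (60, 120, 280, 70),
    "nose": (150, 190, 100, 90),
    "mouth": (120, 280, 160, 70),
}


def region_signature(img, box):
    """Color histogram of one facial region, used as a crude similarity feature."""
    x, y, w, h = box
    patch = img[y:y + h, x:x + w]
    hist = cv2.calcHist([patch], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()


portraits = []
for path in sorted(glob.glob("portraits/*.jpg")):
    img = cv2.imread(path)
    if img is not None:
        portraits.append(cv2.resize(img, (400, 400)))

composite = portraits[0].copy()  # any portrait works as the starting canvas

for name, box in REGIONS.items():
    sigs = np.array([region_signature(img, box) for img in portraits])
    # The most "normal" region has the smallest mean distance to all the others
    mean_dist = np.array([np.linalg.norm(sigs - s, axis=1).mean() for s in sigs])
    winner = portraits[int(np.argmin(mean_dist))]
    x, y, w, h = box
    composite[y:y + h, x:x + w] = winner[y:y + h, x:x + w]

cv2.imwrite("composite.jpg", composite)
```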
What can this exercise tell us about our own dreams? We are constantly barraged by media designed to affect us consciously and subconsciously (such as advertising). It is harder to detect and reason through the subconscious influence because it’s meant to alter our feelings and penetrate to the deeper levels of our psyche. They might even control our dreams. In order to deprogram ourselves from this psychological influence, a deconstruction of our dreams is necessary.
Above is a music video set to Nina Simone’s classic song about summary identity. Summarization software was used to perform social sorting on the actual features of audience participants at MoMA’s The Artist Is Present installation. The most normal facial features were excised and used to create a composited “most normal” portrait.
References:
1. David Brin, The Transparent Society
2. David Lyon, Surveillance after 9/11 available in full here.
3. Thomas Mathiesen, The viewer society: Michel Foucault’s “panopticon” revisited
Marina Abramović vs Nina Simone
I created this video while working on my computer vision surveillance project. I harvested the audience portraits from The Artist Is Present performance this past summer (2010) at MoMA. I mined this data set for the most normal and most unique looking person (the results are coming in the next post). When I strung the images together into a video, I was pretty amazed by the result. When the images are blowing by at a high rate, persistence of vision makes it seem that the individual features of the different people are being substituted. The effect is similar to Michael Jackson’s Black or White music video, but without the explicit morphing.
I also ended up harvesting the eyes, noses, and mouths from these photos and finding the most normal of each. I think Nina Simone’s song I Ain’t Got No (I Got Life) is ideal in this case, as she really addresses the ‘the whole is more than the sum of its parts’ aspect of our individual identities.
Here is the video (it’s also here on its own page with embedding instructions).