Final Project — Painting with Light

For the final project, I wanted to create something that encouraged movement to initiate a response in Processing. My idea was vague to begin with, which allowed me to explore and learn a variety of concepts in Processing whilst I searched for the response I wanted. For instance, I learnt how to pixelate video feedback, and how to de-pixelate regions with light. Although this response was cool, the video lagged. My main priority with this project was to build something intuitive and fun that worked as reliably as I could make it, and video lag did not line up with those design principles. I decided to aim for the core components of my project working perfectly, rather than lots of components that only half worked.

Consequently, I created ‘Painting with Light,’ a simple interactive installation where the user controls an array of circles painted on the screen with a cube of blue light.

The circles take on the colour of the brightest point in the frame. As you paint, the trailing circles slowly fade in a way that replicates painting with watercolours. The background changes colour very slowly between different blue/purple values. Below is a video demonstrating only the gradual background colour change (it is difficult to see in the interaction videos).
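The trailing fade is commonly achieved in Processing by drawing a translucent background-coloured layer each frame instead of clearing the screen, so older circles sink gradually toward the background like wet paint. A minimal stand-alone Java sketch of the underlying blend (the names and the alpha value here are illustrative, not taken from my actual sketch):

```java
// Sketch of the watercolour-style fade: each frame, every channel
// value is blended a small step toward the background colour.
public class TrailFade {
    // One frame of fading: blend a colour channel toward the
    // background by a small alpha in [0, 1].
    public static double fadeStep(double channel, double bg, double alpha) {
        return channel * (1.0 - alpha) + bg * alpha;
    }

    // Apply n frames of fading and return the resulting channel value;
    // it converges toward the background, which is what makes old
    // circles dissolve rather than disappear.
    public static double fadeFrames(double channel, double bg, double alpha, int n) {
        for (int i = 0; i < n; i++) channel = fadeStep(channel, bg, alpha);
        return channel;
    }
}
```

In the real sketch this corresponds to drawing a low-alpha `rect()` in the background colour at the top of `draw()`.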

These are simple elements of my project that I think are very important in creating the overall calming experience. I found it interesting that some users would try to make the program react faster by waving the box around erratically, but the calm nature of the program would make them slow their interaction.

Users could simply pick up the box and without any explanation could play with the program — which was exactly the type of intuitive interaction I wanted.

Painting with Light also tended to encourage a meditative interaction.

This program technically works without a controller, following the brightest part of the user’s body. However, this does not allow for directed interaction: the painted circles would appear all over the screen and would be difficult to control. Thus, I made a light box that acts as a controller and allows the user to make directed movements. It’s like painting with a brush rather than a large sponge.



The box consists of a small inner box that contains the LED circuit and battery pack. This smaller box is then suspended within the larger controller. A switch is mounted on the outside to conserve battery power.

(The controller circuit)

To keep the circuit neat, I kept most of the circuit on the breadboard except for the switch’s resistor, which I soldered and encased with heat shrink tubing. The breadboard has 6 LEDs in parallel, with three on each side so that the entire cube will be lit from within (not just one face). A 9V rechargeable battery powers the circuit.
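I haven’t reproduced my exact component values here, but the sizing of each LED branch follows the usual series-resistor formula. As a rough check with typical blue-LED figures (about a 3 V forward drop at about 20 mA — assumptions, not measurements from my circuit):

```java
// Rough series-resistor check for one LED branch:
// R = (Vsupply - Vforward) / I.
// With a 9 V supply, a ~3 V blue LED, and ~20 mA, each branch
// needs roughly a 300-ohm resistor.
public class LedResistor {
    public static double seriesResistor(double vSupply, double vForward, double currentAmps) {
        return (vSupply - vForward) / currentAmps;
    }
}
```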


The exterior of the box is transparent acrylic that I sanded until it became translucent, so it would better diffuse the light from the LEDs inside.

The smaller box is suspended within the controller by cylinders of acrylic mounted on three of the six faces. This keeps the circuit compact and unmoving within the smaller box, and also means the circuitry cannot be easily seen when looking in from the outer shell. I purposely chose cylinders as these mounts because I knew they would be visible and wanted a consistent aesthetic. A small hole is left in the outer shell for the switch. All faces are hot glued together along the box seams, except for the top face of the interior box, which I simply taped shut (securely) so that the circuit could be easily removed and recycled at the completion of the project.

With its blue glow and circular mounts, the box reminds me of energy sources in sci-fi movies, e.g. the tesseract in Thor. I like this reference because it makes it feel like there is a subliminal force within the cube that is allowing you to generate a response on the screen.

 (The tesseract in Thor — source)



The comments within the code describe the different elements I used. The code’s main components reference Processing’s brightness-tracking example (see the examples within Processing) and array example. James showed me a technique to generate the slow change in background colour: numbers are randomly generated, but in small increments, like reading values along a gently curving graph. These numbers are then mapped to the range of colour values I want for r, g, and b. I was also having some trouble with the webcam being recognised by Processing, so I included code that registers and prints the available cameras before using the first one to source pixels (which should be the webcam).
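The two core pieces here — finding the brightest pixel and mapping a slowly varying number onto a colour range — reduce to a linear scan and Processing’s `map()` function. A stand-alone Java version of that logic (the real sketch uses Processing’s built-in `map()` and the video library; these helpers just illustrate what they do):

```java
public class TrackAndMap {
    // Index of the brightest value in a flattened brightness array.
    // Processing stores pixels as one long array, index = y * width + x,
    // so this index can be converted back to an (x, y) position.
    public static int brightestIndex(float[] brightness) {
        int best = 0;
        for (int i = 1; i < brightness.length; i++) {
            if (brightness[i] > brightness[best]) best = i;
        }
        return best;
    }

    // Re-implementation of Processing's map(): rescale value from
    // [inLow, inHigh] to [outLow, outHigh]. A slowly drifting value
    // in [0, 1] can be mapped onto, say, blue values 150-255.
    public static float map(float value, float inLow, float inHigh,
                            float outLow, float outHigh) {
        return outLow + (outHigh - outLow) * (value - inLow) / (inHigh - inLow);
    }
}
```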

Processing CODE:

The first of the three most difficult aspects of my project was simplifying my idea to its core components. I was overly ambitious with my first concept, but am very happy that I pared it back to a more manageable project. The result was simple, yet successful in the design elements I wanted.

Another area of difficulty was making the circuit stable and reliable. I enjoyed soldering, and learnt a lot about making a good circuit whilst creating this project. I had to do a lot of debugging to finally get the circuit to work, but it was fulfilling, as it made me realise the importance of soldering correctly and ensuring the integrity of all elements. The final controller could be thrown around by participants yet still remain functional, which was exactly the type of integrity I wanted to ensure.

Finally, my last area of difficulty was figuring out the best way to code what I wanted to achieve. I still have a lot to learn here with respect to the coding process. Pierre gave me some really helpful advice when I asked him how to code a certain element: break down what you are trying to achieve into its simplest components. For example, ‘pixelate an image’ can be restated as ‘break an image up into individual boxes, where each box is filled with a single pixel taken from the region of the image it covers.’ This type of thinking is something I hope to apply in my future coding projects.
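Pierre’s decomposition of pixelation translates almost directly into code: step across the image in box-sized strides and give every pixel in a box the value sampled at the box’s corner. A stand-alone sketch of that idea (the image is a flattened channel array here, the way Processing stores pixels; my actual experiment may have sampled differently):

```java
public class Pixelate {
    // Return a copy of a width*height channel array where every
    // boxSize x boxSize block takes the value of its top-left pixel,
    // producing the blocky, pixelated look.
    public static int[] pixelate(int[] px, int width, int height, int boxSize) {
        int[] out = new int[px.length];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int sx = (x / boxSize) * boxSize;         // box corner x
                int sy = (y / boxSize) * boxSize;         // box corner y
                out[y * width + x] = px[sy * width + sx]; // one sample per box
            }
        }
        return out;
    }
}
```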

For a future iteration of this project, I would like to include RGB LEDs in the controller so that the colour of light can be changed, like choosing colours from a palette. This would require an Arduino, with an XBee unit installed within the controller so it could remain wireless. I would also like to have this project work in 3D space, so that the user moving along the z axis would also create a response along Processing’s z axis.

4/12/17 Final Project Prototype

When I originally came up with the idea of making a cube that remotely controls a response in Processing, I had no clear idea of what I was trying to convey. I didn’t want to make the controller just because I thought it would be cool — I wanted there to be meaning behind it. The box would initiate a response when a ball inside the cube came into contact with force sensors lining the cube’s interior. After Wednesday’s class, I decided to try to incorporate brightness tracking as well. Whilst trying to figure out how these two inputs would relate, I started thinking about torches that are charged when you shake them or crank a handle.

Thus, I want it to feel like shaking the box is ‘charging’ it, so that the strip of LEDs pulses/gradually fades up with the frequency/duration of the shaking. Then I would like a ‘fully charged’ response, e.g. the LEDs remain on and bright once a certain ‘threshold’ of shaking has been reached. When the box reaches that brightness, the brightness tracker will be calibrated to register it.
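The charge/threshold behaviour I’m describing can be modelled as a single value that each detected shake pushes up and each idle frame pulls down, with ‘fully charged’ meaning the value has crossed a threshold. A hypothetical sketch (all names and constants are mine for illustration; nothing here is implemented yet):

```java
// Toy model of the 'charging' interaction: shakes raise the charge,
// idle frames drain it, and the LEDs would stay bright once the
// charge passes the threshold.
public class ChargeModel {
    private double charge = 0;          // 0 = empty, 1 = fully charged
    private final double perShake;      // charge added per detected shake
    private final double decayPerFrame; // charge lost per idle frame
    private final double threshold;     // level at which the LEDs stay on

    public ChargeModel(double perShake, double decayPerFrame, double threshold) {
        this.perShake = perShake;
        this.decayPerFrame = decayPerFrame;
        this.threshold = threshold;
    }

    public void shake() { charge = Math.min(1.0, charge + perShake); }
    public void frame() { charge = Math.max(0.0, charge - decayPerFrame); }
    public boolean fullyCharged() { return charge >= threshold; }
    public double level() { return charge; } // e.g. mapped to LED brightness
}
```

The slow drain is also what would later let the cube become ‘uncharged’ and trigger a landscape change.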

At this point, I want the cube to act as a torch, just like in Dan Shiffman’s image processing tutorial. As the user moves the cube in front of the camera, different parts of the projected screen will be illuminated. I would like it to be a process of discovery, so that the user feels as if they are discovering a new world, with the cube lighting the way/granting access. The world they are discovering will be a video file of a sci-fi landscape. As I write this I am listening to YouTube chill-mix playlists. These videos usually have a GIF as the background to the music. Here are a couple of examples:

This inspired me to look into having GIFs as my background landscape. For instance, these GIFs of landscape shots in Blade Runner:

I would like to have multiple landscapes that the user can explore. To achieve this, I would have the landscape change when the cube becomes ‘uncharged’ i.e. after a certain period of time without shaking the cube, the LEDs will dim, and the brightness tracker will no longer register the cube — thus, the projector screen will turn blank, and the landscape will change. When the user next ‘charges’ the cube and reinitiates the brightness tracker, a new landscape will be revealed.

I want the GIFs to be sci-fi themed not only to align with the theme of SPACE, but also to evoke the experience of discovering a new world through an unfamiliar landscape.

Below is a rough sketch of the cube and its components:

I can foresee the most difficult aspects of this project being the following: creating the ‘charging’ effect with the force pad input and the LED output; using the camera and projector; creating a structurally sound cube controller; and calibrating the sensor with the output for a timely response/making the experience as intuitive and responsive as possible.



Equipment:

- XBee or bluetooth shield
- Projector screen
- If possible, a curtained/blacked-out space

To figure out:

- Using an XBee shield
- Connecting an external camera
- Using the brightness tracker with a projector and external camera
- Creating the ‘charging’/‘threshold’ effect with the force pad as input and the LEDs as output

29/11/17 Final Project Concept

For my final project, I have been playing around with the idea of making a remote, cuboid controller for a game and/or musical instrument. I want the controller to be completely remote, and robust enough that it could be thrown across a room. To create an input with this controller, I am considering lining the inside of the cube with pressure sensors and adding a rubber ball; when the box is shaken, the ball hits the pressure pads and creates an input. I am wary that the ball idea might be too dynamic to create a response that the user can intuitively control. This would be the first problem I would need to solve. I would also like this project to have a greater implication: I want the user to have an ‘aha’ moment whilst interacting with the device. This could be created with the output that the controller elicits, which I still have to come up with.

During today’s lesson on brightness tracking, I had the idea to try filling the box with LEDs and incorporating brightness tracking into creating a response. Alternatively, I could do colour tracking.

In terms of equipment, I know for certain I will need an XBee or bluetooth shield, and transparent acrylic.

27/11/17 Image Manipulation

For the image manipulation assignment, I wanted to create an image that involved some form of motion and interaction. After many experiments and failed attempts, I decided to incorporate the bouncing ball array example that we did in class. To make this sketch work with the image, I used the get() function to retrieve the colour from the image at the location of each bouncing ball. This retrieved colour is then used to fill that same ball, so that the balls bouncing across the screen gradually reveal the image.
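The colour lookup itself is just an index into the image’s pixel array: Processing’s `get(x, y)` on a loaded image resolves to `pixels[y * width + x]`. A minimal Java version of that lookup (packing r, g, b into a single int the way Processing’s colour values work, ignoring the alpha byte for simplicity):

```java
public class ImageSample {
    // Equivalent of img.get(x, y) on a flattened pixel array.
    public static int get(int[] pixels, int imgWidth, int x, int y) {
        return pixels[y * imgWidth + x];
    }

    // Pack r, g, b (each 0-255) into a single colour int.
    public static int color(int r, int g, int b) {
        return (r << 16) | (g << 8) | b;
    }
}
```

In the sketch, the value returned for each ball’s position is passed straight to `fill()` before drawing that ball.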

I took this one step further by adding a second image. I made another class (movingCircle2) for the second image using most of the same parameters as the first class. The main difference is that in the draw loop, this second class is not activated until the mouse is pressed. Thus, when you click the mouse, the second image appears to lay itself over the first. If you release the mouse, the second image stops forming, and the first image eventually takes over.

I think a really cool application of this code would be merging portraits of people. If I were to revisit this project, I would take identical portraits of two people (siblings, for instance) and use them as the two images laid over each other.





22/11/17 Computing

Computing used to be something that I associated with maths, a discipline that I largely avoid because it feels antithetical to the freeform world of creativity. Little did I know that computing is a creative world unto itself. I always considered myself to be an artist, but felt myself distanced from the art world. I adore art, but cannot believe in it or commit myself to it the way other artists do. Rather than see myself as an artist, I have come to identify more as a creative or maker. There is a practicality in that definition that is grounding. I feel this same grounded-ness when I create using computing. It holds a practicality and universal relevance in our technology-laden world that truly appeals to me. I believe art can induce change, but through an abstraction that I find more difficult to grasp when compared to the potential of computing for solving current world issues.

Not only does computing allow you to create extensively and without limitation, it is also a language. I often heard code referred to as a language, but didn’t actually understand the truth in this statement until I started to write it. Writing code and discovering new functions felt similar to learning vocabulary in Mandarin class. I was gathering material to build conversations and relationships — the only difference now is that I am using an interface to achieve this communication between me (the creator) and the user.

20/11/17 LCD Assignment

For this assignment we used Processing to communicate information to a liquid crystal display (LCD). When the mouse moves over a gradient in Processing, information is displayed on the LCD.

Arduino CODE:


Processing CODE:


6/11/17 StarCatcher Game (Open Studios)

For this assignment I wanted to create a game working in 3D. Although it was tedious, I really enjoyed the learning process. The game is called ‘StarCatcher,’ and the objective is to ‘catch’ stars by intersecting them with a box controlled by three potentiometers. The potentiometers control the x, y, and z movement of the box. The stars are an array of balls randomly scattered throughout the 3D space. They rapidly vary in size, which creates the impression of flashing. When the box intersects a ‘star,’ the background flashes/rapidly changes colour.
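The catch test reduces to a box-sphere intersection in 3D: clamp the star’s centre to the box and compare the clamped distance with the star’s radius. A stand-alone sketch of that check (my actual sketch may have used a simpler centre-to-centre distance test; this is the standard axis-aligned version):

```java
public class Catch {
    // True if a sphere at (sx, sy, sz) with radius r overlaps an
    // axis-aligned box centred at (bx, by, bz) with half-size h.
    public static boolean caught(double sx, double sy, double sz, double r,
                                 double bx, double by, double bz, double h) {
        double cx = clamp(sx, bx - h, bx + h); // closest point on the box
        double cy = clamp(sy, by - h, by + h);
        double cz = clamp(sz, bz - h, bz + h);
        double dx = sx - cx, dy = sy - cy, dz = sz - cz;
        return dx * dx + dy * dy + dz * dz <= r * r;
    }

    private static double clamp(double v, double lo, double hi) {
        return Math.max(lo, Math.min(hi, v));
    }
}
```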

Processing Main CODE:

Processing Moving Circle Class CODE:

Arduino CODE:

Here’s a screenshot of the game when it starts. You can’t tell in the picture, but the balls are all ‘flashing,’ an effect created by making the circle size random and constantly changing.

1/11/17 Servo & LED (Processing/Arduino Communication)

In the first part of this assignment, I elaborated on the ‘dimmer’ communication example we did in class to make a servo respond to the mouseX position on a gradient created in Processing:

Servo Arduino CODE:

Servo Processing CODE:
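The dimmer pattern underlying this sends a single byte over serial: Processing maps mouseX across the window to 0–255 and writes it, and the Arduino maps the received byte onto the servo’s 0–180 degree range. Both mappings sketched in Java (the window width and ranges are the usual defaults from the class example, not verified against my exact code):

```java
public class ServoMap {
    // Linear rescale, like Processing's and Arduino's map() (integer form).
    public static int map(int value, int inLow, int inHigh, int outLow, int outHigh) {
        return outLow + (value - inLow) * (outHigh - outLow) / (inHigh - inLow);
    }

    // Processing side: mouseX in [0, width) -> serial byte [0, 255].
    public static int toByte(int mouseX, int width) {
        return map(mouseX, 0, width - 1, 0, 255);
    }

    // Arduino side: received byte -> servo angle [0, 180].
    public static int toAngle(int b) {
        return map(b, 0, 255, 0, 180);
    }
}
```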

The second half of the assignment was to turn an LED on and off by clicking on different rectangles in processing:

LED Arduino CODE:

LED Processing CODE:


30/10/17 Simple Digital Art

Initially I wanted to recreate Walter Gordy’s digital artwork Android Monster, but I quickly realised that achieving that level of complexity was not feasible for this assignment. When I am more adept at using Processing I will pursue this project, but for now I have made a simple interactive art piece, where circles of random colours appear in random positions. The user can ‘erase’ the circles with the mouse, drawing a path through them.