L3D Cube visualizations Part 3: webcam stream projection

Overview

Each frame of the video stream is divided into an 8×8 grid of squares of equal surface. The average RGB value of the pixels in each square is extracted and used to recompose a smaller image, which is then projected onto the cube.

With the enable3d option set to true, the past 7 frames are stored and displayed on the back layers of the cube, with a delay set by the updateFrameRate variable.

The webcam stream could easily be replaced with any other video stream if need be.

Walkthrough

Libraries

  • L3D library: we already know this one from the previous parts of this series.
  • Processing Video library: it comes with Processing, so there is no need to install it. It is used to communicate with the computer’s webcam.

The reduction algorithm

Since each face of the cube is an 8×8 display, we need to downsize the original image from the webcam so that it fits in 64 pixels.

It is important to understand how the algorithm used to reduce the number of pixels of an image works.

You will quickly see that, if broken down into small pieces, it is not a difficult concept to grasp at all.

Let’s take the example of an image 32 pixels wide and 32 pixels high, i.e. 1024 pixels in total. Say we want to downsize it to a 4×4 image.

First we divide the initial image into 16 squares of equal surface, each 8×8 pixels. We then iterate through the pixels of each of these squares, extracting the average R, G and B values of all the pixels lying in the same square.

Each average becomes one pixel of the re-composed, downsized image.
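To make this concrete, here is a minimal Processing sketch of the averaging step (averageDown() and its parameter names are illustrative, not the ones used in the actual sketch):

```java
// Illustrative averaging function: reduce a square image to outputSize × outputSize
// by averaging the R, G and B values of every pixel inside each square region.
PImage averageDown(PImage src, int outputSize) {
  int cell = src.width / outputSize;   // side length of each square region
  PImage out = createImage(outputSize, outputSize, RGB);
  src.loadPixels();
  out.loadPixels();
  for (int gy = 0; gy < outputSize; gy++) {
    for (int gx = 0; gx < outputSize; gx++) {
      float r = 0, g = 0, b = 0;
      // sum the channels of every pixel lying in the current square
      for (int y = gy * cell; y < (gy + 1) * cell; y++) {
        for (int x = gx * cell; x < (gx + 1) * cell; x++) {
          color c = src.pixels[y * src.width + x];
          r += red(c);
          g += green(c);
          b += blue(c);
        }
      }
      int n = cell * cell;
      // the average becomes one pixel of the downsized image
      out.pixels[gy * outputSize + gx] = color(r / n, g / n, b / n);
    }
  }
  out.updatePixels();
  return out;
}
```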

Here is a poorly made gif that might help you understand the idea, as well as an example of the result obtained with this method.

The code

Link to the repository.

Server: Processing sketch

Link to the sketch.

Capture webcam stream

Import the library used to stream the webcam feed and declare a Capture global variable in order to store the frames.
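For example:

```java
import processing.video.*;   // Processing Video library

Capture cam;                 // stores the incoming webcam frames
```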

Start the webcam feed in setup().
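A typical setup() looks like this (the 640×480 capture size is an assumption; we will extend setup() later when adding the cube):

```java
void setup() {
  size(640, 480, P3D);                 // P3D renderer, needed later to draw the cube
  cam = new Capture(this, 640, 480);   // open the default webcam at 640×480
  cam.start();                         // start streaming frames
}
```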

You also need to register a new event listener that will read the new frames incoming from the webcam.
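The Video library calls captureEvent() whenever a new frame is available:

```java
// called by the Video library every time the webcam has a new frame ready
void captureEvent(Capture c) {
  c.read();   // load the new frame into cam
}
```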

Downsize frames images

We create a new pixelateImage() function that reads a frame from the webcam, resizes it to a square so that it fits the cube, and creates a new output image composed following the aforementioned algorithm.
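A possible sketch of that function, reusing the averageDown() helper from above (the centered crop and the function names are illustrative, not the original code):

```java
// Crop the latest webcam frame to a centered square, then reduce it to
// outputSize × outputSize with the averaging algorithm described earlier.
PImage pixelateImage(int outputSize) {
  int side = min(cam.width, cam.height);
  PImage square = cam.get((cam.width - side) / 2, (cam.height - side) / 2, side, side);
  return averageDown(square, outputSize);
}
```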

We also create a function that will allow us to display the resulting image in the rendered view for test purposes.
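For instance, drawing the reduced image scaled up, one rectangle per pixel:

```java
// Display the reduced image in the sketch window, one rectangle per pixel,
// so the result of the reduction can be checked visually.
void drawOutputImage(PImage out) {
  float cell = (float) width / out.width;   // size of one upscaled pixel on screen
  noStroke();
  for (int y = 0; y < out.height; y++) {
    for (int x = 0; x < out.width; x++) {
      fill(out.get(x, y));
      rect(x * cell, y * cell, cell, cell);
    }
  }
}
```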

Project to cube

We use the functions we just created to project the output image’s pixel values onto the voxels of the cube’s first (front) layer.

As mentioned, if enable3d is set to true, the past webcam frames are sent to the back of the cube with a delay.

We begin by importing the L3D library and setting the global variables.
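Something along these lines (the variable values are assumptions; the sketch’s actual defaults may differ):

```java
import L3D.*;              // L3D cube library

L3D cube;                  // the cube we render and stream to
boolean enable3d = true;   // also display past frames on the back layers
int updateFrameRate = 4;   // draw() calls between two frame shifts (assumed value)
int outputSize = 8;        // side of the reduced image, 8 to match the cube face
```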

Then we start the cube in setup() and prepare the view.
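Roughly, the additions to setup() look like this (the drawing call is assumed from the L3D library examples; adapt the streaming setup to how your cube is connected):

```java
// additions to setup(), after cam.start()
cube = new L3D(this);   // initialize the cube object from the L3D library
cube.enableDrawing();   // render a virtual cube in the sketch window
// plus the streaming setup used in the previous parts of this series,
// so the voxel data reaches the physical cube
```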

Then we define the function used to render the cube.
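Here is a hypothetical version of that function; the frame buffer, the orientation of the mapping and the variable names are assumptions, while setVoxel() and background() are the L3D helpers used to set the cube’s colors:

```java
PImage[] frames = new PImage[8];   // current frame plus the past 7 frames
int frameCounter = 0;

void renderCube(PImage out) {
  // every updateFrameRate draw() calls, push the stored frames one layer back
  frameCounter++;
  if (enable3d && frameCounter % updateFrameRate == 0) {
    for (int i = frames.length - 1; i > 0; i--) {
      frames[i] = frames[i - 1];
    }
  }
  frames[0] = out;   // the newest frame always sits on the front layer

  cube.background(0);
  for (int z = 0; z < 8; z++) {
    PImage frame = enable3d ? frames[z] : (z == 0 ? out : null);
    if (frame == null) continue;   // nothing stored yet for this layer
    for (int y = 0; y < 8; y++) {
      for (int x = 0; x < 8; x++) {
        // one image pixel per voxel; flip y because the image origin is top-left
        cube.setVoxel(x, 7 - y, z, frame.get(x, y));
      }
    }
  }
}
```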

UI events

Whether it be out of curiosity or for testing purposes, it can be useful to try different output resolutions and to toggle between the 2D and 3D views.

In order to do that, we add a new global variable and two event listeners: a right click toggles between the cube view and the output image, and the 0 and 1 keys can be used to decrease or increase the output image’s resolution.
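A possible implementation (the toggle variable’s name and the resolution bounds are assumptions; the 0/1 key bindings and the right-click toggle come from the description above):

```java
boolean show3d = true;   // true: render the cube, false: show the flat output image

void mousePressed() {
  // right click toggles between the cube view and the 2D output image
  if (mouseButton == RIGHT) {
    show3d = !show3d;
  }
}

void keyPressed() {
  // 0 and 1 decrease or increase the resolution of the 2D output image
  if (key == '0' && outputSize > 1) {
    outputSize--;
  } else if (key == '1' && outputSize < 64) {
    outputSize++;
  }
}
```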

Putting it all together

Populate the draw() loop with the two rendering functions we created.
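With the helpers above, a minimal draw() could look like this (names are the illustrative ones introduced earlier):

```java
void draw() {
  if (cam.width == 0) return;   // no webcam frame available yet
  background(0);
  if (show3d) {
    renderCube(pixelateImage(8));               // the cube face is always 8×8
  } else {
    drawOutputImage(pixelateImage(outputSize)); // 2D preview at the chosen resolution
  }
}
```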

Client: Photon firmware

Upload the client firmware from the repository to your device.

Sources

This post was originally published on digitaljunky.io.
