Tuesday

A little Tip!!!

I found this great tip online that really works!!!
If you want to black out the background of your sketch instead of showing a reflection of your webcam, just add the line:

background(51);

within the draw function:

void draw()
{
  background(0);
 
  noStroke();
  noTint(); 
  imageMode(CORNER);

  opencv.read();                  //  Grabs a frame from the camera
 
 
  pushMatrix();
  //scale(2,2);
  translate(width,0); 
  scale(-2,2);
  image( opencv.image(), 0, 0 );  //  Display the difference image
  popMatrix();
  background(51);
  opencv.absDiff();               //  Calculates the absolute difference

  // ... rest of draw() carries on as before ...
}

and there it is!

From:


To...

Another Look at the Monsters of Media: Monster Media

"Global leaders in non-traditional Advertising".

http://www.monstermedia.net/

I love this company... They have been my main inspiration for deciding to create an interactive shopfront for this project. They have done some very simple but effective designs that have really worked (you only have to look at the public interacting with the interactive advertising in the videos on the website to see).

I know I have looked at them already, but now I am a bit more knowledgeable about Processing and have a slightly better idea of what I would like to do for the project, so I would like to look at specific projects they have done to help me finalise my idea.

Monster Media- Timberland- EarthKeepers


Loving this video! Seems like such a simple idea... as people pass, it sets off video clips. Fun idea! I don't know how feasible this is with my knowledge of Processing. Will have to see!

Monster Media - ESET



 

This idea "looks" so simple but is really effective. As soon as someone walks past the display, particles seem to follow the person, and this triggers a simple movie clip that happens to be just a couple of sentences and then the company logo.
A huge possibility!
I could see if I could adapt the following code that I already have to create a similar effect using movement and particles:

import hypermedia.video.*;

import processing.opengl.*;
import processing.video.*;

OpenCV opencv;                    //  Creates a new OpenCV Object

// add image to sketch
PImage particleImg;


Particle[] particles;

final int MAX_PARTICLES = 50;

void setup()
{
 
  size(640,480);
  frameRate(20);
   
  opencv = new OpenCV( this );    //  Initialises the OpenCV object
  opencv.capture( 320, 240 );     //  Opens a video capture stream

  particles = new Particle[0]; 
  particleImg = loadImage("ParticleBlue.png");
 
 
}

 
void draw()
{
  background(0);
 
  noStroke();
  noTint(); 
  imageMode(CORNER);

  opencv.read();                  //  Grabs a frame from the camera
 
 
  pushMatrix();
  //scale(2,2);
  translate(width,0); 
  scale(-2,2);
  image( opencv.image(), 0, 0 );  //  Display the difference image
  popMatrix();
 
  opencv.absDiff();               //  Calculates the absolute difference



  imageMode(CENTER);

  updateParticles();
 
  makeParticles(opencv.image());
   
  if(particles.length>MAX_PARTICLES)  
    particles = (Particle[]) subset(particles, particles.length-MAX_PARTICLES);
  
  opencv.remember();              //  Remembers the current frame
 
 
}

void updateParticles()
{
 
  for(int i =0; i<particles.length; i++)
  {
   
    Particle p = particles[i];
   
    p.update();
    p.draw();
   
   
  }
 
 
}


void makeParticles(PImage img)
{
 
  Particle p;
  for(int i= 0; i<200; i++)
  {
    int xpos = (int) random(img.width);
    int ypos = (int) random(img.height);
    if(brightness(img.get(xpos,ypos))>50)
    {
      p = new Particle(width - (xpos*2), ypos*2);
      p.draw();
      particles = (Particle[]) append(particles, p);
    }
  }
}


class Particle
{
 
  float xPos;
  float yPos;
  float xVel;
  float yVel;
 
  float rotation = 0;
  float spin;
 
  float currentAlpha = 255;
  float currentScale = 0.5;
 
  float drag = 0.98;
  float fadeSpeed = 1;
  float shrink = 0.8;
  float gravity  = 0.1;
 
  Particle(float xpos, float ypos)
  {
    this.xPos = xpos;
    this.yPos = ypos;
    this.xVel = random(-20,20);
    this.yVel = random(-20,20);
    this.currentScale = random(0.01,0.05);
    this.currentAlpha = 255;   
    //this.rotation = random(0,360);  
    //this.spin =  random(-2,-5);

  }
 
 
  void update()
  {
    xVel*=drag;
    yVel*=drag;
   
    yVel+=gravity;
   
    xPos += xVel;
    yPos += yVel;
    currentAlpha -=fadeSpeed;
    currentScale*=shrink;
    rotation+=spin;
   
  }
 
  void draw()
  {
    if(currentAlpha<=0) return;
   
    pushMatrix();
   
    tint(255,currentAlpha);
    translate(xPos, yPos);
    scale(currentScale);
    rotate(radians(rotation));
    image(particleImg, 0, 0);
    popMatrix();
   
  }
 }
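Just to convince myself the maths in the Particle class works, here is the physics from update() pulled out into plain Java, assuming the same drag (0.98) and gravity (0.1) values as the sketch (the class and method names here are my own, not from the original code):

```java
// Standalone sketch of one particle physics step, as in Particle.update():
// velocity is damped by drag, gravity is added to yVel, then position moves.
public class ParticleStep {
    static final float DRAG = 0.98f;     // same drag value as the sketch
    static final float GRAVITY = 0.1f;   // same gravity value as the sketch

    // Returns {xPos, yPos, xVel, yVel} after one update step.
    public static float[] step(float xPos, float yPos, float xVel, float yVel) {
        xVel *= DRAG;          // air resistance slows the particle
        yVel *= DRAG;
        yVel += GRAVITY;       // gravity pulls the particle downward
        xPos += xVel;          // then the position moves by the new velocity
        yPos += yVel;
        return new float[]{xPos, yPos, xVel, yVel};
    }
}
```

So a particle launched sideways at 10 px/frame slows to 9.8 px/frame after one step and starts drifting down by 0.1 px, which is why the particles arc and settle rather than flying off.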

Wicked!!!

The JumpMan Look!

As I want to base my idea on the Jumpman website, a sister Nike website advertising Michael Jordan-inspired Nike products, I thought I ought to look at the website in depth. I have briefly described its appearance on this blog, but now I am going to break the website apart and look at exactly what features I want to capture in my Processing sketch.
The Website can be found at:

www.nike.com/jumpman23/index.html

First let's have a look at the website introduction:


Modern, Funky and Fun!


Simple loading icon


Funky, exciting-looking homepage! I love the use of quite dull background colours with a nice range of bold colours used for the graphics, which really make them stand out from the plain background.
One element of the above page really caught my eye; the Jordan revolution blog link. I made a quick recording of it to show you exactly what I liked about it:



I took this video from the Jumpman website: a nice animation demonstrating the Nike Jordan Evolution's progression through the years.
I love the use of simple blueprints to demonstrate the creation and development of the trainer, finishing with the new, up-to-date model of the famous range.
I would love to show some sort of "development" in my sketch.

I also saw the link to the new Michael Jordan clothing range, a winter range, that looks very "magical" with the use of snow and icy effects. Very interesting!:






Last, but not least, I love the look of the following flash video from the Jumpman website. Its exploding effect is exciting and creates the dynamic, pulse-racing feel that Jordan is trying to achieve:

(Again, sorry for the rushed video quality!)

 

All are very exciting ideas to try and develop in order to link my product to the "feel" of Nike. Which one shall I choose!!!
What I think I need to do is take another look at the Monster Media website, and in particular their shopfront advertising, as this is what I want to focus my idea on. This should help me discover a little better exactly what is expected and needed in a shopfront interactive display, and then help me decide which idea is the most practical to develop.

Thursday

Forever popping bubbles!

I found this amazing effect (plus code) online from:
http://andybest.net/2009/02/processing-opencv-tutorial-2-bubbles/

Here is the creator's video of the sketch:


Processing OpenCV Tutorial Video #2- Bubbles! from Andy Best on Vimeo.

I downloaded the code and started experimenting immediately.

Here is my first experiment. A simple one; delete the scoreboard at the top of the screen:






(Sorry about the festive nostalgia soundtrack!)

and here is the original code:

import hypermedia.video.*;          //  Imports the OpenCV library

OpenCV opencv;                      //  Creates a new OpenCV object
PImage movementImg;                 //  Creates a new PImage to hold the movement image
int poppedBubbles;                  //  Creates a variable to hold the total number of popped bubbles
ArrayList bubbles;                  //  Creates an ArrayList to hold the Bubble objects
PImage bubblePNG;                   //  Creates a PImage that will hold the image of the bubble
PFont font;                         //  Creates a new font object

void setup()
{
  size ( 640, 480 );                      //  Window size of 640 x 480
  opencv = new OpenCV( this );            //  Initialises the OpenCV library
  opencv.capture( 640, 480 );             //  Sets the capture size to 640 x 480
  movementImg = new PImage( 640, 480 );   //  Initialises the PImage that holds the movement image
 
  poppedBubbles = 0;                    
 
  bubbles = new ArrayList();              //  Initialises the ArrayList
 
  bubblePNG = loadImage("bubble.png");    //  Load the bubble image into memory
  font = loadFont("Serif-48.vlw");        //  Load the font file into memory
 textFont(font, 32);                      

}

void draw()
{
  bubbles.add(new Bubble( (int)random( 0, width - 40), -bubblePNG.height, bubblePNG.width, bubblePNG.height));   //  Adds a new bubble to the array with a random x position
 
  opencv.read();                              //  Captures a frame from the camera   
  opencv.flip(OpenCV.FLIP_HORIZONTAL);        //  Flips the image horizontally
  image( opencv.image(), 0, 0 );              //  Draws the camera image to the screen
  opencv.absDiff();                           //  Creates a difference image
   
  opencv.convert(OpenCV.GRAY);                //  Converts to greyscale
  opencv.blur(OpenCV.BLUR, 3);                //  Blur to remove camera noise
  opencv.threshold(20);                       //  Thresholds to convert to black and white
  movementImg = opencv.image();               //  Puts the OpenCV buffer into an image object
 // background(51);
 
  for ( int i = 0; i < bubbles.size(); i++ ){    //  For every bubble in the bubbles array
    Bubble _bubble = (Bubble) bubbles.get(i);    //  Copies the current bubble into a temporary object
   
    if(_bubble.update() == 1){                  //  If the bubble's update function returns '1'
      bubbles.remove(i);                        //  then remove the bubble from the array
      _bubble = null;                           //  and make the temporary bubble object null
      i--;                                      //  since we've removed a bubble from the array, we need to subtract 1 from i, or we'll skip the next bubble
   
  }else{                                        //  If the bubble's update function doesn't return '1'
      bubbles.set(i, _bubble);                  //  Copies the updated temporary bubble object back into the array
      _bubble = null;                           //  Makes the temporary bubble object null.
    }
  }
 
  opencv.remember(OpenCV.SOURCE, OpenCV.FLIP_HORIZONTAL);    //  Remembers the camera image so we can generate a difference image next frame. Since we've
                                                             //  flipped the image earlier, we need to flip it here too.
//  text("Bubbles popped: " + poppedBubbles, 20, 40);          //  Displays some text showing how many bubbles have been popped
 
}

class Bubble
{
 
  int bubbleX, bubbleY, bubbleWidth, bubbleHeight;    //  Some variables to hold information about the bubble
 
  Bubble ( int bX, int bY, int bW, int bH )           //  The class constructor- sets the values when a new bubble object is made
  {
    bubbleX = bX;
    bubbleY = bY;
    bubbleWidth = bW;
    bubbleHeight = bH;
  }
 
  int update()      //   The Bubble update function
  {
    int movementAmount;          //  Create and set a variable to hold the amount of white pixels detected in the area where the bubble is
    movementAmount = 0;
   
    for( int y = bubbleY; y < (bubbleY + (bubbleHeight-1)); y++ ){    //  For loop that cycles through all of the pixels in the area the bubble occupies
      for( int x = bubbleX; x < (bubbleX + (bubbleWidth-1)); x++ ){
       
        if ( x < width && x > 0 && y < height && y > 0 ){             //  If the current pixel is within the screen boundaries
          if (brightness(movementImg.pixels[x + (y * width)]) > 127)  //  and if the brightness is above 127 (in this case, if it is white)
          {
            movementAmount++;                                         //  Add 1 to the movementAmount variable.
          }
        }
      }
    }
   
    if (movementAmount > 5)               //  If more than 5 pixels of movement are detected in the bubble area
    {
      poppedBubbles++;                    //  Add 1 to the variable that holds the number of popped bubbles
      return 1;                           //  Return 1 so that the bubble object is destroyed
  
   }else{                                 //  If less than 5 pixels of movement are detected,
      bubbleY += 10;                      //  increase the y position of the bubble so that it falls down
     
      if (bubbleY > height)               //  If the bubble has dropped off of the bottom of the screen
      {  return 1; }                      //  Return '1' so that the bubble object is destroyed
     
      image(bubblePNG, bubbleX, bubbleY);    //  Draws the bubble to the screen
      return 0;                              //  Returns '0' so that the bubble isn't destroyed
    }
   
  }
 
}
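The clever bit of the Bubble class is the hit test: count the white (moved) pixels inside the bubble's rectangle, and more than 5 means someone touched it. Here's that logic pulled out into plain Java so it can be checked on its own (the class name and the flattened pixel array are my own setup, not from the tutorial):

```java
// Sketch of the bubble hit test from Bubble.update(): scan the bubble's
// rectangle in a flattened w*h greyscale image and count the bright pixels.
public class BubbleHit {
    // pixels holds greyscale values 0-255, row by row, for a w*h image.
    public static int countMovement(int[] pixels, int w, int h,
                                    int bx, int by, int bw, int bh) {
        int amount = 0;
        for (int y = by; y < by + bh - 1; y++) {        // rows the bubble covers
            for (int x = bx; x < bx + bw - 1; x++) {    // columns the bubble covers
                if (x > 0 && x < w && y > 0 && y < h    // stay inside the image
                        && pixels[x + y * w] > 127) {   // "white" = movement
                    amount++;
                }
            }
        }
        return amount;
    }

    // Same threshold as the sketch: more than 5 moved pixels pops the bubble.
    public static boolean popped(int movementAmount) {
        return movementAmount > 5;
    }
}
```

So waving a hand through a bubble turns a patch of the difference image white, the count jumps past 5, and update() returns 1 so the bubble is destroyed.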

Now time to adapt a little.
First little experiment: change the image to another bubble.png:

 

And now some Nike Trainers!:



Finally, I went back to the bubbles and found a simple tip to black out the background, so you cannot see my image, but the code still works in the same way.
(An extract of the code is displayed below):

  opencv.convert(OpenCV.GRAY);                //  Converts to greyscale
  opencv.blur(OpenCV.BLUR, 3);                //  Blur to remove camera noise
  opencv.threshold(20);                       //  Thresholds to convert to black and white
  movementImg = opencv.image();               //  Puts the OpenCV buffer into an image object
  background(51);

The last line, background(51), is what blacks out the background. Clever, eh? Here's the result:



So, now I am going to go on and experiment from there with this effect. But first I need to research a bit more into the Nike branding feel I want to get across.

Magical JumpMan!

I am still very interested in using the Michael Jordan Jumpman website as an inspiration for my processing sketch.
In some way, I want to incorporate the style, colours and atmosphere of the Jumpman site. I want my sketch to look bright and eye-catching, but also very sporty and appealing to all.
I am also inspired greatly by the monster media website and their vast array of shopfront interactive advertisements. I want to base my idea around the thought of it being displayed in a Nike shop window. I want it to advertise this fictional range of Nike trainers in a very obvious way. I want to display the trainers in the range in my sketch, and therefore with the added fun of an interactive shop display, lure my fictional customers into the store!
I am still very keen on adding music or sound effects to my piece, but this is something that I will explore at the end of the project, if I still have time to play about. This ties in with my preliminary idea of creating swooshing sounds to match the Nike "Swoosh" branding.
With the combination of Processing and OpenCV, using the code I am going to display in my next post along with code using particles and frame differencing, I am going to experiment with colours, techniques, different images, backgrounds and settings until I get to an interactive advertisement I am happy with!

OpenCV is now up and running!

I managed to get OpenCV up and running on my PC!!! Being on Windows, it wasn't as straightforward as it seemed to be on the Mac, but at least now I can get on with being creative with Processing!
I wrote myself a set of instructions on how to install OpenCV on Windows:

- Save all of the working files that you want to keep onto a memory stick.
- Put all programmes and files associated with Processing and OpenCV on your computer into your recycle bin.
- Reinstall Processing from Processing.org.
- Download it and place it into your Documents folder.
- Go to the following page: http://ubaa.net/shared/processing/opencv/ and follow the link to download OpenCV. You need to download release version 1.0 (direct link: http://sourceforge.net/projects/opencvlibrary/files/opencv-win/1.0/OpenCV_1.0.exe/download).
- Once downloaded, unzip and save to the desktop.
- Once unzipped, the installation process should start automatically.
- During the installation process, make sure that you tick the box that says "add path".
- Next, go back to the OpenCV page: http://ubaa.net/shared/processing/opencv/ and download the "OpenCV Processing library".
- Unzip it straight into your "libraries" folder within Processing.
- Open up the Processing folder and go to libraries > openCV. You should see three folders called reference, source and library. Create a new folder called "Examples".
- Go back to the OpenCV webpage: http://ubaa.net/shared/processing/opencv/, click on "OpenCV Processing examples", then download and save to the desktop.
- Open the Examples folder and highlight all the examples. Copy them, go to your Documents > Processing > libraries > openCV > Examples folder, and paste the examples in there.
- Reboot your computer and hopefully it should work!

Let the processing begin!!!

Playing with Particles!!!

In Seb's lesson today, we were experimenting with particles and webcams. We used the idea that particles streamed from places with bright light. Here are a few photos of me playing with the code:





Interesting!!!

We also found out another couple of interesting points today.

Wednesday

Messing About with Motion...

I have been experimenting with code to try and get some ideas for my final idea. I have found a few pieces of code that I have been playing around with.

The first is:





Colour detection in Processing from James Alliban on Vimeo.

A colour detection code that I found on the following website:
http://jamesalliban.wordpress.com

(Specific file can be found at : http://jamesalliban.wordpress.com/2008/11/16/colour-detection-in-processing/)

It involves using a webcam to pick up the object in the far right corner. This might be handy, as I want to develop the idea of picking things up on the stage. It is a very similar idea to the Ford advertisement by Monster Media that I showed in my last post, which lets people move items around the screen; it works as an advertisement because it makes people want to find out more. I will look into this idea further when I next go into college, as my webcam at home isn't good enough to pick up all the different colours.

This was the code for the project:

import processing.video.*;
Capture video;

int numPixels;                      // number of pixels in the video
int rectDivide = 4;                 // the stage width/height divided by this number is the video width/height
int vidW;                           // video width
int vidH;                           // video height
int[][] colouredPixels;             // the different colour references for each pixel
int[][] colourCompareData;          // captured r, g and b colours
int currR;                          //
int currG;                          //
int currB;                          //
int[][] squareCoords;               // x, y, w + h of the coloured areas
color[] colours;                    // captured colours
int colourRange = 25;               // colour threshold
int[][] centrePoints;               // centres of the coloured squares
color[] pixelColours;
boolean isShowPixels = false;       // determines whether the square and coloured pixels are displayed
int colourMax = 2;                  // max amount of colours - also adjust the amount of colours added to pixelColours in setup()
int coloursAssigned = 0;            // amount of colours currently assigned


void setup()
{
  size(640, 480);
  vidW = width / rectDivide;
  vidH = height / rectDivide;
  video = new Capture(this, vidW, vidH, 30);
  noStroke();
  numPixels = vidW * vidH;
  colouredPixels = new int[vidH][vidW];
  colourCompareData = new int[colourMax][3];
  squareCoords = new int[colourMax][4];
  colours = new color[colourMax];
  centrePoints = new int[colourMax][2];
  color c1 = color(0, 255, 0);
  color c2 = color(255, 0, 0);
  pixelColours = new color[colourMax];
  pixelColours[0] = color(0, 255, 0);
  pixelColours[1] = color(255, 0, 0);

}

void captureEvent(Capture video)
{
  video.read();
}

void draw()
{
  noStroke();
  fill(255, 255, 255);
  rect(0, 0, width, height);
  drawVideo();
 
  for (int i = 0; i < coloursAssigned; i++)
  {
    if (isShowPixels) drawSquare(i);
  }
}


void drawVideo()
{
  for (int i = 0; i < coloursAssigned; i++)
  {
    fill(colours[i]);
    rect(i * 10, vidH, 10, 10);
  }
  image(video, 0, 0);
  noFill();
  stroke(255, 0, 0);
  strokeWeight(2);
  rect(vidW - 4, vidH - 4, 4, 4);
}

void drawSquare(int i)
{
  int sqX = squareCoords[i][0];
  int sqY = squareCoords[i][1];
  int sqW = squareCoords[i][2];
  int sqH = squareCoords[i][3];
  noFill();
  stroke(0, 0, 255);
  strokeWeight(3);
  rect(sqX, sqY, sqW, sqH);
 
  //stroke(0, 0, 255);
  //strokeWeight(4);
  rect(sqX * rectDivide, sqY * rectDivide, sqW * rectDivide, sqH * rectDivide);
  line(sqX * rectDivide, sqY * rectDivide, ((sqX * rectDivide) + (sqW * rectDivide)), ((sqY * rectDivide) + (sqH * rectDivide)));
  line(((sqX * rectDivide) + (sqW * rectDivide)), sqY * rectDivide, sqX * rectDivide, (sqY * rectDivide + sqH * rectDivide));
}

void keyPressed()
{
  println("key pressed = " + key);
  color currPixColor = video.pixels[numPixels - (vidW * 2) - 3];
  int pixR = (currPixColor >> 16) & 0xFF;
  int pixG = (currPixColor >> 8) & 0xFF;
  int pixB = currPixColor & 0xFF;
  if (key == 'p')
  {
    isShowPixels = !isShowPixels;
  }
  if (key == '1')
  {
    coloursAssigned = 1;
    colourCompareData[0][0] = pixR;
    colourCompareData[0][1] = pixG;
    colourCompareData[0][2] = pixB;
    colours[0] = color(pixR, pixG, pixB);
  }
  if (colourMax < 2 || coloursAssigned < 1) return;
  if (key == '2')
  {
    coloursAssigned = 2;
    colourCompareData[1][0] = pixR;
    colourCompareData[1][1] = pixG;
    colourCompareData[1][2] = pixB;
    colours[1] = color(pixR, pixG, pixB);
  }
  if (key == '0')
  {
    coloursAssigned = 0;
  }
}
class CoordsCalc
{
  CoordsCalc()
  {
  }
 
  void update()
  {
    int currX = vidW;
    int currW = 0;
    boolean isYAssigned = false;
    boolean isWAssigned = false;
    for (int j = 0; j < coloursAssigned; j++)
    {
      currX = vidW;
      currW = 0;
      isYAssigned = false;
      isWAssigned = false;
      for (int i = 0; i < numPixels; i++)
      {
        colouredPixels[abs(i / vidW)][i % vidW] = 0;
        color currColor = video.pixels[i];
        currR = (currColor >> 16) & 0xFF;
        currG = (currColor >> 8) & 0xFF;
        currB = currColor & 0xFF;
        if(isColourWithinRange(j))
        {
          noStroke();
          if (isShowPixels)
          {
            fill(pixelColours[j]);
            rect((i % vidW), (abs(i / vidW)), 1, 1);
            rect((i % vidW) * rectDivide, (abs(i / vidW)) * rectDivide, 1 * rectDivide, 1 * rectDivide);
          }
          if ((i % vidW) < currX)
          {
            currX = i % vidW;
            squareCoords[j][0] = currX;
          }
          if (!isYAssigned)
          {
            isYAssigned = true;
            squareCoords[j][1] = abs(i / vidW);
          }
          squareCoords[j][3] = (abs(i / vidW)) - squareCoords[j][1] + 1;
          if((i % vidW) > currW)
          {
            currW = i % vidW;
            isWAssigned = true;
          }
        }
        if(i == numPixels - 1 && isWAssigned)
        {
          squareCoords[j][2] = currW - squareCoords[j][0] + 1;
        }
      }
    }
    for (int i = 0; i < coloursAssigned; i++)
    {
      centrePoints[i][0] = (squareCoords[i][0] * rectDivide) + ((squareCoords[i][2] * rectDivide) / 2);
      centrePoints[i][1] = (squareCoords[i][1] * rectDivide) + ((squareCoords[i][3] * rectDivide) / 2);
      fill(0, 0, 0);
      ellipse(centrePoints[i][0], centrePoints[i][1], 10, 10);
    }
  }

  boolean isColourWithinRange(int j)
  {
    if(currR > (colourCompareData[j][0] + colourRange) || currR < (colourCompareData[j][0] - colourRange))
    {
      return false;
    }
    if(currG > (colourCompareData[j][1] + colourRange) || currG < (colourCompareData[j][1] - colourRange))
    {
      return false;
    }
    if(currB > (colourCompareData[j][2] + colourRange) || currB < (colourCompareData[j][2] - colourRange))
    {
      return false;
    }
    return true;
  }
}
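The heart of this sketch is isColourWithinRange(): a pixel only counts as "the captured colour" if its red, green and blue are all within colourRange of the target. Here is that test on its own in plain Java (the class name and parameter order are mine, for illustration):

```java
// Standalone version of the colour matching test from isColourWithinRange():
// a pixel (r, g, b) matches a captured target colour (tr, tg, tb) only if
// every channel is within +/- range of the target channel.
public class ColourMatch {
    public static boolean withinRange(int r, int g, int b,
                                      int tr, int tg, int tb, int range) {
        if (r > tr + range || r < tr - range) return false;  // red too far off
        if (g > tg + range || g < tg - range) return false;  // green too far off
        if (b > tb + range || b < tb - range) return false;  // blue too far off
        return true;
    }
}
```

With the sketch's default colourRange of 25, a pixel can drift up to 25 either way on each channel before it stops being tracked, which is why the tracking copes with a bit of webcam noise and lighting change.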


I have also had a look at Myron's "camera as mouse" example. It tracks movement and acts accordingly. Here is the code for the project:

/*

the green oval is an averaged position of all the detected dark movement in the camera's view.

physical setup:
  - make sure there is a strong value contrast between your hand and a white background.
  - set all camera settings to "manual" for the most stable results.
 
 last tested to work in Processing 0090

 JTNIMOY

*/

import JMyron.*;

JMyron m;//a camera object

//variables to maintain the floating green circle
float objx = 160;
float objy = 120;
float objdestx = 160;
float objdesty = 120;

void setup(){
  size(320,240);
  m = new JMyron();//make a new instance of the object
  m.start(width,height);//start a capture at 320x240
  m.trackColor(255,255,255,256*3-100);//track white
  m.update();
  m.adaptivity(10);
  m.adapt();// immediately take a snapshot of the background for differencing
  println("Myron " + m.version());
  rectMode(CENTER);
  noStroke();
}


void draw(){
  m.update();//update the camera view
  drawCamera();
 
  int[][] centers = m.globCenters();//get the center points
  //draw all the dots while calculating the average.
  float avX=0;
  float avY=0;
  for(int i=0;i<centers.length;i++){
    fill(80);
    rect(centers[i][0],centers[i][1],5,5);
    avX += centers[i][0];
    avY += centers[i][1];
  }
  if(centers.length-1>0){
    avX/=centers.length-1;
    avY/=centers.length-1;
  }

  //draw the average of all the points in red.
  fill(255,0,0);
  rect(avX,avY,5,5);

  //update the location of the thing on the screen.
  if(!(avX==0&&avY==0)&&centers.length>0){
    objdestx = avX;
    objdesty = avY;
  }
  objx += (objdestx-objx)/10.0f;
  objy += (objdesty-objy)/10.0f;
  fill(30,100,0);
  ellipseMode(CENTER);
  ellipse(objx,objy,30,30);
}

void drawCamera(){
  int[] img = m.differenceImage(); //get the normal image of the camera
  loadPixels();
  for(int i=0;i<width*height;i++){ //loop through all the pixels
    pixels[i] = img[i]; //draw each pixel to the screen
  }
  updatePixels();
}

void mousePressed(){
  m.settings();//click the window to get the settings
}

public void stop(){
  m.stop();//stop the object
  super.stop();
}
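The bit of this I really like is how the green circle glides after your hand rather than jumping: each frame it only moves a tenth of the remaining distance to the target (objx += (objdestx-objx)/10.0f). That easing step on its own, in plain Java (names are mine):

```java
// Sketch of the easing used for the green circle in the Myron example:
// each frame the position closes a tenth of the gap to the destination,
// so it decelerates smoothly as it approaches.
public class Easing {
    public static float ease(float pos, float dest) {
        return pos + (dest - pos) / 10.0f;
    }
}
```

Starting at 0 with a target of 100, the circle covers 10 px on the first frame, then 9 px, then 8.1 px, and so on, which gives that smooth trailing feel.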

Again it doesn't work very well with my camera. I can't wait to try them out at College!

Monster Media!!!

We found this amazing website called Monster Media today. It is an advertising company, based in America, that specialises in new-media advertising campaigns and, in particular, interactive advertising.

http://www.monstermedia.net/

Here is an example of their work that I feel is most relevant for this project:

http://monstermedia.net/portfolio.php#116

Animal!!!

I love this video from R.E.M:


And I was lucky enough to find the source code from this website:


Here's the code:

// Star Nursery
// by Ryan Alexander at Motion Theory
// 
// Press tilde (~) to show the bounding circles
// Press the mouse to hide the video
// 
// Make sure to have your camera turned on before you run me!
// 
// Deep currents
// Like the sky
// Use no pentameter
// Mike Stipe is just this guy

int maxStars = 10000;
int nStars = 0;
Star stars[] = new Star[maxStars];

BImage basicStar, vidThumb;

boolean newFrame;
float brightest = 255;
float darkest = 0;

float brightX[][] = new float[8][6];
float brightY[][] = new float[8][6];
float brightVelX[][] = new float[8][6];
float brightVelY[][] = new float[8][6];

boolean firstFrame = true;

void setup()
{
  size(500, 375);
  background(0);

  basicStar = loadImage("pfx_star.gif");
  brightToAlpha(basicStar);

  beginVideo(width, height, 15);
  framerate(30);
  
  conceiveStars(300);
}

void loop()
{
  // Calibrate the video for next frame
  if(newFrame) {
    calibrateVideo();
    newFrame = false;
    firstFrame = false;
  }
  
  if(mousePressed) {
    background(0);
  } else {
    background(video);
  }

  noFill();
  stroke(255,0,0,40);
  for(int i=0; i < nStars; i++) {
    stars[i].update();
  }
  stroke(128,128,128,40);
  for(int i=0; i < nStars; i++) {
    stars[i].display();
  }
}

void mouseReleased()
{
  background(0);
}

void videoEvent()
{
  newFrame = true;
}

void conceiveStars(int n)
{
  n += nStars;
  for(int i=nStars; i < n; i++) {
    stars[i] = new Star(i, random(width), random(height), random(-5, 5), random(-5, 5));
  }
  nStars += n;
}

void calibrateVideo()
{
  vidThumb = video.copy(40, 30);
  float tempb = brightness(vidThumb.pixels[0]);
  brightest = tempb;
  darkest = tempb;

  for(int i=1; i < 1200; i++) {
    tempb = brightness(vidThumb.pixels[i]);
    if(tempb > brightest) {
      brightest = tempb;
    }
    if(tempb < darkest) {
      darkest = tempb;
    }
  }
  
  // Make sure that lightest is > darkest
  if(brightest <= darkest) brightest = darkest + 1;
  
  // Calculate the changes in general brightness
  float tempx, tempy, totalBright;
  int pxx, pxy;
  for(int j=0; j < 6; j++) {
    for(int i=0; i < 8; i++) {
      tempx = 0;
      tempy = 0;
      totalBright = 0;
      
      for(int jj=0; jj < 5; jj++) {
        for(int ii=0; ii < 5; ii++) {
          pxx = i*5 + ii;
          pxy = j*5 + jj;
          tempb = brightness(vidThumb.pixels[pxx + pxy * 40]);
          tempx += pxx * tempb;
          tempy += pxy * tempb;
          totalBright += tempb;
        }
      }
      
      // Adjust the general velocity of the brightness
      if(totalBright == 0) {  // Avoid divide by 0
        if(firstFrame) {
          brightX[i][j] = i * (width/8) + (width/16);
          brightY[i][j] = j * (height/6) + (height/12);
        }
      } else {
        tempx = (tempx / totalBright + .5) * (width / 40.0);
        tempy = (tempy / totalBright + .5) * (height / 30.0);
        if(firstFrame) {
          brightX[i][j] = tempx;
          brightY[i][j] = tempy;
        } else {
          brightX[i][j] += brightVelX[i][j];
          brightY[i][j] += brightVelY[i][j];
          brightVelX[i][j] = ((tempx - brightX[i][j]) - brightVelX[i][j]) * .2;
          brightVelY[i][j] = ((tempy - brightY[i][j]) - brightVelY[i][j]) * .2;
        }
      }
    }
  }
}


//      //
// Star //
//      //
class Star
{
  float x, y, xv, yv;
  float diameter, inside, minD, maxD;
  int id, age;

  boolean connected[] = new boolean[maxStars];

  float red, green, blue;

  Star(int iid, float ix, float iy, float ixv, float iyv)
  {
    x = ix; y = iy;
    xv = ixv; yv = iyv;
    id = iid;
    age = (int)random(500);

    minD = 20;
    maxD = 60;
  }

  void update()
  {
    // Decay the inside diameter
    if(inside > 0) inside -= 1;

    // React to video input
    if(x >= 0 && x < width && y >= 0 && y < height) {
      color tempPixel = video.pixels[(int)x + (int)y*width];

      // Diameter grows with brightness
      float multiplier = 255.0 / (brightest - darkest);
      float bright = constrain(((brightness(tempPixel) - darkest) * multiplier) / 255, 0, 1);
      float sizeMult = sin(age * .05) * .4 + .6;
      diameter += ( (((1 - bright) * (maxD - minD) + minD) * sizeMult) - diameter ) * .2;
      
      // Influence by movement of general brightness
      xv += brightVelX[(int)floor(x / (width/8.0))][(int)floor(y / (height/6.0))] * 1;
      yv += brightVelY[(int)floor(x / (width/8.0))][(int)floor(y / (height/6.0))] * 1;
    }

    // Check for new connections
    for(int i=0; i < nStars; i++) {
      if(i != id) {// && touching < 1 && stars[i].touching < 1) {  // If it's not me
         float xd = stars[i].x - x;
         float yd = stars[i].y - y;
         float diff = xd*xd + yd*yd;
         float radii = stars[i].diameter/2 + diameter/2;

         // If touching
         if(diff < radii*radii) {
           springAdd(stars[i].x, stars[i].y, radii);

           // Assimilate Velocity
           xv += (stars[i].xv - xv) * .01;
           yv += (stars[i].yv - yv) * .01;

           // Color changes with distance
           float dist = sqrt(diff);

           if(dist < 30) {
             stroke(255, ((30 - dist) * 6) * (min(inside, stars[i].inside) / 60));
             line(x, y, stars[i].x, stars[i].y);
           }

           if(!connected[i]) {
             inside += diameter / (maxD / 5);
             connected[i] = true;
           }
         } else {
           connected[i] = false;
         }
       }
     }

     inside = constrain(inside, 0, 60);
   }

   void display()
   {
     ellipseMode(CENTER_DIAMETER);
     if(keyPressed && key == '`') {
       ellipse(x, y, diameter, diameter);
     }
     if(inside > 1) {
       imageMode(CENTER_DIAMETER);
       tint(255, (inside-4)*64);

       push();
       translate(0,0,-1);
       image(basicStar, x, y, inside, inside);
       pop();
     }
     //line(x, y, x+xv, y+yv);

     x += xv;
     y += yv;

     // Constrain to screen
     float d2 = diameter/2;
     if(x < d2) { x = d2; }
     if(x > width - d2) { x = width - d2; }
     if(y < d2) { y = d2; }
     if(y > height - d2) { y = height - d2; }

     xv *= .8;
     yv *= .8;

     age++;
   }

   float dx, dy, mag, ext;
   void springAdd(float sx, float sy, float rest)
   {
     dx = sx - x;
     dy = sy - y;
     if(dx != 0 || dy != 0) {  // Prevent / by zero
       mag = sqrt(dx*dx + dy*dy);
       ext = mag - rest;
       xv += (dx / mag * ext) * .1;
       yv += (dy / mag * ext) * .1;
     }
   }
 }

 void brightToAlpha(BImage b)
 {
   b.format = RGBA;
   for(int i=0; i < b.pixels.length; i++) {
     b.pixels[i] = color(255,255,255,brightness(b.pixels[i]));
   }
 }

The only problem is that it was built for an old version of Processing, so quite a few changes need to be made to the code to bring it up to date.
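For reference, the renames I can spot between the old listing above and the updated version below are mainly these (and, in current Processing, the CENTER_DIAMETER constant that both listings still use has since become just CENTER):

```text
BImage          ->  PImage                       // image class renamed
RGBA            ->  ARGB                         // pixel format constant renamed
push() / pop()  ->  pushMatrix() / popMatrix()   // matrix stack calls renamed
```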

Luckily, I also found an updated version online, at the following website:

http://processing.org/discourse/yabb2/YaBB.pl?num=1206398682


import processing.video.*;

// Star Nursery
// by Ryan Alexander at Motion Theory
//
// Press tilde (~) to show the bounding circles
// Press the mouse to hide the video
//
// Make sure to have your camera turned on before you run me!
//
// Deep currents
// Like the sky
// Use no pentameter
// Mike Stipe is just this guy

int maxStars = 10000;
int nStars = 0;
Star stars[] = new Star[maxStars];

PImage basicStar, vidThumb;

boolean newFrame;
float brightest = 255;
float darkest = 0;

float brightX[][] = new float[8][6];
float brightY[][] = new float[8][6];
float brightVelX[][] = new float[8][6];
float brightVelY[][] = new float[8][6];

boolean firstFrame = true;

Capture video;

void setup()
{
  size(800, 600);
  background(0);

  basicStar = loadImage("pfx_star.gif");
  brightToAlpha(basicStar);

  // Buffer for the 40x30 thumbnail used by calibrateVideo()
  // (the forum version never created this, causing a NullPointerException)
  vidThumb = createImage(40, 30, RGB);

  video = new Capture(this, width, height, 15);
  frameRate(30);

  conceiveStars(1000);
}

void captureEvent(Capture video) {
  video.read();
  newFrame = true;  // flag the fresh frame for calibrateVideo() in draw()
}

void draw()
{
  image(video, 0, 0);
  // Calibrate the video for next frame
  if(newFrame) {
    calibrateVideo();
    newFrame = false;
    firstFrame = false;
  }
 
  if(mousePressed) {
    background(0);
  } else {
    background(video);
  }

  noFill();
  stroke(255,0,0,40);
  for(int i=0; i < nStars; i++) {
    stars[i].update();
  }
  stroke(128,128,128,40);
  for(int i=0; i < nStars; i++) {
    stars[i].display();
  }
}

//void mouseReleased()
//{
 // background(0);
//}

void videoEvent()
{
  newFrame = true;
}

void conceiveStars(int n)
{
  n += nStars;
  for(int i=nStars; i < n; i++) {
    stars[i] = new Star(i, random(width), random(height), random(-5, 5), random(-5, 5));
  }
  nStars = n;  // n already includes the previous star count
}

void calibrateVideo()
{
  vidThumb.copy(video, 0, 0 ,width, height, 0, 0, 40, 30);
  float tempb = brightness(vidThumb.pixels[0]);
  brightest = tempb;
  darkest = tempb;

  for(int i=1; i < 1200; i++) {
    tempb = brightness(vidThumb.pixels[i]);
    if(tempb > brightest) {
      brightest = tempb;
    }
    if(tempb < darkest) {
      darkest = tempb;
    }
  }
 
  // Make sure that lightest is > darkest
  if(brightest <= darkest) brightest = darkest + 1;
 
  // Calculate the changes in general brightness
  float tempx, tempy, totalBright;
  int pxx, pxy;
  for(int j=0; j < 6; j++) {
    for(int i=0; i < 8; i++) {
      tempx = 0;
      tempy = 0;
      totalBright = 0;

      for(int jj=0; jj < 5; jj++) {
        for(int ii=0; ii < 5; ii++) {
          pxx = i*5 + ii;
          pxy = j*5 + jj;
          tempb = brightness(vidThumb.pixels[pxx + pxy * 40]);
          tempx += pxx * tempb;
          tempy += pxy * tempb;
          totalBright += tempb;
        }
      }
      // Adjust the general velocity of the brightness
      if(totalBright == 0) {  // Avoid divide by 0
        if(firstFrame) {
          brightX[i][j] = i * (width/8) + (width/16);
          brightY[i][j] = j * (height/6) + (height/12);
        }
      } else {
        tempx = (tempx / totalBright + .5) * (width / 40.0);
        tempy = (tempy / totalBright + .5) * (height / 30.0);
        if(firstFrame) {
          brightX[i][j] = tempx;
          brightY[i][j] = tempy;
        } else {
          brightX[i][j] += brightVelX[i][j];
          brightY[i][j] += brightVelY[i][j];
          brightVelX[i][j] = ((tempx - brightX[i][j]) - brightVelX[i][j]) * .2;
          brightVelY[i][j] = ((tempy - brightY[i][j]) - brightVelY[i][j]) * .2;
        }
      }
    }
  }
}


//      //
// Star //
//      //
class Star
{
  float x, y, xv, yv;
  float diameter, inside, minD, maxD;
  int id, age;

  boolean connected[] = new boolean[maxStars];

  float red, green, blue;

  Star(int iid, float ix, float iy, float ixv, float iyv)
  {
    x = ix; y = iy;
    xv = ixv; yv = iyv;
    id = iid;
    age = (int)random(500);

    minD = 20;
    maxD = 60;
  }

  void update()
  {
    // Decay the inside diameter
    if(inside > 0) inside -= 1;

    // React to video input
    if(x >= 0 && x < width && y >= 0 && y < height) {
      color tempPixel = video.pixels[(int)x + (int)y*width];

      // Diameter grows with brightness
      float multiplier = 255.0 / (brightest - darkest);
      float bright = constrain(((brightness(tempPixel) - darkest) * multiplier) / 255, 0, 1);
      float sizeMult = sin(age * .05) * .4 + .6;
      diameter += ( (((1 - bright) * (maxD - minD) + minD) * sizeMult) - diameter ) * .2;

      // Influence by movement of general brightness
      xv += brightVelX[(int)floor(x / (width/8.0))][(int)floor(y / (height/6.0))] * 1;
      yv += brightVelY[(int)floor(x / (width/8.0))][(int)floor(y / (height/6.0))] * 1;
    }

    // Check for new connections
    for(int i=0; i < nStars; i++) {
      if(i != id) {  // (was: && touching < 1 && stars[i].touching < 1)  If it's not me
        float xd = stars[i].x - x;
        float yd = stars[i].y - y;
        float diff = xd*xd + yd*yd;
        float radii = stars[i].diameter/2 + diameter/2;

        // If touching
        if(diff < radii*radii) {
          springAdd(stars[i].x, stars[i].y, radii);

          // Assimilate Velocity
          xv += (stars[i].xv - xv) * .01;
          yv += (stars[i].yv - yv) * .01;

          // Color changes with distance
          float dist = sqrt(diff);

          if(dist < 30) {
            stroke(255, ((30 - dist) * 6) * (min(inside, stars[i].inside) / 60));
            line(x, y, stars[i].x, stars[i].y);
          }

          if(!connected[i]) {
            inside += diameter / (maxD / 5);
            connected[i] = true;
          }
        } else {
          connected[i] = false;
        }
      }
    }

    inside = constrain(inside, 0, 60);
  }

  void display()
  {
    ellipseMode(CENTER);  // CENTER_DIAMETER was renamed to CENTER in Processing 1.0
    if(keyPressed && key == '`') {
      ellipse(x, y, diameter, diameter);
    }
    if(inside > 1) {
      imageMode(CENTER);
      tint(255, (inside-4)*64);

      // The original wrapped this in pushMatrix()/translate(0,0,-1)/popMatrix(),
      // which needs a 3D renderer; with the default 2D renderer the star can
      // simply be drawn directly
      image(basicStar, x, y, inside, inside);
    }
    //line(x, y, x+xv, y+yv);

    x += xv;
    y += yv;

    // Constrain to screen
    float d2 = diameter/2;
    if(x < d2) { x = d2; }
    if(x > width - d2) { x = width - d2; }
    if(y < d2) { y = d2; }
    if(y > height - d2) { y = height - d2; }

    xv *= .8;
    yv *= .8;

    age++;
  }

  float dx, dy, mag, ext;
  void springAdd(float sx, float sy, float rest)
  {
    dx = sx - x;
    dy = sy - y;
    if(dx != 0 || dy != 0) {  // Prevent / by zero
      mag = sqrt(dx*dx + dy*dy);
      ext = mag - rest;
      xv += (dx / mag * ext) * .1;
      yv += (dy / mag * ext) * .1;
    }
  }
}

void brightToAlpha(PImage b)
{
  b.format = ARGB;
  for(int i=0; i < b.pixels.length; i++) {
    b.pixels[i] = color(255,255,255,brightness(b.pixels[i]));
  }
}

Open the CV!!!

I have been trying to get OpenCV working on my PC at home... it's proving hard, but hopefully we will be getting some help from Seb on this issue.
When we do get OpenCV to work, it will be very helpful, as it comes with a lot of examples on motion detection and blob detection.

I have found these help pages/forums on how to get OpenCV to work:

http://processing.org/discourse/yabb2/YaBB.pl?num=1238338691/0

http://ubaa.net/shared/processing/opencv/
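Until OpenCV is up and running, the core of its motion-detection examples — frame differencing, like the absDiff() call I used earlier — can be sketched in plain Java. This is just my own illustration: the class and method names are made up, and real frames would of course come from the camera rather than tiny hand-written arrays.

```java
// A minimal sketch of frame differencing: compare each pixel of the current
// frame with the previous one, then count how many changed by more than a
// threshold. Frames here are plain greyscale int arrays (0-255 per pixel).
public class MotionDiff {

    // Returns the per-pixel absolute difference of two greyscale frames.
    public static int[] absDiff(int[] prev, int[] curr) {
        int[] diff = new int[curr.length];
        for (int i = 0; i < curr.length; i++) {
            diff[i] = Math.abs(curr[i] - prev[i]);
        }
        return diff;
    }

    // Counts pixels whose change exceeds the threshold -- a crude "amount
    // of motion" figure that could drive particles or trigger a video clip.
    public static int motionAmount(int[] diff, int threshold) {
        int count = 0;
        for (int d : diff) {
            if (d > threshold) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        int[] prev = {10, 10, 200, 200};
        int[] curr = {12, 10, 50, 200};  // one pixel changed a lot
        int[] diff = absDiff(prev, curr);
        System.out.println(motionAmount(diff, 30));  // prints 1
    }
}
```

The threshold is what keeps camera noise (the small 10→12 flicker above) from counting as motion.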

Tuesday

Preliminary ideas

My proposed ideas for my Interactive Installation:

Idea No.1: Motion tracking, using particles and "Swoosh!"

This would be the "simple" idea of picking up movement, i.e. someone walking past the camera creates a "Swoosh"-like effect using particles. Perhaps the speed and movement of the particles could vary depending on the user's speed of movement? (I am not sure how feasible this would be.) Perhaps it could also create simple swoosh sounds, again dependent on the amount of movement from the user and the speed at which they are travelling.
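The particle side of this could be very simple. Here is a rough sketch of the idea: each particle's step size is scaled by an overall "motion" value (0 = still scene, 1 = lots of movement), so the swoosh speeds up when someone walks past faster. All names here are my own placeholders, not from any existing library — the motion value itself would come from the camera.

```java
// Sketch of Idea No.1: particle velocity scaled by a measured motion amount.
public class SwooshParticle {
    double x, y;    // position
    double vx, vy;  // base velocity of the swoosh

    SwooshParticle(double x, double y, double vx, double vy) {
        this.x = x; this.y = y; this.vx = vx; this.vy = vy;
    }

    // Move the particle; 'motion' in [0,1] scales the step, so more
    // movement in front of the camera means faster particles.
    void update(double motion) {
        x += vx * motion;
        y += vy * motion;
    }

    public static void main(String[] args) {
        SwooshParticle p = new SwooshParticle(0, 0, 4, 2);
        p.update(0.5);  // gentle movement: half speed
        p.update(1.0);  // fast movement: full speed
        System.out.println(p.x + ", " + p.y);  // prints 6.0, 3.0
    }
}
```

The same motion value could scale the volume of the swoosh sound.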

Similar idea (Image found during Google search):


This could also incorporate some of the colours most widely used in Nike advertising, and in the current Nike trainer ranges: black, white, grey and red.

I know this is a fictional trainer, but this may help people identify it with the Nike brand before seeing any slogans or information on the new range.

Idea No.2: Colour tracking

This idea involves tracking a specific colour in the frame. I would develop the idea further by perhaps adapting the code to react in different ways when the user moves the tracked item at different speeds or distances, again creating a "Swoosh"-like effect to fit in with the Nike brand logo. Perhaps it could also be seen as a "magic" wand, using particles to create the swoosh effect as the chosen colour is tracked. The user might also be able to change the colour being tracked on screen, with different "Swooshing" effects occurring accordingly. I have had a little practice with colour tracking, which worked OK, but my experiments were in need of some improvement!:

http://rebeccaallennikemagic.blogspot.com/2010/11/sourcing-source.html
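The core of the colour-tracking step could be sketched like this — just my own illustration with placeholder names, scanning a frame's pixels for the one nearest a target colour using squared RGB distance (in a real sketch the index would be converted back to x, y with the frame width, and the pixels would come from the camera).

```java
// Sketch of Idea No.2's tracking step: find the pixel closest to a target
// colour. Pixels are packed 0xRRGGBB ints, as in Processing's pixels[] array.
public class ColourTrack {

    // Squared distance between two packed RGB colours.
    static int dist2(int c1, int c2) {
        int dr = ((c1 >> 16) & 0xFF) - ((c2 >> 16) & 0xFF);
        int dg = ((c1 >> 8) & 0xFF) - ((c2 >> 8) & 0xFF);
        int db = (c1 & 0xFF) - (c2 & 0xFF);
        return dr * dr + dg * dg + db * db;
    }

    // Returns the index of the pixel nearest to the target colour.
    public static int findClosest(int[] pixels, int target) {
        int best = 0;
        for (int i = 1; i < pixels.length; i++) {
            if (dist2(pixels[i], target) < dist2(pixels[best], target)) best = i;
        }
        return best;
    }

    public static void main(String[] args) {
        int[] frame = {0x000000, 0xFA0505, 0x00FF00, 0xFFFFFF};
        System.out.println(findClosest(frame, 0xFF0000));  // prints 1
    }
}
```

Changing the target colour at runtime would give the "user picks the colour" variation, and the distance of the best match could decide whether anything is tracked at all.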