Final Project Idea

Aya and I will be working on the final project together.

Materials needed:

  1. LEDs
  2. 360-degree motors (7)
  3. Cotton or pillow filling
  4. Lanterns and foam
  5. Computer
  6. Arduino
  7. Beads and string

This is the idea:

The project will be an artificial cloud that shows the current weather in a chosen city. If it is raining in that city, rain-like beads will drop out of the cloud (like kinetic rain); the same will happen with snowflakes if it is snowing. If it is sunny, LEDs will display a bright light or a warm orange glow, etc. The data will come from a computer, which will update every 10-24 hours (once we figure out how to do it).
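Since we have not settled on a data source yet, here is a minimal sketch (plain C++, with made-up names) of how a reported weather condition could be mapped to the cloud's behavior. The condition strings and mode names are placeholders, not from any real weather API:

```cpp
#include <string>

// Hypothetical actuator modes for the cloud; the names are ours.
enum CloudMode { DROP_BEADS, DROP_SNOWFLAKES, WARM_LEDS, IDLE };

// Map a weather condition string (as a weather service might report it)
// to the cloud's behavior. Unknown conditions fall back to IDLE.
CloudMode modeFor(const std::string& condition) {
    if (condition == "rain")  return DROP_BEADS;      // motors release the bead "rain"
    if (condition == "snow")  return DROP_SNOWFLAKES; // motors release snowflakes
    if (condition == "clear") return WARM_LEDS;       // LEDs glow bright/warm orange
    return IDLE;
}
```

The computer would fetch the city's weather on its update cycle, run something like this, and send the resulting mode to the Arduino over serial.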

Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers – Response

For me, this reading was genuinely interesting and enjoyable, not only because it is relevant to my IM final project, but also because it helped me understand the different types of computer vision algorithms and the situations in which each one is used.

One idea that stood out to me the most is that “computer vision algorithms can be selected to best negotiate the physical conditions presented by the world, and likewise, physical conditions can be modified to be more easily legible to vision algorithms”. This refers to the bidirectional process by which computer vision works: the environment of the “project” and the algorithm itself need to complement each other. Applied to my project, this means that in addition to writing code that works, I would have to create physical conditions that allow the code to work. Especially because the project is interactive, it has to be highly sensitive to changes caused by user action, and the only way to achieve this is to create an environment that lets the code run reliably.

My Final Project Plan

My plan, as I mentioned in class, is to make a system that allows users to mash up different songs based on their own choices.

Firstly, the screen would be divided into two vertical halves (not visibly), and the controls would differ depending on which hand the user uses, acting on either the ‘karaoke’ side or the ‘acapella’ side.

There are a few controls I would like to give the user:

  • Volume control by moving vertically along the left side for karaoke and the right side for acapella.
  • Pause/play controls in the top right and top left; if the user hovers a hand over the respective text for more than 2 seconds, only that side (karaoke or acapella) is acted on.
  • There will be lists of songs on either side, where the user can hover over a song for more than 2 seconds to play it.
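The hover-for-2-seconds selection described above could be sketched as a dwell timer. This is a rough C++ sketch; the target ids and millisecond clock are assumptions, since I have not picked a Kinect library yet:

```cpp
// Dwell-time selector: returns true once the hand has hovered over the same
// target for at least `thresholdMs` milliseconds. The caller supplies the
// hovered target id (e.g. a song entry, or pause/play) and a clock in ms.
struct DwellTimer {
    int currentTarget = -1;   // -1 means "not over any target"
    long dwellStart = 0;      // time the hand entered the current target

    // Call once per frame; returns true when the dwell threshold is reached.
    bool update(int target, long nowMs, long thresholdMs = 2000) {
        if (target != currentTarget) {   // moved to a new target: restart timer
            currentTarget = target;
            dwellStart = nowMs;
            return false;
        }
        return target != -1 && (nowMs - dwellStart) >= thresholdMs;
    }
};
```

One timer per half of the screen would keep the karaoke and acapella sides independent.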

Things I need

  • Kinect, because I feel it is the only way to get seamless control of the music: someone's free hands in the air controlling volume, pausing/playing each side, and selecting a song.
  • A projector.

The list of songs I have made so far:

Karaoke / Acapella:

  • Earned It – The Weeknd – https://www.youtube.com/watch?v=o-gH2KedX60
  • Somebody I Used To Know – https://www.youtube.com/watch?v=XvVmZmMLojc
  • Lady Gaga – Paparazzi – https://www.youtube.com/watch?v=Hz_9FNImfMs
  • Lady Gaga – Poker Face – https://www.youtube.com/watch?v=9smUUbtgxSM
  • Ed Sheeran – Thinking Out Loud – https://www.youtube.com/watch?v=byXb3KT1-3A
  • Michael Jackson – Billie Jean – https://www.youtube.com/watch?v=eXqBhDAlVCc
  • Clean Bandit – Rockabye – https://www.youtube.com/watch?v=Gl78zFKbQbE
  • Twenty One Pilots – Ride – https://www.youtube.com/watch?v=muSbrUYiqrI
  • Cheerleader – https://www.youtube.com/watch?v=ivJBs_wbOyY
  • Adele – Rolling in the Deep – https://www.youtube.com/watch?v=WppvmpLKS-Y
  • Humble – Kendrick Lamar – https://www.youtube.com/watch?v=AnQESyZisU0
  • Thriftshop – https://www.youtube.com/watch?v=qx668eVJKeo
  • Yodeling Kid – https://www.youtube.com/watch?v=bOZT-UpRA2Y
  • Eminem – Lose Yourself – https://www.youtube.com/watch?v=7_QK8yGjhH0
  • Logic’s type beat – https://www.youtube.com/watch?v=hGAjM4qcWcg
  • Aggressive trap beat – https://www.youtube.com/watch?v=i4zbFSbN_BY
  • Stranger Things soundtrack – https://www.youtube.com/watch?v=a3wGYbq6_Mc
  • Drake – Blessings – https://www.youtube.com/watch?v=gqvFEHlwqVs
  • Ed Sheeran – Shape of You – https://www.youtube.com/watch?v=o71_MatpYV0


Golan Levin’s Notes on Computer Vision for Artists – Response

I really liked the section of the text that discussed the different sorts of projects artists have created. The concept of Krueger’s Videoplace was fascinating: he was one of the first artists to give the entire human body a role in our interactions with computers, and the project serves as an inspiration for my IM final project. I also liked reading about the Suicide Box; I did extra research on the controversial debate around it, and that was really interesting to read as well. Although I liked how the author discussed various projects, explained computer vision algorithms, and showed why physical conditions must be designed in tandem with the vision code, I found his explanations difficult to follow. The text was challenging to read and made too many conceptual leaps for someone who is still relatively new to the material.

Final Project Idea

For my IM final project, I want to make a solar system learning installation that lets the user move a rocket ship around a table to different planets/stars/moons in our solar system; at each one, animations/videos play that teach you about the “station” the rocket ship is currently at. The purpose of this project is educational: it could be used as a teaching tool or simply as an installation at astronomy museums.

Materials I would need:

  • table: obvious reasons
  • projector: so that I can project the image onto the table
  • camera: to track the rocket ship figure around the table

Next steps:

  • understand how TUIO works and/or understand how to track an object using a camera
  • find an image of the solar system that could be projected onto the table
  • find videos/animations that would play at each station
  • make a rocket ship figurine, potentially make the stations as well (making it so that it’s more than just a projection)
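Once the camera tracking works, deciding which station the rocket is parked at could be a simple nearest-neighbor check on its table coordinates. A hypothetical C++ sketch; the station positions and trigger radius are placeholders:

```cpp
#include <vector>
#include <cmath>

// A "station" (planet/moon) position on the table, in camera pixels.
struct Station { double x, y; };

// Given the rocket's tracked (rx, ry), return the index of the nearest
// station within `radius` pixels, or -1 if the rocket is between stations.
int stationAt(double rx, double ry, const std::vector<Station>& stations,
              double radius = 40.0) {
    int best = -1;
    double bestDist = radius;   // anything farther than this doesn't count
    for (size_t i = 0; i < stations.size(); ++i) {
        double d = std::hypot(stations[i].x - rx, stations[i].y - ry);
        if (d <= bestDist) { bestDist = d; best = (int)i; }
    }
    return best;
}
```

A non-negative result would trigger that station's video/animation.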

Train to Dubai – IM final project proposal by Ross Jiang and Alex Wuqi Zhang

Our final project is a first-person shooter game in which the protagonist Ross tries to save his friend Alex from a zombie-infested train headed to Dubai. The player uses a toy gun to aim and shoot in front of a projected screen. The player can pull the trigger to shoot, or select a button and move the gun around to move the cursor on the screen. We will probably use a camera to capture the infrared laser emitted by the gun (we definitely need some technical expertise from Aaron here).

Our game starts with a menu showing the title and two options, “START” and “ACKNOWLEDGEMENTS”. Clicking “START” brings up a few lines of typewriter-effect text that tell the background story. Then a video (or animation) of the last exchange of text messages between Ross and Alex displays on the screen (essentially a cry for help from Alex).

Afterwards, the actual game starts: a train arrives at a station and zombies pop out of the windows, doors, and top of the train. The player has a limited amount of time to eliminate each zombie by moving the cursor to aim and pulling the trigger to shoot. The player starts with three lives, and each failure to eliminate a zombie costs one. Depending on how much time we have, we might add multiple levels. If the player survives the game (clears all zombies from the train and saves Alex), the player is congratulated and has the option to restart.

IM Final Project Idea

Materials needed:

  • laptop
  • projector
  • little cubicle-like space that can be constructed from panels – or any other ideas that you have?

Idea:

  1. The program will ask the user to input difficult feelings they have been experiencing. They must input at least 5 feelings, each input cannot be longer than two words.
  2. The program will then show them a diagram of a body, and they must select specific areas of the body where they feel the distressing emotions they have stated already. The body sections they can choose from will be divided into four parts: head, chest, stomach and legs. This step is important for users to be more thoughtful of their feelings by recognizing where they are feeling it. Recognizing where an emotion is felt can help individuals be more mindful of themselves.
  3. There will be a button asking whether the user is “ready to start the meditation”. Once they click the screen to start, a body sensor is activated and they see their body mirrored on a projection, with the specific feelings they entered shown on the specific parts of their body. After 10 seconds, the words start vibrating and slowly flow out of the body while shaking faster and faster. It is important for the words to flow out of the body, as this visually distances users from their emotions (to literally take a step back) and perhaps offers a new perspective. The text then disperses into dust particles and the screen goes black. After five seconds, the small cubicle brightens again. The transition from darkness to light was inspired by a quote, “you need the darkness to see the light”, which I wanted to represent visually. Throughout the whole project, suspenseful music plays in the background.
  4. The whole purpose of this project is to make users more mindful of their emotions, provide a new perspective through distance and some sort of short-term relief from the difficult feelings they are experiencing.
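The shaking in step 3 could be animated as a jitter whose amplitude and speed both ramp up over the 10 seconds. A rough C++ sketch; all the constants are placeholders to tune:

```cpp
#include <cmath>

// Jitter offset (in pixels) for a word at time t seconds into the meditation.
// Both the amplitude and the speed of the shake grow as t approaches 10 s;
// after that, the words disperse instead of shaking.
double jitterOffset(double t) {
    if (t < 0.0 || t >= 10.0) return 0.0;
    double progress = t / 10.0;                // 0 -> 1 over the 10 seconds
    double amplitude = 2.0 + 18.0 * progress;  // shake grows stronger
    double speed = 5.0 + 45.0 * progress;      // shake grows faster
    return amplitude * std::sin(speed * t);
}
```

Each rendered word would be drawn at its base position plus this offset every frame.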

Computer Vision – Response

This was a fairly simple read that demystified computer vision for me. Low-level computer vision algorithms that work by detecting motion, presence, or a brightness threshold are actually not that far beyond my knowledge. Moreover, many plug-ins and libraries are available for artists who wish to use computer vision in their multimedia installations.

This reading also gave me some ideas for how to make our first-person shooter game. The player can be given a laser gun, and the camera would capture the single brightest point and translate it onto the screen, so that by moving the gun in hand, the player can move the cursor.
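That brightest-point idea can be sketched directly. A minimal C++ version, assuming the camera frame arrives as an 8-bit grayscale buffer (row-major, width × height):

```cpp
#include <vector>
#include <utility>

// Find the brightest pixel in a grayscale frame (values 0-255).
// In the game, this point (the laser dot seen by the camera) would
// become the cursor position on screen.
std::pair<int, int> brightestPixel(const std::vector<unsigned char>& frame,
                                   int width, int height) {
    int bestX = 0, bestY = 0, best = -1;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            int v = frame[y * width + x];
            if (v > best) { best = v; bestX = x; bestY = y; }
        }
    return {bestX, bestY};
}
```

In practice an infrared filter on the camera would help make the laser dot the unambiguous maximum.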

Serial Communication Project

I decided to use one of my previous Processing sketches and incorporate serial communication into it. The sketch I used was my self-portrait. I wanted to create something that could be used at a party: when the light dims, the kaleidoscope-like portrait fades onto the wall. The project is hyperlinked here. The code is included below; I used the handshake technique between Processing and Arduino:

Self-portrait:

void setup(){
  size(480,640); //w,h
  frameRate(5);
  
  //fill(255,192,203);
  //ellipse(240,400,100,100);
}

float radius;

void draw(){
  background(0, 0, 0);
  
  fill(0);
  stroke(255);
  ellipse(240,325,730,800);
  stroke(255);
  ellipse(240,325,630,700);
  stroke(255);
  ellipse(240,325,530,600);
  stroke(255);
  ellipse(240,325,430,500);
  stroke(255);
  ellipse(240,325,330,400);
  stroke(255);
  ellipse(240,325,230,300);
  
  
  fill(255);
  stroke(0);
  ellipse(240,200, 150, 190);
  
  fill(255);
  ellipse(115,350, 50, 80);
  
  fill(255);
  ellipse(365,350, 50, 80);
  
  fill(0);
  ellipse(100,370,5,5);
  
  fill(0);
  ellipse(379,370,5,5);
 
  
  fill(random(0, 204), random(0, 255), random(0, 255));
  ellipse(240,330,275,350); //x,y,w,h

  

  fill(255);
  ellipse(240,360,275,350); //x,y,w,h
  fill(random(0, 255), random(0, 255), random(0, 255));
  ellipse(300,340,75,75); //glasses 1
  fill(random(0, 255), random(0, 255), random(0, 255));
  ellipse(180,340,75,75); //glasses 2
  line(216, 340, 264, 340); //middle portion of glasses
  line(104, 340, 141, 340); //side portion of glasses
  line(336, 340, 375, 340); //side portion of glasses

fill(255);
stroke(0);
arc(240, 410, 50, 35, PI, PI+QUARTER_PI);
arc(225, 410, 20, 20, HALF_PI, PI);

arc(270, 450, 170, 50, HALF_PI, PI);
arc(170, 440, 50, 35, PI, PI+QUARTER_PI);

arc(260, 485, 100, 10, HALF_PI, PI);
arc(260, 470, 50, 30, PI+QUARTER_PI, TWO_PI);
arc(210, 460, 60, 30, PI+QUARTER_PI, TWO_PI);

fill(0);
ellipse(225, 410, 3, 3);

//fill(0);
//stroke(255);
//ellipse(240,150,115,130);

stroke(0);
line(257, 410, 255, 380);


}

Arduino:

bool ledState=LOW;
int knobpin = A1;
int led=3;

void setup() {
  Serial.begin(9600);
  pinMode(led, OUTPUT);
  Serial.write(0);
  
  // put your setup code here, to run once:

}

void loop() {
  if (Serial.available() > 0) {     // if the Arduino has received something
    int inByte = Serial.read();     // reading a byte removes it from the buffer
    int readvalue = analogRead(knobpin);              // knob position, 0-1023
    int writevalue = map(readvalue, 0, 1024, 0, 255); // scale to the PWM range
    Serial.write(writevalue);       // reply to Processing (the handshake)
    analogWrite(led, writevalue);   // dim the LED with the knob
    delay(1);
  }
}
    
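For reference, the map() call above follows Arduino's documented integer-scaling formula. A plain C++ equivalent, showing how the 10-bit knob reading becomes an 8-bit PWM value:

```cpp
// Plain C++ version of Arduino's map(): linearly rescale x from the range
// [inMin, inMax] to [outMin, outMax] using integer math (fractions truncate).
long arduinoMap(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}
```

So a knob reading of 512 (about half of 1024) becomes 127, roughly half of the 0-255 PWM range.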

I wanted to add more elements to this project, such as music through an MP3 shield once the Processing sketch is revealed. I also wanted to add a flex sensor that would simulate a wristband: every time it moved (assuming it moved because people were dancing), the glasses would start changing color (as they already do). I have had tremendous difficulty coding this, and because I got slightly injured I didn’t have the time to go back and work on this aspect. If I am able to complete my weekly project for this week on time, I really want to spend some time enhancing this project.

What Computing Means To Me

I originally took this course hoping to gain some understanding of how computers and machines work (and I think I have, ish). But at the same time, ironically, instead of simply being amazed at all the cool stuff computers and programming allow us to do, I have come to appreciate human qualities more. This might sound very random to many people, and I don’t know how accurate it is to describe programming as leaving the time-consuming processes to computers, which make fewer mistakes than humans do. But my point is that getting a glimpse of how computers work let me see the differences between humans and machines, and I came to appreciate both the human capabilities I took for granted and the technology itself.

When I tried to write a very simple program of fewer than 20 lines with the limited knowledge that I have, an error I could not work out kept occurring. I took multiple approaches to fix it. First, I went over the code line by line to see what wasn’t working. Then I opened a new window and started writing from scratch. Finally, I got someone else to write the code for me and tried to replicate it. After failing to write 20 lines of code correctly, I realised that I had typed ‘colour’ instead of ‘color’. The fact that computers, which can carry out such complex processes without making a mistake, could not recognise ‘colour’ as ‘color’ made them seem ridiculous to me, but at the same time it made me realise that humans are capable of a lot of things that do not come easily to machines.