Midterm Documentation

Initially I was at a loss for what to do, but in the class discussion the idea of making some sort of Rube Goldberg-esque ball game came up. Though it wasn’t stated at the time, someone later referred to it as inverse pinball, and that is an accurate, succinct description.

My initial idea was to have a fully 3D, tower-like set of paths and various controls to direct the ball along them, as in the image below. That type of ball-block set was the primary aesthetic inspiration, even as the idea itself changed.

The first and most significant change to this idea came from the complexity of the project I originally envisioned. While technically equivalent to the updated idea, the 3D version would have needed to be large in order to offer enough choices and moments of interaction to be worthwhile, and the additional size would have demanded additional construction work without demonstrating much additional technical prowess. Instead, I decided to make the game flat and wall-mounted, and replaced the choice of paths with various pitfalls.

In response to this change, it made sense to use a different control mechanism. In my original concept the player would control the game using various inputs located on the game itself, many requiring physical contact. With the game now wall-mounted, it no longer made sense to require the player to be in such close proximity or to use such precise controls. Instead, all of the mechanisms are controlled by a single input, a light sensor with a very low threshold. The following images show the game at various stages of construction; unfortunately, I disassembled the electronics before collecting video documentation.

 1

2

3

4

Each of the servo motors controls a flap in the path, as can be seen in picture 3. They are set so that at one extreme of their motion they create a smooth path, while at the other the flap is fully open. Unfortunately, during construction the cardboard lost much of its structure, so pieces that initially would return to true now remain in the open position. In addition, there is a solenoid that, if fired with proper timing, should propel the ball across a small gap in the path. The ball itself is a ping-pong ball wrapped in tinfoil. In the game's current implementation the tinfoil serves no purpose, but mechanisms have been partially implemented to allow the result of the game (win or loss) to be displayed. The game collects this data but does not make use of it currently.

While I am content with this project as a proof of concept/early prototype, it is very flawed, most obviously in level design but in almost every aspect. However, I do intend to remake it, using a more permanent material such as wood, implementing a scoring system, improving user interaction and accessibility, and potentially adding additional features, such as markers that indicate when the player passes each challenge. As we study using the computer to communicate with the Arduino I may have additional inspiration for features, but my current vision is intentionally minimalist.

 

(The following is the code I used.)

#include <Servo.h>
Servo startServo;
Servo servo1;
Servo servo2;
Servo servo3;
Servo servo4;
Servo servo5;
const int button = A0;      // light sensor that triggers the mechanisms
const int solenoid = 13;    // solenoid used to launch the ball across the gap
int background;             // ambient reading of the light sensor, taken at startup
boolean game = false;       // true while a game is in progress
unsigned long startime = 0; // time (ms) at which the current game started
int lastgame = 0;           // result of the previous game: 1 = win, 2 = loss
boolean moving = false;     // true while the servos are mid-movement
const int winning = A1;     // sensor used to detect a win
int baseline;               // resting reading of the win sensor, taken at startup
void setup() {
  Serial.begin(9600);
  startServo.attach(3);
  servo1.attach(9);
  servo2.attach(11);
  servo3.attach(5);
  servo4.attach(6);
  servo5.attach(10);
  background= analogRead(button);   // record the light sensor's ambient level
  baseline= analogRead(winning);    // record the win sensor's resting level
  startServo.write(0);
  servo1.write(180);
  servo2.write(0);
  servo3.write(0);
  servo4.write(0);
  servo5.write(0);
  pinMode(solenoid,OUTPUT);
  //digitalWrite(solenoid,HIGH);
}

void loop() {
  if(!moving){
    // a trigger while no game is running starts a new game and moves the start servo
    if(!game&&activated()){
      game=true;
      startServo.write(180);
      startime=millis();
    }
    // any trigger while a game is running flips the flaps and fires the solenoid
    else if(activated()){
      moveServos();
    }
    // after 10 seconds without a win the game counts as a loss and everything resets
    if(game && millis()>(startime+10000)){
      lastgame=2;
      game=false;
      setup(); // reuse setup() to reset the servos and re-read the sensor baselines
    }
    // between games, show the result of the previous game
    if(!game){
      if(lastgame==1){
        displayWin();
      }
      else if(lastgame==2){
        displayLoss();
      }
    }
  }
  // the game ends successfully as soon as the win sensor is triggered
  if(win()){
    lastgame=1;
    game=false;
    setup();
  }
}
// result display: the outcome is recorded in lastgame but not yet shown to the player
void displayWin(){
}
void displayLoss(){
}
boolean activated(){
  // the trigger fires when the light sensor reads less than half of its startup value
  return analogRead(button)<background/2;
}
boolean win(){
  // the ball has reached the end when the win sensor reads well above its baseline
  boolean won = analogRead(winning) > (baseline + 250);
  if (won) {
    Serial.println("won!");
  }
  return won;
}
void moveServos(){
  moving= true;
  Serial.println("here"); // debug output
  // set every flap to its first extreme and the solenoid low
  servo1.write(180);
  servo2.write(0);
  servo3.write(0);
  servo4.write(0);
  servo5.write(0);
  digitalWrite(solenoid,LOW);
  delay(500);
  // half a second later, swing every flap to the opposite extreme and set the solenoid high
  servo1.write(0);
  servo2.write(180);
  servo3.write(180);
  servo4.write(180);
  servo5.write(180);
  digitalWrite(solenoid,HIGH);
  delay(500);
  moving= false;
}

 

Mr. Greenie McGreen Reacts

Since I really enjoy using servos (I like them more than LEDs), I wanted to create a stupid pet trick that would use at least more than one of them. I originally just wanted to have an LDR that would take into account the light levels the plant was getting, and if there wasn’t enough sunlight, the servo, acting as the eyebrows of the face on the pot, would point more downwards to depict an angry/sad face – like in the image below.
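
Purely as a rough sketch of that idea (not code I actually wrote), it might have looked something like the following, where the pins, threshold, and angles are all assumptions:

#include <Servo.h>

// Hypothetical sketch of the original idea: read an LDR and droop the
// eyebrow servo when the plant is not getting enough light.
// The pins, threshold, and angles below are assumptions.
Servo browServo;
const int ldrPin = A0;
const int darkThreshold = 300; // below this reading, assume not enough sunlight

void setup() {
  browServo.attach(9);
}

void loop() {
  int light = analogRead(ldrPin);
  if (light < darkThreshold) {
    browServo.write(30);  // eyebrows point downwards: angry/sad face
  } else {
    browServo.write(90);  // neutral face
  }
  delay(500);
}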

But then, during the class discussion, I found two other alternatives that would be cooler: 1) detect the water level of the plant's soil and have that trigger the servo to move, or 2) anthropomorphise the plant and attach a sound detector/microphone that would sense sound next to it, so that it would be happy when it hears someone talking to it.

I started playing around with the soil moisture sensor first because I was the most unfamiliar with it. After some confusion about how to connect it, I finally got it to work and then watered the plant so that the soil moisture level would change. Below is a video of the serial monitor showing the moisture level.

The code for this is shown here:

int val = 0;       // value for storing the moisture reading
int soilPin = A0;  // analog input for the soil moisture sensor
int soilPower = 7; // digital pin used to power the soil moisture sensor

// Rather than powering the sensor through the 3.3V or 5V pins,
// we'll use a digital pin to power the sensor. This will
// prevent corrosion of the sensor as it sits in the soil.

void setup()
{
  Serial.begin(9600); // open serial over USB

  pinMode(soilPower, OUTPUT);   // set D7 as an output
  digitalWrite(soilPower, LOW); // set to LOW so no power is flowing through the sensor
}

void loop()
{
  Serial.print("Soil Moisture = ");
  // get the soil moisture value from the function below and print it
  Serial.println(readSoil());

  // This one-second interval is used so you can test the sensor and see it change in real time.
  // For in-plant applications, you will want to take readings much less frequently.
  delay(1000); // take a reading every second
}

// This function reads the soil moisture content
int readSoil()
{
  digitalWrite(soilPower, HIGH); // turn D7 "on"
  delay(10);                     // wait 10 milliseconds
  val = analogRead(soilPin);     // read the SIG value from the sensor
  digitalWrite(soilPower, LOW);  // turn D7 "off"
  return val;                    // return the current moisture value
}

 

After making sure that I knew how to work the soil moisture sensor, I moved on to programming my servos. At this point, I realised that it would be a lot more fun if the servos could react to sound (cooler and more distinctive for the final product). I therefore decided to move towards using the sound detector. Like the soil moisture sensor, I had to figure out how to connect the wires, where to solder headers, and so on.

After making sure that it worked, I tried to program the servos so that they would react to changes in the sound detector. This process took many, many hours because I couldn’t figure out why, even when there was no sound, the sound detector would register one, leaving the servo in a constant movement phase with no time when it was not moving. Using the serial monitor, I realised that it was stuck in a constant loop of “quiet” and “loud”.

Messing around with the code, I realised that I couldn’t tell the servo to move to 0, 90, or 180 degrees or it would malfunction; I could only use angles other than those three. I still don’t understand why that is, but my code worked after that. The video for that is shown below.
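
Purely as an illustration of that workaround (my actual sketch follows further below), one way to express it is a small helper that keeps commanded angles away from those values; the margin and substitute angles here are assumptions, not code I actually used:

// Hypothetical helper: nudge a requested angle away from 0, 90, and 180,
// which made the servos misbehave in this build.
int safeAngle(int angle) {
  angle = constrain(angle, 5, 175);  // stay clear of the end stops
  if (abs(angle - 90) < 5) {
    angle = (angle < 90) ? 85 : 95;  // step away from dead centre
  }
  return angle;
}

// usage: Servo1.write(safeAngle(90)); // actually writes 95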

The code for this is shown below:

 

#include <Servo.h>

#define PIN_GATE_IN 2
#define IRQ_GATE_IN 0
#define PIN_LED_OUT 13
#define PIN_ANALOG_IN A1

// interrupt handler for the gate output (not currently attached)
void soundISR()
{
  int pin_val;

  pin_val = digitalRead(PIN_GATE_IN);
}

int servoPin = 10;
int servoPin2 = 9;
Servo Servo1;
Servo Servo2;

void setup()
{
  Servo1.attach(servoPin);
  Servo2.attach(servoPin2);
  Serial.begin(9600);

  // configure the gate input; the interrupt is left disabled
  pinMode(PIN_GATE_IN, INPUT);
  // attachInterrupt(IRQ_GATE_IN, soundISR, CHANGE);

  // Display status
  Serial.println("Initialized");
}

void loop()
{
  int value;

  // Check the envelope input (wired to A5 here)
  value = analogRead(A5);

  // Convert the envelope value into a message and an eyebrow position
  Serial.print("Status: ");
  if(value <= 10)
  {
    Serial.println("Quiet.");
    Serial.println("In Quiet");

    // neutral eyebrow position
    Servo1.write(45);
    Servo2.write(125);
    delay(250);
  }
  else if(value >= 11)
  {
    Serial.println("Loud.");
    Serial.println("In Loud");
    // raised, "shocked" eyebrow position
    Servo1.write(10);
    Servo2.write(160);
    delay(250);

    //Servo1.write(90);

    delay(100);
  }

  // pause between readings
  delay(750);
}

 

It was then time to build the actual hardware to show what I wanted to do with the eyebrows for the plant pot. After measuring where the servos would sit in the pot, I drilled two holes into the side of the pot where they would stick out. I realised that the “head” of each servo was too short to reach the outside of the pot, so I attached two small pieces of plastic straw to the servo tips. After that, I experimented with the angles the servos should turn through so that the eyebrows would move in the directions I wanted.

Once the angles were okay, I glued two pieces of popsicle stick to the straws so that the eyebrows would actually be attached to the servos. I then gave the pot two eyes and a mouth.

I put a flat layer of hard styrofoam on top of the servos and placed a real plant on top, so that the fake pot could actually do something useful: hold a real plant, which was the whole point of the project.

I then decided to test the project out. The video of my final project in action is shown below:

And here’s my plant, Mr. Greenie McGreen reacting to Adele’s “Hello” because why not.

So yeah, I didn’t end up using the moisture sensor, but at least I know how it works now!

Also, I originally wanted the faces to change from an angry/sad face to a happy face, but the shocked face worked out better: the plant just recognises that someone is talking to it, and if someone were to swear at it *ahem Adham*, it would show a shocked face rather than a happy one. So, I guess it worked out!

NOTE: I basically created the base that you put your plant pot onto, not the plant itself.

Responses: A Brief Rant on the Future of Interaction Design Response

Touch screens are actually great — for now. They facilitate immediate, responsive, and efficient interactions with our smartphones. In recent years, companies have been trying to move towards voice control as an alternative or supplemental mode of input. Though the technology has been available for years, voice control still does not feel as convenient, efficient, and useful as touch screens. Sure, it’s lovely to be able to holler at Siri and ask her to read you your notifications while you’re in the shower. However, I still do not see voice control becoming something that users could fully integrate and use to replace touch gestures. For one thing, it’s definitely not as discreet or private as touch, and that would be a problem in public. Until the next great interactive technological advancement takes place, the touch screen is probably the best option we have.

A Brief Rant on the Future of Interaction Design Response

The future depicted in the video is cool, but not because of the technology exhibited in it. What attracted me the most was the bezel-less, minimalistic, ultra-modern design, which I found very aesthetically pleasing. I strongly agree with the author’s observation that the featured technology really is a rather small increment in the functionality department. The design makes everything look fresh and much more interesting, but if you look closely, we’re essentially doing the exact same thing we’ve been doing all this time. Victor’s suggestion that we should pay more attention to our human capabilities (beyond just our hands) and go from there is definitely a good start. Perhaps one day in the near future our “smart phones” will cease to be handheld devices. Perhaps the display and controls could be seamlessly integrated into our natural vision via high-tech contacts — just like those featured in season 1, episode 3 of Black Mirror.

Assignment #4 – Lubnah & Kristopher

Creation of the rainstick:

We attached two toilet paper rolls together. Initially, we used tape to cover the open ends of the roll. However, that didn’t emphasise the sound clearly; it sounded quite dull when the aluminium balls hit the surface of the tape, so we ended up using aluminium to emphasise the sound further. We created a stand for the rainstick; however, the rainstick was heavier than the stand, so we needed a platform, some sort of rectangular structure with weight in the empty space, to balance the overall weight of the structure.

 

1. Tissue paper rolls
2. Creating the “drum-skin” for the rainstick
3. Rolled aluminium pieces to create the rain-drop effect
4. Final structure

Ways of improving the project:

We attached a piezo element to one end of the rain stick to complement the sound of the rain stick. Ideally the piezo element would have reverberated with the end cap to turn the entire rain stick into a resonance chamber, but we could not get the aluminium sufficiently tight.
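
The piezo code itself is not included here; the following is only an illustrative sketch of how a piezo element on a digital pin could add a soft patter under the rain stick, with the pin number, frequency, and timings all assumed:

// Hypothetical sketch: drive a piezo element with tone() so it adds a quiet,
// rain-like patter. The pin number, frequency, and timings are assumptions.
const int piezoPin = 8;

void setup() {
  pinMode(piezoPin, OUTPUT);
}

void loop() {
  tone(piezoPin, 880, 30);  // short click from the piezo
  delay(random(60, 200));   // irregular spacing to suggest rain drops
}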

We used the example code for the servo, modified to pause at each end and allow the rain stick to come to equilibrium. Ideally we would have rungs throughout the rain stick to increase the noise it makes, but they are not present in this version.
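
A minimal sketch of what that modification could look like, based on the standard Servo sweep example; the pin, angles, and three-second pause are assumptions rather than our exact values:

#include <Servo.h>

// Hypothetical version of the sweep example with a pause at each end,
// giving the rain stick time to come to equilibrium before tipping back.
Servo tipServo;

void setup() {
  tipServo.attach(9); // assumed servo pin
}

void loop() {
  tipServo.write(0);   // tip the rain stick one way
  delay(3000);         // pause so the filling settles
  tipServo.write(180); // tip it the other way
  delay(3000);         // pause again before repeating
}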

The musical melody was made extremely last minute and was quite upbeat; it did not provide the relaxing tone needed for an effective soundscape. Next time, we would ideally spend more time on the quality of the musical piece so that it genuinely resembles a sleep soundscape.

A Brief Rant on the Future of Interaction Design & Responses – Response

Before reading the post, my initial response to the video was excitement for the technological advancements to come and curiosity about how they would change people’s everyday lives. However, reading the first few paragraphs reminded me of an opinion I have consistently held for a few years – I oftentimes do not appreciate touch screens. Here are a few anecdotes that describe my past experience with touch screens: I got rid of my first touch-screen phone because I could not deal with not being able to text without looking at the keyboard on the screen; I got rid of my iPad because I lacked the patience it takes to type on a screen with the keyboard covering half of it; I got rid of my ebook reader because swiping to flip a page just felt wrong. Remembering my history with touch screens made me wonder why my initial response was excitement for a future of interaction with a whole lot of touch screens.

What the touch screens I disliked have in common is that I was unable to use my fingers to figure out where to place a finger to induce the intended reaction. To be more specific, a touch screen lacks the physical difference, in the sense of touch, between a key the user wants to press, a key one does not wish to press, and the gap between two different keys. In other words, the user will not know whether an entry was successful until the item appears on the screen. This means the user has to fix their eyes on the screen for at least two reasons: 1. to locate the key the user wants to press; 2. to make sure the user has not made a typo. The problem of having to rely solely on vision could be a critical limitation for individuals with visual impairment using touch-screen devices. It could be said that a touch screen unnecessarily eliminates the sensory information effortlessly collected by the fingers and increases the need for visual information, leading to an overall increase in the attention required for operation.

Despite everything I have said, I don’t mean to say that touch screens are bad – iPhone good.

Response to Responses: A Brief Rant on the Future of Interaction Design

I really like the author’s enthusiasm for haptic technology (or any kind of technology that could create the described effect of a more realistic interaction). After rewatching the video, it does feel like all the gadgets there are just better, bigger, thinner, and more transparent iPhones. Perhaps the author is describing something so futuristic that very few of us can even imagine what it is.

Let’s say that in the future some sort of nano robots can let you “flip” a page, or let you “touch” your partner’s hand while you are thousands of miles away from each other. However, is that really a “page” you are flipping and a human hand you are touching? They sure feel like they are. Nobody would want to read an actual book, because your computer would feel like a real book in every way, to the point that people would fail to distinguish a computer-simulated feeling from a “real” one. Your computer is not just a thing; it becomes everything. Would that necessarily be a bad thing? I don’t know. But it will completely change the way humans perceive the world.

A Brief Rant on the Future of Interaction Design – RESPONSE

The author is “ranting” about some people’s expectations of what our designs will look like in the future. He believes that the “Pictures Under Glass” interface, like those on our phones, TVs, and computers, is nothing but a transitional phase toward more interactive and “sensory” designs in the future.

I have mixed feelings about the author’s opinion. He points out that we should not limit our interaction to just our fingertips, and that the sensations we get from interacting with real-life objects give us far more information than a touch screen does. We are already so used to screens, and we all know what to do when we interact with phones or computers. In addition, touchscreens are very versatile. However, I do enjoy the sensation of flipping a page when I read, or holding a “gun” when I play shooting games, and, not to mention, I REALLY miss writing Chinese with a pen rather than a keyboard. Anyway, I will be very excited if interaction with interfaces of any kind resembles interaction with real objects in the future.