User:Pas3ad/ENG100/Project 1

Project Preference
''Your class was given a list of projects to choose from. Please list the three projects you want to work on, from highest to lowest.''

Problem Statement
''We are working on robotic vision. We are trying to design a machine that can detect objects around it using either an Arduino or a Raspberry Pi, together with software that lets us program the board to see.''

Project Plan
''First, we want to find out which software is easiest to work with, so it will be easy for the next group to understand. Second, we need to decide whether to use an Arduino or a Raspberry Pi. Third, we are deciding what the robot should follow: a specific color, a movement, or a shape. Fourth, we want to decide what the outside of the robot, meaning its structure, will look like.''

Week 1 Narrative
''I went online and did a lot of research to try to understand the new software we were introduced to through our project. I decided to start with MATLAB. I read about projects done by different people at different colleges and found that the MATLAB language is hard to pick up quickly. Our main goal is to find software that is easy to use and understand, but MATLAB offers so many different ways of coding that errors can show up at the end of every run. I also found it hard to fix the code once it has been typed, run, and worked the first time. For example, say you run code that turns on an LED and it works; then you add different materials and equipment and put in new code. After you run everything, you find that the LED will not light anymore. MATLAB will not tell you where the problem in the code is; you have to go through all of the new code and check that it does not conflict with the old code. There are also functions that look similar in definition but have to be used in different ways, and I could not find any information to help me with that. I told my team that MATLAB would take us more than the four weeks we have just to understand, so we should use different software.''

Week 2 Narrative
''I bought a PDF copy of a book called "Making Things See" and I am reading through it, trying to understand the different ways of using a new piece of software called Processing. I have no experience at all with Processing, but according to the book the code is simple, which is one of our main goals for this project. I found great example code that could help us with our project using a Kinect. I do not own one, but I found one at the school and asked if we could use it. I was told that we should use a regular camera instead, because the project is supposed to be something anyone can build with cheap materials. I am now working on changing the code so that instead of the Kinect's depth sensor and built-in camera it uses just a regular camera. So far it has been really difficult: I do not have a computer good enough to run the software without lagging, and I do not have an external camera to test with. I thought we could have used the Kinect, but I was told on Friday not to, so this weekend and next week I am working on new code that can run easily with whatever camera we end up using.''

Week 3 Narrative
This week I made a lot of progress. I downloaded the newest version of Processing 2, which is 2.0.1, and we downloaded the blob detection example, but it didn't work. Thanks to Professor Edelen, who found that there was a bug in the new version of Processing and recommended that we download version 2.1. After trying the example on Processing 2.1, I found that one word needed to be capitalized: the word Capture in the first few lines of the code. I ran the code, but nothing changed; it says that it is loading, but no new window opens with the camera. Reading the comments, I found that this example was written for Processing 1.5.1, so I downloaded that, changed the capitalized letter, and ran it. It still didn't work and gave me an error saying that I needed to download QuickTime 7. I downloaded it, hoping that would fix things, but it still gave me the same error about downloading or reinstalling QuickTime 7. I tried again and it failed again. I did some research and found that these problems happen with the 64-bit Windows build of Processing 2.1, and that I should download the 32-bit build instead. After trying the new build, it ran for about 20 minutes and finally gave an error that was new to me, though my partner had already gotten the same one: "The requested resolution of 160x120, 15/1fps is not supported by the selected capture device." Later we found that there is code you can run on your computer that will tell you which resolutions your camera supports. Here is the code:

 import processing.video.*;
 
 Capture cam;
 
 void setup() {
   size(640, 480);
 
   String[] cameras = Capture.list();
   if (cameras.length == 0) {
     println("There are no cameras available for capture.");
     exit();
   } else {
     println("Available cameras:");
     for (int i = 0; i < cameras.length; i++) {
       println(cameras[i]);
     }
   }
 }
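Depending on your platform and Processing version, the strings printed by Capture.list() often encode the resolution in a form like name=USB Camera,size=640x480,fps=30 (that exact format is an assumption; check your own output). Pulling the size out of those strings is ordinary string handling, so here is a sketch of it in plain Java, with class and method names of my own choosing:

```java
import java.util.ArrayList;
import java.util.List;

public class CameraListParser {
    // Extract the "size=WxH" field of an entry such as
    // "name=USB Camera,size=640x480,fps=30" and return {W, H}.
    // Returns null if no size field is present.
    public static int[] parseSize(String entry) {
        for (String field : entry.split(",")) {
            if (field.startsWith("size=")) {
                String[] wh = field.substring(5).split("x");
                return new int[] { Integer.parseInt(wh[0]), Integer.parseInt(wh[1]) };
            }
        }
        return null;
    }

    // Collect every resolution the camera reports, in order.
    public static List<int[]> supportedSizes(String[] entries) {
        List<int[]> sizes = new ArrayList<>();
        for (String e : entries) {
            int[] s = parseSize(e);
            if (s != null) sizes.add(s);
        }
        return sizes;
    }

    public static void main(String[] args) {
        String[] cameras = {
            "name=USB Camera,size=640x480,fps=30",
            "name=USB Camera,size=160x120,fps=15"
        };
        for (int[] s : supportedSizes(cameras)) {
            System.out.println(s[0] + "x" + s[1]);
        }
    }
}
```

Whichever size you pick from the list is the one to pass to new Capture(this, w, h) in the sketch, so the "requested resolution is not supported" error cannot come back.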

After you run the code you will get more than one resolution, as I did on my computer. I took one of them, changed the resolution in the blob detection sketch, and it worked. Here is the new code that I ended up with:

 // - Super Fast Blur v1.1 by Mario Klingemann
 // - BlobDetection library
 
 import processing.video.*;
 import blobDetection.*;
 
 Capture cam;
 BlobDetection theBlobDetection;
 PImage img;
 boolean newFrame = false;
 
 // ==================================================
 // setup()
 // ==================================================
 void setup() {
   // Size of applet
   size(640, 480);
   // Capture
   cam = new Capture(this, 640, 480);
   // Comment the following line if you use Processing 1.5
   cam.start();
   // BlobDetection
   // img which will be sent to detection (a smaller copy of the cam frame)
   img = new PImage(80, 60);
   theBlobDetection = new BlobDetection(img.width, img.height);
   theBlobDetection.setPosDiscrimination(true);
   theBlobDetection.setThreshold(0.2f); // will detect bright areas whose luminosity > 0.2f
 }
 
 // ==================================================
 // captureEvent()
 // ==================================================
 void captureEvent(Capture cam) {
   cam.read();
   newFrame = true;
 }
 
 // ==================================================
 // draw()
 // ==================================================
 void draw() {
   if (newFrame) {
     newFrame = false;
     image(cam, 0, 0, width, height);
     img.copy(cam, 0, 0, cam.width, cam.height,
              0, 0, img.width, img.height);
     fastblur(img, 2);
     theBlobDetection.computeBlobs(img.pixels);
     drawBlobsAndEdges(true, true);
   }
 }
 
 // ==================================================
 // drawBlobsAndEdges()
 // ==================================================
 void drawBlobsAndEdges(boolean drawBlobs, boolean drawEdges) {
   noFill();
   Blob b;
   EdgeVertex eA, eB;
   for (int n = 0; n < theBlobDetection.getBlobNb(); n++) {
     b = theBlobDetection.getBlob(n);
     if (b != null) {
       // Edges
       if (drawEdges) {
         strokeWeight(3);
         stroke(0, 255, 0);
         for (int m = 0; m < b.getEdgeNb(); m++) {
           eA = b.getEdgeVertexA(m);
           eB = b.getEdgeVertexB(m);
           if (eA != null && eB != null)
             line(eA.x * width, eA.y * height,
                  eB.x * width, eB.y * height);
         }
       }
       // Blobs
       if (drawBlobs) {
         strokeWeight(1);
         stroke(0, 0, 400); // this changes the color of the box (r,g,b)
         rect(b.xMin * width, b.yMin * height,
              b.w * width, b.h * height);
       }
     }
   }
 }
 
 // ==================================================
 // Super Fast Blur v1.1 by Mario Klingemann
 // ==================================================
 void fastblur(PImage img, int radius) {
   if (radius < 1) return;
   int w = img.width;
   int h = img.height;
   int wm = w - 1;
   int hm = h - 1;
   int wh = w * h;
   int div = radius + radius + 1;
   int r[] = new int[wh];
   int g[] = new int[wh];
   int b[] = new int[wh];
   int rsum, gsum, bsum, x, y, i, p, p1, p2, yp, yi, yw;
   int vmin[] = new int[max(w, h)];
   int vmax[] = new int[max(w, h)];
   int[] pix = img.pixels;
   int dv[] = new int[256 * div];
   for (i = 0; i < 256 * div; i++) {
     dv[i] = (i / div);
   }
 
   yw = yi = 0;
 
   // horizontal pass
   for (y = 0; y < h; y++) {
     rsum = gsum = bsum = 0;
     for (i = -radius; i <= radius; i++) {
       p = pix[yi + min(wm, max(i, 0))];
       rsum += (p & 0xff0000) >> 16;
       gsum += (p & 0x00ff00) >> 8;
       bsum +=  p & 0x0000ff;
     }
     for (x = 0; x < w; x++) {
       r[yi] = dv[rsum];
       g[yi] = dv[gsum];
       b[yi] = dv[bsum];
       if (y == 0) {
         vmin[x] = min(x + radius + 1, wm);
         vmax[x] = max(x - radius, 0);
       }
       p1 = pix[yw + vmin[x]];
       p2 = pix[yw + vmax[x]];
       rsum += ((p1 & 0xff0000) - (p2 & 0xff0000)) >> 16;
       gsum += ((p1 & 0x00ff00) - (p2 & 0x00ff00)) >> 8;
       bsum +=  (p1 & 0x0000ff) - (p2 & 0x0000ff);
       yi++;
     }
     yw += w;
   }
 
   // vertical pass
   for (x = 0; x < w; x++) {
     rsum = gsum = bsum = 0;
     yp = -radius * w;
     for (i = -radius; i <= radius; i++) {
       yi = max(0, yp) + x;
       rsum += r[yi];
       gsum += g[yi];
       bsum += b[yi];
       yp += w;
     }
     yi = x;
     for (y = 0; y < h; y++) {
       pix[yi] = 0xff000000 | (dv[rsum] << 16) | (dv[gsum] << 8) | dv[bsum];
       if (x == 0) {
         vmin[y] = min(y + radius + 1, hm) * w;
         vmax[y] = max(y - radius, 0) * w;
       }
       p1 = x + vmin[y];
       p2 = x + vmax[y];
       rsum += r[p1] - r[p2];
       gsum += g[p1] - g[p2];
       bsum += b[p1] - b[p2];
       yi += w;
     }
   }
 }

According to the Processing website, to draw a rectangle from its center you use rectMode(CENTER); rect(x, y, width, height);. I have run a lot of tries, but I am still having a hard time figuring out this code, and I am also working on vectors. I am trying to understand whether, if an object is moving at a constant speed, we can use code to tell us where it will end up. I thought that would be an easy piece of code, but I found that a lot of people struggle with it more than with anything else in their projects, which makes me worried about not finding an answer, because I would like to finish this project with a simple working example.
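rectMode(CENTER) only moves the rectangle's anchor point: a rectangle drawn at (cx, cy) in CENTER mode covers the same pixels as one drawn at (cx - w/2, cy - h/2) in the default CORNER mode. That conversion is plain arithmetic, so it can be checked outside Processing; here is a minimal sketch in plain Java (the class and method names are my own):

```java
public class RectModes {
    // Convert a CENTER-mode rectangle (cx, cy, w, h) to the
    // top-left CORNER-mode coordinates Processing uses by default.
    public static float[] centerToCorner(float cx, float cy, float w, float h) {
        return new float[] { cx - w / 2, cy - h / 2 };
    }

    // And the reverse: corner coordinates to the rectangle's center.
    public static float[] cornerToCenter(float x, float y, float w, float h) {
        return new float[] { x + w / 2, y + h / 2 };
    }

    public static void main(String[] args) {
        float[] corner = centerToCorner(320, 240, 100, 60); // {270, 210}
        System.out.println(corner[0] + ", " + corner[1]);
    }
}
```

So if rectMode(CENTER) is awkward to fit into the sketch, the same effect comes from keeping CORNER mode and shifting each coordinate by half the size.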

Week 4 Narrative
''When we started the project, we started from zero; no other group had worked on this project before. At first the project sounded exciting and very interesting, but it turned out to be very difficult. I tried many different kinds of software, but most of them were very complicated, their code was hard to understand, and they required expensive materials. Finally, after a lot of research, we found out about Processing. I learned about it through a book called "Making Things See". I had never used Processing before, and I found a lot of code that could help us using an Xbox 360 Kinect. The Kinect code uses the depth sensor to see people while the camera provides the image. I asked if we could try one, but we were told we could not, because the requirement for this project is to work with just a regular camera. So we had to find code that works with any camera, and we found a Processing example called "Blob Detection". We downloaded and tested it, but we ran into many problems. One of us only had a problem with the camera resolution; I had more, like misspelled words in the code, and I also had to try an older version of Processing. We ended up finding code you can run on your computer that tells you your camera's resolutions. After running it in Processing, you get a list of settings to try in the Blob Detection code until one matches your connected camera. After trying many different settings we found the problems we had before and fixed them. We played around with it for a while, trying many changes, and tried to work out how to put a center point on the rectangles that it draws.
We found how to draw the point at the top corner of the rectangle, but we could not find an easy way to move that point to the center. I want to keep working on this project, because I would like to get at least a small example running that can follow an object moving at a constant velocity.

Check this code if you want to draw a center point. We couldn't figure out the correct format for the code, but we found a way to draw a point at the top left corner of every rectangle.

You would put the code right after where it says:

 strokeWeight(1);
 stroke(0, 0, 400); // this changes the color of the box (r,g,b); values above 255 are clamped to 255
 rect(b.xMin * width, b.yMin * height,
      b.w * width, b.h * height);

You would play with this code to move the point to the center of the rectangles:

 strokeWeight(3);
 stroke(400); // clamped to white (255)
 point(b.xMin * width, b.yMin * height);
 point(b.w * width, b.h * height); // note: this plots the box's width/height as a position, not its center

We tried many variations of this code, but we never got it to put the point in the center of the rectangles.
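Assuming the blob's xMin/yMin/w/h fields are normalized to 0..1 and scaled by the sketch size, exactly as the rect() call above treats them, the center is just the corner plus half the size. The arithmetic can be checked in plain Java (names here are my own):

```java
public class BlobCenter {
    // Center x of a bounding box given by a normalized left edge
    // (xMin, 0..1) and width (w, 0..1), scaled to the sketch width.
    public static float centerX(float xMin, float w, int sketchWidth) {
        return (xMin + w / 2) * sketchWidth;
    }

    // Same for y, using the top edge and the height.
    public static float centerY(float yMin, float h, int sketchHeight) {
        return (yMin + h / 2) * sketchHeight;
    }

    public static void main(String[] args) {
        // A blob spanning x = 0.25..0.75 on a 640-px-wide sketch: center x = 320
        System.out.println(centerX(0.25f, 0.5f, 640)); // 320.0
    }
}
```

Inside drawBlobsAndEdges this would read point((b.xMin + b.w/2)*width, (b.yMin + b.h/2)*height); — I have not tested that line against the library itself, but it follows directly from how the rect() call scales the same fields.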

Here is what needs to be done next for the project:

1. Find a code to draw the center point.

2. Find a code to draw vectors.

3. Find a code that tells you, using the vectors, where a moving item will end up.
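Step 3 above is the constant-velocity model: if the blob's center moves from one position to another between two frames, the velocity is the difference per frame, and after n more frames the predicted position is the last position plus n times that difference. A minimal sketch of that arithmetic in plain Java (the names are my own, not from any library):

```java
public class Predictor {
    // Given the center's position in two consecutive frames,
    // predict where it will be framesAhead frames later,
    // assuming the object keeps a constant velocity.
    public static float[] predict(float x1, float y1,
                                  float x2, float y2,
                                  int framesAhead) {
        float vx = x2 - x1; // velocity in pixels per frame
        float vy = y2 - y1;
        return new float[] { x2 + vx * framesAhead, y2 + vy * framesAhead };
    }

    public static void main(String[] args) {
        // Moving 5 px right and 2 px down per frame; predict 10 frames ahead.
        float[] p = predict(100, 100, 105, 102, 10);
        System.out.println(p[0] + ", " + p[1]); // 155.0, 122.0
    }
}
```

In the sketch, the two positions would come from the blob center on the previous and current frames, so the prediction updates itself on every new frame.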

Here is a website that can help you draw the center point:

http://www.processing.org/tutorials/drawing/

Check our CDIO page for tutorial videos.

Thank you and good luck ''