User:Mir07/mywork

Link to team page.

Problem/Project Goal
Using the Xbox 360 Kinect, scan an object to create a 3D model, then print that model using a 3D printer.

My First Task
Research the Kinect: how it works and what its different features are. Figure out how it is able to create a 3D model.

Summary of actual work over first weekend
I initially thought that the Windows Kinect SDK would handle all the software requirements; however, I ended up downloading several other programs. They all work in conjunction with the Kinect to create a 3D image. Drivers for the Kinect were also needed, along with the libraries that make this possible.

Week1 Narrative
Downloaded the Kinect SDK, OpenGL, the Processing environment, and Kinect libraries that allow the Kinect to scan an object in 3D and store a mesh, which can then be assembled into a polygonal model in any 3D package.

My Second Task
Bring together everything I've worked on so far and create a 3D scan of an object using the Xbox Kinect.

Summary of actual work over second weekend
In week one I decided to work with the Microsoft SDK for the Kinect; however, that SDK did not allow saving the depth scan in any file format. I therefore switched to another SDK based on the libfreenect software from http://openkinect.org. I scanned an object using the dLibs demo software from https://github.com/diwi/dLibs together with the Xbox 360 Kinect, and saved the result in OBJ format. I was able to complete my task: I successfully scanned a 3D model of myself, saved it as an OBJ file, and rendered it in Maya.

Week2 Narrative
I downloaded the drivers required to make the Kinect work with the computer from https://github.com/diwi/dLibs. The zip file contains examples, library, reference, and src folders; for this project I only needed the examples and library folders. The examples folder holds the demo software that allows the Kinect to scan and save a 3D model. The library folder holds the Kinect drivers; there are three of them, and each is installed individually through the Device Manager. To run the demo software, the Processing environment (http://processing.org/) also needs to be installed; it is an open-source programming language and editor that is very good at creating interactive programs. Finally, I was able to scan and save a 3D image of myself by running the example program kinect_basic_3D_export_scan in Processing. I then used MeshLab to tweak, rotate, and export the model in the Alias Wavefront OBJ format, which lets me open the mesh in any 3D package.
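The export step above can be sketched as follows: the Kinect reports a depth value per pixel, which a pinhole camera model turns into 3D points, and OBJ is simple enough to write by hand. This is a minimal illustration, not the dLibs code; the intrinsics below are approximate published values for the Kinect depth camera, not a calibration of my own device.

```python
# Minimal sketch: turn a Kinect depth map into 3D points and write them
# out as OBJ vertices. FX/FY/CX/CY are approximate, commonly published
# Kinect depth-camera intrinsics (assumed values, not measured here).

FX, FY = 594.21, 591.04   # focal lengths in pixels (approximate)
CX, CY = 339.5, 242.7     # principal point in pixels (approximate)

def depth_to_points(depth, width, height):
    """depth: flat row-major list of depth values in metres; 0 = no reading."""
    points = []
    for v in range(height):
        for u in range(width):
            z = depth[v * width + u]
            if z <= 0:
                continue  # skip pixels where the Kinect saw nothing
            x = (u - CX) * z / FX
            y = (v - CY) * z / FY
            points.append((x, y, z))
    return points

def write_obj(points, path):
    """Write points as OBJ vertex lines; any 3D package can open the file."""
    with open(path, "w") as f:
        for x, y, z in points:
            f.write("v %.6f %.6f %.6f\n" % (x, y, z))

# Example: a tiny 2x2 depth map with one missing reading.
pts = depth_to_points([1.0, 0.0, 1.2, 1.5], width=2, height=2)
write_obj(pts, "scan.obj")
```

A real scan would also need faces (`f` lines) to form a polygonal mesh; a raw export like this is only a point cloud until MeshLab or a 3D package reconstructs a surface from it.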

My Third task
Up until now I was only able to scan an object from a single point of view, in a single snapshot. For my third task I will work with other demo software that can scan an object from multiple viewpoints without having to scan multiple times. This will create a 3D model that I can rotate in all directions in a 3D modeling package such as Maya or 123D.

Summary of actual work over third weekend
In week two I decided to look for Kinect demo software that would let me scan an object in 360 degrees and save the model in OBJ format. I found this software, along with the drivers required to make the Kinect work with it: http://labs.manctl.com/rgbdemo/index.php/Main/Download

Week3 Narrative
To make the Kinect work with the software I installed three drivers; however, there is also an all-in-one installer provided by Zigfu that takes care of the entire driver-installation process. I downloaded and installed the Zigfu installer from http://zigfu.com/browserplugin.html or http://labs.manctl.com/rgbdemo/index.php/Main/Download. After installing the drivers I downloaded RGBDemo 0.7.0 from http://labs.manctl.com/rgbdemo/index.php/Documentation/Demos. This demo package contains executables that take depth data from the Kinect and display it as point cloud data; other demos can render in other 3D formats, e.g. triangles. The one I was most interested in was the demo of object model acquisition with markers. It allows an object to be scanned and saved in full 3D, so the model can be rotated a full 360 degrees in any 3D package and even printed on a 3D printer. However, I encountered a problem: the demo is only available for 32-bit operating systems and does not work well on 64-bit systems. I was able to run some of the components, such as the point cloud viewer, and save their output, but I could not run the object model acquisition demo, so I was not able to create a 360-degree view of an object. A few other things are needed for the object model acquisition demo to work properly, in addition to a 32-bit operating system. First, the Kinect needs to be calibrated with a marker board using one of the demos provided in RGBDemo; details can be found at http://nicolas.burrus.name/index.php/Research/KinectCalibration and at http://labs.manctl.com/rgbdemo/index.php/Documentation/Calibration. After running the calibration demo with the checkered marker board, the Kinect should be able to scan a 360-degree model.
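The reason the marker board matters is that it tells the software the camera's pose for each view; point clouds taken from different sides of an object can then be mapped into one common coordinate frame with a rigid transform and merged. A toy sketch of that merging step, with made-up poses and a single point (this is the idea, not RGBDemo's actual code):

```python
import math

def rotate_y(points, degrees):
    """Rotate a list of (x, y, z) points around the vertical axis."""
    a = math.radians(degrees)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]

def merge_views(views):
    """Each view is (points, yaw_degrees): points seen by a camera rotated
    yaw_degrees around the object. Undo each rotation so all points land
    in one common frame, then concatenate them into one cloud."""
    merged = []
    for points, yaw in views:
        merged.extend(rotate_y(points, -yaw))
    return merged

# Two hypothetical views of the same surface point, 90 degrees apart:
front = [(0.0, 0.0, 1.0)]        # as seen from the front
side = rotate_y(front, 90.0)     # the same point as seen from the side
cloud = merge_views([(front, 0.0), (side, 90.0)])
# After merging, both entries of `cloud` coincide at the same location.
```

In the real demo the poses come from detecting the checkered markers rather than being known in advance, which is why the calibration step is required first.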

My Fourth task
For my fourth task I want to run the demo software on a 32-bit operating system and try to scan an object in 3D, hopefully getting the same result as in this video: http://www.youtube.com/watch?v=iK_r2Ru3wP4

Summary of actual work over fourth weekend
At the end of week three I decided to use a 32-bit computer to run the RGBDemo software; however, I was unable to find one, so I was not able to scan an object in a full 360-degree view. I took a different approach instead, using point cloud data from the Kinect to create a model that looks 3D. The process of creating a point cloud requires installing some software and libraries; I have covered the steps in the following tutorial.


 * Tutorial on scanning an object using the Kinect

Week4 Narrative
The first step was to create a point cloud of an object; in this case I decided to make a point cloud of my face. This was done using the Kinect and Processing. I then brought the mesh into MeshLab for cleanup. Cleaning up the mesh requires several steps, which I have covered in this tutorial. I was able to create a clean version of the point cloud mesh, ready to be brought into the MakerBot software to prepare it for printing.
 * Tutorial on cleaning up point cloud data
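MeshLab's cleanup filters are more sophisticated, but the core step of removing stray background points can be sketched very simply: drop any point that sits much farther from the cloud's centroid than the average point does. This is a crude stand-in for MeshLab's outlier-removal filters, with a hypothetical threshold, not the tutorial's actual procedure.

```python
def remove_outliers(points, max_factor=2.0):
    """Drop points much farther from the centroid than the average point.
    max_factor is an assumed threshold; tune it per scan."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    dists = [((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5
             for x, y, z in points]
    mean = sum(dists) / n
    return [p for p, d in zip(points, dists) if d <= max_factor * mean]

# A tight cluster of face points plus one stray background point:
cloud = [(0.0, 0.0, 1.0), (0.01, 0.0, 1.0), (0.0, 0.01, 1.0),
         (0.01, 0.01, 1.01), (0.0, 0.0, 4.0)]   # last point is noise
clean = remove_outliers(cloud)
# The stray point at z = 4.0 is removed; the four face points remain.
```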

Complete Project Page
I was able to get the Kinect working on the computer using drivers, and to make use of some of its features, such as depth data and point cloud data. I scanned and saved point cloud data, cleaned it up in MeshLab, and created a file ready for 3D printing. However, I was not able to scan an object in a full 360-degree view, because I could not find a computer with a 32-bit operating system; the process of creating a full 3D scan is covered in my 360-degree scan tutorial: Scanning an object using the Kinect: Full 360-degree view