"Capturing the reality as-built for various purposes (renovation, rapid energy analysis, add-on design, historic preservation, game development, visual effects, fun, etc.) is possible using your standard point and shoot digital camera thanks to advanced computer vision technologies made available through Project Photofly."

I am thinking of using it for a project, so I started off with a few experiments on two things from my table. You might recognize these as the characters from Carpetface... and well, they're the things that live on my table...
The Excitable Dog
In case you are wondering, the "Excitable Dog" is a $2 water-gun squeeze toy from Daiso...
123D Catch requires you to take about 40 photos around the object, after which you tap the thumbnail in the iPhone app and wait a very long while as it uploads to the cloud and processes. Once it's done, you can tap the thumbnail again and approve the result for sharing with the community. Unfortunately, if the app is unable to process the images, it shows a big "X" over the thumbnail and you have to try again with a new set of photos. For me, items which are shiny or translucent almost always fail...
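A plausible reason shiny and translucent items fail is that photogrammetry tools match distinctive surface features between photos, and a glossy highlight or a featureless surface gives them nothing stable to match. Here is a toy NumPy sketch of that idea (my own illustration, not how 123D Catch actually works): a crude corner-like response counts pixels with strong gradients in both directions, and a uniform white surface yields none.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy 64x64 grayscale "photos": a matte textured surface, and a
# uniform near-white one (like a white toy on a white table, or a
# glossy highlight that washes out all detail).
textured = rng.uniform(0.2, 0.8, size=(64, 64))
uniform = np.full((64, 64), 0.95)

def feature_count(img, thresh=1e-3):
    """Count pixels with strong gradients in both directions --
    a crude stand-in for the feature detectors photogrammetry
    relies on to match points across photos."""
    gy, gx = np.gradient(img)
    response = (gx ** 2) * (gy ** 2)  # high only where both gradients are strong
    return int((response > thresh).sum())

print(feature_count(textured))  # many candidate features to match
print(feature_count(uniform))   # zero: nothing to match across views
```

The same reasoning suggests why adding visual contrast (a patterned backdrop, coloured dots) should help the reconstruction.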
This was the first successful item to be "caught".
The Horrifying Dog (View on 123d)
From the result I realised that the bottom and the back of the excitable dog were not evenly lit and thus did not show up at all. So I decided to use an IKEA lamp with a flat top as a makeshift lightbox.
And so this is what a broken piece of Aristotle's head looks like.
Aristotle's Head (View on 123d)
Clearly a few improvements still need to be made to how I take the pictures. I need to change the surface on which the items are placed: if both the item and the surface are white, there is no visual contrast. I think I will find a grid of some sort, and also stick some randomly coloured dots on the objects, which might help as markers. Another thing I want to build next is a camera and lighting rig.
Needless to say, I'm really excited about 123D Catch. Even though it is still in its infancy (the cloud-based service is only about a year old now), I think it will be important because it works on consumer mobile devices and computers, and anyone can use it. As it becomes more and more accurate, I can see people capturing everyday things, and museums making use of this technology for educational and archival purposes.
Next steps: I intend to figure out how to build a camera and lighting rig, clean up the models and experiment with Meshmixer, but in the meantime, here are some more of their official tips on how to improve the image quality: