Virtual Sandbox: Sim City meets Augmented Reality

20 11 2009

Hey guys! Sorry for the lack of updates. Things have been really busy with school projects and all, but school is finally done for the semester! Thankfully I have no exams.

Anywayz, here’s one of the projects I’ve been spending a lot of time working on with my group! It is an educational Augmented Reality application targeted at 4-6 year old kids. Basically, it is an interactive system where children use cards to build their own virtual town and populate it with people, while at the same time learning English vocabulary.

Here’s a quick video demo of our project. Please bear with the educational style used for the video as we had to submit this to our professor. ^^;;

Unfortunately, I can’t share the videos from our user testing session with the children at the kindergarten because of the provisions in the Child Rights laws in Singapore. Hence, those videos can’t be posted anywhere in the public domain.

How was the user testing?
We conducted the testing at a kindergarten for 2 hours, with 3 groups of 3 children each, all aged 6. The children had a lot of fun and were very engaged with the system. They particularly enjoyed building their own town and populating it with people whose occupations match the places. They had so much fun that it took us quite a bit of effort to get them to stop playing.

Here is a picture of the city the kids built!

Due to our limited video recording and editing skills, the video demo above can’t portray the true nature, functions and elements of our project as the children experienced them. We’ll be recording a proper demo in January.

In the meantime, if you wish to see the videos from the user testing session, do drop me an e-mail and I’ll share the private link with you ^_^





In conversation with Phil McKinney on NUI

26 08 2009


Phil McKinney and Anne in the Halo Room

Two weeks ago I was invited to participate in a discussion with HP‘s CTO, Phil McKinney, about Natural User Interfaces (NUI). As Phil is a very busy person, the discussion was held via HP’s cutting-edge video conferencing technology, Halo. Why cutting edge? Because it felt as though Phil was sitting at the same meeting table as me in real-time. I could even clearly see what Phil was sketching on paper.
 

Richness vs Reach/Mobility

Above, we see a sketch that Phil drew to start off his discussion. This is a graph showing the current state of technology as a balance between Richness (user experience, high definition, sound quality, etc.) and Reach/Mobility (reaching out and making technology available to more markets). For example, devices like televisions are high in Richness but low in Reach/Mobility.

The line slanting downwards shows the state of technology for the various products at the moment. Unfortunately, there is a void in the middle: products that are high in mobility with average richness. Not many products are able to balance this well while maintaining usability. There are also the “Laws of Physics” preventing technologies like a 75″ TV from fitting into your pocket.

The challenge faced by most technology players right now is:
1) Filling the void
2) Getting off and above the line

HP’s strategy for achieving (2) is to exploit Touch technology to increase the Reach of PCs into untapped markets. Currently, 80% of the market has never owned a PC due to literacy barriers. Another reason is the intimidation of using a keyboard and mouse. Some of us might find it difficult to understand why a mouse/keyboard may be intimidating, but even in developed nations there are people around us (e.g. the older generation) who are afraid of performing mouse clicks for fear of what unintended actions may be triggered on the PC.
 

Why Touch?
Apart from there being non-gimmicky uses for Touch technology, it is a form of interaction we have been using in our everyday lives – the ATM, for example. The learning curve for Touch technology is also less steep, as users have one less step to take: learning how to use a new input device.
 

Challenges faced with Touch
HP’s motivation for using Touch is to drive the ease of use of new technologies by designing devices that adapt to users. The very first question that popped into my head was:

“There are so many different types of users around the world with different levels of literacy. Wouldn’t HP have to design many different devices for different communities?”
 

Findings from the lab
Phil shared some research findings from HP’s labs. They tested touch with 2,700 users in homes of mixed literacy and discovered that people touch devices differently based on where they grew up.

Here are some of the examples Phil mentioned:
– Some used a whole hand, their thumb or items like an eraser and pencil
– Pulling both sides of an image to enlarge an image
– Grabbing one corner to enlarge the image
– When grabbing one corner and realizing the image follows the hand, the user shakes his hand vigorously as though trying to shake it off

From all this research they were able to map out which sets of gestures were natural for the different types of users. Based on the first few gestures a user performs, the computer can tell which gesture set to load for the rest of the user’s interactions. A similar analogy would be how you select a language on your mobile phone before it loads up everything in the appropriate language.
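To make that concrete, here’s a toy sketch in ActionScript (simply because that’s what I’ve been coding in lately) of how a system might pick a gesture set from the first few gestures it sees. This is purely my own illustration with made-up gesture and set names, and definitely not HP’s actual research code:

    // Toy illustration only, not HP's code; the gesture and set names are made up.
    // Each observed gesture "votes" for the gesture dialect it belongs to. Once a
    // dialect gathers enough votes, that gesture set is loaded for the session.
    var dialects:Object = {
        cornerDrag:  "grab-one-corner",
        edgeStretch: "pull-both-sides",
        wholeHand:   "whole-hand"
    };
    var votes:Object = {};
    var activeSet:String = null;

    function observeGesture(gestureName:String):void {
        if (activeSet != null) return;           // decision already made
        var set:String = dialects[gestureName];
        if (set == null) return;                 // gesture not recognized
        if (votes[set] == undefined) votes[set] = 0;
        votes[set]++;
        if (votes[set] >= 3) {                   // arbitrary confidence threshold
            activeSet = set;
            trace("Loading gesture set: " + activeSet);
        }
    }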

Do note that this is just for research and not all gesture sets will be going into the final product.
 

Moving into emerging markets
Lastly, I was wondering how HP plans to break into emerging markets where there are a large number of non-PC owners. Due to cost and infrastructure issues, current PCs are too expensive for people in these countries to own (e.g. in Africa, one month of broadband can cost the equivalent of 14 months of salary).

Instead of focusing on bringing existing PCs into emerging markets, HP is focusing on building basic PCs with the right technologies to meet their needs. They are currently experimenting with concept PCs in some of these markets.
 

For a later post..
Phil did send me quite a bit of information on their work in emerging markets and also on segments of the community with special needs. However, as this post is getting lengthy, I will keep it for another day. 🙂
 

Concluding words
From the session, I could tell that most of the players in the PC market are jumping on the bandwagon of natural user interfaces. There are lots of other cool technologies HP is experimenting with, like gaze, motion and tactile feedback. However, most of these are still just “gimmicks” and not at a stage where they can be integrated into HP’s products. HP went with Touch as it was found to be the most practical and usable of them all.

Personally, I can’t wait to see what technologies will come into our computers 5 years down the road! (Maybe some cool holography/Augmented Reality stuff). On the other hand, I’m glad that we are not flooded with useless tech filled with only “gimmicks” for novelty. 😀

P.S: Oh yes! Before I forget, many thanks to Amelia and Calvin from Waggener Edstrom for giving me this opportunity to be in a discussion with Phil. 😀





Ready for work in 5 mins!

10 06 2009

This came in my mail early this morning and cheered me up for work!

Get up, change clothes, eat breakfast and be ready for work in 5 minutes 😀 Perfect solution for sleepyheadzz!!





Building a Touch Table 101

5 05 2009

As promised, here is the article on how my team and I built our interactive kitchen table, step by step, with all the details from hardware to software. Note that this is just ONE of several methods to build a touch table in your garage, and other methods may be better suited to the purpose of your touch table. To learn more about the various methods, I highly recommend joining the NUIGroup community, which is always ready to contribute to discussions and help with any questions you have 😀

Alright, let’s begin!

Methodology used: Diffused Illumination

Hardware Setup
Materials required:
1) Piece of glass / acrylic
– this is the surface you’ll project your screen onto. My team chose glass over acrylic, as acrylic is a bit soft and bendable, hence not very stable for a kitchen counter top where you’ll be messing around with your ingredients.

2) Stand to place your piece of glass/acrylic
– it can be wooden, metal or self-made, as long as it gives you enough height to project your screen onto the surface from below.

3) Projector
4) Web camera
5) Piece of Mirror

Once you have the hardware needed, set it up such that the arrangement is similar to the image below. (Please excuse the illustration.. drew it in a hurry with no thought for the colours chosen haha!)

Adjust the mirror angle and projector settings until the projected screen on the surface is the size you want. Once the alignment is right, position your web camera so that it can see the whole projected surface area. Note that you need not position the web camera exactly where I placed mine.

Alright, that’s it for the hardware! Simple, huh? Now let’s move on to the software needed.
 

Software setup

1) TBeta by NUIGroup

This is the software that tracks the shadows detected by the web cam and sends the data retrieved from them to another channel or application. I highly recommend reading the tutorial on how to adjust the different settings (e.g. threshold, contrast, calibration, dynamic subtraction, etc.). For our table, it took us a while to get the settings right so that it would detect only shadows pressed against the surface.

2) FLOSC
What FLOSC does is take the values sent to it by Tbeta and forward them to your interface application. For ours, we used it to send positional coordinate values to our application written in Flash.

3) TUIO
TUIO contains the code you need to add to your Flash ActionScript file so that your Flash application listens for data on the same port that FLOSC is sending to. In other words, if FLOSC is sending data to port 3333, you set your Flash application to listen on port 3333 using TUIO. You can then use TUIO to extract the data your application needs, like the X and Y positions of the shadows detected on the interface. Note that the TUIO API also has code you can add to your Flash application to detect TouchEvents (e.g. TouchEvent.CLICK).
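To give you a rough idea of what this looks like, here is a sketch of the hookup in ActionScript. I’m writing this from memory, so treat the exact class names and the init call as approximate and check them against the TUIO download you’re using (only TouchEvent.CLICK and port 3333 are exactly as described above):

    // Sketch of a Flash app listening for TUIO data relayed by FLOSC on port 3333.
    // Note: the TUIO and TouchEvent classes come with the TUIO ActionScript
    // download, and the exact init signature may differ between versions.
    package {
        import flash.display.Sprite;
        // ...plus the TUIO classes bundled with your TUIO download

        public class TouchTableApp extends Sprite {
            public function TouchTableApp() {
                // Connect to the same host and port that FLOSC is sending data to
                TUIO.init(this, "127.0.0.1", 3333);

                // Shadows tracked by Tbeta now arrive as TouchEvents,
                // carrying the X and Y positions of each detected touch
                addEventListener(TouchEvent.CLICK, onTouch);
            }

            private function onTouch(e:TouchEvent):void {
                trace("Touch at: " + e.stageX + ", " + e.stageY);
            }
        }
    }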

4) Adobe Flash
Thanks to the TUIO API, all you have to do is code your whole Flash application as per normal, add the necessary 2 or 3 lines of code to hook in the TUIO API, and replace all your MouseEvents with TouchEvents :D! It is really that simple. The way I coded the interface was to build it fully using MouseEvents in Flash and then change them to TouchEvents when I wanted to test it on the touch table.
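For example, a button tested on the PC with the first line below only needs the second line to run on the table. The button and handler names here are just placeholders of my own, and TouchEvent again comes from the TUIO library:

    // During desktop testing: plain mouse input
    cookButton.addEventListener(MouseEvent.CLICK, onCookPressed);

    // On the touch table: same handler, same logic, just a different event type
    cookButton.addEventListener(TouchEvent.CLICK, onCookPressed);

    function onCookPressed(e:Event):void {
        trace("Cook button pressed!");
    }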
 

Limitations of not having InfraRed Lasers
The limitation of our setup is that we didn’t have any InfraRed lasers / camera to detect the touch. We were relying on shadows, and sometimes when a shadow can’t be seen clearly, especially in areas where the background colour is not dark enough, it is not detected as a touch. Hence, I designed the interface so that all the buttons that will be pressed have dark-coloured backgrounds. With InfraRed lasers, on the other hand, detection would be very sensitive and accurate, leading to a much more responsive table. The reason we didn’t include the InfraRed gadgets was that we couldn’t find them here in Singapore and didn’t want to spend a bomb ordering them from overseas. If you do get the opportunity, however, I highly recommend using InfraRed lasers.

Alright! So there you have it, my quick guide to building a touch table. If you find any parts of this tutorial unclear or have further questions, feel free to e-mail me or drop a comment here! Will reply and update this post accordingly as soon as I can 😀

And thanks for the very encouraging comments on the project guys! Really appreciate it! Enjoy hacking in your backyard! ^_~

Additional resources you might want to check out:
Discussions about building a touch table on NUIGroup
Build your own InfraRed Camera (thanks James!)
Quick tutorial on building a touch table using Frustrated Total Internal Reflection (FTIR)
Reactivision for tracking fiducial markers





Recipease: Interactive Kitchen Table

20 04 2009

Hey guys! Know I’ve not been blogging for a looong time… mainly because I’ve been busy working on projects at school and exams are around the corner. However, I thought I should put up a quick post about one of my projects, Recipease, which I worked on as part of the CS3248: Design of Interactive Media module. What we did was create a multi-touch table that demonstrates a concept for reducing the major problems people face during the food preparation process.

Before I say more, here is a video demonstration of our table.

Yup so that’s about it in short! Promise to update this post with more information later. In the meantime, if you have any questions about the project / engineering behind it, etc etc do feel free to either e-mail me or leave comments here. I promise to follow-up on it as soon as my exams are over!

And oh yea, do let me know what you think of it as well in terms of concept, interface and interaction design, functions, or even the actor! Thanks for watching! ^__^

Update:
Thanks for all the encouraging and positive comments guys! Really appreciate them. As promised, here is the follow-up with more information about the project.

Our assignment requirement was to take any existing electronic appliance, or propose a new one, that would solve a problem or make a household chore easier. The problem we set out to solve was the one users face in the kitchen.

Before we built this, we performed some surveys and interviews to identify the common problems that caused people anxiety during both food preparation and cooking. Our user group consisted of both young adults and housewives, including both novice and expert cooks. Based on the list of problems gathered, we came up with 3 primary functions:

Use Case: Our primary use case is where a person has some ingredients in his fridge but does not want to crack his head thinking about what he can cook with them. He wants a system that can tell him what he can cook based on what he has or feels like eating (sometimes you may have fish and chicken but only feel like eating chicken).

1) Ingredient Recognition
Problem it solves:
– the trouble of not recognizing what the user has, especially ingredients the user hardly ever purchases
– being unsure of what the user can cook with what he/she has

2) Recipe Recommendation
Problem it solves:
– saves the user the trouble of thinking about what he can cook
– widens the choice of recipes the user can cook, beyond only what the user already knows
– informs the user of which ingredients are missing
– if the user is cooking more than one dish, the system checks whether there is enough for both dishes combined

3) Recipe Scaling
Problem it solves:
– most available recipes assume the user is cooking for one diner; however, users sometimes throw parties/gatherings that require preparing a dish for more people
– tells the user whether he has enough for a gathering and, if not, how much of which ingredient is missing (see the little sketch after this list)
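Since the scaling part is really just simple arithmetic, here is a little ActionScript sketch of the idea. The function and parameter names are made up for illustration, and this is not our actual project code:

    // Illustrative sketch of recipe scaling, not our actual project code.
    // quantitiesPerServing: ingredient name -> amount needed per diner
    // pantry: ingredient name -> amount the user currently has
    function findShortfalls(quantitiesPerServing:Object, diners:int, pantry:Object):Array {
        var missing:Array = [];
        for (var ingredient:String in quantitiesPerServing) {
            var needed:Number = quantitiesPerServing[ingredient] * diners;
            var have:Number = (pantry[ingredient] == undefined) ? 0 : pantry[ingredient];
            if (have < needed) {
                missing.push(ingredient + ": short by " + (needed - have));
            }
        }
        return missing; // an empty array means there is enough for the gathering
    }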

Our user testing of the table after it was built showed that this concept does help reduce the anxieties users feel during food preparation and pre-cooking. Glad that those of you who have seen the video felt the same as well.

About the module
Got a couple of e-mails asking whether this module teaches students how to build a touch table. The answer is no, but in fact the professor (Prof Zhao ShengDong) made it better than that. He made the assignment very open-ended, with the only requirement being to come up with a concept that will make a household chore easier for a user. This can be a new electronic appliance or an improvement on an existing invention, and it is totally up to students how they want to implement/show their idea. Hence you have students building touch tables, interactive screens that utilize the Wiimote and even remote controls for home appliances using a cell phone. In other words, this module does not teach or spoon-feed you on how to do things but rather creates an environment that encourages students to experiment. How much you learn depends on how much time and energy you invest in your idea. Having said that, Prof Zhao constantly keeps track of the progress of projects and is a great resource on how to go about your project and whom to approach for further advice.

The Team!

This is my crazy team, who just couldn’t stop making me laugh throughout the project. Somehow we never failed to do something really silly that we’d be stuck on for hours before suddenly encountering a “Eureka!” moment. Lots of silly things happened during video filming as well, like someone’s handphone ringing and ingredients rolling off the table.

Back row (from left) : Teong Leong, Joel a.k.a Kar Meng, Jeremy Wong
Front row (from left) : Me! , Yiyang

Concluding words
This was a really fun project which made me realize how fun research can potentially be. I used to shy away from it, thinking it was for n3rds, but came to realize it can turn out to be pretty cool, especially if you’re working on a fun project within your scope of interest. I would really love to thank Prof Zhao for making CS3248 a really fun module and encouraging us to go on with our ideas!

How to build a touch table?
Got a lot of questions about how my team built the touch table and the various programs we used. I’m currently writing a step-by-step tutorial on it and will upload it within the next 2-3 days! In the meantime, look out for it! ^_~

Update: The tutorial is done. Check it out here 😀





TinEye: Reverse Image Search

8 02 2009

Personal Note
Hey guys!! I’m still around. Sorry for not blogging for a long time. Again, time is the bane of my blogging. To give you a summary of what I’ve been up to:

December – Jan : was in Dubai for a little over a month (Yes, I will blog about my travels there soon!).
Jan – Feb: Busy with Chinese New Year and startup!


 


When my friend told me about a new image search engine, I was skeptical. Following the discussion at the Future of Search forum last year, it was well understood that there is still the issue of defining the grammar that will help machines understand images. However, I went on to give TinEye a try and ended up fiddling with the site for over an hour!

What exactly is TinEye?
TinEye is a reverse image search engine where you use an image to search for other similar images around the web. Note that “similar” means not only fully matching images but also partially matching ones.

Fully and Partially similar images???
Yes, their algorithm is able to find not only images that are exactly the same but also images that have matching parts! You can choose to either upload your image or provide the address of an online image. Check out this demo video:

Some exciting examples:
Input image:
Let’s try the Campbell’s soup can!
Campbell Soup Can Input

Some of the more interesting results:
Campbell result1 Campbell result 2 Campbell result 3

So what have we got from left to right?
1) Similar but in a different perspective
2) A different flavor!
3) Matches that have had other elements designed over them.
 

Input image:
What about the new Nike iPod shoes?

Results:
Fun stuff!

So, what is TinEye’s weakness?
Like all systems, TinEye has weaknesses as well. Although it can find objects, it is unable to match them when they are shown at different angles. For example, if you search for any image of a human face, all you’ll get is the exact same angle and face, with some parts of the picture matching.

Here’s an example:
Input image:

Results:

So if you saw a picture of someone and wanted to find more pictures of that person from different angles, you wouldn’t be able to do so unless you somehow identified the person and did a text search on Google Images (or TinEye integrated iPhoto’s face recognition algorithm into their engine).

Also, TinEye’s database is still very small, so images that are not popular on the web usually return 0 results. I am unsure how TinEye is going to catch up and index all the billions of images out there, especially since tens to hundreds of thousands more are uploaded each day.

What will people use it for?
From my hour-plus of fiddling with TinEye, I can see what I personally (and a few others) will be using it for:
1) Finding higher-resolution versions of an image
– Very often designers find a low-resolution version of the perfect picture they want. TinEye can find similar images elsewhere on the Internet, possibly at a higher resolution

2) Got a damaged/distorted image? Get TinEye to find the full, complete image for you

3) Find the original collection/series an image came from (e.g. every single Campbell’s soup can design that ever existed)

4) I’m not sure whether people will actually use it for this, but I got some surprising results with some images that hinted at this use case: measuring the amount of buzz your company/product is creating. For example, are a lot of people posting and talking about your stuff all over the Internet? Are they editing your images and adding cool effects to them?

I put in the image of the Windows Mobile iPhone skin from my PDA and got this
Putting in Facebook’s logo, I got this

Overall, I really love TinEye and am really excited about its development. In the meantime, I’ll be using it frequently for the first 3 reasons mentioned above! 😀

P.S: I forgot to mention: get the TinEye Firefox plugin!! It really adds to the fun of image search, as you can now just right-click on an image and select “Search image on TinEye”. A new tab will then open with the search results 😀





Beyonce on SNL, Justin in Leotard!!

16 11 2008

I know this is unrelated to this blog but I JUST HAD TO POST THIS UP!! This video is just SO HOT! 😀

Hahahahaha Justin!!!