Code-lab

My half-baked coding projects.
Work in progress and works that should have been progressed...

2018~ | LIKED it!

Facebook plugin that likes when you really, truly mean it

No more disingenuous likes! It only clicks Like when I genuinely smile. But then you start to wonder whether something sinister like this is already running on your laptop, collecting data... (scary)
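The trigger logic isn't shown above, so here is only a guess at its shape: a face tracker feeds a per-frame smile confidence, and the like fires only when the smile is sustained rather than a flicker. The threshold, frame count, and `should_like` name are all invented for illustration.

```python
# Hedged sketch: fire a "like" only when a smile is held, not flickered.
# In the real plugin the scores would come from a face/smile tracker;
# here they are just a list of floats.

SMILE_THRESHOLD = 0.8   # assumed confidence cutoff for a "real" smile
HOLD_FRAMES = 15        # ~0.5 s at 30 fps: the smile must be sustained

def should_like(smile_scores):
    """Return True once the smile score stays above the threshold
    for HOLD_FRAMES consecutive frames."""
    streak = 0
    for score in smile_scores:
        streak = streak + 1 if score >= SMILE_THRESHOLD else 0
        if streak >= HOLD_FRAMES:
            return True
    return False
```

The debounce is the point: a single high-confidence frame (a twitch, a false positive) should never count as "really, truly" meaning it.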

Solo project: From concept to development (Processing + JavaScript)

2018~ | Surveillance Go  

Training a custom neural network model to detect surveillance cameras

What if we could train our own neural network to detect the surveillance cameras that are watching us? With this question in mind, I started training a neural network model (TensorFlow.js) on more than 1,045 images of different surveillance cameras. The accuracy is still low; it needs more data. Do send me more pictures of the surveillance cameras around you!
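One unglamorous step any pipeline like this needs is splitting the collected camera images into training and validation sets reproducibly. This is just a sketch of that step, not the actual pipeline; the 80/20 ratio, seed, and filenames are assumptions.

```python
import random

def split_dataset(paths, val_ratio=0.2, seed=42):
    """Shuffle image paths reproducibly and split into (train, val) lists."""
    paths = sorted(paths)        # sort first so the split is deterministic
    rng = random.Random(seed)    # fixed seed: same split on every run
    rng.shuffle(paths)
    n_val = int(len(paths) * val_ratio)
    return paths[n_val:], paths[:n_val]

# e.g. with 1045 collected images this yields 836 train / 209 val
```

A deterministic split matters here because the dataset keeps growing as people send in new camera pictures; old validation images should stay in validation.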

Try the prototype in the browser (GitHub)

Solo project: From concept to development (HTML5 + Python + TensorFlow.js)

2018 | Using YOLO to make objects talk

AR+AI app that makes objects talk to each other

What do bananas feel every day? How about beer bottles? Now we know, thanks to this unique AR+AI app, which I made in a day at the CIID machine learning course. It uses YOLO, a neural-network-based object recognition framework, to detect what it is seeing. I created an XML list of lines for the objects it recognizes to build a playful dialogue between things.
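The XML list itself isn't published here, so this is only a guess at its shape: a few candidate lines per YOLO label, plus the lookup that would pick one for a detected object. The element names, attributes, and dialogue are all invented.

```python
import random
import xml.etree.ElementTree as ET

# Invented format: one <object> per YOLO label, with candidate <line>s.
DIALOGUE_XML = """
<dialogue>
  <object label="banana">
    <line>Another day on the counter, slowly turning brown...</line>
    <line>Please don't put me in the fridge.</line>
  </object>
  <object label="bottle">
    <line>I used to be full of ideas. And beer.</line>
  </object>
</dialogue>
"""

def line_for(label, rng=random):
    """Return a random dialogue line for a detected YOLO label, or None."""
    root = ET.fromstring(DIALOGUE_XML)
    for obj in root.iter("object"):
        if obj.get("label") == label:
            return rng.choice([ln.text for ln in obj.findall("line")])
    return None
```

Keeping the dialogue in a separate XML file means new objects and new jokes can be added without touching the openFrameworks code at all.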

In collaboration with: Salem Al-Mansoori
My part: I wrote the code. (openFrameworks + ofxDarknet + XML)

2016 | Voice to Text Bazooka

Bazooka that converts voice into text for concerts

At the Academy for Programming and Art (BaPa), I developed an installation where your voice gets shot onto the screen as 3D text objects. It was used in an actual concert by the pop idol group Rainbow Konkista Dolls in Tokyo.

In collaboration with: Shogo Tabuchi, Tsutomu Shiihara, Risa Horikoshi, Naho Yamaguchi
My part: I handled all of the code and engineering. (Unity C#)

2015 | Interactive AR Washing Mirror

Washing Mirror that teaches kids the proper way to wash hands

One of the most effective ways to prevent the spread of infections is washing your hands properly. But most children only rub their palms together, which does not clean the germs off the back of the hand or from between the fingers. To meet this challenge, we developed an interactive AR washing mirror that makes washing hands fun while teaching children the proper technique.

In collaboration with: Yuta Takeuchi, Eri Nishihara, Katsufumi Matsui, Yuta Kato, Wataru Ito
My part: I worked on the hand detection algorithm (openFrameworks C++) and the video

2013 | Chronobelt

Wearable belt that enhances your time perception.

If we had the ability to tell time accurately to the second, what would it be like? To find out, I made a wearable belt with 12 vibration actuators embedded inside. The motors map the direction of the clock's second hand around your waist: without looking at a clock, you can always tell the exact time from the location of the vibration on your body. By wearing the belt daily I could experience subjective time objectively, becoming aware of the length of every activity. I started to quantify my every action, such as how long I take to read a page or how fast I was speaking. Wearing the belt for a long period might have changed my vague perception of time completely, but I developed a peculiar headache after two months and quit. Still a cool idea; I might try it again. Anyone interested in trying it out?
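The mapping from the second hand to the 12 actuators isn't spelled out above, but with 12 motors evenly spaced around the waist, the natural scheme is one motor per 5-second arc, like positions on a clock face. A minimal sketch under that assumption (the function name is mine):

```python
def active_motor(second):
    """Map a second value (0-59) to one of 12 motors around the waist.

    Motor 0 sits at the '12 o'clock' position on the belt; each motor
    covers a 5-second arc, mirroring where the second hand would point.
    """
    if not 0 <= second <= 59:
        raise ValueError("second must be in 0..59")
    return second // 5

# seconds 0-4 -> motor 0, 5-9 -> motor 1, ..., 55-59 -> motor 11
```

On the Arduino side this index would simply select which actuator pin to pulse each second.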

Solo project: From concept to development (3D modelling in Rhinoceros + Arduino)


That's it, folks. Thank you for reading all the way down here.
Stay a bit longer if you have time ;)