The project was developed in collaboration with the University of Toronto and Autodesk Research under the mentorship of David Ledo, Tovi Grossman, George Fitzmaurice, and Fraser Anderson. During my four-month internship, we developed the MoodCubes system and ran a study with 8 creative practitioners to learn more about how the tool can support creative practices. Our MoodCubes research paper was accepted and presented at DIS ’22, the ACM SIGCHI Conference on Designing Interactive Systems. Be sure to watch the full conference talk below, and read on for more information!
In early stages of creative processes, practitioners externalize and combine inspirational materials, using strategies such as mood board creation to achieve a desired vision and aesthetic.

However, collecting and combining these different materials for mood boards can be difficult for a number of reasons.
- Mood boards are biased towards 2D images, neglecting audio, video, and 3D models. In an increasingly multimedia-rich world, static 2D mood boards are also becoming less adequate for expressing ideas for interactive video games, film, and more.
- Alternative externalizations such as prototypes are best suited for later stages and can be time-consuming and tedious to create. For example, in order to visualize an AR concept, a designer might have to learn professional editing software like Adobe After Effects.
- Online searches scatter inspiration across disjointed sources: different websites and assets in the file system. For example, images are collected on Pinterest while YouTube playlists contain video inspiration materials.
To address these challenges, I created various early sketches on iPad, in Sketch App, and in TiltBrush VR, then discussed the concepts with the project collaborators for additional feedback.
After iterating on the concepts, we created MoodCubes, a 3D mood boarding system for rapid creation, collection, and manipulation of multimedia content. MoodCubes runs on a client-server architecture combining JavaScript and Python.

While the client web app enables all key interactions with MoodCubes, the server maintains a database of all imported items and analyzes them to generate suggestions. Both the client and server use BabylonJS for 3D processing, with the client responsible for rendering visuals and enabling 3D interactions. Below are some MoodCubes we created with our final system.
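To give a feel for how the client side fits together, here is a minimal sketch of a MoodCubes-style BabylonJS scene. It is illustrative only: the canvas id, the `/api/items` endpoint, and the `importModel` helper are hypothetical stand-ins, not the actual MoodCubes code.

```javascript
// Minimal sketch of a MoodCubes-style client scene in BabylonJS.
// Names and the server endpoint below are illustrative assumptions.
import * as BABYLON from "babylonjs";
import "babylonjs-loaders"; // registers the glTF loader

const canvas = document.getElementById("moodcube-canvas"); // hypothetical canvas id
const engine = new BABYLON.Engine(canvas, true);
const scene = new BABYLON.Scene(engine);

// Orbit camera and basic lighting so the cube's contents are visible from any angle.
const camera = new BABYLON.ArcRotateCamera(
  "camera", Math.PI / 4, Math.PI / 3, 15, BABYLON.Vector3.Zero(), scene);
camera.attachControl(canvas, true);
new BABYLON.HemisphericLight("light", new BABYLON.Vector3(0, 1, 0), scene);

// The "cube": an inward-facing box whose walls frame the mood board content.
BABYLON.MeshBuilder.CreateBox(
  "cube", { size: 10, sideOrientation: BABYLON.Mesh.BACKSIDE }, scene);

// A dropped .gltf file could be handed to the glTF loader and reported to the server.
async function importModel(fileUrl) {
  await BABYLON.SceneLoader.AppendAsync("", fileUrl, scene);
  // Hypothetical endpoint: the server analyzes the asset and returns suggestions.
  await fetch("/api/items", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url: fileUrl }),
  });
}

engine.runRenderLoop(() => scene.render());
```

Rendering the cube as an inward-facing box keeps its walls visible from the inside, which is where the imported materials are arranged.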
To add content, viewers can drag images, videos, audio files, and 3D .gltf files from their computer into the scene. When adding content, MoodCubes decomposes objects on the server (e.g., extracting colour palettes) and suggests new materials without the need to search (e.g., 3D models, images, lighting effects). These suggestions are generated using libraries and services including Google’s Image Vision API, Google’s Video Vision API, the SketchFab API, and other freely available tools.
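As a rough illustration of that analysis step, the sketch below shows how labels and a colour palette could be pulled from an imported image and then used to query SketchFab for suggested models. It assumes a Node.js environment (the actual MoodCubes server is written in Python), and the `analyzeImage` helper is hypothetical.

```javascript
// Rough sketch of the server-side analysis step (illustrative, not the real server code).
const vision = require("@google-cloud/vision");
const client = new vision.ImageAnnotatorClient();

// Analyze an imported image: pull a colour palette and labels,
// then use the labels to query SketchFab for related 3D models.
async function analyzeImage(imagePath) {
  const [props] = await client.imageProperties(imagePath);
  const palette = props.imagePropertiesAnnotation.dominantColors.colors
    .map(c => c.color); // {red, green, blue} swatches for the extracted palette

  const [labels] = await client.labelDetection(imagePath);
  const query = labels.labelAnnotations.slice(0, 3).map(l => l.description).join(" ");

  // SketchFab's public search endpoint, used here to suggest 3D models.
  const res = await fetch(
    `https://api.sketchfab.com/v3/search?type=models&q=${encodeURIComponent(query)}`);
  const { results } = await res.json();

  return { palette, suggestions: results.slice(0, 5).map(m => m.name) };
}
```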

The system also provides filters to change the scene’s aesthetic and layout. The Lenses tool can change the material properties of the elements in the scene (below).
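A lens of this kind could be implemented as a pass over the scene's materials. The sketch below, which assumes the BabylonJS setup from the earlier snippet, shows one possible (hypothetical) `applyLens` helper that retints and roughens every element; the real Lenses tool may work differently.

```javascript
// Illustrative "lens": retint and roughen every mesh's material in the scene.
function applyLens(scene, tint, roughness) {
  for (const mesh of scene.meshes) {
    const mat = mesh.material;
    if (mat instanceof BABYLON.PBRMaterial) {
      mat.albedoColor = mat.albedoColor.multiply(tint); // shift the base colour
      mat.roughness = roughness;                        // flatten or polish the surface
    } else if (mat instanceof BABYLON.StandardMaterial) {
      mat.diffuseColor = mat.diffuseColor.multiply(tint);
    }
  }
}

// e.g. a warm, matte lens over the whole cube:
// applyLens(scene, new BABYLON.Color3(1.0, 0.8, 0.6), 0.9);
```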
Meanwhile, the Align and Rotate tools can rotate objects in the scene to face one of the cube’s walls or move them towards it (below).
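In the same spirit, the (hypothetical) helper below sketches how an align-style operation might snap items to a wall of the cube and turn them to face its centre; it is a simplification, not the actual MoodCubes implementation.

```javascript
// Illustrative align helper: push every item flush against one wall of the cube
// and turn it to face back toward the centre. Assumes the 10-unit box named "cube"
// from the earlier sketch, so half the cube size is 5.
function alignToWall(scene, wallNormal, halfSize = 5) {
  for (const mesh of scene.meshes) {
    if (mesh.name === "cube") continue; // skip the enclosing cube itself
    // Move the item onto the chosen wall plane, leaving the other axes untouched.
    const offset = wallNormal.scale(halfSize);
    mesh.position = new BABYLON.Vector3(
      wallNormal.x !== 0 ? offset.x : mesh.position.x,
      wallNormal.y !== 0 ? offset.y : mesh.position.y,
      wallNormal.z !== 0 ? offset.z : mesh.position.z);
    // Rotate it to face back toward the centre of the cube.
    mesh.lookAt(BABYLON.Vector3.Zero());
  }
}

// e.g. push everything against the back wall:
// alignToWall(scene, new BABYLON.Vector3(0, 0, 1));
```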
Finally, the system includes a versioning feature to keep track of multiple iterations of MoodCube designs. To save a MoodCube version, the viewer can press the upper + button in the centre of the top menu bar. After saving multiple versions, they can revisit their saved designs and click on deleted assets to restore them to the current scene (below).
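Conceptually, each saved version is a snapshot of the scene that can later be mined for deleted assets. The sketch below shows one way this could look using BabylonJS's scene serializer; the snapshot format and the `restoreAsset` helper are illustrative assumptions, not the real MoodCubes data model.

```javascript
// Illustrative versioning: keep a snapshot of the scene each time the viewer saves,
// so earlier designs (and deleted assets) can be brought back later.
const versions = [];

function saveVersion(scene) {
  versions.push({
    savedAt: Date.now(),
    // BabylonJS can serialize the full scene to a plain JSON object.
    snapshot: BABYLON.SceneSerializer.Serialize(scene),
  });
}

function restoreAsset(scene, versionIndex, meshName) {
  const { snapshot } = versions[versionIndex];
  const meshData = snapshot.meshes.find(m => m.name === meshName);
  if (meshData) {
    // Parse just that mesh back into the current scene.
    BABYLON.Mesh.Parse(meshData, scene, "");
  }
}
```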
After using the system ourselves, we recruited eight creative professionals (6 Female, 2 Male) aged 25 to 39 from disciplines including design, architecture, film, and theatre to learn more about how MoodCubes might integrate into existing creative practices. Below are some of the cubes they created during our research study.

For more details about MoodCubes, be sure to check out our full paper!