Digital Activation In Physical Space
How could new technologies like image recognition and augmented reality help users make better in-store shopping decisions?
How would these technologies change the way we discover and interact with digital content in a physical world?
Online + In-Store
With image recognition, we can leverage the wealth of digital content that already exists on e-commerce platforms and get the advantages of both online and in-store shopping.
You can see customers' ratings, reviews, detailed product pictures, and recommendations for products that go together.
Sometimes there are even demo videos showing you how to take full advantage of a product.
It is a better discovery and browsing experience in general.
Products within your field of view are organized by physical proximity.
You can look at more products at the same time.
What are the needs in different departments?
How would that inspire different features and result in different ways of interacting?
What stories or interactions would the app's potential functionality enable in turn?
Unified Interface = Unified Experience
At the very beginning when we lay down the ground for the UI structure, our vision was to use a single entry point for all kinds of activations. From 2D materials to 3D objects, still items to video clips, including locations and sound, no matter what the subject is, we would activate it from the same interface. There shouldn't be a need to switch. Therefore the UI structure should be adaptive to present various digital contents coming from different activations.
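The single-entry-point idea above can be sketched in code. This is an illustrative sketch only, not the app's actual implementation; all function names, type labels, and layout strings are hypothetical:

```python
# Sketch: one entry point that routes every kind of activation -- 2D material,
# 3D object, video, location, sound -- through the same adaptive interface,
# so the user never has to switch modes. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Activation:
    subject_type: str  # "2d", "3d", "video", "location", or "sound"
    payload: dict      # digital content fetched for this activation

def activate(subject_type: str, payload: dict) -> str:
    """Single entry point: the UI adapts to the content, not the other way."""
    presenters = {
        "2d": "flat card",
        "3d": "anchored AR overlay",
        "video": "full-screen player",
        "location": "map card",
        "sound": "audio result card",
    }
    layout = presenters.get(subject_type, "generic card")
    return f"present {subject_type} content as {layout}"

print(activate("video", {"url": "demo.mp4"}))
# -> present video content as full-screen player
```

The point of the sketch is that the caller never chooses a screen: every subject goes through `activate`, and the presentation is decided by the content type.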
Different content needs to be presented differently
Some content requires the camera to stay focused on the subject throughout the experience; once tracking is lost, the content disappears too.
Other content stays on screen once the subject is recognized, even if the camera moves away.
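The two presentation modes above can be captured as a simple flag on each piece of content. A minimal sketch, with hypothetical class and event names:

```python
# Sketch: tracking-bound content disappears when the camera loses the subject,
# while persistent content survives after recognition. Hypothetical names.
from enum import Enum

class Mode(Enum):
    TRACKED = "tracked"        # dismiss when tracking is lost
    PERSISTENT = "persistent"  # keep on screen after recognition

class ContentView:
    def __init__(self, mode: Mode):
        self.mode = mode
        self.visible = False

    def on_subject_recognized(self):
        self.visible = True

    def on_tracking_lost(self):
        # Only tracking-bound content disappears with the subject.
        if self.mode is Mode.TRACKED:
            self.visible = False

ar_overlay = ContentView(Mode.TRACKED)      # e.g. a 3D demo pinned to the item
review_card = ContentView(Mode.PERSISTENT)  # e.g. ratings and reviews
for view in (ar_overlay, review_card):
    view.on_subject_recognized()
    view.on_tracking_lost()
print(ar_overlay.visible, review_card.visible)  # False True
```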
Card as a basic structure for the interface
In-store User Testing
We took turns being moderator and note taker, meeting 5 to 7 participants per day over 4 days. I shared some thoughts and tips about user testing on my blog here; feel free to check it out.
Inform users about what's coming next
This was the biggest takeaway from the user study, so we started iterating on ways of organizing content so that we could always show a hint of what's coming next and what else is available.
During the early user testing, some participants tried to fill the whole camera screen with the item's barcode, because that is how they understood an item scan.
We needed to explicitly communicate the idea of the camera "reading the whole item". This became the one and only message in the initial instruction.
In the following tests, we found that if we presented the instruction as slides, people tended to swipe through quickly without actually reading the text or understanding the illustration on each slide. So we changed it into a short animation that plays automatically the first time a user launches the app.
As we kept adding new types of content, new features, and new subjects to activate and enrich the experience, we developed a design language, a set of principles, for the app.
Multiple-item recognition was introduced. Different filters can be applied so users can quickly tell which item suits them. As the camera moves closer to focus on the single item the user wants, the filters disappear and a preview of that product's detail page slides in near the bottom of the screen.
If the item contains rich AR content, it shows up automatically; once the user moves the camera away, the product detail page is served in full screen. If the content is a video, however, it continues to play in full screen, and a product detail page featuring the video is served after the video finishes.
You can save a particular product or content to your list.
Tap and hold to enable sound recognition.
We also integrated video activation: point the camera at a playing video to get a list of items related to that particular video.
Go to your list to see what you saved previously.