Sobieski

Case study of a WebGL experience with party cups

Merci-Michel ®
5 min read · Dec 5, 2018

The brief

At the end of March 2018, we collaborated with Sid Lee Paris on the launch of a new website for the Sobieski Vodka brand. The new site embodies the brand identity and positioning created by the agency, which boils down to: simplicity, against the grain, no bullshit.

For this occasion, the idea was to create a website with an interactive experience, inviting the users to interact with the iconic object of any home party: the red party cup! Try it here: sobieskivodka.com

No bullshit, ok.

At this stage, the first step was to define what "no bullshit" actually means, and what it involves. With the agency, we decided to take the concept to its extreme: the bare minimum of wording, no "how to" tutorials, no explanations, no explicit goal, and a minimalist user interface.

That gave us the interactive concept: an exploratory experience where users have to find out, on their own, how to interact with the cup and what to do with it.

The design issue

Building that kind of experience is all about finding the right balance between too few cues and too many (we wanted as few as possible). Yet cues are pretty much everything on a website: text, motion, camera movement (we are in a 3D environment), UI, and so on.

Here are several of the design challenges and the tricks we set up:

The tech answer

We used three.js for rendering and cannon.js as the physics engine.

Controls

The first challenge was how to easily manipulate all objects in the scene. Since it is at the core of all possible actions, it needed to be simple and intuitive.

We let the user grab objects with raycasting and a point-to-point constraint. For the sake of simplicity, grabbed objects can only move on the two axes defined by an abstract plane facing the camera.
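The camera-facing drag plane boils down to a ray-plane intersection: the pointer ray is intersected with a plane whose normal is the camera's view direction, passing through the grab point. Here is a minimal plain-JS sketch of that math (the helper names and values are ours, not the production code):

```javascript
// Dot product of two {x, y, z} vectors.
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Intersect a ray (origin o, direction d) with a plane (point p, normal n).
// Returns the intersection point, or null if the ray is parallel to the plane.
function intersectPlane(o, d, p, n) {
  const denom = dot(d, n);
  if (Math.abs(denom) < 1e-8) return null;
  const t = (dot(p, n) - dot(o, n)) / denom;
  return { x: o.x + d.x * t, y: o.y + d.y * t, z: o.z + d.z * t };
}

// Example: the camera looks down -z; the drag plane faces the camera
// and passes through the grab point at z = -5.
const viewDir = { x: 0, y: 0, z: -1 };
const grabPoint = { x: 0, y: 0, z: -5 };
const hit = intersectPlane({ x: 1, y: 2, z: 0 }, { x: 0, y: 0, z: -1 },
                           grabPoint, viewDir);
// hit stays on the z = -5 plane: the grabbed object moves in x/y only.
```

Constraining the result to this plane is what makes the third axis inaccessible by dragging alone, which is why camera movement had to provide it.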

But to let the user build pyramids, stack cups, do flips, and so on, we needed a third dimension. We tried several options with the keyboard and scroll wheel, but since we wanted mouse/touch interactions only, we finally settled on moving the camera by clicking on the floor, with classic orbit controls.

To make things easier for users and help them navigate the 3D space, we added helpers: a grid, a line, and a target that show where the cup will fall. We also let the user catch and throw balls with, respectively, a short and a long click.
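Splitting catch and throw on press duration can be as simple as timing the click; a tiny sketch of that idea (the threshold value is an assumption, not the one shipped):

```javascript
const LONG_CLICK_MS = 250; // illustrative threshold, not the production value

// A press shorter than the threshold catches a ball; a long press throws it.
function classifyClick(pressMs, releaseMs) {
  return releaseMs - pressMs >= LONG_CLICK_MS ? 'throw' : 'catch';
}

console.log(classifyClick(0, 100)); // "catch"
console.log(classifyClick(0, 400)); // "throw"
```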

Helpers

For cups, grabbing was trickier than we expected. At first we let the user grab the cup anywhere on its surface, but that made it too hard to set the cup down straight without a lot of effort. The solution we found was to only allow grabbing the cup by its top or its bottom.

But this approach created another problem: if the user clicks the cup away from its top or bottom, it jumps to the constraint position instantly, generating unwanted movement.

So instead of a point-to-point constraint, we used a spring between the cursor and the cup that hardens the further the user moves the mouse. Cups also use a different linear damping while grabbed, to make them easier to manipulate.
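A distance-hardening spring can be sketched in a few lines of plain JS; the stiffness and hardening constants here are illustrative, not the values used in the experience:

```javascript
// Pull the cup toward the cursor with a spring whose stiffness grows
// with the cursor-to-cup distance, instead of snapping it instantly.
function springForce(cursor, cup, baseStiffness, hardening) {
  const dx = cursor.x - cup.x;
  const dy = cursor.y - cup.y;
  const dist = Math.hypot(dx, dy);
  // Stiffness increases the further the cursor is from the cup.
  const k = baseStiffness + hardening * dist;
  return { fx: k * dx, fy: k * dy, stiffness: k };
}

const near = springForce({ x: 1, y: 0 }, { x: 0, y: 0 }, 10, 5);
const far  = springForce({ x: 4, y: 0 }, { x: 0, y: 0 }, 10, 5);
// near.stiffness = 15, far.stiffness = 30: the spring "hardens" with distance.
```

The benefit over a rigid constraint is that a click far from the cup's top or bottom produces a gentle pull rather than an instant teleport.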

Physics and events

For the cup's physics body, since collision is expensive to compute for concave shapes, we only used boxes, thin and positioned along the cup's surface, plus one cylinder for the bottom. To allow cups to stack, the boxes had to be as thin as possible. But small rigid bodies are less reliable for collision detection. To avoid tunneling, we added two additional layers of boxes that are only enabled against balls whose velocity is greater than a predefined threshold.
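The velocity gate for the extra collision layers is a simple speed check; a sketch of the idea, with an assumed threshold value:

```javascript
const FAST_BALL_THRESHOLD = 8; // illustrative threshold, not the shipped value

// The extra (thicker) collision boxes only participate against a ball
// when it is moving fast enough to risk tunneling through the thin
// primary boxes in a single physics step.
function extraBoxesEnabled(ballVelocity) {
  const speed = Math.hypot(ballVelocity.x, ballVelocity.y, ballVelocity.z);
  return speed > FAST_BALL_THRESHOLD;
}

console.log(extraBoxesEnabled({ x: 0, y: -3, z: 0 })); // slow ball → false
console.log(extraBoxesEnabled({ x: 6, y: -9, z: 2 })); // fast ball → true
```

Gating the extra shapes this way keeps the broadphase cheap in the common case, since slow balls only ever test against the thin primary boxes.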

To determine user actions we needed precise details about the state of each object. Is it in the air, stable, right side up, upside down, stacked with another, idling? How many times did it bounce? And so on.

We added sphere-shaped sensors inside the cup, distributed evenly along its height and sized to fit the model. (We also created an additional sensor, active when two cups are close enough and at roughly the same height, to detect pyramids.)

We could then check, for example, whether a ball was inside a cup, or whether two cups were stacked by testing if every sensor intersected its counterpart in the other cup.
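The stacking test reduces to pairwise sphere-overlap checks; a minimal sketch with a hypothetical data layout (each cup is an array of sphere sensors ordered bottom to top):

```javascript
// Two spheres {x, y, z, r} intersect when their centers are closer
// than the sum of their radii.
function spheresIntersect(a, b) {
  const d = Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
  return d < a.r + b.r;
}

// Two cups count as stacked when every sensor of one overlaps the
// matching sensor of the other.
function cupsStacked(sensorsA, sensorsB) {
  return sensorsA.every((s, i) => spheresIntersect(s, sensorsB[i]));
}

// Cup B sits almost exactly inside cup A: every sensor pair overlaps.
const cupA = [{ x: 0, y: 0,   z: 0, r: 0.5 }, { x: 0, y: 1,   z: 0, r: 0.6 }];
const cupB = [{ x: 0, y: 0.2, z: 0, r: 0.5 }, { x: 0, y: 1.2, z: 0, r: 0.6 }];
console.log(cupsStacked(cupA, cupB)); // true
```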

To detect whether one or more flips were complete, we used a stabilization delay on landing, to make sure the cup would not fall over right after hitting the floor.
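The delay amounts to requiring the cup to stay upright and slow for a short window after touchdown before validating the flip. A sketch of that state machine, with an assumed window length and speed cutoff:

```javascript
const STABLE_WINDOW_MS = 300;  // illustrative delay, not the shipped value
const SPEED_EPSILON = 0.05;    // illustrative "at rest" speed cutoff

// Returns an update function: call it every frame with the current time,
// whether the cup is upright, and its speed. It reports true only once
// the cup has been stable for the whole window.
function makeFlipValidator() {
  let landedAt = null;
  return function update(nowMs, upright, speed) {
    const stable = upright && speed < SPEED_EPSILON;
    if (!stable) { landedAt = null; return false; } // wobble resets the timer
    if (landedAt === null) landedAt = nowMs;
    return nowMs - landedAt >= STABLE_WINDOW_MS;
  };
}

const validate = makeFlipValidator();
console.log(validate(0, true, 0.01));   // just landed → false
console.log(validate(100, false, 0.5)); // wobbled, timer resets → false
console.log(validate(200, true, 0.01)); // stable again, window restarts → false
console.log(validate(600, true, 0.01)); // stable for 400 ms → true
```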

3-cup pyramid detection

Sound

To add a realistic touch, we bought some cups and ping-pong balls and recorded them during a pong session at the studio (we have some good snipers here). We then spatialized the sounds with the Web Audio API so they sound closer or farther away.
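Distance-based attenuation is what the Web Audio API's `PannerNode` computes under its "inverse" distance model; a plain-JS sketch of that formula (the `refDistance` and rolloff values here are defaults, not necessarily what the site uses):

```javascript
// Gain of a sound source under the Web Audio "inverse" distance model:
// refDistance / (refDistance + rolloff * (max(d, refDistance) - refDistance))
function inverseDistanceGain(distance, refDistance = 1, rolloff = 1) {
  const d = Math.max(distance, refDistance);
  return refDistance / (refDistance + rolloff * (d - refDistance));
}

console.log(inverseDistanceGain(1)); // 1 (at reference distance, full volume)
console.log(inverseDistanceGain(3)); // ~0.33 (farther away → quieter)
```

In the real page this is handled by the `AudioContext` graph itself; the point of the sketch is just to show why a bounce near the camera sounds louder than one across the table.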

VR prototype

To go further, we also prototyped the experience in VR.

Conclusion

Na zdrowie! Enjoy responsibly ;)

Want to join us?

We are currently looking for talented people!

More information here:
https://jobs.merci-michel.com/
Submit your profile/references here: jobs@merci-michel.com

Sincerely Merci-Michel
www.merci-michel.com
Follow @mercimichel on Twitter
Like us on Facebook
