Just Screw It

Russell Okamoto
May 19, 2018 · 6 min read

Exploring The Depths Of Spatial Design

Earlier this week I met an exciting spatial design tool vendor and had fun showing a few approaches to UX challenges in XR. In particular, I demoed a prototype that addresses Z-Axis/depth navigation challenges when designing AR in AR using a screen-based device, my mobile phone.

So today, when I read, “Behind the Scenes of the World’s Best XR Designers / Interview with Sam Brewton from R/GA”, I was excited to hear Sam mention:

  1. How AR/VR design mistakes commonly begin by “[mis]understanding the Z-Axis and the camera as the viewport”
  2. The desire for an “AR and VR learning platform specifically for creating AR and VR” and “the best way to learn about designing for interactive three dimensions is in three dimensions.”

I’ve been developing interaction design solutions for 25 years. I started building spatial hypertext systems in 1992 when the first wave of Pen Computing came and died quickly that same year.

For me, the holy grail of interaction design is — and always has been — about enabling end-user tailorability. That is, the ability for people of any skill level to not only browse and experience XR worlds, but to create them quickly and easily.

In my opinion, one of the first hurdles to tailorability in XR is Z-Axis/depth control. Overcoming this hurdle is essential for developing tools that can run on low-cost hardware and let anybody build and learn "XR in XR", what I call "in holo res" (à la "in media res").

Problem of Z-Axis/Depth Control

When designing with physical tools or with digital XR tools, artists want to naturally express three-dimensional depth, e.g., by placing objects in front of other objects to create the illusion of perspective.

However, using screen-based devices (e.g., laptops, tablets, phones) for XR composition fundamentally limits artistic expressiveness due to the planar, two-dimensional constraints of touch-based input.

As your mouse, finger, or pen slides along a glass plane, touch screen input systems record raw X and Y touch data, but no Z-Axis information. So Z (depth) data must be inferred indirectly, e.g., through additional hints about touch pressure and touch radius. Apple's 3D Touch APIs, for example, inform apps of every touchpoint's force and radius (based on how much screen area is covered by your stylus or fingertip).

Apps can interpret these extra touch hints as a rough estimate of how hard you are pressing downward/inward into the device screen and, consequently, how "deep" you are gesturing along the virtual Z direction of a scene.
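As a rough illustration, here is a minimal sketch (assuming a UIKit view; this is not Spriteville's code) of how an app might read those 3D Touch hints as an inward-only depth estimate:

```swift
import UIKit

// A minimal sketch of reading Apple's 3D Touch data as a rough inward-depth
// hint. The mapping from force to "depth" is an assumption for illustration.
class DepthHintView: UIView {

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }

        // Raw planar position on the glass.
        let point = touch.location(in: self)

        // Extra hints: how hard the finger presses and how much screen it covers.
        let normalizedForce = touch.maximumPossibleForce > 0
            ? touch.force / touch.maximumPossibleForce   // 0.0 ... 1.0
            : 0
        let radius = touch.majorRadius                   // contact area, in points

        // Interpret pressure as "pushing into" the scene; there is no
        // corresponding signal for pulling back out toward the viewer.
        let inwardDepthHint = normalizedForce

        print("x: \(point.x), y: \(point.y), inward depth hint: \(inwardDepthHint), radius: \(radius)")
    }
}
```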

But there is no corresponding touch data that informs apps about upward/outward movement in the Z direction, away from the device screen.

Resorting To Workarounds

In the absence of a holistic depth gesture (one communicating both downward/inward and upward/outward intents), screen-based apps, like 2-D drawing or painting programs, often resort to layering as a way to manage and simulate scene depth.

Management of layers is typically implemented as a stack of user interface views attached to a main canvas. Consequently, to select and manipulate a particular layer (e.g., to move it forward or backward within a composition), you must first swap focus from the main canvas to the adjunct layer stack, manipulate the layer stack, and then return focus to the main canvas. Such layer management causes undesirable workflow latency, mandating a context switch that pulls your attention away from the main canvas area.
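To make that workflow concrete, here is a minimal sketch of the layer-stack pattern; the canvas, selection property, and reorder methods are illustrative assumptions, not any specific app's API:

```swift
import UIKit

// A minimal sketch, assuming a typical layer-stack design: the canvas keeps
// its layers as ordered subviews, and reordering happens from a separate
// layer panel rather than by gesturing on the canvas itself.
final class CanvasViewController: UIViewController {
    let canvas = UIView()          // main drawing surface
    var selectedLayer: UIView?     // layer chosen in the adjunct layer panel

    // Called from the layer panel UI, not from the canvas: the user must
    // switch focus away from the composition to change depth ordering.
    func moveSelectedLayerForward() {
        guard let layer = selectedLayer,
              let index = canvas.subviews.firstIndex(of: layer),
              index < canvas.subviews.count - 1 else { return }
        canvas.exchangeSubview(at: index, withSubviewAt: index + 1)
    }

    func moveSelectedLayerBackward() {
        guard let layer = selectedLayer,
              let index = canvas.subviews.firstIndex(of: layer),
              index > 0 else { return }
        canvas.exchangeSubview(at: index, withSubviewAt: index - 1)
    }
}
```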

For screen-based 3-D prototyping apps, Z-Axis translation is handled in a variety of ways that vary from app to app. But each solution typically involves indirect methods: multi-step operations whereby dedicated modes, menus, or object handles are first selected, and a secondary gesture then effects the actual Z-Axis translation. This indirect, multi-step process is slow and cumbersome.
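For instance, a mode-based scheme might look like the following sketch (using SceneKit for the scene graph; the depth-mode flag and scaling factor are assumptions for illustration, not any particular app's design):

```swift
import SceneKit
import UIKit

// A minimal sketch of the indirect, mode-based approach: a dedicated
// "depth mode" must first be toggled (via a button or menu) before an
// ordinary drag is reinterpreted as Z-Axis translation.
final class PrototypingController {
    var selectedNode: SCNNode?
    var depthModeEnabled = false   // step 1: user toggles this from a toolbar

    // step 2: a secondary gesture performs the actual translation
    func handlePan(translation: CGPoint) {
        guard let node = selectedNode else { return }
        let scale: Float = 0.01
        if depthModeEnabled {
            // Vertical drag is remapped to depth while the mode is active.
            node.position.z += Float(-translation.y) * scale
        } else {
            node.position.x += Float(translation.x) * scale
            node.position.y += Float(-translation.y) * scale
        }
    }
}
```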

In summary, when it comes to using screen-based devices (laptops, mobile phones, tablets) for XR creation, a direct manipulation method for depth gesturing that lets you communicate both downward/inward and upward/outward Z movement has heretofore been undefined. The absence of such depth gesturing limits expressiveness in digital creativity apps across many fields, including art, social media, web communication, games, drafting, VR, and AR.

A Solution: The Corkscrew Gesture

I recently developed a mobile art app called Spriteville and launched it on the App Store a few months ago. The goal of Spriteville is to make animating with mobile digital ink as quick, easy, and expressive as typing. To do so, Spriteville introduces a novel gesture for translating the Z-Axis/depth of drawn objects (aka "sprites").

In Spriteville, you can directly manipulate the Z-Axis of an object (or a layer of objects) seamlessly using the Corkscrew gesture: in one motion, you select a sprite and then move it along a roughly circular path. The direction of the circular movement (e.g., clockwise) changes the selected sprite's Z-position, placing it behind the nearest other sprite (based on Z-position); circular movement in the opposite direction places it in front of the nearest other sprite. The degree of movement and the number of revolutions required to trigger a Z-position change can be pre-configured.

The Corkscrew gesture can be applied iteratively and incrementally in one seamless motion to continuously change a sprite's Z-position in relation to many other sprites. The speed and/or the radius of the circular movement control how far the sprite's Z-position changes in relation to other sprites; e.g., a relatively fast or wide circular movement moves the sprite in front of (or behind) several sprites instead of just a single sprite. Z-Axis translation can move a sprite continuously or discretely, where the selected sprite "snaps" to alignment positions behind or in front of nearby sprites.
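To make the mechanics concrete, here is a minimal sketch of how a Corkscrew-style detector could be built; this is not Spriteville's actual implementation, and the trigger threshold and single-step snap are illustrative assumptions:

```swift
import UIKit

// A sketch of a Corkscrew-style detector. It accumulates the signed angle
// swept around the selected sprite's center and reports when a Z step
// should occur.
final class CorkscrewTracker {
    private var accumulatedAngle: Double = 0
    private var lastPoint: CGPoint?

    /// Radians of circular motion required to trigger one Z step (configurable).
    var triggerAngle: Double = .pi   // half a revolution per step

    /// Feed each touch sample while a sprite is selected and dragged.
    /// Returns -1 to push the sprite back one position, +1 to bring it
    /// forward one position, or 0 if no threshold was crossed.
    func update(touchPoint: CGPoint, aroundCenter center: CGPoint) -> Int {
        defer { lastPoint = touchPoint }
        guard let previous = lastPoint else { return 0 }

        // Signed angle swept between the previous and current samples,
        // measured around the sprite's center (screen coordinates, y down).
        let a = atan2(Double(previous.y - center.y), Double(previous.x - center.x))
        let b = atan2(Double(touchPoint.y - center.y), Double(touchPoint.x - center.x))
        var delta = b - a
        if delta > .pi { delta -= 2 * .pi }     // unwrap across the ±π seam
        if delta < -.pi { delta += 2 * .pi }
        accumulatedAngle += delta

        if accumulatedAngle >= triggerAngle {   // clockwise on screen: push back
            accumulatedAngle = 0
            return -1
        }
        if accumulatedAngle <= -triggerAngle {  // counterclockwise: bring forward
            accumulatedAngle = 0
            return +1
        }
        return 0
    }
}

// Usage sketch: snap the selected sprite's layer one position per trigger.
// spriteLayer.zPosition += CGFloat(tracker.update(touchPoint: p, aroundCenter: spriteCenter))
```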

The Corkscrew gesture works on any 2-D input surface, enabling you to signal 3-D axis movement in a single motion without lifting your finger. This is desirable because it allows fast, precise, direct manipulation, unlike methods that require multiple steps.

The Corkscrew gesture also preserves the Pinch gesture for the familiar action of resizing/scaling an object. While Corkscrew changes an object's Z-Axis position, the well-known Pinch gesture changes how big or small the object is. Note that both the Corkscrew and Pinch gestures can be applied to a single selected object or to all objects in a scene at once…
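As a small sketch of how the two gestures can coexist (the selectedSprite property here is an illustrative assumption), Pinch can remain a standard UIPinchGestureRecognizer that only ever touches scale, never depth:

```swift
import UIKit

// A minimal sketch: Pinch handles resizing the selected sprite, while
// zPosition is left entirely to the Corkscrew handling shown earlier.
final class SpriteCanvasView: UIView {
    var selectedSprite: UIView?

    override init(frame: CGRect) {
        super.init(frame: frame)
        let pinch = UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:)))
        addGestureRecognizer(pinch)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    @objc private func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        guard let sprite = selectedSprite else { return }
        sprite.transform = sprite.transform.scaledBy(x: gesture.scale, y: gesture.scale)
        gesture.scale = 1   // reset so scaling is incremental per callback
    }
}
```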

So, "augmenting" the 2-D Corkscrew gesture into 3-D, here's a quick Corkscrew gesture prototype in AR…

XR In XR Tailorability

The Corkscrew gesture is just one of many gestures I think are needed to unlock XR in XR tailorability, particularly when targeting low-cost, screen-based devices like phones, tablets, and laptops.

We’ve seen the power of VR in VR tailorability from established game engine vendors and ambitious startups.

Likewise, I believe there are compelling benefits for AR in AR tailorability such as:

  1. In-situ context — e.g. incorporating real world forms, lighting, textures, and spatial audio into your designs via sensor mapping as well as placement/pathway triggers.
  2. Instant live feedback — simultaneously “building and playing” enables a tight loop where tailorability decisions are rapidly tested and iterated.
  3. Shareability — multi-user experiences enable collaborative solutions and inspire applications based on audiences with thousands of people.

But a richer XR UX lexicon and grammar are required. The gesture language of XR needs to be prototyped, proven, and promulgated.

Stay tuned for more posts!

For fun, I’m developing an “AR in AR” platform and a suite of XR games. If you are looking for XR design and development, please reach out. I’d love to help your team!


Russell Okamoto

Co-creator of Spriteville, Dynamic Art, http://spriteville.com / Co-founder of Celly, Emergent Social Networks, http://cel.ly