Fabio Matsui / April 25, 2017

Unity 3D: a primer for XAML developers

As a creator of customer experiences, Wire Stone has been developing XAML-based applications for years, including Silverlight, Windows Phone, WPF, and UWP apps. In fact, with so many technologies using XAML, we simply use the term generically to represent entire classes of applications. But you can’t get too comfortable if you want to stay relevant, and relevant customer experiences today include the immersive experiences that virtual reality provides and Unity enables.

Here at Wire Stone, diving into the world of Unity 3D to create VR and HoloLens apps for our clients motivated us to quickly adapt to a new development paradigm. We didn’t find much out there to help XAML developers make that shift. The goal here is to offer a solid starting point that we hope makes it easier for other developers to take the Unity 3D plunge.

Pages versus scenes

XAML pages typically start with a control that contains other controls. Layouts are normally based on calculating rectangles, so controls eventually end up with a size and location in 2D space. A XAML file may also contain elements that are not necessarily visual, such as elements that provide supplemental logic, glue, or animations.

In Unity, the closest equivalent to a page is a scene, where 3D objects have 3D coordinates. There’s not much of a layout concept as with XAML Grids and StackPanels. However, similar to XAML, you can add other objects that are not necessarily visual but provide supplemental logic.

Unity3D Editor

Hierarchies

XAML hierarchies are based on nesting elements in parent/child relationships. Unity has a similar concept; however, the hierarchy in Unity has a strong relationship with 3D transforms. Think of a 3D transformation as basically moving, rotating, and scaling 3D objects. The idea, for example, is that if you scale the parent object, any children are scaled as well.

This is what I initially found confusing about hierarchies in Unity. With XAML you look for the children of an element. In Unity, you first get the “transform” of a GameObject and then walk the transform’s children to traverse the hierarchy.
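For example, here is a minimal sketch of walking a GameObject’s children through its transform (the ChildLister class and the “Wheel” child name are purely illustrative):

    using UnityEngine;

    public class ChildLister : MonoBehaviour
    {
        void Start()
        {
            // In XAML you would ask a panel for its Children collection;
            // in Unity you enumerate the Transform component instead.
            foreach (Transform child in transform)
            {
                Debug.Log(child.gameObject.name);
            }

            // A specific child can also be looked up by name (returns null if missing).
            Transform wheel = transform.Find("Wheel");
        }
    }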

Seeing 3D objects

If you want to see a button or image in XAML, you simply add one. Where that button or image appears on screen depends on its position and size. Unity is a bit more complicated. A button or image in Unity requires three things: a 3D object, a light, and a camera. What you actually see depends on where the 3D object is located, how big the object is, how the light illuminates the object, where the viewer camera is located, and the direction in which the camera is pointed.

Even this is an oversimplification. To really understand how 3D objects are seen, we need to dive a little deeper into how 3D objects are rendered.
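To make the three requirements concrete, here is a rough sketch of creating them from a script attached to an empty GameObject in an otherwise empty scene. In practice you would normally place the object, light, and camera in the Unity Editor; the names and positions below are arbitrary:

    using UnityEngine;

    public class MinimalSceneSetup : MonoBehaviour
    {
        void Start()
        {
            // The 3D object: a built-in cube primitive at the origin.
            GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
            cube.transform.position = Vector3.zero;

            // A light, so the cube's surfaces are actually shaded.
            var lightGo = new GameObject("Directional Light");
            lightGo.AddComponent<Light>().type = LightType.Directional;
            lightGo.transform.rotation = Quaternion.Euler(50f, -30f, 0f);

            // A camera a few units back, pointed at the cube.
            var camGo = new GameObject("Scene Camera");
            camGo.AddComponent<Camera>();
            camGo.transform.position = new Vector3(0f, 1f, -5f);
            camGo.transform.LookAt(cube.transform);
        }
    }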

Rendering 3D objects

In 2D graphics the smallest unit is a pixel, which has an (x, y) coordinate and a color. In 3D graphics the smallest unit is a triangle, which has an (x, y, z) coordinate at each corner. A complex 3D object is made of a collection of 3D triangles called a mesh.

What surprises new 3D developers is that sometimes they see a beautiful preview of a 3D model, but when they import it into Unity, the 3D model doesn’t look the same.

It’s an incredibly complex subject, but I’ll offer an oversimplified explanation of how this rendering works by breaking it down into meshes and materials.

Meshes are used to describe the model geometry, that is, where the 3D triangles are located in 3D space. 
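As a sketch of what that means in practice, here is the smallest possible mesh built in code: three vertices joined into a single triangle and handed to a MeshFilter (the SingleTriangle class name is illustrative):

    using UnityEngine;

    [RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
    public class SingleTriangle : MonoBehaviour
    {
        void Start()
        {
            var mesh = new Mesh();

            // Three corners, each an (x, y, z) position in 3D space...
            mesh.vertices = new Vector3[]
            {
                new Vector3(0f, 0f, 0f),
                new Vector3(0f, 1f, 0f),
                new Vector3(1f, 0f, 0f)
            };

            // ...joined into one triangle (the ints are indices into the vertices array).
            mesh.triangles = new int[] { 0, 1, 2 };
            mesh.RecalculateNormals();

            GetComponent<MeshFilter>().mesh = mesh;
        }
    }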

Materials are then applied to the mesh. Think of it as applying paint to the mesh. In addition to the paint color, you can define its glossiness. And this gloss is particularly affected by the location of the viewer relative to the lights in the environment. 

You can have a mesh for a car, but that only describes the geometry of the car, not its look.

You could define the car paint as “red,” but that’s not enough to provide a full 3D effect.

For a realistic sheen, you need to create reflection, which means defining where a light source is located.

Shaders

I’ve intentionally skipped some key details. For example, materials like the red paint of our car are implemented using highly specialized programs called shaders. These programs are executed on the GPU rather than the CPU. A material is basically a shader configured with specific parameters like color, gloss, texture, etc.

The more complex the shader, the more realistic and cool the material. The problem is that complex shaders are extremely slow to render, so oftentimes we are forced to sacrifice some realism (and coolness) for a simpler shader with a faster rendering time.
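To illustrate the shader/material relationship, here is a sketch of the red car paint built in code. It assumes the built-in Standard shader, whose parameters include _Glossiness (smoothness) and _Metallic; a different shader would expose different parameters:

    using UnityEngine;

    public class RedCarPaint : MonoBehaviour
    {
        void Start()
        {
            // A material is a shader plus a set of parameter values.
            var paint = new Material(Shader.Find("Standard"));
            paint.color = Color.red;                 // base color
            paint.SetFloat("_Glossiness", 0.9f);     // how sharp the sheen is
            paint.SetFloat("_Metallic", 0.4f);       // how metal-like the reflection is

            GetComponent<MeshRenderer>().material = paint;
        }
    }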

Putting it all together

Now that you have an idea of the minimum elements required to render a 3D object, let’s see how they come together to render a cube (a script sketch follows the list below).

  • Mesh: Set as cube.
  • Material: Under MeshRenderer, element 0 is set as Unity’s built-in default material.
  • Shader: The built-in default material uses the standard shader.
  • Light: This is the light that the shader uses to calculate the look of the rendered material. Note that a light is not mandatory. Some shaders are designated unlit, which means they ignore any lights in the scene.
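The same breakdown can be read back from a script, which is a handy sanity check when an imported model doesn’t look right. This is only a sketch; the exact asset names in the log can vary by Unity version and render pipeline:

    using UnityEngine;

    public class CubeBreakdown : MonoBehaviour
    {
        void Start()
        {
            GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);

            // Mesh: the cube geometry, held by the MeshFilter.
            Mesh mesh = cube.GetComponent<MeshFilter>().sharedMesh;

            // Material: element 0 of the MeshRenderer's materials (the built-in default).
            Material material = cube.GetComponent<MeshRenderer>().sharedMaterial;

            // Shader: the default material's shader (the Standard shader).
            Debug.Log(mesh.name + " / " + material.name + " / " + material.shader.name);

            // Light: optional, but without one a lit shader renders the cube nearly black.
            new GameObject("Directional Light").AddComponent<Light>().type = LightType.Directional;
        }
    }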

Code-behind versus components

The logic in XAML applications can be implemented in code-behind or in view models in MVVM pattern-based apps.

Unity is quite different. It’s more like behaviors in XAML: logic is attached to an object via scripts, most commonly written in C#. Component classes are typically subclasses of Unity’s MonoBehaviour.

You can add multiple components to a GameObject, and the same component can be attached to multiple GameObjects. For example, you can create a component that moves an object up and down. That component can be added to any object in the scene (rocks, cars, bad guys, etc.) and these objects will move up and down. A component can access its associated GameObject via its gameObject property, the XAML equivalent of accessing a behavior’s AssociatedObject property.
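Here is a sketch of that up-and-down component; the class name and the two public fields are just illustrative, but the Start()/Update() structure is the typical MonoBehaviour shape:

    using UnityEngine;

    // Attach to any GameObject (a rock, a car, a bad guy) to make it bob up and down.
    public class UpAndDownMover : MonoBehaviour
    {
        public float amplitude = 0.5f;   // how far to move, in world units
        public float speed = 2f;         // how fast to oscillate

        private Vector3 startPosition;

        void Start()
        {
            // 'gameObject' is the object this component is attached to,
            // roughly the XAML behavior's AssociatedObject.
            startPosition = gameObject.transform.position;
        }

        void Update()
        {
            float offset = Mathf.Sin(Time.time * speed) * amplitude;
            transform.position = startPosition + Vector3.up * offset;
        }
    }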

Events versus game loop

XAML applications rely primarily on events like button clicks, window size changes, network events, etc. In Unity 3D you also have events, but you’ll find yourself working a lot in the Update() function.

The Update() function is called once per frame. This is the most common place to add logic that affects the app’s behavior.
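For instance, where a XAML app would handle a Button.Click event, a Unity script typically polls the input state each frame. A minimal sketch (the jump itself is arbitrary):

    using UnityEngine;

    public class ClickToJump : MonoBehaviour
    {
        void Update()
        {
            // No Click event to subscribe to: ask the input system every frame.
            if (Input.GetMouseButtonDown(0))   // true only on the frame the button goes down
            {
                transform.position += Vector3.up;
            }
        }
    }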

This one isn’t a big paradigm shift, but after years of thinking in events, it takes a small pause to start thinking in polling.

Further reading

Once you grasp the basics, there are other concepts that prove useful to understand. 

Unity has concepts that are somewhat similar to VisualStates and Storyboards. Look into Unity’s animation system: animator controllers are more like VisualStates, and animation clips are more like Storyboards.

Prefabs are another concept that you’ll want to study. They make reuse and duplication a lot easier. You can think of them as reusable controls: the same way you could add multiple instances of a control to a XAML page, you can add multiple instances of a prefab to a scene.
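A minimal sketch of that reuse, assuming a prefab asset has been dragged onto the enemyPrefab field in the Inspector (the spawner class, field name, and positions are illustrative):

    using UnityEngine;

    public class EnemySpawner : MonoBehaviour
    {
        public GameObject enemyPrefab;   // assign the prefab asset in the Inspector

        void Start()
        {
            // Like adding several instances of the same control to a XAML page.
            for (int i = 0; i < 5; i++)
            {
                Instantiate(enemyPrefab, new Vector3(i * 2f, 0f, 0f), Quaternion.identity);
            }
        }
    }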

There’s a lot more to learn about Unity and its impact on developers, but the fundamentals here are an introduction. As with any paradigm shift, you can expect a learning curve—but there’s no time like the present to catch up with the future.

Fabio Matsui is Wire Stone’s chief technology officer, where he leads the research and development of advanced digital experiences.