CHAPTER 18 — A Hierarchy of Modeling

Seven types of modeling

Understanding hierarchies of complexity and types of modeling helps us organize our thinking and process, keeping our work in proper focus.

We model all manner of objects, qualities, and phenomena because we are creating a self-contained world. We can take nothing for granted, not even the earth and sky! In our workflow, we’ve found it useful to identify seven types of modeling: 

These determine the level of modeling:

  • OBJECT
  • COMPOSITION
  • MATERIAL
  • KINEMATICS
  • EFFECTS

These occur at every level of modeling:

  • LIGHTING
  • POINT OF VIEW (POV)

Each type of modeling has its own peculiar rules and workflows. And if we watch the credits of a major animated motion picture, we'll often recognize someone credited as being in charge of only one type of modeling. That person often coordinates a whole team of artists responsible for it. Chances are, if you work at a studio on a large-scale project, you'll specialize in only one of these.

However, not every modeling project needs every type of modeling to be complete. A simple model needs only Object, Lighting, and POV modeling to be effective.

Modeling level definitions

Object: At least one element in a scene. We model an Object using volumes or planar surfaces, but it must imply some tangible presence in a 3D world.

Composition: Relationships among two or more elements, and between Elements and the Environment. This modeling type is equivalent to the mise-en-scène in cinema: characters, props, architecture, et cetera, anything on-camera. Organizing this involves developing hierarchic relationships, which depend on careful naming strategies and on externally referencing elements from other files. Think of it as a well-organized prop room on a movie set.
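
The hierarchic relationships described above can be sketched as a simple scene graph, where every node carries a descriptive name and, optionally, a reference to an external file. This is an illustrative sketch, not the API of any particular package; all class, node, and file names here are assumptions.

```python
# Minimal scene-graph sketch: hierarchic parent/child relationships with
# careful naming, like a well-organized prop room on a movie set.
# All names here (SceneNode, "cart_v02.ma", etc.) are illustrative.

class SceneNode:
    def __init__(self, name, source_file=None):
        self.name = name                 # descriptive naming keeps the hierarchy legible
        self.source_file = source_file   # externally referenced element, if any
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def paths(self, prefix=""):
        """Return every node's full hierarchic path, outliner-style."""
        full = f"{prefix}/{self.name}"
        result = [full]
        for child in self.children:
            result.extend(child.paths(full))
        return result

scene = SceneNode("village")
props = scene.add(SceneNode("props"))
props.add(SceneNode("cart", source_file="cart_v02.ma"))
scene.add(SceneNode("architecture")).add(SceneNode("tower"))

print(scene.paths())
# Every element is addressable by its place in the hierarchy,
# e.g. "/village/props/cart"
```

Renaming or swapping an externally referenced file updates the composition without touching the rest of the scene, which is why naming strategy matters so much at this level.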

Material: Properties such as color, texture, and reflectivity are not integral to Objects, so we must model them. In a good Material modeling job, the applied material doesn't look like wallpaper (unless, of course, wallpaper is the material!), but rather feels authentic to its context. Materials and textures are often modeled as externally referenced bitmap image files, although some materials are modeled parametrically and some exist in a predefined library.

Kinematics: When the desired output is animation, we model time and motion through the use of a timeline, along with rigging and/or deformation to control the internal motion of an Entity.
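
The timeline idea above can be reduced to a few lines of code: keyframes store (frame, value) pairs, and the in-between frames are interpolated. This sketch assumes simple linear interpolation; real packages offer splines, eases, and other curves.

```python
# Sketch of timeline-based motion: keyframes are (frame, value) pairs and
# in-between frames are linearly interpolated. The channel data below is
# an illustrative assumption, not output from any specific package.

def interpolate(keyframes, frame):
    """Linearly interpolate an attribute value at a given frame."""
    keyframes = sorted(keyframes)
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

# A translate channel keyed at frames 1, 12, and 24:
channel = [(1, 0.0), (12, 5.0), (24, 5.0)]
print(interpolate(channel, 6))   # partway between the first two keys
```

Rigging and deformation then drive many such channels at once, so the animator keys poses rather than individual values.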

Effects: Chaotic "soft" elements that respond to physical forces like wind or gravity use specially developed Effects modeling environments; in Maya, for example, we find nCloth or nHair. Atmospheric effects handle aerial perspective and uniform fog, and are often attributes of the camera. Particle systems generate snow, fire, hair, grass, cloth, or fluids; each is a variation on how particles respond to forces. In a mesh system, a subset of particle modeling, particles are linked to produce planarity or volume for cloth, tension surfaces like a balloon, or shattering like breaking glass. These need a passive collider that stops motion; skin, for example, is a passive collider for cloth.
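
The core loop of any such system is small: each time step, forces change a particle's velocity, velocity changes its position, and a passive collider halts it. This is a minimal sketch under assumed values (gravity, 24 fps time step, a ground plane at y = 0), not the solver of any particular package.

```python
# Tiny particle sketch: a point mass responds to gravity each time step,
# and a ground plane at y = 0 acts as a passive collider that stops motion.
# The constants and the ground-plane collider are illustrative assumptions.

GRAVITY = -9.8     # m/s^2
DT = 1.0 / 24.0    # one frame at 24 frames per second

def simulate(y, vy, frames):
    """Integrate a falling particle; the ground plane halts it on contact."""
    for _ in range(frames):
        vy += GRAVITY * DT       # force changes velocity
        y += vy * DT             # velocity changes position
        if y <= 0.0:             # passive collider: stop at the ground
            y, vy = 0.0, 0.0
    return y, vy

print(simulate(10.0, 0.0, 48))   # after two seconds the particle has landed
```

A full Effects environment runs thousands of such particles, varying only the forces and the linkage between particles to get snow, cloth, or shattering glass.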

Visualization definitions

Light: To visualize any hierarchic level or type of modeling, we model ambient, hidden, and visible sources of light, although the overall ambient light is a predefined constant, determined without a spatial coordinate. The results of light modeling are not seen accurately until tested in the rendering process: the creation of still images, panoramas, or animations.
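
The distinction between the ambient constant and positioned lights shows up clearly in a simple Lambert-style shading calculation: the ambient term is added everywhere regardless of position, while a directional light's contribution depends on surface orientation. The coefficients below are illustrative assumptions.

```python
# Sketch of why ambient light needs no spatial coordinate: in a simple
# Lambert shading model the ambient term is a constant added at every
# surface point, while the diffuse term depends on surface orientation.
# The ambient/diffuse coefficients are illustrative assumptions.

def lambert(normal, light_dir, ambient=0.1, diffuse=0.9):
    """Shade a surface point: constant ambient plus orientation-dependent diffuse."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return ambient + diffuse * max(dot, 0.0)

overhead = (0.0, 1.0, 0.0)                 # directional light from above
print(lambert((0.0, 1.0, 0.0), overhead))  # surface facing the light: fully lit
print(lambert((0.0, -1.0, 0.0), overhead)) # facing away: only the ambient floor
```

This is also why light modeling must be tested in a render: the final value at each point depends on geometry the modeler only sees approximately in the viewport.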

POV: The camera, a point representing your point of view in the scene, is located spatially; several cameras can exist in a scene, since they are essentially invisible artifacts. Secondary motion (movement of the POV) can be modeled by moving the camera along a fixed pathway while the camera target moves independently along another. Camera effects, such as depth of field and motion blur, can also be modeled and seen in the rendering process.
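
Secondary motion of this kind can be sketched in a few lines: the camera is interpolated along one path while its target moves independently along another, and the view direction at any moment is simply target minus camera. The paths and helper names below are illustrative assumptions, not a specific package's API.

```python
# Sketch of secondary motion: the camera travels one path while its
# target (aim point) moves independently along another. All coordinates
# and function names here are illustrative assumptions.

def lerp(a, b, t):
    """Linear interpolation between two 3D points, t in [0, 1]."""
    return tuple(a_i + t * (b_i - a_i) for a_i, b_i in zip(a, b))

camera_path = ((0.0, 2.0, 10.0), (10.0, 2.0, 10.0))  # dolly left to right
target_path = ((0.0, 1.0, 0.0), (0.0, 3.0, 0.0))     # aim drifts upward

def pose_at(t):
    """Camera position and view direction at parameter t."""
    cam = lerp(*camera_path, t)
    tgt = lerp(*target_path, t)
    view = tuple(g - c for g, c in zip(tgt, cam))
    return cam, view

print(pose_at(0.5))   # halfway along both paths
```

Because the two paths are independent, the camera can track past a subject while keeping it framed, which is the basis of the walk-through animations discussed later in this chapter.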

A case study

A modeled thing can be associated with more than one modeling type. Take, for example, an element modeled to create a sky. A sky is usually modeled as a dome-like hemispherical element with clouds and blended shades of blue, giving the underside a realistic appearance. We can consider this sky dome as it relates to several modeling types:

  • Object: It is an Object modeled in the background.
  • Composition: It is one among several (a ground plane, trees, buildings) Objects organized in the overall scene.
  • Material: It requires a bitmap image of clouds or a ramp shader or gradient color applied for realism.
  • Kinematics: It can be set to rotate slowly to suggest the movement of clouds.
  • Lighting: In some instances, we assign a directional light to illuminate only the dome with no shadows. In others, the dome is the light, as we see with the Arnold Physical Sky and Sun here — each is its own dome.
  • POV: It only makes sense as an object seen from underneath, with a camera whose position is constrained to remain inside the dome.

Arnold sun and sky. The sun is directional and casts shadows, while the sky provides ambient, atmospheric light scatter.

The sum of all these modeling types determines the function of this one element, the sky dome, in a model completed to the Entity level of complexity. Confusing? Let’s clarify the relationship between the four hierarchic levels of complexity and the seven types of modeling next.

Four levels of complexity

This hierarchy helps with process, which in turn helps a modeler to determine the types of things one might expect to model. The modeling process can range from the very simple to the highly complex. Simple projects depict formal relationships among elements in static images that a single artist can create in just a few minutes. A more complex model requires deeper levels of information, involving more decision-making in projects that might take a week to a month or more. The most sophisticated work, seen in studio films like The Lord of the Rings, involves crews of animators and man-years of time.

At Weta Digital, home of Oscar-winning CGI effects, it took three years to make Gollum, with a team of five people working on his face alone. For planning purposes, we have found it useful to understand the process needed for projects based on four hierarchic levels of complexity: Element, Environment, Entity, and Effects.

Element

The Element level includes Object, Lighting, and POV modeling.

williamCromar, AquaBlock elements, 2017

Modeling to the Element level yields a static Object that exhibits properties of volume or surface. Don’t confuse Element as a modeled thing with a visual element that refers to basic 3D visual design phenomena. The distinction will usually be clear in context. All other levels of hierarchy depend on the development of Element modeling.

Environment

The Environment level includes everything at the Element level plus Composition and Material modeling.

williamCromar, AquaBlock village, 2017

Modeling to the Environment level generates settings for Elements. It comprises secondary elements but also includes such aspects of the visual world as material. Some refer to an array of secondary elements as an entourage. Environment modeling must be developed to provide a plausible world for images, panoramas, or animations that exhibit secondary movement (movement of the POV only).

Entity

The Entity level includes everything at the Environment level plus Kinematics modeling.

williamCromar, AquaBlock promotional video, 2017

The Entity level creates elements with primary movement, defined as the movement of an object itself. Movement can be passive or active. A thrown rock is passive because it depends on an outside force and needs no rigging. Anything with articulated, internal, or self-generated motion (a walking human, a robot arm) must be rigged and is considered active, even if it depends on an outside force. A bicycle is an active entity, even though its motion depends on a rider, another active entity.

Effects

The Effects level includes everything at the Entity level plus Effects modeling.

williamCromar, Blood flow particle effect test footage, 2017

Physics Effects are used whenever an environment or entity needs a dynamic accompaniment. For example, hair or cloth that responds to the physics of wind or gravity is added to a character, while snow or fire is the product of a particle effect. Some software includes parametric vegetation tools that randomly distribute foliage through an environment.

When modeling, focus on your tangible end game to determine how deep you need to dive into this hierarchy. Everything starts with modeling an Element, and if you plan to use your element for 3D printing or milling, for importing to another program, or for rendering a simple illustration, you can stop right there. Environment modeling is a typical end game for designers: to illustrate their concepts, industrial designers, interior designers, architects, and installation artists need still images (sometimes called renderings), VR interactive panoramas, or simple secondary-motion animation (movement of the POV only) in a walk-through. If we plan anything involving time-based activity, such as gaming or cinema, we most likely work to the Entity level of modeling. Effects bring on the highest realism and the most sophisticated level of complexity.

Sidebar