So, a few months back I was looking into clean ways of scripting a custom shelf in Maya that can be easily shared, and I was surprised that there were not many resources on how to go about it. So I thought I would share the way I maintain my shelf.

tl;dr Grab the code for the shelf base class from here and derive your own shelf class from it, overriding the build() method and populating it with calls to addButton, addMenuItem, etc. to create your own custom Maya shelf.

Scripting your own custom shelf in Maya is much easier than you would think, or at least than I thought. For some odd reason, I always assumed that it would involve a lot of messing about with MEL, which I would really rather not do. If that is what you thought as well, you will be pleasantly surprised.

Here is what we are trying to achieve.

Custom shelf in maya

Since we want to keep the code as modular and versatile as possible, I wrote the main functionality in a base class which needs to be extended for each shelf you want to build. The code for the base class, with a little example, is on github.

Now we will go through it to see how it works, so it can be extended and modified to include more functionality.


def __init__(self, name="customShelf", iconPath=""):
    self.name = name
    self.iconPath = iconPath

    self.labelBackground = (0, 0, 0, 0)
    self.labelColour = (.9, .9, .9)

    self._cleanOldShelf()
    mc.setParent(self.name)
    self.build()


In the constructor we initialize the variables and make two calls, to _cleanOldShelf() and build(), which we will look at in a bit. The name argument is going to be the name of the shelf. It is important to note that if a shelf with this name already exists, it will be replaced with this one. The iconPath argument can be used to define a directory from which to get images to be used as icons for the shelf buttons and the commands in the popup menus. If it is not defined, you can still use maya’s default icons, like commandButton.png for example.

Additionally there are the labelBackground and labelColour variables which can be seen in the following image.

Custom shelf in maya - button colours

The reason I have hardcoded them in the base class is that, for the sake of consistency, I think all our shelves should have the same style, but if you want to change them, obviously feel free to do so.

And lastly, there is that mc.setParent(self.name) call, which makes sure that whatever UI elements we build in the following build() method will be built as children of our shelf. Otherwise, we might end up with buttons breaking other layouts.

Clean old shelf

Let’s have a look at what the _cleanOldShelf() method does.

def _cleanOldShelf(self):
    if mc.shelfLayout(self.name, ex=1):
        if mc.shelfLayout(self.name, q=1, ca=1):
            for each in mc.shelfLayout(self.name, q=1, ca=1):
                mc.deleteUI(each)
    else:
        mc.shelfLayout(self.name, p="ShelfLayout")

Essentially, we are checking if our shelf already exists using the mc.shelfLayout(self.name, ex=1) command, where self.name is the name of our shelf and ex is the flag for checking existence. If it does exist, we go through all children of the shelf, which we get from mc.shelfLayout(self.name, q=1, ca=1), and delete them with mc.deleteUI(). That makes sure we start our build() method with a fresh, clean shelf.

If the shelf does not exist though, we simply create it, passing "ShelfLayout" as the parent, because that is the parent layout of all the shelves in maya. For example, turn on History > Echo all commands in the script editor and click through some of the shelves. You will see the paths to some of their popupMenus printed out.

So, knowing that CurvesSurfaces is the name of a shelf, we can deduce that ShelfLayout is the parent of all shelves.

Then, before we get into the build() method, let’s have a look at some of the methods we are going to use to actually populate our shelf.

Add button

def addButton(self, label, icon="commandButton.png", command=_null, doubleCommand=_null):
    if icon:
        icon = self.iconPath + icon
    mc.shelfButton(width=37, height=37, image=icon, l=label, command=command, dcc=doubleCommand, imageOverlayLabel=label, olb=self.labelBackground, olc=self.labelColour)

Very simply, this is the method that creates the main buttons in the shelf. We just need to pass it a label, which is the name of our button, an optional icon and the optional command and doubleCommand arguments, which refer to single click and double click. The command is optional because we might want buttons that just have popup menus; we will look at those in a sec. By default the _null() command, defined at the top of the file, is called. The reason I have that command is that if you pass None to the mc.shelfButton command flag, it will error, so the _null() method essentially does nothing. It is important to note that when we pass a command, we are not using the brackets () after the name, because that would call the command instead of passing it.

Add menu item

def addMenuItem(self, parent, label, command=_null, icon=""):
    if icon:
        icon = self.iconPath + icon
    return mc.menuItem(p=parent, l=label, c=command, i=icon)

This method is very similar to the add button one, but instead of adding a button to the shelf, this one adds a menuItem to an existing popupMenu passed as the parent argument. The popupMenu needs to be created manually, as it doesn’t make sense to wrap it in a method if it is just a simple one line command anyway. So, a quick example of how that works is

p = mc.popupMenu(b=1)
self.addMenuItem(p, "popupMenuItem1")

where b=1 stands for the left mouse button. This snippet, if added inside the build() method, will attach a popupMenu with one menuItem to the popup button.

Add sub menu

def addSubMenu(self, parent, label, icon=None):
    if icon:
        icon = self.iconPath + icon
    return mc.menuItem(p=parent, l=label, i=icon, subMenu=1)

Now, I realize that sub menus inside a popup on a shelf button are getting a bit too deep, as the whole point of a shelf is to streamline everything, but I thought I would add it, as even I have used it on a couple of my buttons.

What this method does is create a menuItem which is a menu in itself, attached to an existing popupMenu which is passed as the parent.

The result is something similar to this.

Custom shelf in maya example popup menus


You know how some of the shelves have separating lines between the buttons? You can achieve that with the mc.separator() command. I usually set the style flag to "none", as I’d rather have blank space than lines, but have a look at the docs page for the separator command for more info.


And then finally let us have a look at the build() method.

def build(self):
    '''This method should be overwritten in derived classes to actually build the shelf
    elements. Otherwise, nothing is added to the shelf.'''

As the docstring says, this is an empty method that needs to be defined in each shelf we build. The way this works is, we just populate it with addButton(), mc.popupMenu(), addMenuItem() and addSubMenu() calls as we need them.

Example of a custom shelf in maya

Let’s have a look at the simple example at the end of the file.

class customShelf(_shelf):
    def build(self):
        self.addButton(label="button1", command=mc.polyCube)
        p = mc.popupMenu(b=1)
        self.addMenuItem(p, "popupMenuItem1")
        self.addMenuItem(p, "popupMenuItem2")
        sub = self.addSubMenu(p, "subMenuLevel1")
        self.addMenuItem(sub, "subMenuLevel1Item1")
        sub2 = self.addSubMenu(sub, "subMenuLevel2")
        self.addMenuItem(sub2, "subMenuLevel2Item1")
        self.addMenuItem(sub2, "subMenuLevel2Item2")
        self.addMenuItem(sub, "subMenuLevel1Item2")
        self.addMenuItem(p, "popupMenuItem3")

As you can see, all we do is create a new class inheriting from the _shelf class. By the way, the _ prefix is used to remind us that this class should not be accessed directly.

Then we override the build() method by simply defining it, and we populate it with the necessary shelf elements.

This example results in the image from above. Here it is again.


I have not demonstrated the use of any commands on these buttons, but all it takes is passing the name of the function as the command argument. The one tricky bit about it is that the menuItem commands get passed some arguments, so we need to have *args in our function definitions to prevent errors. For example

def createPolyCube(*args):
    mc.polyCube()

We would include this command in our shelf by doing self.addButton(label="cube", command=createPolyCube) in our build() method.

For the sake of clarity though, I recommend having a separate file which contains only the functions that are going to be used in the shelf, and importing them from there. So, at the top of the file where our custom shelf is defined, import your shelf functions file with import shelfFunctions, or however you decide to package them. I would advise having a reload(shelfFunctions) statement (importlib.reload in Python 3) immediately after that, so you can be sure the latest changes to the file have been taken into account.
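A minimal sketch of that import-and-reload pattern, using the stdlib json module as a stand-in for your shelfFunctions file:

```python
import importlib
import json  # stand-in for your shelfFunctions module

# Re-import the module so the latest saved edits are picked up.
# In Maya's Python 2 interpreter the builtin reload() does the same job.
json = importlib.reload(json)

print(json.dumps({"shelf": "loaded"}))  # {"shelf": "loaded"}
```

Once reloaded, the buttons created in build() will point at the freshest versions of your functions.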

And then when creating buttons, just do

self.addButton(label="cube", command=shelfFunctions.createPolyCube)

Additionally, you can pass maya’s native commands as well. So, to have a button that creates a poly cube, we would do

self.addButton(label="cube", command=mc.polyCube)

Building our shelf on load up

The way we will include our shelf on load up is to just add it to our file. If you are not familiar with it, take a look at this. I know this one is for MEL, but it is essentially the same thing for Python as well.

So, assuming that our shelf file is in the maya scripts path, in our we can just do

import maya.cmds as mc

mc.evalDeferred("import shelf; shelf.customShelf()")

where shelf is the file containing our shelf and customShelf is the name of the class.

We are using the evalDeferred() command because maya loads the file as it is starting, so we cannot be sure the shelfLayout is already in place when it gets executed. evalDeferred() lets us call our script after maya has been initialized.


And that is it, you have built your own shelf. What I really like about it is that if you are collaborating with other people, you can add the shelf file to your version control system and in your file just source it from there. That way you can all contribute to it and always have it updated.

If you have any questions, shoot me a message.

EDIT: I would advise against using this exact same script as the parentConstraints have been known to cause issues and are not a graceful solution at all. Until I update the article, either try replacing them with xform calls or have a look at Alessandra’s comment.

I vividly remember the first time I tried to set up a seamless IK FK switch with Python. There was this mechanical EVA suit that I was rigging for a masterclass assignment at uni, given by Frontier. The IK to FK switching was trivial and there were not many issues with that, but I had a very hard time figuring out the FK to IK one, as I had no idea what the pole vector really is, and also my IK control was not oriented the same way as my FK one.

I’m sure that throughout the web there are many solutions to the problem, but most of the ones I found were in MEL, and some of them were a bit unstable, because they relied too much on the xform command or the rotate one with the ws flag. I am assuming this causes issues sometimes when mapping from world space to relative space – a joint will have the exact same world rotation, so the pose looks perfect, but if you blend between IK and FK you can see it shifting and then coming back into place. That’s why I decided to use constraints to achieve my rotations, which seems to be a simple enough and stable solution.

EDIT: It seems like even with constraints it is possible to get that issue in the case where the IK control is oriented differently. What fixes it though is switching back and forth once more.

Here is what we are trying to achieve

Seamless IK FK switch demo

Basically, there is just one command for the seamless IK FK Switch, which detects the current kinematics and switches to the other one maintaining the pose. I have added the button to a custom marking menu for easier access.

So, in order to give you a bit of better context, I have uploaded the example scene that I am using, so you can have a look at the exact structure, but feel free to use your own scene with an IK/FK blending setup. The full code (which is very short anyway) is in this gist, and there are three scene files in here, one for each version of our setup. The files contain just a simple IK/FK blending system, on which we can test our matching setup, but with different control orientations.

It is important to understand the limitations of a seamless IK FK switch before we dive in. Mainly, I am talking about the limited rotation of the second joint in the chain, as IK setups allow for rotation only in one axis. What this means is that if we have rotations in multiple axes on our FK control for that middle joint (elbow, knee, etc.), the IK/FK matching will not work properly. All this is due to the nature of inverse kinematics.

Also, for easier explaining I assume we are working on an arm and hand setup, but obviously the same approach would work for any IK/FK chain.

We will consider three cases:
All controls and joints are oriented the same
IK Control oriented in world space
IK Control and IK hand joint both oriented in world

Again, you do not have to use the same file as I do, as it is just an example, but it is important to be clear on the existing setup. We assume that we have an arm joint chain – L_arm01_JNT > L_arm02_JNT > L_arm03_JNT – and a hand joint chain – L_hand01_JNT > L_hand02_JNT – with their corresponding IK and FK chains – L_armIk01_JNT > …, L_armFk01_JNT > …, etc. These two chains are blended via a few blendColors nodes for translate, rotate and scale into the final chain. The blending is controlled by L_armIkFk_CTL.fkIk. Then we have a simple non-stretchy IK setup, but obviously a stretchy one would work in the same way. Lastly, L_hand01_JNT is point constrained to L_arm03_JNT and we only blend the rotate and scale attributes on it, as otherwise the wrist becomes dislocated during blending, because we are linearly interpolating translation values.

Now that we know what we have to work with, let us get on with it.

Seamless IK FK Switch when everything shares orientation

So, in this case, all of our controls and joints have the exact same orientation in both IK and FK. What this means is that essentially all we need to do to match the kinematics is to just plug the rotations from one setup to the other. Let’s have a look. The scene file for this one is called

IK to FK

This one is always the easier setup, as FK controls generally just need to get the same rotation values as the IK joints and that’s it. Now, initially I tried copying the rotation via the rotate and xform commands, but whenever a control was rotated a bit too extreme, these would cause flipping when blending between IK and FK, which I am assuming is because these commands have a hard time converting the world space rotation to a relative one, causing differences of 360 degrees. So, even though in full FK and full IK everything looks perfect, in between the joint rotates 360 degrees. Luckily, maya has provided us with constraints, which have all the math complexity built in. Assuming you have named your joints the same way as me, we use the following code.

mc.delete(mc.orientConstraint("L_armIk01_JNT", "L_armFk01_CTL"))
mc.delete(mc.orientConstraint("L_armIk02_JNT", "L_armFk02_CTL"))
mc.delete(mc.orientConstraint("L_handIk01_JNT", "L_handFk01_CTL"))

mc.setAttr("L_armIkFk_CTL.fkIk", 0)

As I said, this one is fairly trivial. We just orient each of our FK controls to match the rotations of the IK joints. Then in the end we change our blending control to FK to finalize the switch.

FK to IK

Now, this one was a pain the first time I tried to do it, because I had no idea how pole vectors worked at all. As soon as I understood that all we need to know about them is that they need to lie on the same plane as the three joints in the chain, it became easy. So essentially, we need to place the IK control on the FK joint to solve the end position. And then, to get the elbow (or whatever your mid joint represents) to match the FK, we just place the pole vector control at the exact location of the corresponding joint in the FK chain. So, we get something like this.

mc.delete(mc.parentConstraint("L_handFk01_JNT", "L_armIk_CTL"))
mc.xform("L_armPv_CTL", t=mc.xform("L_armFk02_JNT", t=1, q=1, ws=1), ws=1)

mc.setAttr("L_armIkFk_CTL.fkIk", 1)

Now, even though this does the matching job perfectly, it is not great for the animators to have the control snap to the mid joint location, as it might go inside the geometry, which is just an unnecessary pain. What we can do is get the two vectors from arm01 to arm02 and from arm03 to arm02, and use them to offset our pole vector a bit. Here’s the way we do that.

arm01Vec = [mc.xform("L_armFk02_JNT", t=1, ws=1, q=1)[i] - mc.xform("L_armFk01_JNT", t=1, ws=1, q=1)[i] for i in range(3)]
arm02Vec = [mc.xform("L_armFk02_JNT", t=1, ws=1, q=1)[i] - mc.xform("L_armFk03_JNT", t=1, ws=1, q=1)[i] for i in range(3)]

mc.xform("L_armPv_CTL", t=[mc.xform("L_armFk02_JNT", t=1, q=1, ws=1)[i] + arm01Vec[i] * .75 + arm02Vec[i] * .75 for i in range(3)], ws=1)

So, since xform returns lists, in order to subtract them to get the vectors we just loop through them and subtract the individual elements. If you are new to list comprehensions in Python, have a look at this. Then, once we have the two vectors, we add 75% of each to the position of the arm02 FK joint and arrive at a position slightly offset from the elbow, but still on the same plane, so the matching is still precise. Then our whole FK to IK code would look like this

mc.delete(mc.parentConstraint("L_handFk01_JNT", "L_armIk_CTL"))

arm01Vec = [mc.xform("L_armFk02_JNT", t=1, ws=1, q=1)[i] - mc.xform("L_armFk01_JNT", t=1, ws=1, q=1)[i] for i in range(3)]
arm02Vec = [mc.xform("L_armFk02_JNT", t=1, ws=1, q=1)[i] - mc.xform("L_armFk03_JNT", t=1, ws=1, q=1)[i] for i in range(3)]

mc.xform("L_armPv_CTL", t=[mc.xform("L_armFk02_JNT", t=1, q=1, ws=1)[i] + arm01Vec[i] * .75 + arm02Vec[i] * .75 for i in range(3)], ws=1)

mc.setAttr("L_armIkFk_CTL.fkIk", 1)
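The vector arithmetic is easier to check outside maya. Here is the same pole vector computation in plain Python, with made-up world-space joint positions standing in for the xform queries:

```python
# Hypothetical world-space joint positions, all lying in the y=0 plane.
shoulder = [0.0, 0.0, 0.0]
elbow = [2.0, 0.0, 1.0]
wrist = [4.0, 0.0, 0.0]

# The two vectors pointing towards the elbow.
arm01Vec = [elbow[i] - shoulder[i] for i in range(3)]
arm02Vec = [elbow[i] - wrist[i] for i in range(3)]

# Offset the pole vector position away from the elbow along their sum.
pvPos = [elbow[i] + arm01Vec[i] * .75 + arm02Vec[i] * .75 for i in range(3)]

print(pvPos)  # [2.0, 0.0, 2.5]
```

Note that the result keeps y=0, i.e. it stays on the plane of the three joints, which is exactly what keeps the matching precise.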

Seamless IK FK switch when the IK control is oriented in world space

Now, in this case, the orientation of the IK control is not the same as the hand01 joint’s. I think in most cases people go for this kind of setup, as it is much nicer for animators to have the world axes to work with in IK. The scene file for this one is called

The IK to FK switch is exactly the same as the previous one, so we will skip it.

FK to IK

So, in order to get this to work, we need to do the same as in the previous case, but introduce an offset for our IK control. How do we get this offset then? Well, since we can apply transformations only to the controls, we need to calculate what rotation to apply to the control in order to get the desired rotation. Even though we could calculate the offsets mathematically and apply them directly, we might run into the same flipping issue I discussed in the previous case. So instead, a much easier, if somewhat dirtier, solution is to create a locator which will act as our dummy object to orient to.

Then, in our case, where only the IK control is oriented differently from the joints, what we need to do is create a locator and have it assume the transformation of the IK control. The easiest way would be to just parent it underneath the control and zero out the transformations. Then parent the locator to L_handFk01_JNT, as that’s the one we want to match to. Now, wherever that handFk01 joint goes, we have the locator parented underneath, sharing the same orientation as our IK control. Therefore, just using a parentConstraint will give us our matching pose. Assuming the locator is called L_hand01IkOfs_LOC, all we do is this.

mc.delete(mc.parentConstraint("L_hand01IkOfs_LOC", "L_armIk_CTL"))

This will get our wrist to match the pose perfectly. Then we apply the same code as before to get the pole vector to match as well, and set the IK/FK blend attribute to IK.

arm01Vec = [mc.xform("L_armFk02_JNT", t=1, ws=1, q=1)[i] - mc.xform("L_armFk01_JNT", t=1, ws=1, q=1)[i] for i in range(3)]
arm02Vec = [mc.xform("L_armFk02_JNT", t=1, ws=1, q=1)[i] - mc.xform("L_armFk03_JNT", t=1, ws=1, q=1)[i] for i in range(3)]

mc.xform("L_armPv_CTL", t=[mc.xform("L_armFk02_JNT", t=1, q=1, ws=1)[i] + arm01Vec[i] * .75 + arm02Vec[i] * .75 for i in range(3)], ws=1)

mc.setAttr("L_armIkFk_CTL.fkIk", 1)

Seamless IK FK switch when the IK control and joint are both oriented in world space

Now, in this last scenario, we have the handIk01 joint oriented in world space, as well as the control. The reason you would want to do this, again, is to give the animators the easiest way to interact with the hand. In the previous case, the axes of the IK control do not properly align with the joint, which is a bit awkward. So a solution would be to have the handIk01 joint oriented in the same space as our control, so the rotation is 1 to 1 and it is a bit more intuitive. The scene for this one is and it looks like this.

It is important to note that the IK joint is just rotated to match the position of the control, but the jointOrient attributes are still the same as the FK one and the blend one.

Seamless IK FK Switch with IK control and joint oriented in world space

So again, going from IK to FK is the same as before, so we are skipping it. Let’s have a look at the FK to IK.

FK to IK

This one is very similar to the previous one, where we had an offset transform object to snap to. The difference is that now, instead of having that offset calculated just from the difference between the IK control and the FK joint, we also need to adjust for the existing rotation of the IK joint. So, we start with our locator the same way as before – parent it to the IK control, zero out the transformations and parent it to the handFk01 joint. The extra step here is to apply the negative rotation of the IK joint to the locator in order to get the needed offset. The calculation looks like this.

ikRot = [-1 * mc.xform("L_handIk01_JNT", ro=1, q=1)[i] for i in range(3)]
mc.xform("L_hand01IkOfs_LOC", ro=ikRot, r=1)

We just take the rotation of the IK joint and multiply it by -1, which we then apply as a relative rotation to the locator.

And then again, as previously we just apply the pole vector calculation and we’re done.


So, as you can see, scripting a seamless IK FK switch is not really that complicated at all, but if you are trying to figure it out for the first time, without being very familiar with rigging and 3D maths, it might be a bit of a pain. Again, if you want to see the full code, it is in this gist.

Rigging in maya tends to be a very straightforward and no-bullshit process, so getting a grasp of the fundamentals will pretty much be enough to let you build your knowledge further by experimenting with different approaches and techniques. Therefore, in this post I’ll describe all the essential building blocks of a rig.

Disclaimer: This post is going to be a continuous work in progress, as I will keep adding rigging concepts as we go.

Table of contents

  1. Nodes
  2. Joints
  3. Skinning
  4. Other deformers
  5. Weight painting

Nodes

Now, even though the dependency graph is not essential knowledge for a beginner rigger who just wants to build rigs, it is absolutely crucial for grasping the fundamentals of how rigging in maya works. As such, it is vital for understanding the following points of this article.

In the broadest sense, the dependency graph is the place where the scene is described in terms of its building blocks – nodes. A node is essentially similar to the concept of a function or method in traditional programming. If you are not into programming, you can think of a node as a sort of calculator, which takes in some data in the form of an input, modifies it and returns the output data. Nodes are used in a lot of the major packages in the 3D industry – Maya, Houdini, Nuke, Cinema 4D, etc. – so it is a concept which is useful to understand.
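To make the calculator analogy concrete, here is a plain-Python sketch of what a node like maya’s multiplyDivide does conceptually – inputs in, computed output out (the function is just an illustration, not maya API code):

```python
# A node boiled down to its essence: it takes inputs, runs a fixed
# calculation and returns the output, like maya's multiplyDivide node.
def multiply_divide(input1, input2):
    """Multiply two vectors component-wise and return the output."""
    return [a * b for a, b in zip(input1, input2)]

print(multiply_divide([1.0, 2.0, 3.0], [2.0, 2.0, 2.0]))  # [2.0, 4.0, 6.0]
```

In the dependency graph these outputs are wired into other nodes’ inputs, which is how whole deformation chains are built.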

Every object in a maya scene is described by one or more nodes. And when I say object, I do not mean only geometries, but also locators, cameras, deformers, modeling history (polyExtrude, polyMerge, etc.) and a lot of other stuff we don’t need to worry about. If you are curious about everything that’s going on in a scene, you can uncheck the Show DAG objects only checkbox in the outliner.

A simple example is a piece of geometry. You would expect that it would be described by one node containing all the information, but there actually are two. One is a transform node, which contains all the transformation data of the object – translations, rotations and scale (world transformations). The other is the shape node, which contains all the vertex data to build the actual geometry (local transformations), but we don’t need to worry about this one just yet.

A very cool thing about nodes is that we can actually create our own. The Maya API allows us to write custom nodes in Python and C++.

Further reading: Autodesk User Guide: Node types

Joints

In a non-technical aspect, a joint is incredibly similar to a bone of a skeleton. A number of them are chained together to create limbs, spines, tails, etc., which together form exactly that, a skeleton. These joints are then told to influence the body of a character or any other object, and that’s how you get a very basic rig.

In a more technical approach to describing a joint, we can start off by saying it is a transform node. Therefore it has all the usual attributes – translation, rotation, scale, etc. – but there are also some new ones, like joint orientation and segmentScaleCompensate.

The segmentScaleCompensate attribute essentially prevents child joints in the hierarchy from inheriting the scale of their parent joints. It is generally something we don’t need to think about at all, unless you are rigging for games. If you are, you will need to turn it off and work around it in your rigs.

The joint orientation on the other hand, is absolutely crucial for rigging. It is such an extensive subject that I am considering writing a whole post just about it. Here is a good post on the topic. Describing it briefly, joint orientation describes the way the joint is going to rotate. That is the reason it is called orientation (I think), because in a sense it shows us which way is forward, which is up and which is to the side.

So to summarize, a joint is a transform node with extra attributes, used to create hierarchies (joint chains) which in turn form a skeleton, used to drive the deformations of a piece of geometry.

Skinning

In non-technical terms skinning is like creating a sock puppet. You stick your arm (the joint chain) inside the sock (the geometry) and by moving your hand and fingers the sock is being deformed as well.

In a more technical manner, skinning is the process of binding a geometry to a joint chain. In maya nodes terms, this action is described by a skinCluster node. Essentially, when skinning a piece of geo, maya calculates all the vertex positions of the mesh relative to the joint they are influenced by and stores that data in the skinCluster. Then when the joint is transformed in any way – moved, rotated or scaled, the vertices of the mesh follow that transformation, by applying their relative positions on top of the transformed positions of the joint.
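A heavily simplified sketch of that idea – translation only, one joint, made-up numbers – looks like this:

```python
# Rest pose: the joint sits at the origin, with a vertex next to it.
jointRest = [0.0, 0.0, 0.0]
vertexRest = [1.0, 0.0, 0.0]

# Binding: the skinCluster stores the vertex position relative to the joint.
offset = [v - j for v, j in zip(vertexRest, jointRest)]

# Posing: move the joint; the vertex follows by re-applying the offset.
jointPosed = [0.0, 2.0, 0.0]
vertexPosed = [j + o for j, o in zip(jointPosed, offset)]

print(vertexPosed)  # [1.0, 2.0, 0.0]
```

A real skinCluster does this with full transformation matrices and blends the result across several joints, but the store-relative-then-reapply principle is the same.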

Other deformers

A skinCluster is what maya calls a deformer. It makes quite a lot of sense if you think about it, as a skinCluster does exactly that – it deforms the geometry using the joints. Now, there are other deformers as well, and even though they work in many different ways, essentially what they do is take a version of the geometry as an input (they can take the output of other deformers as input, which allows us to chain deformations), deform it and output the new deformed points into the shape node. Some good simple examples are the nonlinear deformers.

Just like with nodes (deformers are essentially nodes themselves) we can write our own custom ones, which is very useful. For example I have written a collision deformer which makes geometries collide and prevents intersections.

Weight painting

Remember how in the skinning section I said “maya calculates all the vertex positions of the mesh relative to the joint they are influenced by”? Well, how does maya know which joint a vertex is influenced by? You see, each component of the geometry can be associated with one or more joints. On creating the skinCluster, maya will assign some joints to the components, but more often than not you won’t be satisfied with that assignment. Therefore, you need to tell maya which joints should be driving a specific component. The way you do that is through a process called weight painting.

When we say weight, we basically mean amount of influence.

It is called painting because, by doing so, we are creating weight maps for each joint. As with many other aspects of CGI, these maps range from 0 to 1. We can choose to go above and below if we want to, but I wouldn’t advise doing so, just because it’s much harder to keep control of your weights that way.

A value of 0 means that this specific region of the geometry will not be deformed at all by the current joint. Obviously then, 1 means that the region will be deformed entirely by the current joint. Values in between mean that the joint will influence the component partially, but not entirely.

It’s important to note that every component needs to have a total weight of 1 (again, if we have chosen to work with normalized weights, which as I said is the suggested approach). Therefore, if you lower the influence of one joint over a specific component, another joint, or a combination of multiple others, will receive the remaining influence.
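The normalization idea can be illustrated with made-up numbers – whatever influence one joint loses, the others pick up so the total stays at 1:

```python
def normalize(weights):
    """Scale a vertex's joint weights so they always sum to 1."""
    total = sum(weights.values())
    return {joint: w / total for joint, w in weights.items()}

# Two joints influencing one vertex; lowering both influences equally
# still normalizes back to an even 50/50 split that sums to 1.
weights = normalize({"L_arm01_JNT": 0.25, "L_arm02_JNT": 0.25})
print(weights)  # {'L_arm01_JNT': 0.5, 'L_arm02_JNT': 0.5}
```

This is a sketch of the bookkeeping, not maya’s actual implementation, but it is the reason painting one influence down visibly pushes the others up in the paint weights tool.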

Additionally, all deformers can have their weights painted. SkinClusters tend to be the most complex ones to paint because they have multiple influences, but with most other deformers – nonlinears, blendShapes, deltaMush, etc. – there is a single weight per vertex describing how much influence the deformer has over that specific vertex.
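For those single-weight deformers, the painted value simply scales the deformer’s effect per vertex. A sketch with a made-up per-vertex delta:

```python
def apply_delta(point, delta, weight):
    """Move a point by the deformer's delta, scaled by its painted weight."""
    return [p + d * weight for p, d in zip(point, delta)]

# A painted weight of 0 leaves the vertex alone, 1 applies the full
# deformation, and anything in between blends linearly.
print(apply_delta([0.0, 1.0, 0.0], [0.0, 0.0, 2.0], 0.0))  # [0.0, 1.0, 0.0]
print(apply_delta([0.0, 1.0, 0.0], [0.0, 0.0, 2.0], 0.5))  # [0.0, 1.0, 1.0]
```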

Some very useful features of weights are that they can be mirrored and copied. Additionally, they can be stored and loaded at a later point; maya’s native options for doing that have historically been very limited, but this article gives some good tips about it.

In most productions, finalizing the rig is actually only half the battle. Depending on the role of the asset, more often than not you will need to maintain that rig, which means constantly providing fixes for issues that the animators, and you, might have missed at first glance. Additionally, you may need to adjust skin weights for better deformations, or maybe some props need to be attached to a character. Things like that. Therefore it is very important to have good communication with animators. It all starts when we present/publish our finalized rig.

Of course, you should not make a big thing out of it, but if there is anything you are uncertain about in the rig, it is always good to shoot a quick message or have a chat with the animators about it. They will appreciate it as well. So, I would like to go over some points to keep in mind when communicating with animators, which have helped me through my projects.


Even before the assets have been modeled, it is always great to grab some character concepts and pose/expression sheets and have a chat with the different departments. I cannot stress the importance of this enough, as often people tend to think only of their own little part in the big picture, which leads to loads of issues in other departments. Imagine being passed a robot model that needs to move in a specific way, but in the place where you would expect a swivel mechanism, or a ball and socket one, you have a solid plate. Happens way too often. Don’t do that to your animators.

At this early stage all you need to understand is what controls they would expect and how they expect them to work. Obviously, if the character is a standard biped you already know this and do not need to have this chat. But if your character has a long neck, or more than two legs/arms, it is worth having a chat and seeing what the animators would expect from the rig. If you are asked to add a feature which you are not sure about, say so. Most of the time it would be something you can easily test within minutes, so if you are not pressured by time, give it a go and come back with a clear answer. Never shoot yourself in the foot by saying you will add a feature without actually being sure how it would work. You would be surprised how well people respond to a reasonable explanation of why something would not work.

Old rigs and early models

In the past I have offered animators I have not worked with some of my previous rigs, just so they can see how I usually work and what I tend to provide in a generic rig. You do not have to do it of course, especially if you are not pleased with your old rigs, but it sometimes helps put context to what is being talked about.

Depending on the budget of the film you might be able to do some tests before starting the actual rigging process. If you are working on a personal project or a student group project, this is essential. Grab an early version of the model and build a basic rig around it. Think about where the joints will pivot from and test out what would give you the best deformations. Do you need supporting joints? Maybe ribbons? Are you going to have to add some corrective shapes to areas? A lot of this can be identified very early on, and the earlier you know about it the better. Obviously, the more generic the character is, the less relevant this point becomes.

The point of passing an early or older rig to the animator is to start getting the issues as soon as possible. You should always reasonably expect that there are going to be issues.

Publishing the rig

Once you are done with the rig, depending on the asset, you can either just publish it or do some explaining with it. In big productions there are so many assets that explaining them is not practical, and honestly, having to describe how your rig works is something you should not have to do. The rig should speak for itself, through its controls, attributes, etc. In some cases, though, you might have a limited choice of control shapes. But I digress. The cases where you would explain things about your rig are when working on small projects with a couple of characters which you can really focus on. In such small teams you are bound to have a chat about everything, which is great.

Something that horrifies me is a rig with loads of features which end up not being used at all. It happens way too often. All these features do in that case is bog down the performance. So make sure the animators know about all the features and their benefits. It would make their lives easier and your time spent worth it.

You should not expect an animator to go through each and every control and make sure it works properly or has everything that it needs. That would be amazing, but you should not expect it. The way these things come up is by the animators actually posing the rig and seeing how it behaves. They cannot possibly know each pose needed, so you really should not be expecting them to have a look at everything. Let them play with the rig in front of you, while you mention all the good and bad stuff about your rig. This way they can have a bit of a clearer picture of what they have to work with.


From then on, rigs occasionally come back with the animators flagging up a missing or broken feature, messed up deformations, bad control shapes, etc. Needless to say, you should never get angry with an animator or feel like they are the enemy, because first that is stupid and second they are probably trying to get the best out of the asset just like you are.

Most of the time animators ask for features which are possible and sensible. Occasionally though, you would get asked to do something that just will not work. It might be a limitation of the software (Maya has plenty) or a limitation of the design. In either case, it is very important to learn to say no to animators and be firm about it. Of course, you should be trying to give them the best possible rigs, but when something will not work, you should say so. More importantly, you should explain and if possible illustrate why it will not work. Nobody will ever be convinced if you just say “Nah, that won’t work” and you will look like an asshole. You will find that everybody in the industry makes compromises, when you keep your cool and explain the problem. This is key.

As I mentioned in the What do we need from a model post, just remember to not be a dick and keep in mind you are all part of the team, trying to make the best possible project you can.

A lot of the time beginner riggers complain that the bad deformations they are getting are down to the model. It is a possibility, but in any case we should examine the models we get before we start rigging them, so we can be relatively sure that what we are working with can actually achieve what we want it to. Therefore I’d like to examine what riggers need from a model.

We will go over the expectations that we should have from a model when we start rigging it. Most of these are really basic and I am sure you know them, but it is important to remember to look for them before we start working on an asset.


Topology

Obviously the big one. That is the first thing we were taught at university. It is the base of the technical part of modeling, as our whole production relies on deforming that topology.

What do we need from it then?

In my opinion we can split the most important aspects of the topology into:
– edge flow
– quads
– resolution

Edge flow

First and foremost, the edges should be following the features of the face and body. We will never be able to achieve a decent looking smile shape if the edge flow does not provide us with the circular loops around the mouth and the edges around the nasolabial fold. In beginner projects the faces usually look very uncanny, and a major contributor to that is bad edge flow.

What we are looking for in a good edge flow is a nice description of the anatomical structure of the model. We need to be constantly thinking about how these verts would look when we pull them into this or that shape.

Here is a nice example. Notice how the edges flow around the facial features.

What do riggers need from a model - good topology example

And here is some bad topology. Notice how the edges just create a grid laid over on top of the face. None of them are actually following the shapes.

What do riggers need from a model - bad topology example

In a good edge flow there are a lot of changes in the direction of the loops. These changes usually result in what is called a pole or a star, meaning a vertex which is connected to five other vertices instead of four. Take a look around the eye area of the good example and notice how the circular topology of the eye merges into the mask going around both eyes and eyebrows. This leads us to the next point.


Quads

This is the main thing people say when thinking about topology: that the model should consist only of quads. And yeah, that is mostly true, but there are plenty of examples of good usage of triangles as well, so if you find that you need to have a triangle, by all means have it, provided that you have done your deformation tests and the triangle does not mess anything up.

That being said, I tend to keep all quads in my models as well.

Now, another part of the conversation about quads and triangles is the poles I mentioned above. The thing with poles is that they distort the quads around them quite a bit, making it easy to get shearing in that area, which is a no-no. If we are to have a proper edge flow, though, these are inevitable, so we should make sure they are positioned at places where they will not have to deform a lot relative to their immediate neighbours. In other words, try to put these poles in places where the whole area will move more or less together.
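If you want to get a feel for what a pole actually is, valence is easy to compute. Here is a little plain-Python sketch (toy edge list, not real mesh data) that counts how many edges touch each vertex:

```python
# On an all-quad mesh an "ordinary" interior vertex has valence 4 (four edges
# touching it); a pole/star is an interior vertex with a different valence.
# The 3x3-vertex grid below is made up purely for illustration.
from collections import Counter

def vertex_valences(edges):
    """Map each vertex index to the number of edges that touch it."""
    return Counter(v for edge in edges for v in edge)

grid_edges = [(0, 1), (1, 2), (3, 4), (4, 5), (6, 7), (7, 8),   # rows
              (0, 3), (1, 4), (2, 5), (3, 6), (4, 7), (5, 8)]   # columns

valences = vertex_valences(grid_edges)
print(valences[4])  # the single interior vertex of the grid: valence 4, no pole

# Merging one extra diagonal loop into the middle vertex would make it a 5-pole:
print(vertex_valences(grid_edges + [(4, 8)])[4])  # 5
```

Boundary vertices naturally have valence below 4, so in a real tool you would only flag interior vertices.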

Altogether though, I feel the importance of all quads has been overstressed, mainly because it is an easy rule to define, and people tend to love having a solid rule to guide them. In this case, though, if we really need triangles and we are smart about it, there is no problem using them. Just, again, make sure they do not get in the way of the deformation. Following the rule about poles – having the whole area move together – should be enough to make them work.


Resolution

Another big thing with topology is resolution, and we as riggers should learn to be very picky about this one, as it is easily adjusted in most cases, but can cause a world of pain when painting weights.

When rigging characters, if there is a problem with the resolution it is usually that it is too high. A lot of people would argue that this should not be a problem, but I strongly disagree. Yes, there are many workarounds of course, but it is still an issue.

Here are my two reasons for being aware of dense geometries.

They are denser than needed

I am a big proponent of doing everything with a specific purpose in mind, as otherwise you are just bloating your life. If the higher density does not have a specific purpose like fixing texture stretching or adding wrinkles, etc. just lower it. We should not be spending time nor mental resources on stuff we do not need.

Painting weights is a pain

It is tedious and annoying to paint weights on very high-res meshes. Yes, we can get smoother results, but the price you pay is keeping track of many more vertices. Additionally, the brush size needs to be very small in order to paint precisely, and that again slows us down. And distributing twist along many edge loops is just punishment.

Of course there are techniques to overcome these issues, but if we can completely get rid of them, for me that is the better choice. Some of these techniques include having a lower-res geometry which we actually rig and then either copying our skin weights onto the higher-res one or driving it through a wrap deformer. As for painting smoother weights, we can always create temporary geo, skin that and copy the weights to our models. I often do that for cylindrical parts of the body like legs, arms, spines, etc. Getting a nice twist distribution becomes very easy.
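The core idea behind copying weights from a simple proxy onto the final mesh is a closest-point transfer. Here is a rough plain-Python sketch of that idea with made-up positions (in maya, copySkinWeights does the real job, with much smarter surface-association options):

```python
# Hypothetical data: a low-res proxy with quickly painted weights, and a
# high-res mesh that should inherit them. Positions are 1D tuples for brevity.

def transfer_weights(src_points, src_weights, dst_points):
    """For each destination vertex, take the weight of the closest source vertex."""
    def closest(p):
        return min(range(len(src_points)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(src_points[i], p)))
    return [src_weights[closest(p)] for p in dst_points]

proxy_pts = [(0.0,), (1.0,), (2.0,)]          # three verts along an arm
proxy_wts = [0.0, 0.5, 1.0]                   # smooth falloff, trivial to paint
hires_pts = [(0.0,), (0.4,), (0.9,), (1.6,), (2.0,)]

print(transfer_weights(proxy_pts, proxy_wts, hires_pts))
# [0.0, 0.0, 0.5, 1.0, 1.0]
```

A production transfer would interpolate between neighbouring source verts rather than snapping to one, but the closest-point lookup is the heart of it.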


Housekeeping

Riggers love housekeeping! We love stuff that is clean, elegant and makes sense. Who doesn’t actually? The difference is that we would usually go out of our way to make sure everything in the project is in order. Everything except our My Documents folders I guess.

So what does housekeeping mean in the context of a model?

It is more to do with the actual scene file than with the model, but it boils down to these few points.
– No history (except for the occasional lattice deformer for cartoony eyes)
– No transformations
– Proper normals
– Symmetry (except for cases where asymmetry is intended)
– No purposeless nodes
– No multiple shapes or multiple parents

No history

An obvious one. History makes the scene messy. And we do not like messy. Additionally, it slows down the scene quite a lot, because due to the nature of maya’s dependency graph all nodes that are upstream (on the input side of a node) need to be evaluated before we can evaluate the one we actually need. Considering it is as easy as Alt + Shift + D, this one should never be a problem.

In some cases history is needed, like for having a character with non-spherical eyes.

No transformations

This is not a deal breaker usually, but it can cause issues sometimes, so generally freezing transformations is considered good practice. The issues that can be caused are mainly due to the inheritsTransform attribute we sometimes need.

Proper normals

99% of the time this is not a problem for us, but if we use some fancy setups that rely on normals, obviously it is important to have the normals working correctly. Additionally, this is going to be an issue for the further stages of the pipeline, so it is best if we catch it and solve it.


Symmetry

This obviously does not apply to models that are meant to be asymmetrical; in most cases of cartoony characters, though, that should not be the case. The only area that might need asymmetrical features would be the face. Apart from that, having everything perfectly symmetrical makes our job much easier. Mirroring joints, skin weights, controls, etc. is a breeze.

No purposeless nodes

This is kind of similar to the No history one, but deleting history does not delete all unused nodes, as they might not be connected to anything. What we could do in cases like that is run the Optimize Scene Size command, but take a look at the options and make sure you are not deleting stuff you might actually need. Additionally, we can always script something to clean up our scenes, but if there are native tools to do it, why bother.


Multiple shapes or multiple parents

With shape nodes, sometimes we might have more than one under a transform, depending on how the modelers arrived at that final shape. If that is the case, it would be best if these are deleted before they are passed to us. That being said, if they are needed for some special purpose, they can be left in there, although that should be kept in mind, as there are a lot of scripts out there that grab the first shape of a transform and operate on that.

Additionally, a shape node can have more than one parent – it is instanced. Again, that might be needed for your production, but sometimes modelers forget they have instanced stuff that will need to deform in different ways, which will not work.


I keep reiterating that riggers should make sure rigs can achieve the desired shapes from the pre-production. Just take a look at the model side by side with the pose and expression sheets and think about whether the geometry will be able to support the necessary deformations. I know it is a bit arbitrary, but more often than not just taking a minute to think about that can be really helpful.

And lastly, I know this is a whole essay in itself, but please make sure you ask the modeler for changes nicely. We are all in the same boat together, part of the same pipeline, trying to get the best of our projects. Riggers are known to be kind and considerate and you should do your best to live up to the expectations!

As with any other creative endeavour, rigging tends to have a steep learning curve. There is just so much to learn. But fear not, the steeper the learning curve, the higher the satisfaction along the way.

Previously, I have written about how I got to be a rigger, but I do want to go over the thoughts that made me persist with it. Because persistence and deliberate practice are the only way to acquire and cultivate a passion.

So, a lot of people getting into rigging for the first time start by trying to rig a character. I understand that; usually there is a need for it, so it only makes sense. It is unrealistic, though, to expect that the rig will be any good. Which is of course fine, considering it is your first time rigging. But making something that sucks is quite discouraging. To make it easier for yourself to persist with it, I would suggest predisposing yourself to winning. How do you go about doing that?

Well, a common example is making your bed. If you do it first thing in the morning, you have started your day with a small win. If you set up more tasks like that, you create a chain of success, which tricks your brain into expecting more of the same thing.

So, instead of a character rig, how about starting with the bouncing ball? If the animators start with it, why not us? I did not rig a bouncing ball myself though, so preaching about rigging one does not sit right with me. What I started with was a super simple anglepoise lamp like this one.

Anglepoise lamp

It is quite similar to pixar’s luxo junior, so I thought it would be good fun, and it really was.

After that I moved on to rigging an actual character. It still sucked, but I definitely felt good about it, because even though I had rigged something before, it was still my first character rig. Additionally, I remember how excited I got by one of the two rigging workshops I had at uni. There was this setup of train wheels.

Train wheels pistons rigging

The lecturer asked us how to go about rigging this, so we only control the rotation of the wheels and the pistons would follow properly. If you have some basic rigging knowledge, give it a go. Pistons are always a fun thing to rig.
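For a head start on the maths of it: the wheel-and-piston setup is the classic crank-slider linkage, where the piston position follows from the wheel rotation with nothing more than Pythagoras. A plain-Python sketch of just the formula (not a maya setup – the radius and rod length are made-up numbers):

```python
import math

def piston_offset(wheel_angle, crank_radius, rod_length):
    """Horizontal position of the piston pin for a crank-slider linkage.

    The crank pin sits at (r*cos(a), r*sin(a)) on the wheel, and the rod of
    fixed length closes the triangle back to the piston's sliding axis.
    """
    a = math.radians(wheel_angle)
    return crank_radius * math.cos(a) + math.sqrt(
        rod_length ** 2 - (crank_radius * math.sin(a)) ** 2)

# One full wheel turn drives the piston back and forth exactly once.
for angle in (0, 90, 180):
    print(round(piston_offset(angle, crank_radius=1.0, rod_length=3.0), 3))
# 4.0, 2.828, 2.0
```

In a rig you would feed the wheel's rotation into this relationship (expression, node network, or driven keys) so that rotating the wheel is the only control the animator touches.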

The reason this got me excited was that it actually opened me up to the problem solving aspect of rigging. You are presented with this situation and you have to find a way to make it work in a specific manner. That’s it. There might be many ways to go about it, but in the end the desired result is only one. This means you cannot bullshit your way out of it, because if it does not work, it is pretty clear that it does not.

So after you have got some simpler tasks under your belt and maybe you have started rigging your first character there will be a lot of roadblocks. And I do mean a lot. Sometimes you might think that your problem is unique and you will not be able to solve it but I assure you, everything you are going to run into in your first rigs most of us have gone through, so you can always look it up or ask. Honestly, we seem to be a friendly and helpful lot.

The way I overcame most of my roadblocks was to open a new file and build an incredibly simplified version of what I was trying to do, however messy it got. This helps isolate the problem and see all sides of it clearly.

At about this point, where you have some rigging knowledge, I would suggest opening the node editor and starting to get into the vastness of maya’s underlying structure. I do not mean the maya API – although that is certainly something you’d need to look into at a later point – but more getting familiar with the different nodes, how they work and their potential applications. If you are anything like me, you would want to build your own “under the hood”. It is a blessing and a curse really, as sometimes I feel too strong an urge to build something that is already there, just so I can have built it myself. It’s crazy and very distracting, but it also gets you asking questions and poking around, which proves to be quite useful.

So, seeing the actual connections of everything you do is really nice in terms of feeling that you understand what you are doing. For example, if you graph the network of a parent constraint, you will see where the connections come from and where they go, which will give you some idea of what happens under the hood. The same goes for everything really – skinClusters, other deformers, math nodes, shading nodes, etc. What this is supposed to do is make you feel like you have a lot to work with, because you really do. With the available math nodes you can build most of the algorithms you would need. That being said, the lack of trigonometry nodes is a bit annoying, but you can always write your own nodes when you need them.
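As a toy example of what I mean by building algorithms out of math nodes, here is the computation of a typical ik stretch network – a divide followed by a clamp – written out as plain Python. The node names in the comments are just labels for illustration, and the numbers are made up:

```python
# What a tiny ik-stretch node network computes, written as plain maths:
# a distanceBetween feeds a multiplyDivide (set to divide), then a clamp
# keeps the result in a sensible range. This is not maya code.

def stretch_factor(current_length, rest_length, max_stretch=1.5):
    ratio = current_length / rest_length       # multiplyDivide, operation = divide
    return max(1.0, min(ratio, max_stretch))   # clamp node

print(stretch_factor(12.0, 10.0))  # 1.2 -> joints scale up by 20%
print(stretch_factor(8.0, 10.0))   # 1.0 -> no squash when the chain compresses
```

Once you see a network as a formula like this, recreating it (or improving it) with nodes becomes straightforward.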

The last tip I can think of for keeping your interest in rigging would be to start scripting stuff – and not use MEL to do it. There is nothing wrong with using MEL if you really want to, or to support legacy code, but considering that 99% of what is done with MEL can be done with Python, and the other 1% is stuff you definitely do not need, I consider using MEL a terrible idea for your future. Python is a very versatile programming language and also (personal preference) it is simple, very pretty and quick to prototype with. Honestly, I think The Zen of Python is awesome to live by.

Honestly, I think everyone should learn to code, not only because in the near future it will expose a lot of possibilities to make nice changes in your day to day life, but also because of the way coding makes you think about stuff. Some of the main benefits I find are:
– The process of writing a script to do something makes you understand how it is done
– The time saved is invaluable
– The satisfaction of seeing your code work is incredible

Then, what I would do after getting some rigging knowledge, some maya infrastructure knowledge and some scripting knowledge, is build a face rig – and take my time with it. If you create a nice face rig, it will be loads of fun to play around with, which is very satisfying. And I am sure that at that point you will be hooked.

That’s it. I covered most of the things that kept me interested in rigging while I was struggling with it. I am sure that if you are into rigging enough to read this post, you already are a curious individual, so sticking with it will not be hard.

How does rigging fit in the pipeline? Where does it sit and how does it communicate with the other departments?

The pipeline gets upgraded and changed to fit the specific needs of each production company or team. But these changes are not about the big scale of the process; they are more to do with smaller things. Therefore, generally the path an asset takes would be something like:

Pre-production > Modeling > Texturing > Rigging > Animation > Lighting > Rendering > Compositing

Building the shaders kind of goes on throughout the production as an asynchronous process, because it is not restricted by anything to start doing tests.

Bear in mind that even though these are sequenced, in a proper production there is and there should be a lot of going back and forth to make sure we get the best out of the asset. Obviously, more than one task can be worked on at the same time. For example, there is no reason not to do some lighting while animation is in its blocking stage, or to do the texturing while rigging, etc.

So what about rigging?

Well, it fits nicely between modeling and animation. If you think of this sequence as a node in a graph, or just a function, rigging would take a model as input and spit out something that animation needs as output. Riggers tend to always look for clear and absolute solutions which would always be valid, but of course that would be too easy. And also very repetitive and boring for us. Do you want to be a node in a graph?

Now, how does rigging expand beyond modeling and animation, though? Well, how do we make sure that the director will be happy with every shot? We can never be sure, but the best way to go about it is to go back to the stages which have already been approved and get as much as we can from them. So we could go back to pre-pro and look at the character sheets. Does the character in the animation move like it has been designed to move? Do the character’s facial expressions match her character sheets? Do you see where I am going with this? We as riggers need to constantly look back at the pre-production and make absolutely sure that we are creating a rig which can fit the purposes of these designs. And if for some reason that is impossible, it is our job to bring it up, so it can be decided whether it makes sense time-wise to go back to pre-pro and fix it, or we have to scrap that particular feature of the character.

Similarly, looking ahead, a rigger should not only be looking at the animator. Way too many people look at lighting as just placing a couple of lights, setting some render preset and pressing a button. Of course, you are in for a big surprise if you try getting a lighting job with these expectations. Lighters tend to work with a lot of caches that may cause issues, and they need to come up with clever techniques to overcome problems. For example, on my film Naughty Princess the lighter asked me to make a small camera rig, so we could always keep the character properly in focus. Another one we used was to rivet a locator to the character, so following her is easier. Often, deformation issues come up in the lighting process, and good communication is crucial to solving them quickly. Additionally, in terms of housekeeping, the smaller we can keep the file sizes the better for everyone else, as loading them can become painfully slow.

So there we have it. It would be stupid and unrealistic to think that rigging only takes models and spits out rigs for the animators, without thinking about where this model is coming from and where it is going. As I wrote in the “Why I Rig” post, one of the reasons is the fact that rigging has such a central position in the pipeline that we have to communicate and make decisions for different aspects throughout the pipeline.

Considering that my career can easily take over most of my time for the next 20 to 40 years, I believe having good reasons for getting into it is essential.

Even though I do not plan to be doing it for 20 years, I do feel there are some quite strong reasons for my being a rigger in the animation industry. From talking to rigging people I know I gather that some of these reasons are shared among most of us, so do not be surprised if you find them quite obvious.

Nobody else wanted to do it

I think most of us have a story to share on this one, so here is mine. During the second year at university we had a group project aimed at creating an animated short. We had to pitch ideas and request additional members with specific roles. So in my pitch I outlined that I intended to direct and take on modeling and lighting, and that I would need people to do pre-production and texturing, animation and the techy stuff – rigging and water simulations. Then, when my pitch went through and I was assigned some teammates, I realized I was the most technical person on the team, so if I wanted to see my film come to life I would have to rig and run the water sims myself. That’s how I became a rigger.

Now, I was not psyched at all, but I was sure I was going to do it, and do it as best as I could, as again, that was the only way to realize my idea. From talking to some people who do rigging, it seems like this is the most common way of getting into it. What we did not know, though, was that we were going to love it. I downloaded all the tutorials on rigging I could get my hands on (the university gave us access to digital tutors and lynda) and I got on with it. Very soon I was hooked.

Art and maths

While getting familiar with the process of creating computer animations, the concept of it being a combination of art and maths kept coming up, and understandably so. To add to that, I do find a lot of artistic beauty in maths, and it is also no secret that art has greatly benefited from maths as well (rule of thirds, fibonacci spirals, etc.). Additionally, these two have always been my subjects of choice. Unfortunately, I am neither a great painter nor a great mathematician, but I find that exploring each of them individually, or their combination, gives me a great deal of pleasure and understanding of the world. What is more, they converge incredibly well in some fields such as architecture and computer graphics.

Then there is always the discussion in the field about whether you are more technical or more artistic, as if one undermines the other. Either way, I find rigging to be the sweet spot between the two. We have to make rigs that perform as fast as possible and do not break. On the other side, we also need to provide the animators with the ability to hit the shapes and poses needed for the production. In other words, I think a rigger should be spending half his time on the technicalities of a rig – how well it performs – and the other half on making sure the deformations are as appealing as they need to be.

We can go as deep in maths and programming as we want to, or we can let maya handle most of it for us. Similar thing goes for the artistic aspect as we can ask for a very thorough pre-production and try to match that or get a design and explore it to see how it works. Obviously, anatomy is always a big thing in animation, so the more artistic training a rigger has, the better.

Problem solving

The most obvious one, probably. I was never one of those kids who were opening up their toys or gadgets to see how they work. I wanted to be, though, but it seems I was too easily distracted to keep at it. Now, however, it is not unusual to find me awake well into the early hours trying to figure out a problem. I hate going to bed without having cracked it, even though in the morning I am at least twice as fast at finding the solution. A couple of nights ago I stayed up until 3am trying to figure out how to make sure maya’s python interpreter would always take my latest code changes without having to restart maya or use python’s reload function.
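For the curious, one common way to attack that particular problem is to flush your tools package out of python's module cache, so the next import re-reads the code from disk. A rough sketch – the package name is whatever yours is called, so hypothetical here, and I am not claiming this is the exact solution I ended up with:

```python
import sys

def flush_package(package_name):
    """Drop a package and all its submodules from python's module cache.

    The next `import` of that package will then pick up the latest code
    from disk, without restarting maya or sprinkling reload() calls around.
    """
    stale = [name for name in sys.modules
             if name == package_name or name.startswith(package_name + ".")]
    for name in stale:
        del sys.modules[name]
    return stale  # handy for checking what actually got flushed
```

You would call something like `flush_package("myRigTools")` from a shelf button before re-importing your tools.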

Another big one was trying to figure out how to build an IK/FK matching function for the first time. I had some clues from a friend, but I still spent a long time on that one.
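The ik half of that problem boils down to the law of cosines. Here is a plain-maths sketch of just that part, for a planar two-bone chain (a full matching setup also deals with pole vectors and rotation orders, which I am leaving out):

```python
import math

def two_bone_angles(upper_len, lower_len, target_dist):
    """Law-of-cosines core of a two-bone ik solve.

    Returns (shoulder, elbow) interior angles in degrees for a planar chain
    reaching a target at `target_dist` from the root. The distance is clamped
    so an out-of-reach target simply straightens the chain.
    """
    d = max(min(target_dist, upper_len + lower_len), abs(upper_len - lower_len))
    elbow = math.acos((upper_len ** 2 + lower_len ** 2 - d ** 2)
                      / (2 * upper_len * lower_len))
    shoulder = math.acos((upper_len ** 2 + d ** 2 - lower_len ** 2)
                         / (2 * upper_len * d))
    return math.degrees(shoulder), math.degrees(elbow)

print(two_bone_angles(1.0, 1.0, 2.0))  # fully extended: elbow at 180 degrees
```

For matching, you would evaluate the fk chain to get the target position, feed its distance into a solve like this, and snap the ik controls accordingly.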

It is stupidly exciting though! Finding the solution to a problem is always immensely satisfying. If I can’t find the solution, I cannot let it go. My girlfriend says “I can hear you think!”.

Rigging lies in the center of the pipeline

It is always nice to see how much crossover riggers usually have in the other aspects. It is no secret that the best riggers are also knowledgeable in modeling and animation. Josh Carey from Reel FX said in an interview that being able to model and animate is one of the main requirements for getting a rigging job there.

As a rigger you would usually have to communicate with all the other departments, which puts you in a very central and responsible position. I crave these kinds of jobs, you know, ego and all that. I want to be responsible for a lot of important tasks, so I can always be pushing myself out of my comfort zone.

Additionally, people tend to respect riggers for this precise reason. What is more, I found rigging helped me stay on top of things more easily when I was working with other people on my two short films.


Coding

One could easily create rigs without writing a single line of code, but you will rarely, if ever, find such people rigging in the industry. It is just very, very inefficient. In a production pipeline it is incredibly important to have a non-destructive workflow, which could hardly be the case if not for programming. Imagine having to build an arm rig with all its bells and whistles – ik/fk, elbow lock, twist, stretch, etc. – manually each time you rig a character. Not only is it going to be incredibly boring after the 3rd time you do it, but it is also going to take a ton of time that quite possibly you do not have. If you script it once, though, you get it for free every time you need an arm. So obviously, most rigging workflows are based around coding.

Why is that an important reason for me, though? Learning to code is really an eye-opening experience. It gets you thinking about being effective and efficient. It teaches you to break problems into smaller, recyclable chunks, which can be tackled one by one. The benefits of learning to program are a whole other subject, but I would strongly suggest to anyone excited about personal growth and development to consider learning some basic programming. After all, coding is the engineering of the 21st century, and it is going to get even more beneficial as time goes on, mainly because everything is slowly becoming programmable.

Better contracts

Disclaimer: This one is completely speculative, because it’s based on talking to my university teachers and my coursemates who are now working in the industry.

With the great responsibility from the Rigging lies in the center of the pipeline point, luckily comes another benefit. TDs of all sorts tend to be paid slightly better than some of the artists in other departments. Additionally, as rigging pipelines tend to need some getting used to, it does not make sense for companies to hire short term riggers. Therefore, we would usually get longer contracts which as most would know is very rare in the animation and VFX industry.

Now for many people this point would not matter, but for me that stability means I can stop thinking about just surviving and instead think about advancing and developing myself further in my areas of choice.

These are the main reasons behind my decision to get into rigging. Another one I did not include is that rigging is simply a lot of fun, but I thought it was a bit vague, even though anyone who has ever played around with a good rig knows what I mean.

All in all, rigging has been an incredible journey for me so far, because it has taught me to think in a certain problem solving manner about everything in my day to day life. I am therefore very glad these reasons got me to apply for rigging positions.

Simply put, rigging is the process of giving a computer-generated asset the ability to move. Whenever a person outside our industry asks me what I do, I generally start with this. Then, of course, you can go on to imitate puppets and puppeteers with your hands.

Now, not so simply put, rigging is the process of building a system that an animator uses to deform a CG asset in a very specific manner. This deformation takes place on multiple layers, providing all the controls needed for hitting the shapes and poses designed in the pre-production stage.

Okay, let’s deconstruct this.

When I say "deform a CG asset" I mean the ability to translate, rotate and scale it, or make it squash, stretch, bend, etc. Generally, the base of a character rig is its bone structure, which allows for moving the limbs and body in a way similar to how we move in the real world. That is why researching and studying anatomy is very important for riggers who want to improve.

The piece about the "very specific manner" refers to being able to build different types of FK or IK chains, IK/FK blends, blendshapes, etc. The keyword here is specific, because every aspect of a rig needs to come from a need. Every choice in the rigging process should be backed by a good reason for achieving a specific result; otherwise, we are shooting in the dark trying to hit a moving target. Therefore, it is very important for a production team to be able to communicate exactly what they expect and want to get in the end. If we add a module to a rig "just in case", we are bloating the rig, and please, let's not bloat our rigs.

The "multiple layers" bit refers to the ability to deform objects in different, but again very specific, ways and then sequence those deformations in a chain to produce the final result. The face is the classic example of this: most rigs tend to use a combination of a few local rigs (rigs built to deform only in local space), which are then added to the world-space rig via a blendshape.

Then, "providing all the needed controls" is again crucial for keeping the rig as light and clear as you possibly can. In our day-to-day lives we are very much used to cluttering everything we interact with: bedrooms, kitchens, your home folder, etc. When rigging an asset, though, we should think like minimalists. Which is to say, only add controls or functions if they are going to add value to the rig. It is also very important to make sure the shapes of the control objects make sense, so the animators know instantly what they are about to pick.

And lastly, the part about "hitting the shapes and poses". There are a couple of metrics that describe how good or bad a rig is; the ones I have mentioned above are functionality, performance and clarity. But at the end of the day, rigging is just one part of the animation pipeline. Therefore, as always, we need a clear reference point to keep us on the right track. Take a look at modelers and animators: they always have their reference open on the side to help them stay consistent. Why would rigging be any different? We should not think that we can build rigs that are absolutely perfect for our purposes without having a clear idea of what those purposes are.

Therefore, making sure that we can hit the shapes and expressions we have received in the character sheets or animatic (or even our own sketches in a more casual production) not only makes the animator's job easier, but makes it possible to realize the initial concept.