This post is part of a three-post series, where I implement popular rigging functionalities using just maya’s native matrix nodes.

Calculating twist is a common rigging necessity, as we often want to smoothly interpolate it along a joint chain instead of just applying it at the end. The classic example is limbs, where we need some twist in the forearm/shin area to support the rotation of the wrist or foot. Some popular implementations utilize ik handles or aim constraints, but I find them a bit of an overkill for the task. So today we will have a look at creating a matrix twist calculator that is both clean and quick to evaluate.

Other than matrix nodes I will be using a couple of quaternion ones, but I promise it will be quite simple, as even I myself am not really used to working with them.

tl;dr: We will get the matrix offset between two objects – the relative matrix – then extract the quaternion of that matrix and keep only the X and W components, which, when converted to an euler angle, give us the twist between the two matrices along the desired axis.

Desired behaviour

Matrix twist calculator - desired behaviour
Please excuse the skinning, I have just done a geodesic voxel bind

As you can see, what we are doing is calculating the twist amount (often called roll as well, from the yaw, pitch and roll notation) between two objects. That is, the rotation difference on the axis aiming down the joint chain.


An undesirable effect you can notice is the flip when the angle reaches 180 degrees. As far as I am aware, there is no reasonable solution to this problem that does not involve some sort of caching of the previous rotation. I believe that is what the No flip interpType on constraints does. There was one solution, using an orient constraint between a no-roll joint and the rolling joint and then multiplying the resulting angle by 2, which worked in simple cases, but I found it a bit unintuitive and not always predictable. Additionally, most animators are familiar with the issue and are reasonable about it. In the rare cases where this issue becomes a pain in your production, you can always add control over the twisting matrices, so the animators can tweak them.

Something else to keep in mind is to always calculate the twist in the first axis of the rotate order, since the other ones might flip at 90 degrees instead of 180. That is why I will be looking at calculating the X twist, as the default rotate order is XYZ.

With that out of the way, let us have a look at the setup.

Matrix twist calculator

I will be looking at the simple case of extracting the twist between two cubes oriented in the same way. Now, you might think that is too simple an example, but in fact this is exactly what I do in my rigs. I create two locators, oriented with the X axis aligned to the axis I am interested in. Then I parent them to the two objects I want to find the twist between, respectively. This means that finding the twist on that axis of the locators will give me the twist between the two objects.

Matrix twist calculator

Granted, I do not use actual locators or cubes, but just create matrices to represent them, so I keep my outliner cleaner. But, that is not important at the moment.

The relative matrix

Now, since we are going to be comparing two matrices to get the twist angle between them, we need to start by getting one of them in the relative space of the other one. If you have had a look at my Node based matrix constraint post or are already familiar with matrices, you would know that we can do that with a simple multiplication of the child matrix by the inverse of the parent matrix. That will give us the matrix of the child object relative to that of the parent one.

The reason we need that is that the relative matrix now holds all the differences in transformation between the two objects, and we are interested in exactly that – the difference on the aim axis.
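To make the idea concrete outside of Maya, here is a minimal pure-Python sketch with translation-only matrices (Maya uses row vectors, so the translation sits in the fourth row and we post-multiply; the object positions are made-up numbers):

```python
def mat_mult(a, b):
    # Multiply two 4x4 matrices (row-major, row-vector convention like Maya).
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation_matrix(tx, ty, tz):
    return [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [tx, ty, tz, 1]]

# Hypothetical world matrices of a parent and a child object.
parent_world = translation_matrix(1, 0, 0)
child_world = translation_matrix(3, 2, 0)

# The inverse of a pure translation is simply the negated translation.
parent_world_inverse = translation_matrix(-1, 0, 0)

# child relative to parent = childWorld * parentWorldInverse
relative = mat_mult(child_world, parent_world_inverse)
print(relative[3][:3])  # [2, 2, 0] - the child sits 2 units away in X and Y
```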

Here is how that would look in the graph.

Matrix twist calculator - relative matrix

The quaternion

So, if we have the relative matrix, we can proceed to extracting the rotation out of it. The thing with rotations in 3D space is that they seem a bit messy, mainly because we usually think of them in terms of Euler angles, as that is what maya gives us in the .rotate attributes of transforms. There is a thing called a quaternion, though, which also represents a rotation in 3D space and, dare I say it, is much nicer to work with. Nicer, mainly because we do not care about rotate order when working with quaternions, since they represent just a single rotation. What this gives us is a reliable representation of an angle along just one axis.

In practical terms, this means that taking the X and W components of the quaternion and zeroing out the Y and Z ones will give us the desired rotation only in the X axis.
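Here is what that looks like as plain Python, stripped of any Maya nodes (a sketch of the math only; the quaternion is assumed to be normalized):

```python
import math

def twist_x(qx, qy, qz, qw):
    # Keep only the X and W components of the quaternion, renormalize
    # the pair, and convert back to an angle in degrees.
    length = math.hypot(qx, qw)
    if length < 1e-9:
        # A pure 180-degree swing; the twist is degenerate here (the flip case).
        return 180.0
    return math.degrees(2.0 * math.atan2(qx / length, qw / length))

# Quaternion for a 90 degree rotation about X: (sin(45), 0, 0, cos(45)).
half = math.radians(45.0)
print(twist_x(math.sin(half), 0.0, 0.0, math.cos(half)))  # ~90.0
```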

In maya terms, we will make use of the decomposeMatrix node to get the quaternion out of a matrix, and then use the quatToEuler node to convert that quaternion to an euler rotation, which will hold the twist between the matrices.

Here is the full graph, where the .outputRotateX of the quatToEuler node is the actual twist value.

Matrix twist calculator - full graph
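For reference, the same graph can be wired up with a few lines of script. The locator names here are placeholders, so treat this as a sketch of the connections rather than a drop-in tool:

```python
import maya.cmds as mc

# Assumed names: "driver_LOC" and "driven_LOC" are the two twist locators,
# with X aligned to the twist axis.
mult = mc.createNode("multMatrix", name="twist_MMX")
decomp = mc.createNode("decomposeMatrix", name="twist_DCM")
quat = mc.createNode("quatToEuler", name="twist_QTE")

# Relative matrix: driven world matrix times driver world inverse matrix.
mc.connectAttr("driven_LOC.worldMatrix[0]", mult + ".matrixIn[0]")
mc.connectAttr("driver_LOC.worldInverseMatrix[0]", mult + ".matrixIn[1]")
mc.connectAttr(mult + ".matrixSum", decomp + ".inputMatrix")

# Pass only X and W through, so the euler output holds just the twist.
mc.connectAttr(decomp + ".outputQuatX", quat + ".inputQuatX")
mc.connectAttr(decomp + ".outputQuatW", quat + ".inputQuatW")

# twist_QTE.outputRotateX now holds the twist angle.
```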


And that is it! As you can see, it is a stupidly simple procedure, but it has proved to give stable results, which in fact are 100% the same as using an ik handle or an aim constraint, but with little to no overhead, since matrix and quaternion nodes are very computationally efficient.

Stay tuned for part 3 from this matrix series, where I will look at creating a rivet by using just matrix nodes.

This post is part of a three-post series, where I will try to implement popular rigging functionalities using only maya’s native matrix nodes.

Following the Cult of rig lately, I realized I have been very wasteful in my rigs in terms of constraints. I have always known that they are slower than direct connections and parenting, but I thought that was the only way to do broken hierarchy rigs. Even though I did matrix math at university, I never used it in maya, as I weirdly thought the matrix nodes were broken or limited. There was always the option of writing my own nodes, but since I would like to make my rigs as easy as possible for people to use, I would rather keep everything in vanilla maya.

Therefore, when Raffaele used the multMatrix and decomposeMatrix nodes to reparent a transform, I was very pleasantly inspired. Since then, I have tried applying the concept to a couple of other rigging functionalities, such as twist calculation and rivets, and it has been giving me steadily good results. So, in this post we will have a look at how we can use the technique he showed in the stream to simulate a parent + scale constraint without the performance overhead of constraints, effectively creating a node based matrix constraint.


There are some limitations to using this approach, though. Some of them are not complex to get around, but doing so adds extra nodes to the graph, which in turn leads to performance overhead and clutter. That being said, constraints add to the outliner clutter, so I suppose it might be a matter of preference.


Constraining a joint that has jointOrient values will not work, as the jointOrient matrix is applied after the rotation. There is a way to get around this, but it involves creating a number of other nodes, which add some overhead, and for me that makes it unreasonable to use this setup instead of an orient constraint.

If you want to see how we go around the jointOrient issue just out of curiosity, have a look at the joint orient section.

Weights and multiple targets

Weights and multiple targets are also not entirely suitable for this approach. Again, it is definitely not impossible, since we can always blend the output values of the matrix decomposition, but that will involve an additional blendColors node for each of the transform attributes we need – translate, rotate and scale. And similarly to the previous point, that means extra overhead and more node graph clutter. If there were an easy way to blend matrices with maya’s native nodes, that would be great.

Rotate order

Weirdly, even though the decomposeMatrix node has a rotateOrder attribute, it does not seem to do anything, so this method will only work with the xyz rotate order. Last week I received an email from the maya_he3d mailing list about that issue, and it seems it has been flagged to Autodesk for fixing, which is great.


The construction of such a node based matrix constraint is fairly simple in terms of both the nodes and the math. We will be constructing the graph as shown in the Cult of Rig stream, so feel free to have a look at it for a more visual approach. The only addition I will make is supporting a maintainOffset functionality. Raffaele also talks a lot about math in his other videos, so have a look at those, too.

Node based matrix constraint

All the math is happening inside the multMatrix node. Essentially, we are taking the worldMatrix of the target object and converting it to relative space by multiplying it by the parentInverseMatrix of the constrained object. The decomposeMatrix after that is there to break the matrix into attributes which we can actually connect to a transform – translate, rotate, scale and shear. It would be great if we could directly connect to an input matrix attribute, but that would probably create its own set of problems.
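Built with script, the basic graph could look something like this (the object names are placeholders; a sketch, not a full constraint command):

```python
import maya.cmds as mc

# Assumed names: "target_GRP" drives "constrained_GRP".
mult = mc.createNode("multMatrix", name="constraint_MMX")
decomp = mc.createNode("decomposeMatrix", name="constraint_DCM")

# Bring the target's world matrix into the constrained object's parent space.
mc.connectAttr("target_GRP.worldMatrix[0]", mult + ".matrixIn[0]")
mc.connectAttr("constrained_GRP.parentInverseMatrix[0]", mult + ".matrixIn[1]")
mc.connectAttr(mult + ".matrixSum", decomp + ".inputMatrix")

# Drive the transform with the decomposed values.
mc.connectAttr(decomp + ".outputTranslate", "constrained_GRP.translate")
mc.connectAttr(decomp + ".outputRotate", "constrained_GRP.rotate")
mc.connectAttr(decomp + ".outputScale", "constrained_GRP.scale")
mc.connectAttr(decomp + ".outputShear", "constrained_GRP.shear")
```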

That’s the basic node based matrix constraint. How about maintaining the offset, though?

Maintain offset

In order to maintain the offset, we just need to calculate it first and then put it in the multMatrix node before the other two matrices.

Node based matrix constraint - maintain offset

Calculating offset

The way we calculate the local matrix offset is by multiplying the worldMatrix of the object by the worldInverseMatrix of the parent (the object we are relative to). The result is the local matrix offset.

Using the multMatrix node

It is entirely possible to do this using another multMatrix node, then doing a getAttr of the output and setting it in the main multMatrix with a setAttr with the type flag set to "matrix". The temporary multMatrix is then free to be deleted. The reason we get and set the attribute, instead of connecting it, is that connecting it would create a cycle.

Node based matrix constraint - local matrix offset

Using the Maya API

What I prefer doing, though, is getting the local offset via the API, as it does not involve creating nodes and then deleting them, which is much nicer when you need to code it. Let’s have a look.

import maya.OpenMaya as om

def getDagPath(node=None):
    # Add the node to a selection list so we can retrieve its MDagPath.
    sel = om.MSelectionList()
    sel.add(node)
    d = om.MDagPath()
    sel.getDagPath(0, d)
    return d

def getLocalOffset(parent, child):
    parentWorldMatrix = getDagPath(parent).inclusiveMatrix()
    childWorldMatrix = getDagPath(child).inclusiveMatrix()

    return childWorldMatrix * parentWorldMatrix.inverse()

The getDagPath function is just there to give us an MDagPath instance for the passed object. Then, inside getLocalOffset, we get the inclusiveMatrix of each object, which is the full world matrix, equivalent to the worldMatrix attribute. In the end we return the local offset as an MMatrix instance.

Then, all we need to do is set the multMatrix.matrixIn[0] attribute to our local offset matrix. We do that by using MMatrix's () operator, which returns the element of the matrix at the specified row and column index. So, we can write it like this.

localOffset = getLocalOffset(parent, child)
mc.setAttr("multMatrix1.matrixIn[0]", [localOffset(i, j) for i in range(4) for j in range(4)], type="matrix")

Essentially, we are calculating the difference between the parent and child objects and applying it before the other two matrices in the multMatrix node, in order to implement the maintainOffset functionality in our node based matrix constraint.
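We can sanity-check that logic outside Maya with translation-only matrices (made-up values, row-vector convention): multiplying the stored offset back by the parent's world matrix must reproduce the child's world matrix exactly.

```python
def mat_mult(a, b):
    # 4x4 matrix product, row-vector convention like Maya.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation_matrix(tx, ty, tz):
    return [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [tx, ty, tz, 1]]

parent_world = translation_matrix(5, 0, 0)
child_world = translation_matrix(7, 1, 0)

# localOffset = childWorld * parentWorldInverse (the maintain offset step).
offset = mat_mult(child_world, translation_matrix(-5, 0, 0))

# Putting the offset in front of the parent's world matrix, as the
# multMatrix does, reproduces the child's world matrix.
restored = mat_mult(offset, parent_world)
print(restored[3][:3])  # [7, 1, 0] - same as the child's world translation
```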

Joint orient

Lastly, let us have a look at how we can go around the joint orientation issue I mentioned in the Limitations section.

What we need to do is account for the jointOrient attribute on joints. The difficulty comes from the fact that the jointOrient is a separate matrix that is applied after the rotation matrix. That means that, at the end of our matrix chain, we need to rotate by the inverse of the jointOrient. I tried doing it a couple of times via matrices, but I could not get it to work. Then I resorted to writing a node to test how I would do it from within. It is really simple to do via the API, as all we need is the rotateBy function of the MTransformationMatrix class, with the inverse of the jointOrient attribute taken as an MQuaternion.

Then I thought that this should not be too hard to implement in vanilla maya either, since there are quaternion nodes as well. And indeed it is possible, but honestly, I do not think that graph looks nice at all. Have a look.

Node based matrix constraint - joint orient

As you can see, we create a quaternion from the joint orientation, invert it and apply it to the calculated output of the multMatrix. We apply it by doing a quaternion product. After that we just convert it to euler and connect it to the rotation of the joint. Bear in mind, the quatToEuler node supports rotate orders, so it is quite useful.
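The quaternion product itself is easy to sketch in plain Python. With (x, y, z, w) quaternions, the inverse of a unit quaternion is just its conjugate, so if the target rotation happens to equal the jointOrient, applying the inverted orient leaves us with the identity – exactly what the joint's rotate values should be in that case:

```python
import math

def quat_mult(q1, q2):
    # Hamilton product of two (x, y, z, w) quaternions.
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return (w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
            w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2)

def quat_conjugate(q):
    # For a unit quaternion the conjugate is also the inverse.
    x, y, z, w = q
    return (-x, -y, -z, w)

# Hypothetical jointOrient of 60 degrees about X, as a quaternion.
half = math.radians(30.0)
orient = (math.sin(half), 0.0, 0.0, math.cos(half))

# Target rotation equals the jointOrient, so the rotate value we need
# to set on the joint is the identity quaternion.
rotate = quat_mult(orient, quat_conjugate(orient))
print(rotate)  # roughly (0, 0, 0, 1)
```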

Of course, you can still use the maintainOffset functionality with this method. As I said, though, compared to this setup, a plain orient constraint was performing faster every time, so I see no reason to do this other than keeping the outliner cleaner.

Additionally, I am assuming that there is probably an easier way of doing this, but I could not find it. If you have something in mind, give me a shout.


Using this node based constraint I was able to remove parent, point and orient constraints from my body rig, making it perform much faster than before; the outliner is also much nicer to look at. Stay tuned for parts 2 and 3 of this matrix series, where I will look at creating a twist calculator and a rivet using just matrix nodes.

I vividly remember the first time I tried to set up a seamless IK FK switch with Python. There was this mechanical EVA suit that I was rigging for a masterclass assignment at uni given by Frontier. The IK to FK switch was trivial and there were not many issues with it, but I had a very hard time figuring out the FK to IK one, as I had no idea what the pole vector really is, and my IK control was not oriented the same way as my FK one.

I’m sure that throughout the web there are many solutions to the problem, but most of the ones I found were in MEL, and some of them were a bit unstable because they relied too much on the xform command or the rotate command with the ws flag. I assume these sometimes cause issues when mapping from world space to relative space – a joint will have the exact same world rotation, so the pose looks perfect, but if you blend between IK and FK you can see it shifting and then coming back into place. That’s why I decided to use constraints to achieve my rotations, which seems to be a simple enough and stable solution.

EDIT: It seems like even with constraints it is possible to get that issue in the case where the IK control is oriented differently. What fixes it, though, is switching back and forth once more.

Here is what we are trying to achieve

Seamless IK FK switch demo

Basically, there is just one command for the seamless IK FK Switch, which detects the current kinematics and switches to the other one maintaining the pose. I have added the button to a custom marking menu for easier access.

So, to give you a bit of better context, I have uploaded the example scene I am using, so you can have a look at the exact structure – but feel free to use your own scene with an IK/FK blending setup. The full code (which is very short anyway) is in this gist, and there are three scene files, one for each version of our setup. The files contain just a simple IK/FK blending system, on which we can test our matching setup, but with different control orientations.

It is important to understand the limitations of a seamless IK FK switch before we dive in. Mainly, I am talking about the limited rotation of the second joint in the chain, as IK setups allow for rotation in only one axis. What this means is that if we have rotations in multiple axes on our FK control for the middle joint (elbow, knee, etc.), the IK/FK matching will not work properly. All this is due to the nature of inverse kinematics.

Also, for easier explaining I assume we are working on an arm and hand setup, but obviously the same approach would work for any IK/FK chain.

We will consider three cases:
All controls and joints are oriented the same
IK Control oriented in world space
IK Control and IK hand joint both oriented in world

Again, you do not have to use the same file as I do, as it is just an example, but it is important to be clear on the existing setup. We assume that we have an arm joint chain – L_arm01_JNT > L_arm02_JNT > L_arm03_JNT and a hand joint chain – L_hand01_JNT > L_hand02_JNT, with their corresponding IK and FK chains – L_armIk01_JNT > …, L_armFk01_JNT > …, etc. These two chains are blended via a few blendColors nodes for translate, rotate and scale into the final chain. The blending is controlled by L_armIkFk_CTL.fkIk. Then we have a simple non-stretchy IK setup, but obviously a stretchy one would work in the same way. Lastly, L_hand01_JNT is point constrained to L_arm03_JNT and we only blend the rotate and scale attributes on it, as otherwise the wrist becomes dislocated during blending, because we are interpolating the translation values linearly.

Now that we know what we have to work with, let us get on with it.

Seamless IK FK Switch when everything shares orientation

So, in this case, all of our controls and joints have the exact same orientation in both IK and FK. What this means is that, essentially, all we need to do to match the kinematics is plug the rotations from one setup into the other. Let’s have a look. The scene file for this one is called ikFkSwitch_sameOrient.ma

IK to FK

This one is always the easier direction, as the FK controls generally just need to get the same rotation values as the IK joints, and that’s it. Now, initially I tried copying the rotation via the rotate and xform commands, but whenever a control was rotated a bit too extremely, these would cause flipping when blending between IK and FK – I assume because these commands have a hard time converting the world space rotation to a relative one, causing differences of 360 degrees. So, even though in full FK and full IK everything looks perfect, in-between the joint rotates 360 degrees. Luckily, maya has provided us with constraints, which have all the math complexity built in. Assuming you have named your joints the same way as me, we use the following code.

mc.delete(mc.orientConstraint("L_armIk01_JNT", "L_armFk01_CTL"))
mc.delete(mc.orientConstraint("L_armIk02_JNT", "L_armFk02_CTL"))
mc.delete(mc.orientConstraint("L_handIk01_JNT", "L_handFk01_CTL"))

mc.setAttr("L_armIkFk_CTL.fkIk", 0)

As I said, this one is fairly trivial. We just orient each of our FK controls to match the rotations of the IK joints. Then in the end we change our blending control to FK to finalize the switch.

FK to IK

Now, this one was a pain the first time I tried to do it, because I had no idea how pole vectors worked at all. As soon as I understood that all we need to know about them is that they need to lie on the same plane as the three joints in the chain, it became easy. So essentially, we need to place the IK control on the FK joint to solve the end position. Then, to get the elbow (or whatever your mid joint represents) to match the FK, we just place the pole vector control at the exact location of the corresponding joint in the FK chain. So, we get something like this.

mc.delete(mc.parentConstraint("L_handFk01_JNT", "L_armIk_CTL"))
mc.xform("L_armPv_CTL", t=mc.xform("L_armFk02_JNT", t=1, q=1, ws=1), ws=1)

mc.setAttr("L_armIkFk_CTL.fkIk", 1)

Now, even though this does the matching job perfectly, it is not great for the animators to have the control snap to the mid joint location, as it might go inside the geometry, which is just an unnecessary pain. What we can do is get the two vectors from arm01 to arm02 and from arm03 to arm02, and use them to offset our pole vector a bit. Here’s how we do that.

arm01Vec = [mc.xform("L_armFk02_JNT", t=1, ws=1, q=1)[i] - mc.xform("L_armFk01_JNT", t=1, ws=1, q=1)[i] for i in range(3)]
arm02Vec = [mc.xform("L_armFk02_JNT", t=1, ws=1, q=1)[i] - mc.xform("L_armFk03_JNT", t=1, ws=1, q=1)[i] for i in range(3)]

mc.xform("L_armPv_CTL", t=[mc.xform("L_armFk02_JNT", t=1, q=1, ws=1)[i] + arm01Vec[i] * .75 + arm02Vec[i] * .75 for i in range(3)], ws=1)

So, since xform returns lists, in order to subtract them to get the vectors we loop through them and subtract the individual elements. If you are new to list comprehensions in Python, have a look at this. Once we have the two vectors, we add 75% of each to the position of the arm02 FK joint and arrive at a position slightly offset from the elbow, but still on the same plane, so the matching is still precise. Our whole FK to IK code would then look like this

mc.delete(mc.parentConstraint("L_handFk01_JNT", "L_armIk_CTL"))

arm01Vec = [mc.xform("L_armFk02_JNT", t=1, ws=1, q=1)[i] - mc.xform("L_armFk01_JNT", t=1, ws=1, q=1)[i] for i in range(3)]
arm02Vec = [mc.xform("L_armFk02_JNT", t=1, ws=1, q=1)[i] - mc.xform("L_armFk03_JNT", t=1, ws=1, q=1)[i] for i in range(3)]

mc.xform("L_armPv_CTL", t=[mc.xform("L_armFk02_JNT", t=1, q=1, ws=1)[i] + arm01Vec[i] * .75 + arm02Vec[i] * .75 for i in range(3)], ws=1)

mc.setAttr("L_armIkFk_CTL.fkIk", 1)
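As a quick sanity check of the pole vector math in plain Python (the joint positions are made-up numbers): the offset position must stay on the plane of the three joints, which is what keeps the match precise.

```python
def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(a[i] * b[i] for i in range(3))

# Hypothetical world positions of the three FK arm joints.
arm01, arm02, arm03 = [0, 0, 0], [3, 0, -1], [6, 0, 0]

# The same two vectors as in the matching code: towards arm02 from both ends.
arm01Vec = sub(arm02, arm01)
arm02Vec = sub(arm02, arm03)

# Offset the pole vector away from the elbow by 75% of both vectors.
pv = [arm02[i] + 0.75 * (arm01Vec[i] + arm02Vec[i]) for i in range(3)]

# The offset is perpendicular to the joint plane's normal, meaning the
# pole vector is still on the plane of the chain.
normal = cross(sub(arm02, arm01), sub(arm03, arm01))
print(dot(sub(pv, arm02), normal))  # 0.0
```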

Seamless IK FK switch when the IK control is oriented in world space

Now, in this case, the orientation of the IK control is not the same as the hand01 joint’s. I think in most cases people go for this kind of setup, as it is much nicer for animators to have the world axes to work with in IK. The scene file for this one is called ikFkSwitch_ikWorld.ma.

The IK to FK switch is exactly the same as the previous one, so we will skip it.

FK to IK

So, in order to get this to work, we need to do the same as in the previous case, but introduce an offset for our IK control. How do we get this offset then? Well, since we can apply transformations only to the controls, we need to calculate what rotation to apply to the control in order to get the desired result. Even though we could calculate and apply the offsets purely with maths, we might run into the same flipping issue I discussed in the previous case. So instead, a much easier, if somewhat dirtier, solution is to create a locator which will act as our dummy object to orient to.

Then, in our case where only the IK control is oriented differently from the joints, what we need to do is create a locator and have it assume the transformation of the IK control. The easiest way is to just parent it underneath the control and zero out the transformations. Then we parent the locator to L_handFk01_JNT, as that’s the one we want to match. Now, wherever that handFk01 joint goes, we have a locator parented underneath it which shares the same orientation as our IK control. Therefore, just using a parentConstraint will give us our matching pose. Assuming the locator is called L_hand01IkOfs_LOC, all we do is this.

mc.delete(mc.parentConstraint("L_hand01IkOfs_LOC", "L_armIk_CTL"))

This will get our wrist to match the pose perfectly. Then we apply the same code as before to get the pole vector to match as well, and set the IK/FK blend attribute to IK.

arm01Vec = [mc.xform("L_armFk02_JNT", t=1, ws=1, q=1)[i] - mc.xform("L_armFk01_JNT", t=1, ws=1, q=1)[i] for i in range(3)]
arm02Vec = [mc.xform("L_armFk02_JNT", t=1, ws=1, q=1)[i] - mc.xform("L_armFk03_JNT", t=1, ws=1, q=1)[i] for i in range(3)]

mc.xform("L_armPv_CTL", t=[mc.xform("L_armFk02_JNT", t=1, q=1, ws=1)[i] + arm01Vec[i] * .75 + arm02Vec[i] * .75 for i in range(3)], ws=1)

mc.setAttr("L_armIkFk_CTL.fkIk", 1)

Seamless IK FK switch when the IK control and joint are both oriented in world space

Now, in this last scenario, we have the handIk01 joint oriented in world space, as well as the control. The reason you would want to do this, again, is to give the animators the easiest way to interact with the hand. In the previous case, the axes of the IK control do not properly align with the joint, which is a bit awkward. A solution is to have the handIk01 joint oriented in the same space as our control, so the rotation is 1 to 1 and a bit more intuitive. The scene for this one is ikFkSwitch_ikJointWorld.ma, and it looks like this.

It is important to note that the IK joint is just rotated to match the orientation of the control, but the jointOrient attributes are still the same as on the FK and blend joints.

Seamless IK FK Switch with IK control and joint oriented in world space
IK FK Switch with IK control and joint oriented in world space

So again, going from IK to FK is the same as before, we are skipping it. Let us have a look at the FK to IK.

FK to IK

This one is very similar to the previous one, where we have an offset transform object to snap to. The difference is that now, instead of having that offset be calculated just from the difference between the IK control and the FK joint, we also need to adjust for the existing rotation of the IK joint. So, we start with our locator the same way as before – parent it to the IK control, zero out the transformations and parent it to the handFk01 joint. The extra step here is to apply the negative rotation of the IK joint to the locator in order to get the needed offset. The calculation looks like this.

ikRot = [-1 * mc.xform("L_handIk01_JNT", ro=1, q=1)[i] for i in range(3)]
mc.xform("L_hand01IkOfs_LOC", ro=ikRot, r=1)

We just take the rotation of the IK joint and multiply it by -1, which we then apply as a relative rotation to the locator.

And then again, as previously we just apply the pole vector calculation and we’re done.


So, as you can see, scripting a seamless IK FK switch is not really that complicated, but if you are trying to figure it out for the first time, without being very familiar with rigging and 3D maths, it might be a bit of a pain. Again, if you want to see the full code, it is in this gist.

Rigging in maya tends to be a very straightforward and no-bullshit process, so getting a grasp of the fundamentals will pretty much be enough to let you build your knowledge further by experimenting with different approaches and techniques. Therefore, in this post I’ll describe the essential building blocks of a rig.

Disclaimer: This post is going to be a continuous work in progress, as I will keep adding rigging concepts as we go.

Table of contents

  1. Nodes
  2. Joints
  3. Skinning
  4. Other deformers
  5. Weight painting


Nodes

Now, even though the dependency graph is not essential knowledge for a beginner rigger who just wants to build rigs, it is absolutely crucial for grasping the fundamentals of how rigging in maya works. As such, it is vital for understanding the following points of this article.

In the broadest sense, the dependency graph is where the scene is described in terms of its building blocks – nodes. A node is essentially similar to the concept of a function or method in traditional programming. If you are not into programming, you can think of a node as a sort of calculator, which takes in some data as an input, modifies it and returns the output data. Nodes are used in a lot of the major packages in the 3D industry – Maya, Houdini, Nuke, Cinema 4D, etc. – so it is a useful concept to understand.

Every object in a maya scene is described by one or more nodes. And when I say object I do not only mean geometries, but also locators, cameras, deformers, modeling history (polyExtrude commands, polyMerge, etc.) and a lot of other stuff we don’t need to worry about. If you are curious about everything that’s going on in a scene, you can uncheck the Show DAG objects only checkbox in the outliner.

A simple example is a piece of geometry. You would expect it to be described by one node containing all the information, but there actually are two. One is the transform node, which contains all the transformation data of the object – translations, rotations and scale. The other is the shape node, which contains all the vertex data needed to build the actual geometry, but we don’t need to worry about that one just yet.

A very cool thing about nodes is that we can actually create our own. The Maya API allows us to write custom nodes in Python and C++.

Further reading: Autodesk User Guide: Node types


Joints

In a non-technical aspect, a joint is incredibly similar to a bone of a skeleton. A number of them are chained together to create limbs, spines, tails, etc., which in turn form exactly that – a skeleton. These joints are then told to influence the body of a character or any other object, and that’s how you get a very basic rig.

In a more technical approach, we can start by saying a joint is a transform node. Therefore it has all the usual attributes – translation, rotation, scale, etc. – but there are also some new ones, like jointOrient and segmentScaleCompensate.

The segmentScaleCompensate attribute essentially prevents the child joints in the hierarchy from inheriting the scale of their parent joints. It is generally something we don’t need to think about at all, unless you are rigging for games. If you are, you will need to turn it off and work around it in your rigs.

The joint orientation on the other hand, is absolutely crucial for rigging. It is such an extensive subject that I am considering writing a whole post just about it. Here is a good post on the topic. Describing it briefly, joint orientation describes the way the joint is going to rotate. That is the reason it is called orientation (I think), because in a sense it shows us which way is forward, which is up and which is to the side.

So to summarize, a joint is a transform node with extra attributes, used to create hierarchies (joint chains) which in turn form a skeleton, used to drive the deformations of a piece of geometry.


Skinning

In non-technical terms, skinning is like creating a sock puppet. You stick your arm (the joint chain) inside the sock (the geometry), and by moving your hand and fingers the sock is deformed as well.

In a more technical manner, skinning is the process of binding a geometry to a joint chain. In maya nodes terms, this action is described by a skinCluster node. Essentially, when skinning a piece of geo, maya calculates all the vertex positions of the mesh relative to the joint they are influenced by and stores that data in the skinCluster. Then when the joint is transformed in any way – moved, rotated or scaled, the vertices of the mesh follow that transformation, by applying their relative positions on top of the transformed positions of the joint.
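A stripped-down illustration of that idea in plain Python (one joint, translation only, made-up numbers – a real skinCluster of course works with full matrices and multiple weighted influences):

```python
joint_pos = [1.0, 0.0, 0.0]
vertex = [1.5, 2.0, 0.0]

# Bind time: store the vertex position relative to the joint.
relative = [vertex[i] - joint_pos[i] for i in range(3)]

# The joint moves; the vertex follows by re-applying its relative position
# on top of the joint's new position.
joint_pos = [4.0, 0.0, 0.0]
deformed = [joint_pos[i] + relative[i] for i in range(3)]
print(deformed)  # [4.5, 2.0, 0.0]
```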

Other deformers

A skinCluster is what maya calls a deformer. That makes quite a lot of sense if you think about it, as a skinCluster does exactly that – it deforms the geometry using the joints. Now, there are other deformers as well, and even though they work in many different ways, essentially they take a version of the geometry as an input (they can take the output of other deformers as input, which allows us to chain deformations), deform it and output the new deformed points into the shape node. Some good simple examples are the nonlinear deformers.

Just like with nodes (deformers are essentially nodes themselves) we can write our own custom ones, which is very useful. For example I have written a collision deformer which makes geometries collide and prevents intersections.

Weight painting

Remember how in the skinning section I said “maya calculates all the vertex positions of the mesh relative to the joint they are influenced by“? Well, how does maya know which joint a vertex is influenced by? You see, each component of the geometry can be associated with one or more joints. On creating the skinCluster, maya will assign some joints to the components, but more often than not you won’t be satisfied with that assignment. Therefore, you need to tell maya which joints should be driving a specific component. The way you do that is through a process called weight painting.

When we say weight, we basically mean amount of influence.

It is called painting because, by doing so, we are creating weight maps for each joint. As with many other aspects of CGI, these maps range from 0 to 1. We can choose to go above and below that range if we want to, but I wouldn’t advise it, just because it’s much harder to keep control of your weights that way. So a value of 0 means that this specific region of the geometry will not be deformed at all by the current joint. Obviously then, 1 means that the region will be deformed entirely by the current joint. Values in between mean that the joint will influence the component partially, but not entirely. It’s important to note that every component needs to have a total weight of 1 (again, if we have chosen to work with normalized weights, which as I said is the suggested approach). Therefore, if you lower the influence of one joint over a specific component, another joint – or a combination of multiple others – will receive the remaining influence.
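The normalization rule can be illustrated with a tiny Python sketch (the joint names are hypothetical and this is not maya code): when a component's weights don't sum to 1, renormalizing scales them so the total is exactly 1 again.

```python
def normalize_weights(weights):
    """Scale a component's joint weights so they sum to 1."""
    total = sum(weights.values())
    if total == 0.0:
        return dict(weights)  # nothing to normalize against
    return {joint: w / total for joint, w in weights.items()}

# A vertex weighted 0.6 to two joints is over-committed (total 1.2);
# normalizing brings each influence down to 0.5 so the total is 1.
w = normalize_weights({"shoulder": 0.6, "elbow": 0.6})
```

This is also why painting one joint's weight down pushes influence onto the others: the remainder has to go somewhere for the total to stay at 1.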

Additionally, all deformers can have their weights painted. SkinClusters seem to be the most complex ones to paint because they have multiple influences, but generally with most other deformers – nonlinears, blendShapes, deltaMush, etc. – there is a single weight per vertex describing how much influence the deformer has over that specific vertex.

Some very useful features of weights are that they can be mirrored and copied. Additionally, they can be stored and loaded at a later point, though maya’s native options for doing that have been quite limited; this article gives some good tips about it.

As with any other creative endeavour, rigging tends to have a steep learning curve. There is just so much to learn. But fear not, the steeper the learning curve the higher the satisfaction of climbing it.

Previously, I have written about how I got to be a rigger, but I do want to go over the thoughts that made me persist with it, because persistence and deliberate practice are the only way to acquire and cultivate a passion.

So, a lot of people getting into rigging for the first time start by trying to rig a character. I understand that; usually there is a need for one, so it only makes sense. It is unrealistic, though, to expect that the rig will be any good. Which is of course fine, considering it is your first time rigging. But making something that sucks is quite discouraging. To make it easier for yourself to persist, I would suggest predisposing yourself to winning. How do you go about doing that?

Well, a common example is making your bed. If you do it first thing in the morning you have started your day with a small win. If you set up more tasks like that, you create a chain of successes, which tricks your brain into expecting more of the same.

So, instead of a character rig, how about starting with the bouncing ball? If the animators start with it, why not us? I did not rig a bouncing ball myself though, so preaching that you should does not sit right with me. What I started with was a super simple anglepoise lamp like this one.

Anglepoise lamp

It is quite similar to Pixar’s Luxo Jr., so I thought it would be good fun, and it really was.

After that I moved on to rigging an actual character. It still sucked, but I definitely felt good about it, because even though I had rigged something before, it was still my first character rig. Additionally, I remember how excited I got during one of the two rigging workshops I had at uni. There was this setup of train wheels.

Train wheels pistons rigging

The lecturer asked us how to go about rigging this so that we only control the rotation of the wheels and the pistons follow properly. If you have some basic rigging knowledge, give it a go. Pistons are always a fun thing to rig.

The reason this got me excited was that it actually opened me up to the problem solving aspect of rigging. You are presented with this situation and you have to find a way to make it work in a specific manner. That’s it. There might be many ways to go about it, but in the end the desired result is only one. This means you cannot bullshit your way out of it, because if it does not work, it is pretty clear that it does not.

So after you have got some simpler tasks under your belt and maybe you have started rigging your first character there will be a lot of roadblocks. And I do mean a lot. Sometimes you might think that your problem is unique and you will not be able to solve it but I assure you, everything you are going to run into in your first rigs most of us have gone through, so you can always look it up or ask. Honestly, we seem to be a friendly and helpful lot.

The way I overcame most of my roadblocks was to open a new file and build an incredibly simplified version of what I was trying to do, however messy it got. This helps isolate the problem and lets you see all sides of it clearly.

At about this point where you have some rigging knowledge I would suggest opening the node editor and starting to get into the vastness of maya’s underlying structure. I do not mean the maya API, although that is certainly something you’d need to look into at a later point, but more to get familiar with the different nodes, how they work and potential applications of them. If you are anything like me you would want to build your own “under the hood”. It is a blessing and a curse really as sometimes I feel too strong of an urge to build something that is already there just so I can have it built myself. It’s crazy and very distracting, but also it gets you asking questions and poking around which proves to be quite useful.

So seeing the actual connections of everything you do is really nice in terms of feeling that you understand what you are doing. For example, if you graph the network of a parent constraint you would see where the connections come from and where they go, which will give you some idea of what happens under the hood. Same goes for everything really – skinClusters, other deformers, math nodes, shading nodes, etc. What this is supposed to do is to make you feel like you have a lot to work with, because you really do. With the available math nodes, you can build most of the algorithms you would need. That being said, the lack of trigonometry nodes is a bit annoying, but you can always write your own nodes when you need them.

The last tip I can think of for keeping your interest in rigging would be to start scripting stuff and not use MEL to do it. There is nothing wrong with using MEL if you really want to, or to support legacy code, but considering that 99% of what is done with MEL can be done with Python and the other 1% is stuff you definitely do not need I consider using MEL a terrible idea for your future. Python is a very versatile programming language and also (personal preference) it is simple, very pretty and quick to prototype with. Honestly, I think The zen of python is awesome to live by.

Honestly, I think everyone should learn to code, not only because in the near future it will expose a lot of possibilities to make nice changes in your day to day life, but also because of the way coding makes you think about stuff. Some of the main benefits I find are:
– The process of writing a script to do something makes you understand how it is done
– The time saved is invaluable
– The satisfaction of seeing your code work is incredible

Then what I would do after having some rigging knowledge, some maya infrastructure knowledge and some scripting knowledge is build a face rig, and take my time with it. If you create a nice face rig, it will be loads of fun to play around with it, which is very satisfying. And I am sure, at that point you will be hooked.

That’s it. I covered most of the things that kept me interested in rigging while I was struggling with it. I am sure that if you have picked up rigging enough to read this post, you already are a curious individual, so sticking with it will not be hard.

How does rigging fit in the pipeline? Where does it sit and how does it communicate with the other aspects?

The pipeline is constantly being upgraded and changed to fit the specific needs of each production company or team. But these changes rarely touch the large-scale structure of the process; they mostly deal with smaller things. Therefore, generally the path an asset takes would be something like:

Pre-production > Modeling > Texturing > Rigging > Animation > Lighting > Rendering > Compositing

Building the shaders kind of goes on throughout the production as an asynchronous process, because it is not blocked by anything from starting tests.

Bear in mind that even though these are sequenced, in a proper production there is – and there should be – a lot of going back and forth to make sure we get the best out of the asset. Obviously, more than one task can be worked on at the same time. For example, there is no reason not to do some lighting while animation is in its blocking stage, or to do the texturing while rigging, etc.

So what about rigging?

Well, it fits nicely between modeling and animation. If you think of this sequence as a node in a graph, or just a function, rigging would take a model as input and spit out something animation needs as output. Riggers tend to always look for clear and absolute solutions which would always be valid, but of course that would be too easy. And also very repetitive and boring for us. Do you want to be a node in a graph?

Now, how does rigging expand beyond modeling and animation though? Well, how do we make sure that the director will be happy with every shot? We can never be sure, but the best way to go about it is to go back to the stages which have already been approved and get as much as we can from them. So we could go back to pre-pro and look at the character sheets. Does the character in the animation move like it has been designed to move? Do the character’s facial expressions match her character sheets? Do you see where I am going with this? We as riggers need to constantly look back at the pre-production and make absolutely sure that we are creating a rig which can fit the purposes of these designs. And if for some reason that is impossible, it is our job to bring it up, so it can be decided whether it makes sense time-wise to go back to pre-pro and fix it, or we have to scrap that particular feature of the character.

Similarly, looking ahead, a rigger should not only be looking at the animator. Way too many people look at lighting as just placing a couple of lights, setting some render preset and pressing a button. Of course, you are in for a big surprise if you try getting a lighting job with those expectations. Lighters tend to work with a lot of caches that may cause issues, and they need to come up with clever techniques to overcome problems. For example, on my film Naughty Princess the lighter asked me to make a small camera rig so we could always keep the character properly in focus. Another one we used was to rivet a locator to the character, so following her is easier. Often, deformation issues come up in the lighting process, and good communication is crucial to solve them quickly. Additionally, in terms of housekeeping, the smaller we can keep the file sizes the better for everyone else, as loading them can become painfully slow.

So there we have it. It would be stupid and unrealistic to think that rigging only takes models and spits out rigs for the animators, without thinking about where this model is coming from and where it’s going. As I wrote in the “Why I Rig” post, one of the reasons is the fact that rigging has such a central position in the pipeline that we have to communicate and make decisions about different aspects throughout the pipeline.

Simply put, rigging is the process of giving a computer generated asset the ability to move. Whenever a person outside our industry asks me what I do, I generally start with this. Then of course, you can go on to imitate puppets and puppeteers with your hands.

Now not so simply put, rigging is the process of building a system to be used by an animator to deform a CG asset in a very specific manner. This deformation takes place on multiple layers, providing all the needed controls for hitting shapes and poses designed in the pre-production stage.

Okay, let’s deconstruct this.

When I say “deform a CG asset” I mean the ability to translate, rotate and scale it, or make it squash, stretch, bend, etc. Generally, the base of a character rig is its bone structure, which allows for moving the limbs and body in a way similar to how we move in the real world. That is why researching and studying anatomy is very important for riggers who want to improve.

The piece about the “very specific manner” is in regards to being able to build different types of FK or IK chains, IK/FK blends, blendshapes, etc. The keyword here is specific, because every aspect of a rig needs to come from a need. Every choice in the process of rigging should always be backed by a good reason for achieving a specific result. Otherwise, we are shooting in the dark trying to hit a moving target. Therefore, it is very important for a production team to be able to communicate exactly what they expect and want to get in the end. Because, if we add a module to a rig “just in case”, we are bloating the rig, and please let’s not bloat our rigs.

The “multiple layers” bit refers to the ability of deforming objects in different, but again very specific, ways and then sequencing them in a chain to produce the final result. The face is the classical example of this, where in most rigs people tend to use a combination of a few local rigs – rigs built to deform only in local space – and then add them via a blendshape to the world space rig.
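That local-to-world layering can be sketched as simple delta addition in plain Python – a toy illustration of the blendshape step, not actual maya code. The local rig's deformation relative to its rest pose is extracted as a delta and added on top of the world-space points:

```python
# Toy sketch of layering a local rig onto a world-space rig:
# final = world_points + (local_deformed - local_base)

def add_local_layer(world_points, local_base, local_deformed):
    """Add a local rig's deltas on top of already-transformed world points."""
    return [
        (wx + (dx - bx), wy + (dy - by), wz + (dz - bz))
        for (wx, wy, wz), (bx, by, bz), (dx, dy, dz)
        in zip(world_points, local_base, local_deformed)
    ]

final = add_local_layer(
    [(5.0, 0.0, 0.0)],   # point already moved by the world-space rig
    [(0.0, 0.0, 0.0)],   # local rig's rest position
    [(0.0, 0.2, 0.0)],   # local rig's deformed position (e.g. part of a smile)
)
```

Because only the delta travels across, the local rig can be built and tweaked at the origin while the character is posed anywhere in the scene.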

Then, “providing all the needed controls” is again crucial for keeping the rig as light and clear as you possibly can. In our day to day lives we are very much used to cluttering everything we interact with – bedrooms, kitchens, your home folder, etc. But when rigging an asset we should be thinking as minimalists. Which is to say, only add controls or functions if they are going to add value to the rig. Also, it is very important to make sure the shapes of the control objects make sense, so the animators know instantly what they are about to pick.

And lastly, the part about “hitting the shapes and poses”. Now, there are a couple of metrics that describe how good or bad a rig is. Ones that I have mentioned above are functionality, performance and clarity. But at the end of the day, rigging is just a part of the animation pipeline. Therefore, as always, we need to have a clear reference point to keep us on the right track. Take a look at modelers and animators: they always have their reference opened up on the side to help them stay consistent. Why would rigging be any different? We should not be thinking that we can build rigs that are absolutely perfect for our purposes without having a clear idea of what those purposes are.

Therefore, making sure that we can hit the shapes and expressions that we have received in the character sheets or animatic – or even our own sketches in a more casual production – is not only making it easier for the animator, it is making it possible to realize the initial concept.