
Rigging clothes on fairly low-res cartoon characters can get very tricky, as with a lot of designs intersections are inevitable. It can get very frustrating for animators to fix issues like that, and the easier we can make it for them the better. Today, I am going to have a quick look at a setup that can help with fixing small intersections, but can also be used to achieve a variety of effects. It is a nice simple tool that Maya provides us – the softMod deformer – but the default interface for interacting with it is not great, so we will have a look at a way to make it work a bit nicer for us.

Maya softMod deformer - demo

Essentially, what we have is a couple of controls, where one defines the origin of the deformation and the other actually deforms the geometry. The nice thing is that by placing the deformer after the skinCluster in the chain, we can have the controls follow any of our rig controls, so we can easily pick them up and deform our geo in world space.

tl;dr: You can connect your own matrices to the softMod deformer’s softModXforms attribute, in order to have your own controls driving the softMod deformation in world space after the skinCluster.

Figuring it out

When I was trying to figure out which matrices I needed to plug into which attributes, I was having a hard time making sense of the available documentation on the subject. I found a few people online making use of the preBindMatrix attribute, but I could not get that to work properly, so naturally, I thought screw it, I am going to write my own softMod out of frustration.

After a couple of minutes of setting up the boilerplate code I was up and running, and since it was a really simple effect that I needed, the code was quite straightforward. I was having issues with the deformation not being accurate in world space, though, so I had to account for that: multiply by the worldInverseMatrix of the origin object, deform by the local matrix of the deforming object, and finally multiply by the worldMatrix of the origin object to bring everything back to world space.
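That chain of multiplications can be sketched with plain 4x4 matrices – an illustrative Python example using nothing but translations (no Maya required), just to show the world-to-local-to-world round trip:

```python
# Illustrative sketch of the pre/deform/post matrix chain described above.
# Row-vector convention, as Maya uses: p' = p * M.

def mat_mult(a, b):
    """Multiply two 4x4 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform_point(p, m):
    """Transform a 3D point by a 4x4 matrix (row-vector convention)."""
    x, y, z = p
    return [x * m[0][i] + y * m[1][i] + z * m[2][i] + m[3][i] for i in range(3)]

def translation(tx, ty, tz):
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [tx, ty, tz, 1]]

# Hypothetical values: the origin control sits at (5, 0, 0) in world space,
# and the deforming control moves 2 units up in the origin's local space.
origin_world = translation(5, 0, 0)
origin_world_inv = translation(-5, 0, 0)  # inverse of a pure translation
deform_local = translation(0, 2, 0)

# world -> origin local -> deform -> back to world
chain = mat_mult(mat_mult(origin_world_inv, deform_local), origin_world)

# A point at the origin control's position simply moves up by 2.
print(transform_point([5, 0, 0], chain))  # [5, 2, 0]
```

With pure translations the round trip collapses to the deformation itself, but once the origin carries rotation, this same chain is what keeps the deformation accurate in world space.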

Having done that, going back to the vanilla Maya softMod made me realize I had seen similarly named matrices in the deformer attributes – namely, the children of the softModXforms compound: preMatrix, weightedMatrix and postMatrix. Connecting the proper matrices to these attributes gave me the result I was looking for.

The reason I am saying this is that I wanted to point out that it is really helpful sometimes to just try and write your own node/deformer/plugin/script in order to understand what Maya is doing and why. I did the exact same thing when trying to figure out how to account for joint orientation in my matrix constraint post.

The actual softMod deformer setup

With that out of the way let us have a look at the graph.

Maya softMod deformer - node graph

So essentially, by making use of the softModXforms we are building exactly what I mentioned in the previous chapter – we account for the world positioning of our deformer controls by bringing the deformation back to local space, deforming our object and then placing it back in its world position.

Of course, these locators are there just so I can have a nice and simple example. In reality, the way this would work is that these locators would probably be replaced by two controls – one controlling the origin of the deformation and the other actually deforming the object. Additionally, exposing the falloffRadius attribute of the softMod deformer somewhere on these controls would be a good idea as well.

A nice benefit of having our own controls driving the softMod is that we can get rid of the softModHandle since it won’t be doing anything, which would result in a cleaner scene.

Using the tool in production

Now, I could imagine a couple of approaches for using this setup. The first one would be to build these into your rigs before passing them to the animators. Depending on the geometry, though, this could easily be overkill if they are not used in every shot. If that is the case, the better approach would be to build some sort of a UI for the animators to create these in their scenes.

Additionally, while looking for info on this setup, I stumbled upon a few people having a riveted object be the origin control, essentially achieving something similar to the infamous tweaker dorito setup.


Even though the softMod is a very simple deformer, in cartoony productions I could imagine it being very handy for fixing intersections and giving the animators control over finer deformations.

During the week, I got a comment on the first post in my Maya matrix nodes series – the matrix constraint one – about using the wtAddMatrix node to achieve the multiple-targets-with-blending-weights functionality similar to constraints. I had stumbled upon the wtAddMatrix node before, but I think it is the fact that Autodesk have made it very fiddly to work with – we need to show all attributes in the Node Editor and we have no access to setting the weight plug – that put me off ever using it. That being said, when RigVader commented on that post I decided I would give it a go. Since it actually works quite nicely, today I am looking at blending matrices in Maya.

Disclaimer: I will be using the matrix constraint setup outlined in the post I mentioned, so it might be worth having a look at that one if you have missed it.

tl;dr: Using the wtAddMatrix we can blend between matrices before we plug the output into a matrix constraint setup to achieve having multiple targets with different weights.

Turns out, the wtAddMatrix is a really handy node. It lets us plug a number of matrices into the .matrixIn plugs of the .wtMatrix array attribute and give each of them a weight in the .weightIn plug. That effectively lets us blend between them.
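Conceptually, the node just computes a weighted sum of its inputs. Here is a rough pure-Python sketch (no Maya needed) of what wtAddMatrix produces – an assumption about its internals, but consistent with how it behaves:

```python
# Weighted sum of 4x4 matrices - roughly what wtAddMatrix computes:
# out = sum(weightIn[i] * matrixIn[i]) over every connected entry.

def wt_add_matrix(matrices, weights):
    """Blend 4x4 matrices (nested lists) by per-matrix weights."""
    out = [[0.0] * 4 for _ in range(4)]
    for m, w in zip(matrices, weights):
        for i in range(4):
            for j in range(4):
                out[i][j] += w * m[i][j]
    return out

identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
moved = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [10, 0, 0, 1]]

# Equal weights land us halfway between the two targets.
half = wt_add_matrix([identity, moved], [0.5, 0.5])
print(half[3][0])  # 5.0
```

Note that nothing in this sum normalizes the weights – weights of [1.0, 1.0] would double every element, which is exactly the unnormalized behaviour mentioned further down.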

Blending matrices for a matrix constraint setup

So, now that we know we can blend matrices, we just need to figure out exactly what we need to blend.

Let us first have a look at the simpler case – not maintaining the offset.

Blending matrices - matrix constraint with no offset

The group1 on the graph is the parent of pCube1 and is used just so we convert the world matrix into a matrix relative to the parent, without using the parentInverseMatrix. The reason for that is that we do not want to create benign cycles, which Raff sometimes talks about on the Cult of Rig streams. Other than that, everything is pretty straightforward.

Bear in mind, the wtAddMatrix node does not normalize the weights, which means that we could have all of the targets fully influencing our object. What is more, you could also push the weights beyond 1 or negate them, which can produce seemingly odd results, but that might be just what you need in some cases.

Maintaining the offset

Often we need to maintain the offset in order to achieve the desired behaviour, so the way we do that is by resorting to the multMatrix node once more. I am not going into detail, as there are already a couple of ways to do it outlined in the previous post, but let us see how it fits in our graph.

Blending matrices - matrix constraint maintaining offset

The two additional multMatrix nodes let us multiply the local offset for the current target by the world matrix of the current target, effectively constraining the object but also maintaining the initial offset.

Now, however clean and simple it may be, the graph does get a bit long. What this means is that it is probably a bit slower to evaluate as well. That is why I thought I would do a bit of a performance test to see if there is still any benefit to using this setup over a parentConstraint.


The way I usually do my tests is to either loop a few hundred times in the scene and build the setup, or build it once, save it in a file, import the file a few hundred times and let it run with some dummy animation. Then I use Maya 2017’s Evaluation Toolkit to run a performance test, which gives us info about the performance in the different evaluation methods – DG, Serial and Parallel. Since the results vary quite a bit, what I usually do is run it three times and take the best ones.

In this case, I built the two setups in separate files, both with 2 target objects and a maintained offset. Then I ran the tests on 200 imported setups.

So here are the results.

Parent constraint
Playback Speeds
    DG  = 89.8204 fps
    EMS = 20.1613 fps
    EMP = 59.2885 fps
Matrix constraint
Playback Speeds
    DG  = 91.4634 fps
    EMS = 24.6305 fps
    EMP = 67.2646 fps

Bear in mind these tests were done on my 5-year-old laptop, so the results you are going to get if you repeat this test are going to be significantly better.

As you can see, even with the extended graph we are still getting about a 7.5 fps increase by using the matrix constraint setup with blending matrices. Considering we have 200 instances in the scene (which is by no means a large number), that means about .0375 fps of increase per setup, which in turn means that for every 26 setups we win a frame.


So, there we have it – an even larger part of the parentConstraint functionality can be implemented by just using matrix nodes. What this means is we can keep our outliner cleaner and get better performance out of our rigs at the same time, which is a total win-win.

Thanks to RigVader for pointing out the wtAddMatrix node as a potential solution, it really works quite nicely!

This post is a part of a three post series, where I implement popular rigging functionalities using just Maya’s native matrix nodes.

Rivets are one of those things that blew my mind the first time I learned of them. Honestly, at the time, the ability to stick an object to the deforming components of a geometry seemed almost magical. The more you learn about how geometries work in Maya, though, the more sense rivets start to make. The stigma around them has always been that they are a bit slow, since they have to wait for the underlying geometry to evaluate before they can evaluate themselves. And even though that is still the case, it seems that since parallel evaluation was introduced the performance has increased significantly.

It is worth trying to simplify and clean rivets up, considering how handy they are for rigging setups like:
– twist distributing ribbons
– bendy/curvy limbs
– sticking objects to geometries after squash and stretch
– sticking controls to geometries
– driving joints sliding on surfaces

and others.

When I refer to the classic rivet or the aimConstraint rivet, it is this one that I am talking about. I have seen it used by many riggers and lots of lighters as well.

The purpose of this approach is to get rid of the aimConstraint that is driving the rotation of the rivet. Additionally, I have seen a pointConstraint used as well, in order to account for the parent inverse matrix, which would also be replaced by this setup. Even though we are stripping constraints, the performance increase is not very large, so the major benefit of the matrix rivet is a cleaner graph.

TL;DR: We are going to plug the information from a pointOnSurfaceInfo node directly into a fourByFourMatrix node, in an attempt to remove constraints from our rigs.

Disclaimer: Bear in mind, I will only be looking at riveting an object to a NURBS surface. Riveting to poly geo would need to be done through the same old loft setup.

Limitations: Since we are extracting our final transform values using a decomposeMatrix node, we do not have the option to use any rotation order other than XYZ, as at the moment the decomposeMatrix node does not support other orders. A way around it, though, is taking the outputQuat attribute and plugging it into a quatToEuler node, which actually supports different rotate orders.

Difference between follicle and aimConstraint rivet

Matrix rivet - follicle and classical rivet differences

Matrix rivet - follicle and classic rivet graph

The locator is riveted using an aimConstraint. You can see there is a small difference in the rotations of the follicle and the locator. Why is that?

The classical rivet setup connects the tangentV and normal attributes of a pointOnSurfaceInfo to the aimConstraint. The third axis is then the cross product of these two. It seems, though, that the follicle is actually using the tangentU vector for its calculations, since we get this difference between the two setups.

Plugging the tangentU into the aimConstraint instead of tangentV results in the same behaviour as a follicle. To be honest, I am not sure which one would be preferable. In the construction of our matrix rivet, though, we have full control over that.

Why not follicles?

As I already said, in parallel, follicles are fast! Honestly, for most of my riveting needs I wouldn’t mind using a follicle. The one aspect of follicles I really dislike, though, is the fact that they operate through a shape node. I understand it was not meant for rigging, and having the objects clearly recognizable both in the outliner and the viewport is important, but in my case it is just adding to the clutter. Ideally, I like avoiding unnecessary DAG nodes, since they only get in the way.

Additionally, have you had a look at the follicle shape node? I mean, there are so many hair related attributes, it is a shame to use it just for parameterU and parameterV.

Therefore, if we could use a non-DAG network of simple nodes to do the same job without any added overhead, why should we clutter our rigs?

Constructing the matrix rivet

So, the way matrices work in Maya is that the first three rows of the matrix describe the X, Y and Z axes and the fourth row is the position. Since this is an oversimplification, I would strongly suggest having a look at some matrix math resources and definitely watching the Cult of Rig streams, if you would like to learn more about matrices.

What this means for us, though, is that if we have two vectors and a position we can always construct a matrix out of them, since the cross product of the two vectors will give us a third one. So here is how our matrix construction looks in the graph.

Matrix rivet - constructing the matrix

So, as you can see, we are utilizing the fourByFourMatrix node to construct a matrix. Additionally, we use the vectorProduct node set to Cross Product to construct our third axis out of the normal and the chosen tangent, in this case tangentV which gives us the same result as using the classic aimConstraint rivet. If we choose to use the tangentU instead, we would get the follicle‘s behaviour. Then, obviously we decompose the matrix and plug it into our riveted transform.
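As a sketch of what ends up in the fourByFourMatrix, here is the construction in plain Python – the vectors are hypothetical stand-ins for the pointOnSurfaceInfo outputs, and the assignment of tangent/normal/cross to the X/Y/Z rows is just one possible choice:

```python
# Building a 4x4 transform from a normal, a tangent and a position, with the
# third axis coming from their cross product - the same job the
# vectorProduct (Cross Product) + fourByFourMatrix combo does in the graph.

def cross(a, b):
    """Cross product of two 3D vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def build_rivet_matrix(normal, tangent, position):
    """Rows: X = tangent, Y = normal, Z = their cross, fourth row = position."""
    third = cross(tangent, normal)
    return [tangent + [0],
            normal + [0],
            third + [0],
            position + [1]]

# Hypothetical pointOnSurfaceInfo outputs (already unit length).
m = build_rivet_matrix(normal=[0, 1, 0], tangent=[1, 0, 0], position=[2, 3, 4])
print(m[2][:3])  # the constructed third axis: [0, 0, 1]
```

Swapping which tangent goes into the first row is all it takes to flip between the aimConstraint-rivet behaviour and the follicle behaviour.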

Optionally, similar to the first post in this series, we can use the multMatrix node to inverse the parent’s transform, if we need to. What I usually do, though, is parent the riveted transforms underneath a transform that has its inheritTransform attribute turned off, so we can plug in the world transforms directly.

It is important to note that in this case we are absolutely sure that the output matrix is orthogonal, since we know that the normal is perpendicular to both tangents. Thus, crossing it with either of the tangents will result in a third perpendicular vector.

Skipping the vector product

Initially, when I thought of building rivets like this, I plugged the normal, tangentU and tangentV directly from the pointOnSurfaceInfo to the fourByFourMatrix. What this means, is that we have a matrix that is not necessarily orthogonal, since the tangents might very well not be perpendicular. This results in a shearing matrix. That being said though, it was still giving me proper results.

Matrix rivet - skipping the vectorProduct

Then, I added it to my modular system to test it on a couple of characters and it kept giving me steadily good results – 1 to 1 with the behaviour of a follicle or aimConstraint rivet, depending on the order I plug the tangents in.

What this means, then, is that the decomposeMatrix node separates all the shearing from the matrix and thus returns the proper rotation, as if the matrix were actually orthogonal.

If that is the case, then we can safely skip the vectorProduct and still have a working rivet, considering we completely disregard the outputShear attribute of the decomposeMatrix.

Since I do not understand how that shearing is being extracted, though, I will be keeping an eye on the behaviour of the rivets in my rigs, to see if there is anything dodgy about it. So far, it has proved to be as stable as anything else.


If you are anything like me, you will really like the simplicity of the graph, as we are literally taking care of the full matrix construction ourselves. What is more, there are no constraints, nor follicle shapes in the outliner, which again, I find much nicer to look at.

This matrix series has been loads of fun for me to write, so I will definitely be trying to come up with other interesting functions we could use matrices for.

This post is a part of a three post series, where I implement popular rigging functionalities using just Maya’s native matrix nodes.

Calculating twist is a popular rigging necessity, as often we would rather smoothly interpolate it along a joint chain, instead of just applying it at the end of it. The classical example is limbs, where we need some twist in the forearm/shin area to support the rotation of the wrist or foot. Some popular implementations utilize ik handles or aim constraints, but I find them a bit of an overkill for the task. Therefore, today we will have a look at creating a matrix twist calculator, that is both clean and quick to evaluate.

Other than matrix nodes I will be using a couple of quaternion ones, but I promise it will be quite simple, as even I myself am not really used to working with them.

tl;dr: We will get the matrix offset between two objects – the relative matrix – then extract the quaternion of that matrix and keep only the X and W components, which, when converted to a Euler angle, give us the twist between the two matrices along the desired axis.

Desired behaviour

Matrix twist calculator - desired behaviour
Please excuse the skinning, I have just done a geodesic voxel bind

As you can see, what we are doing is calculating the twist amount (often also called roll, from the yaw, pitch and roll notation) between two objects. That is, the rotation difference on the axis aiming down the joint chain.


An undesirable effect you can notice is the flip when the angle reaches 180 degrees. Now, as far as I am aware, there is no reasonable solution to this problem that does not involve some sort of caching of the previous rotation. I believe that is what the No flip interpType on constraints does. There was one solution, using an orient constraint between a no-roll joint and the rolling joint and then multiplying the resulting angle by 2, which worked in simple cases, but I found it a bit unintuitive and not always predictable. Additionally, most animators are familiar with the issue and are reasonable about it. In the rare cases where this issue is a pain in your production, you can always add control over the twisting matrices, so the animators can tweak them.

Something else to keep in mind is to always calculate the twist in the first axis of the rotate order, since the other ones might flip at 90 degrees instead of 180. That is why I will be looking at calculating the X twist, as the default rotate order is XYZ.

With that out of the way, let us have a look at the setup.

Matrix twist calculator

I will be looking at the simple case of extracting the twist between two cubes oriented in the same way. Now, you might think that is too simple of an example, but in fact this is exactly what I do in my rigs. I create two locators, oriented with the X axis aligned with the axis I am interested in. Then I parent them to the two objects I want to find the twist between, respectively. This means that finding the twist on that axis of the locators will give me the twist between the two objects.

Matrix twist calculator

Granted, I do not use actual locators or cubes – I just create matrices to represent them, to keep my outliner cleaner. But that is not important at the moment.

The relative matrix

Now, since we are going to be comparing two matrices to get the twist angle between them, we need to start by getting one of them into the relative space of the other. If you have had a look at my Node based matrix constraint post, or you were already familiar with matrices, you would know that we can do that with a simple multiplication of the child matrix by the inverse of the parent matrix. That gives us the matrix of the child object relative to that of the parent.

The reason we need that is that the relative matrix now holds all the differences in the transformations between the two objects, and we are interested in exactly that – the difference on the aim axis.

Here is how that would look in the graph.

Matrix twist calculator - relative matrix
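In plain Python with simple translation matrices (illustrative only – a real parent matrix carrying rotation needs a proper matrix inverse, which Maya’s worldInverseMatrix hands us for free), the relative matrix idea looks like this:

```python
# child relative to parent = childWorld * parentWorldInverse.
# Row-vector convention, matrices as nested lists.

def mat_mult(a, b):
    """Multiply two 4x4 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [tx, ty, tz, 1]]

# Hypothetical world matrices for the two objects.
parent_world = translation(10, 0, 0)
parent_world_inv = translation(-10, 0, 0)  # inverse of a pure translation
child_world = translation(10, 4, 0)

relative = mat_mult(child_world, parent_world_inv)
print(relative[3][:3])  # [0, 4, 0] - only the difference remains
```

Everything the two objects share cancels out, and what is left in the relative matrix is exactly the difference we want to measure the twist from.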

The quaternion

So, if we have the relative matrix, we can proceed to extracting the rotation out of it. The thing with rotations in 3D space is that they seem a bit messy, mainly because we usually think of them in terms of Euler angles, as that is what Maya gives us in the .rotation attributes of transforms. There is a thing called a quaternion, though, which also represents a rotation in 3D space, and dare I say it, is much nicer to work with. Nicer, mainly because we do not care about rotate order when working with quaternions, since they represent just a single rotation. What this gives us is a reliable representation of an angle along just one axis.

In practical terms, this means that taking the X and W components of the quaternion and zeroing out the Y and Z ones will give us the desired rotation only in the X axis.

In Maya terms, we will make use of the decomposeMatrix node to get the quaternion out of a matrix and then use the quatToEuler node to convert that quaternion to a Euler rotation, which will hold the twist between the matrices.
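The extraction itself boils down to very little code. Here it is in plain Python (illustrative – in Maya the quatToEuler node does the final conversion for us): zero out Y and Z, renormalize what is left, and convert the remaining X/W pair to an angle.

```python
import math

def twist_x(qx, qy, qz, qw):
    """Extract the rotation about X from a quaternion by keeping only the
    X and W components (the twist part of a swing-twist decomposition)."""
    length = math.hypot(qx, qw)
    if length < 1e-9:  # a pure 180-degree swing - the twist is undefined
        return 0.0
    return math.degrees(2.0 * math.atan2(qx / length, qw / length))

# Quaternion for a 90-degree rotation about X: (sin(45), 0, 0, cos(45))
half = math.radians(45.0)
print(twist_x(math.sin(half), 0.0, 0.0, math.cos(half)))  # 90.0
```

A rotation purely about Y or Z has qx = 0, so it contributes no twist at all – which is exactly the behaviour we want from the calculator.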

Here is the full graph, where the .outputRotateX of the quatToEuler node is the actual twist value.

Matrix twist calculator - full graph


And that is it! As you can see, it is a stupidly simple procedure, but it has proved to give stable results, which in fact are 100% the same as using an ik handle or an aim constraint, but with little to no overhead, since matrix and quaternion nodes are very computationally efficient.

Stay tuned for part 3 from this matrix series, where I will look at creating a rivet by using just matrix nodes.

This post is a part of a three post series, where I will try to implement popular rigging functionalities using only Maya’s native matrix nodes.

Following the Cult of Rig lately, I realized I have been very wasteful in my rigs in terms of constraints. I have always known that they are slower than direct connections and parenting, but I thought that was the only way to do broken hierarchy rigs. Even though I did matrix math at university, I never used it in Maya, as I weirdly thought the matrix nodes were broken or limited. There was always the option of writing my own nodes, but since I would like to make it as easy as possible for people to use my rigs, I would rather keep everything in vanilla Maya.

Therefore, when Raffaele used the multMatrix and decomposeMatrix nodes to reparent a transform, I was very pleasantly inspired. Since then, I have tried applying the concept to a couple of other rigging functionalities, such as twist calculation and rivets, and it has been giving me steadily good results. So, in this post we will have a look at how we can use the technique he showed in the stream to simulate a parent + scale constraint without the performance overhead of constraints, effectively creating a node based matrix constraint.


There are some limitations to using this approach, though. Some of them are not complex to work around, but the issue is that doing so adds extra nodes to the graph, which in turn leads to performance overhead and clutter. That being said, constraints add to the outliner clutter, so I suppose it might be a matter of preference.


Joint orient

Constraining a joint with jointOrient values will not work, as the jointOrient matrix is applied before the rotation. There is a way to get around this, but it involves creating a number of other nodes, which add some overhead and, for me, make it unreasonable to use the setup instead of an orient constraint.

If you want to see how we go around the jointOrient issue just out of curiosity, have a look at the joint orient section.

Weights and multiple targets

Weights and multiple targets are also not entirely suitable for this approach. Again, it is definitely not impossible, since we can always blend the output values of the matrix decomposition, but that would involve an additional blendColors node for each of the transform attributes we need – translate, rotate and scale. And similarly to the previous limitation, that means extra overhead and more node graph clutter. If there were an easy way to blend matrices with Maya’s native nodes, that would be great.

Rotate order

Weirdly, even though the decomposeMatrix node has a rotateOrder attribute, it does not seem to do anything, so this method will work only with the xyz rotate order. Last week I received an email from the maya_he3d mailing list about that issue, and it seems like it has been flagged to Autodesk for fixing, which is great.


The construction of such a node based matrix constraint is fairly simple, both in terms of nodes and the math. We will be constructing the graph as shown in the Cult of Rig stream, so feel free to have a look at it for a more visual approach. The only addition I will make is supporting a maintainOffset functionality. Also, Raffaele talks a lot about the math in his other videos as well, so have a look at them, too.

Node based matrix constraint

All the math is happening inside the multMatrix node. Essentially, we are taking the worldMatrix of the target object and converting it to relative space by multiplying by the parentInverseMatrix of the constrained object. The decomposeMatrix after that is there to break the matrix into attributes which we can actually connect to a transform – translate, rotate, scale and shear. It would be great if we could connect directly to an input matrix attribute, but that would probably create its own set of problems.

That’s the basic node based matrix constraint. How about maintaining the offset, though?

Maintain offset

In order to maintain the offset, we just need to calculate it first and then put it in the multMatrix node before the other two matrices.

Node based matrix constraint - maintain offset

Calculating offset

The way we calculate the local matrix offset is by multiplying the worldMatrix of the object by the worldInverseMatrix of the parent (the object we are constraining relative to). The result is the local matrix offset.

Using the multMatrix node

It is entirely possible to do this using another multMatrix node, doing a getAttr on its output and setting the result in the main multMatrix by doing a setAttr with the type flag set to "matrix". The local multMatrix is then free to be deleted. The reason we get and set the attribute, instead of connecting it, is that connecting it would create a cycle.

Node based matrix constraint - local matrix offset

Using the Maya API

What I prefer doing, though, is getting the local offset via the API, as it does not involve creating nodes and then deleting them, which is much nicer when you need to code it. Let’s have a look.

import maya.OpenMaya as om

def getDagPath(node=None):
    # Add the node to a selection list, so we can retrieve its MDagPath
    sel = om.MSelectionList()
    sel.add(node)
    d = om.MDagPath()
    sel.getDagPath(0, d)
    return d

def getLocalOffset(parent, child):
    parentWorldMatrix = getDagPath(parent).inclusiveMatrix()
    childWorldMatrix = getDagPath(child).inclusiveMatrix()

    return childWorldMatrix * parentWorldMatrix.inverse()

The getDagPath function is just there to give us an MDagPath instance for the passed object. Then, inside getLocalOffset we get the inclusiveMatrix of each object, which is the full world matrix, equivalent to the worldMatrix attribute. In the end we return the local offset as an MMatrix instance.

Then, all we need to do is set the multMatrix.matrixIn[0] attribute to our local offset matrix. The way we do that is by using the MMatrix‘s () operator, which returns the element of the matrix at the specified row and column index. So, we can write it like this.

localOffset = getLocalOffset(parent, child)
mc.setAttr("multMatrix1.matrixIn[0]", [localOffset(i, j) for i in range(4) for j in range(4)], type="matrix")

Essentially, we are calculating the difference between the parent and child objects and we are applying it before the other two matrices in the multMatrix node in order to implement the maintainOffset functionality in our own node based matrix constraint.

Joint orient

Lastly, let us have a look at how we can go around the joint orientation issue I mentioned in the Limitations section.

What we need to do is account for the jointOrient attribute on joints. The difficulty comes from the fact that the jointOrient is a separate matrix that is applied after the rotation matrix. That means that, at the end of our matrix chain, we need to rotate by the inverse of the jointOrient. I tried doing it a couple of times via matrices, but I could not get it to work. Then I resorted to writing a node, to test how I would do it from within. It is really simple to do via the API, as all we need to do is use the rotateBy function of the MTransformationMatrix class with the inverse of the jointOrient attribute taken as an MQuaternion.

Then, I thought that this should not be too hard to implement in vanilla Maya either, since there are quaternion nodes as well. And yes, there is a way, but honestly, I do not think that graph looks nice at all. Have a look.

Node based matrix constraint - joint orient

As you can see, what we do is create a quaternion from the joint orientation, invert it, and apply it to the calculated output matrix of the multMatrix. The way we apply it is by doing a quaternion product. All we do after that is convert it to Euler and connect it to the rotation of the joint. Bear in mind, the quatToEuler node supports rotate orders, so it is quite useful.
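The quaternion product part of the graph can be sketched in plain Python (illustrative – the quatProd and quatInvert nodes do this same job, and the 30 degree orient is just a hypothetical value):

```python
import math

def quat_mult(a, b):
    """Hamilton product of two quaternions stored as (x, y, z, w)."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw,
            aw * bw - ax * bx - ay * by - az * bz)

def quat_invert(q):
    """The inverse of a unit quaternion is its conjugate."""
    x, y, z, w = q
    return (-x, -y, -z, w)

# A hypothetical jointOrient of 30 degrees about Y, as a quaternion.
half = math.radians(15.0)
orient = (0.0, math.sin(half), 0.0, math.cos(half))

# An output rotation equal to the orient, multiplied by the orient's
# inverse, leaves nothing for the joint's rotate channels - as expected.
result = quat_mult(orient, quat_invert(orient))
print(result[3])  # ~1.0, i.e. the identity quaternion
```

This mirrors what the graph does: the multiplication by the inverted orient strips the jointOrient's contribution out of the final rotation before it reaches the joint.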

Of course, you can still use the maintainOffset functionality with this method. As I said though, comparing this to just an orient constraint, the orient constraint was performing faster every time, so I see no reason for doing this other than keeping the outliner cleaner.

Additionally, I am assuming that there is probably an easier way of doing this, but I could not find it. If you have something in mind, give me a shout.


Using this node based constraint I was able to remove parent, point and orient constraints from my body rig, making it perform much faster than before; the outliner is also much nicer to look at. Stay tuned for parts 2 and 3 of this matrix series, where I will look at creating a twist calculator and a rivet using just matrix nodes.

The classical rivet was a really popular rigging thing a few years ago (and long before that, it seems). I am by no means a seasoned rigger, but whenever I would look for facial rigging techniques the rivet would keep coming up. What is more, rarely if ever did people suggest using a follicle to achieve the result, generally because the classical rivet evaluates faster. So, I thought I’d do a Maya performance test to compare them.

I will be looking into the performance of a follicle and a classical rivet, both on a NURBS sphere and on a poly sphere. NURBS because I tend to use a lot of ribbons, and poly because attaching objects to meshes is a popular use case.

I will be using Maya 2017’s Evaluation Toolkit to run the performance test, as it gives nice output for each evaluation method, even though I cannot imagine using anything but parallel.

The way the tests are going to work is, I will create two files, each containing the same geometry with 10 rivets. In one file I will use follicles and in the other the classical setup. The deformation on the geometry will just be keyed vertices and it will be identical for each setup, so we can be sure that the only difference between the two files is the riveting setup.

Then, the test will be done in a new scene where I will reference the file to be tested 100 times. For each setup I will run the evaluation manager's performance test, take the results and compare them.

Okay, let us have a look then.


Classical rivet setup

So, the way this one works is: I loop from 1 to 10 and create a pointOnSurfaceInfo node with parameterU set to iterator * .1 and parameterV set to .5. Then, I plug the output position directly into a locator's translate attribute. Additionally, the output position, the normal vector and a tangent vector go into an aimConstraint, which constrains the rotation of the locator.
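As a sketch, that loop might look something like the following in maya.cmds. The wiring follows the well-known classic rivet approach (pointOnSurfaceInfo feeding an aimConstraint); the import is guarded so the parameter logic can be inspected outside Maya, and `surface` is assumed to be the NURBS shape node's name – treat this as illustrative rather than production code:

```python
try:
    from maya import cmds  # only available inside a Maya session
except ImportError:
    cmds = None

def rivet_params(count=10):
    """The UV parameters used for each rivet: U spread along the surface, V centred."""
    return [(i * 0.1, 0.5) for i in range(1, count + 1)]

def build_classic_rivets(surface, count=10):
    """Create `count` classical rivets on the NURBS shape `surface`."""
    locators = []
    for u, v in rivet_params(count):
        posi = cmds.createNode('pointOnSurfaceInfo')
        cmds.connectAttr(surface + '.worldSpace[0]', posi + '.inputSurface')
        cmds.setAttr(posi + '.parameterU', u)
        cmds.setAttr(posi + '.parameterV', v)

        loc = cmds.spaceLocator()[0]
        cmds.connectAttr(posi + '.position', loc + '.translate')

        # the aim constraint builds the rotation from the normal and a tangent
        aim = cmds.createNode('aimConstraint', parent=loc)
        cmds.connectAttr(posi + '.normal', aim + '.target[0].targetTranslate')
        cmds.connectAttr(posi + '.tangentV', aim + '.worldUpVector')
        cmds.connectAttr(aim + '.constraintRotate', loc + '.rotate')
        locators.append(loc)
    return locators
```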

Follicle setup

This one is fairly straightforward – I just created 10 follicles with parameterU set to iterator * .1 and parameterV to .5.


Bear in mind, EMS refers to serial evaluation and EMP is parallel.

NURBS surface
Classical Rivet
Playback Speeds
    DG  = 13.1694 fps
    EMS = 11.1359 fps
    EMP = 20.7469 fps
Follicle
Playback Speeds
    DG  = 11.3208 fps
    EMS = 12.6263 fps
    EMP = 27.8293 fps

Even though I expected the follicle to be faster, I was surprised by how much. It is important to note that we have 10 * 100 = 1000 rivets in the scene, which is obviously a big number. Therefore, in a more realistic example the difference is going to be much smaller, but still, around 7fps is quite a bit.

What is also quite interesting is that in DG the follicle is slower than the classical rivet. So, the old notion that the classical rivet is faster seems to have been deserved, but parallel evaluation changes everything.


Classical rivet setup

So, when it comes to polys the classical rivet gets a bit more complicated, which I would imagine results in a larger slowdown as well. The way this setup works is: we grab 10 pairs of edges, which in turn produce 10 surfaces through loft nodes. Because history is maintained, the NURBS surfaces will follow the poly geometry, so we can perform the same rivet setup as before on them.

Follicle setup

On a mesh with proper UVs the follicles are again trivial to set up. We just loop 10 times and create a follicle with the appropriate U and V parameters.

Polygon geometry
Follicle
Playback Speeds
    DG  = 1.7313 fps
    EMS = 3.32005 fps
    EMP = 9.79112 fps
Classical rivet
Playback Speeds
    DG  = 1.05775 fps
    EMS = 1.52022 fps
    EMP = 3.31053 fps

As expected, follicles are again quite a bit faster. I am saying as expected because not only do we have the same riveting setup as in the NURBS case, but there are also the edge curves and the loft, which add to the slowdown. I am assuming that is why the classical rivet is slower even in DG.


So, the conclusion is pretty clear – follicle rivets are much faster than classical rivets in the latest maya versions which include the parallel evaluation method.

Rigging in maya tends to be a very straightforward, no-bullshit process, so getting a grasp of the fundamentals will pretty much be enough to let you build your knowledge further by experimenting with different approaches and techniques. Therefore, in this post I will describe all the essential building blocks of a rig.

Disclaimer: This post is going to be a continuous work in progress, as I will keep adding rigging concepts as we go.

Table of contents

  1. Nodes
  2. Joints
  3. Skinning
  4. Other deformers
  5. Weight painting


Nodes

Now, even though the dependency graph is not essential knowledge for a beginner rigger who just wants to build rigs, it is absolutely crucial for grasping the fundamentals of how rigging in maya works. As such, it is vital for understanding the following points of this article.

In the broadest sense, the dependency graph is the place where the scene is described in terms of its building blocks – nodes. A node is essentially similar to the concept of a function or method in traditional programming. If you are not into programming, you can think of a node as a sort of calculator, which takes in some data in the form of an input, modifies it and returns the output data. Nodes are used in a lot of the major packages in the 3D industry – Maya, Houdini, Nuke, Cinema 4D, etc. – so it is a useful concept to understand.
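The calculator analogy can be made literal with a toy illustration in plain Python (not Maya code – the function names here are just stand-ins for Maya's simple math nodes): each "node" is a function from inputs to outputs, and connecting one node's output to another node's input is just function composition.

```python
def mult_double_linear(input1, input2):
    """Stand-in for a multiply node: two inputs, one output."""
    return input1 * input2

def add_double_linear(input1, input2):
    """Stand-in for an add node: two inputs, one output."""
    return input1 + input2

# "Connecting" the multiply node's output into the add node's input:
output = add_double_linear(mult_double_linear(2.0, 3.0), 4.0)  # 10.0
```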

Every object in a maya scene is described by one or more nodes. And when I say object I do not only mean geometries, but also locators, cameras, deformers, modeling history (polyExtrude commands, polyMerge, etc.) and a lot of other stuff we don’t need to worry about. If you are curious about everything that’s going on in a scene, you can uncheck the Show DAG objects only checkbox in the outliner.

A simple example is a piece of geometry. You would expect it to be described by one node containing all the information, but there are actually two. One is the transform node, which contains all the transformation data of the object – translations, rotations and scale. The other is the shape node, which contains all the component data (vertices, control points) used to build the actual geometry, but we do not need to worry about that one just yet.

A very cool thing about nodes is that we can actually create our own. The Maya API allows us to write custom nodes in Python and C++.

Further reading: Autodesk User Guide: Node types


Joints

In a non-technical aspect, a joint is incredibly similar to a bone of a skeleton. A number of them are chained together in order to create limbs, spines, tails, etc., which together form exactly that – a skeleton. These joints are then told to influence the body of a character or any other object, and that is how you get a very basic rig.

To describe a joint in a more technical manner, we can start off by saying it is a transform node. Therefore it has all the usual attributes – translation, rotation, scale, etc. – but there are also some new ones like jointOrient and segmentScaleCompensate.

The segmentScaleCompensate attribute essentially prevents the child joints in the hierarchy from inheriting the scale of their parent joints. It is generally something we do not need to think about at all if we are not rigging for games. If you are, you will need to turn it off and work around that in your rigs.

The joint orientation on the other hand, is absolutely crucial for rigging. It is such an extensive subject that I am considering writing a whole post just about it. Here is a good post on the topic. Describing it briefly, joint orientation describes the way the joint is going to rotate. That is the reason it is called orientation (I think), because in a sense it shows us which way is forward, which is up and which is to the side.

So to summarize, a joint is a transform node with extra attributes, used to create hierarchies (joint chains) which in turn form a skeleton, used to drive the deformations of a piece of geometry.


Skinning

In non-technical terms skinning is like creating a sock puppet. You stick your arm (the joint chain) inside the sock (the geometry) and by moving your hand and fingers the sock is being deformed as well.

In a more technical manner, skinning is the process of binding a geometry to a joint chain. In maya node terms, this action is described by a skinCluster node. Essentially, when skinning a piece of geo, maya calculates all the vertex positions of the mesh relative to the joints they are influenced by and stores that data in the skinCluster. Then, when a joint is transformed in any way – moved, rotated or scaled – the vertices of the mesh follow that transformation, by applying their relative positions on top of the transformed position of the joint.
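The core of that idea can be shown with a heavily simplified toy example – a single joint, translation only, no weights (a real skinCluster uses full matrices and blended influences, so this is just the principle):

```python
def bind(vertices, joint_pos):
    """Record each vertex position relative to the joint (the 'bind pose')."""
    jx, jy, jz = joint_pos
    return [(x - jx, y - jy, z - jz) for x, y, z in vertices]

def deform(offsets, joint_pos):
    """Re-apply the stored offsets on top of the joint's new position."""
    jx, jy, jz = joint_pos
    return [(x + jx, y + jy, z + jz) for x, y, z in offsets]

# bind two vertices to a joint sitting at x = 1
offsets = bind([(1.0, 2.0, 0.0), (1.0, 3.0, 0.0)], (1.0, 0.0, 0.0))
# move the joint +3 in X; the vertices follow, keeping their relative positions
moved = deform(offsets, (4.0, 0.0, 0.0))  # [(4.0, 2.0, 0.0), (4.0, 3.0, 0.0)]
```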

Other deformers

A skinCluster is what maya calls a deformer. That makes quite a lot of sense if you think about it, as a skinCluster does exactly that – it deforms the geometry using the joints. Now, there are other deformers as well, and even though they work in many different ways, essentially they all take a version of the geometry as an input (they can take the output of other deformers as input, which allows us to chain deformations), deform it and output the new deformed points into the shape node. Some good simple examples are the nonlinear deformers.
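The chaining idea can be sketched as plain functions over point lists (illustrative only – real deformers are nodes with many more inputs): each one takes the previous deformer's output points and returns new points, and the final output is what the shape node draws.

```python
def translate_y(points, amount):
    """Toy deformer: push every point up by `amount`."""
    return [(x, y + amount, z) for x, y, z in points]

def scale_uniform(points, factor):
    """Toy deformer: scale every point about the origin."""
    return [(x * factor, y * factor, z * factor) for x, y, z in points]

points = [(1.0, 1.0, 0.0)]
# chain: scale first, then translate - the order matters, just like in Maya
deformed = translate_y(scale_uniform(points, 2.0), 1.0)  # [(2.0, 3.0, 0.0)]
```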

Just like with nodes (deformers are essentially nodes themselves) we can write our own custom ones, which is very useful. For example I have written a collision deformer which makes geometries collide and prevents intersections.

Weight painting

Remember how in the skinning section I said “maya calculates all the vertex positions of the mesh relative to the joint they are influenced by”? Well, how does maya know which joint a vertex is influenced by? You see, each component of the geometry can be associated with one or more joints. On creating the skinCluster maya will assign some joints to the components, but more often than not you will not be satisfied with that assignment. Therefore, you need to tell maya which joints should be driving a specific component. The way you do that is through a process called weight painting.

When we say weight, we basically mean amount of influence.

It is called painting because, by doing so, we are creating weight maps for each joint. As with many other aspects of CGI, these maps range from 0 to 1. We can choose to go above and below that range, but I would not advise it, as it makes your weights much harder to keep under control. A value of 0 means that a specific region of the geometry will not be deformed at all by the current joint, while 1 means the region will be deformed entirely by it. Values in between mean that the joint influences the component partially. It is important to note that every component needs to have a total weight of 1 (again, assuming we have chosen to work with normalized weights, which is the suggested approach), so if you lower the influence of one joint over a specific component, another joint – or a combination of several – will receive the remaining influence.
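Here is that redistribution in miniature – my own toy helper, not Maya's actual algorithm (Maya decides how to share the remainder based on its normalization settings and locked influences):

```python
def set_weight(weights, joint, value):
    """Set `joint` to `value` and renormalize the others so the sum stays 1.0."""
    others = [j for j in weights if j != joint]
    remaining = sum(weights[j] for j in others)
    new = dict(weights)
    new[joint] = value
    for j in others:
        # each other joint picks up a share proportional to its current weight
        share = weights[j] / remaining if remaining else 1.0 / len(others)
        new[j] = (1.0 - value) * share
    return new

# lowering the hip's influence on a vertex hands the difference to the knee
w = set_weight({'hip': 0.6, 'knee': 0.4}, 'hip', 0.3)
# w is now approximately {'hip': 0.3, 'knee': 0.7}, still summing to 1.0
```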

Additionally, all deformers can have their weights painted. SkinClusters seem to be the most complex ones to paint because they have multiple influences, but generally with most other deformers – nonlinears, blendShapes, deltaMush, etc. – there is a single weight per vertex describing how much influence the deformer has over that specific vertex.

Some very useful features of weights are that they can be mirrored and copied. Additionally, they can be stored and loaded at a later point; maya's native options for doing that have been very limited, but this article gives some good tips about it.