
During the week, I got a comment on the first post in my Maya matrix nodes series – the matrix constraint one – about using the wtAddMatrix node to achieve multiple targets with blending weights, similar to constraints. I had stumbled upon the wtAddMatrix node before, but I think it is the fact that Autodesk have made it very fiddly to work with – we need to show all attributes in the node editor and we have no direct access to setting the weight plug – that put me off ever using it. That being said, when RigVader commented on that post I decided to give it a go. Since it actually works quite nicely, today I am looking at blending matrices in Maya.

Disclaimer: I will be using the matrix constraint setup outlined in the post I mentioned, so it might be worth having a look at that one if you have missed it.

tl;dr: Using the wtAddMatrix node we can blend between matrices before plugging the output into a matrix constraint setup, giving us multiple targets with different weights.

Turns out, the wtAddMatrix is a really handy node. It lets us plug a number of matrices into the .matrixIn plugs of the .wtMatrix array attribute and give each of them a weight via the .weightIn plug. That effectively lets us blend between them.
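Here is a minimal sketch of that, assuming target1 and target2 are hypothetical driver transforms:

import maya.cmds as mc

# a minimal sketch, assuming target1 and target2 are our hypothetical driver transforms
wtAdd = mc.createNode("wtAddMatrix")
for i, target in enumerate(["target1", "target2"]):
    mc.connectAttr(target + ".worldMatrix[0]", "{0}.wtMatrix[{1}].matrixIn".format(wtAdd, i))
    mc.setAttr("{0}.wtMatrix[{1}].weightIn".format(wtAdd, i), 0.5)
# the blended result comes out of the node's matrixSum attribute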

Blending matrices for a matrix constraint setup

So, now that we know we can blend matrices, we just need to figure out exactly what we need to blend.

Let us first have a look at the simpler case – not maintaining the offset.

Blending matrices - matrix constraint with no offset

The group1 on the graph is the parent of pCube1 and is used just so we can convert the world matrix into a matrix relative to the parent, without using the parentInverseMatrix. The reason for that is that we do not want to create benign cycles, which Raff sometimes talks about on the Cult of Rig streams. Other than that, everything seems to be pretty straightforward.
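Continuing from the sketch above, wiring the blended matrix into the constraint could look something like this (a rough sketch, assuming the wtAddMatrix1 node already has the targets connected, group1 is the parent and pCube1 is the constrained object):

import maya.cmds as mc

# a rough sketch, assuming wtAddMatrix1 already has the targets connected,
# and group1 is the parent of the constrained pCube1
mult = mc.createNode("multMatrix")
decompose = mc.createNode("decomposeMatrix")

# bring the blended world matrix into the parent's space, using group1's
# worldInverseMatrix instead of pCube1's parentInverseMatrix to avoid the benign cycle
mc.connectAttr("wtAddMatrix1.matrixSum", mult + ".matrixIn[0]")
mc.connectAttr("group1.worldInverseMatrix[0]", mult + ".matrixIn[1]")

mc.connectAttr(mult + ".matrixSum", decompose + ".inputMatrix")
mc.connectAttr(decompose + ".outputTranslate", "pCube1.translate")
mc.connectAttr(decompose + ".outputRotate", "pCube1.rotate")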

Bear in mind, the wtAddMatrix node does not normalize the weights, which means that all of the targets could fully influence our object at the same time. What is more, you could also push the weights beyond 1 or negate them, which can produce odd-looking results, but that might just be what you need in some cases.

Maintaining the offset

Often we need to maintain the offset in order to achieve the desired behaviour, and the way we do that is by resorting to the multMatrix node once more. I am not going into detail, as there are already a couple of ways to do that outlined in the previous post, but let us see how it fits in our graph.

Blending matrices - matrix constraint maintaining offset

The two additional multMatrix nodes let us multiply each target's local offset by that target's world matrix, effectively constraining the object while also maintaining the initial offset.
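Sketched out per target, and assuming the getLocalOffset helper from the matrix constraint post is available, that could look like this:

import maya.cmds as mc

# a rough sketch, assuming getLocalOffset from the matrix constraint post and a
# wtAddMatrix1 that currently has target1 and target2 connected directly
for i, target in enumerate(["target1", "target2"]):
    offsetMult = mc.createNode("multMatrix")
    offset = getLocalOffset(target, "pCube1")  # pCube1 relative to the current target
    mc.setAttr(offsetMult + ".matrixIn[0]",
               [offset(r, c) for r in range(4) for c in range(4)], type="matrix")
    mc.connectAttr(target + ".worldMatrix[0]", offsetMult + ".matrixIn[1]")
    mc.connectAttr(offsetMult + ".matrixSum",
                   "wtAddMatrix1.wtMatrix[{0}].matrixIn".format(i), force=True)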

Now, however clean and simple it may be, the graph is getting a bit long. What this means is that it is probably getting a bit slower to evaluate as well. That is why I thought I would do a bit of a performance test, to see if there is still any benefit to using this setup over a parentConstraint.

Performance

The way I usually do my tests is to either loop a few hundred times in the scene and build the setup, or build it once, save it in a file and then import that file a few hundred times and let it run with some dummy animation. Then I use Maya 2017's Evaluation Toolkit to run a performance test, which gives us info about the performance in the different evaluation methods – DG, Serial and Parallel. Since the results vary quite a bit, what I usually do is run it three times and take the best ones.

In this case, I built the two setups in separate files, both with 2 target objects and with maintained offset. Then I ran the tests on 200 imported setups.

So here are the results.

Parent constraint
Playback Speeds
===============
    DG  = 89.8204 fps
    EMS = 20.1613 fps
    EMP = 59.2885 fps
Matrix constraint
Playback Speeds
===============
    DG  = 91.4634 fps
    EMS = 24.6305 fps
    EMP = 67.2646 fps

Bear in mind these tests were done on my 5-year-old laptop, so the results you get if you repeat this test are going to be significantly better.

As you can see, even with the extended graph we are still getting about a 7.5 fps increase by using the matrix constraint setup with blending matrices. Considering we have 200 instances in the scene (which is by no means a large number), that means we gain about .0375 fps per setup, which in turn means that roughly every 26 setups win us an extra frame per second.

Conclusion

So, there we have it – an even larger part of the parentConstraint functionality can be implemented by just using matrix nodes. What this means is we can keep our outliner cleaner and get better performance out of our rigs at the same time, which is a total win-win.

Thanks to RigVader for pointing out the wtAddMatrix node as a potential solution, it really works quite nicely!

This post is a part of a three-post series, where I implement popular rigging functionalities using just Maya's native matrix nodes.

Rivets are one of those things that blew my mind the first time I learned of them. Honestly, at the time, the ability to stick an object to the deforming components of a geometry seemed almost magical. The more you learn about how geometries work in Maya, though, the more sense rivets start to make. The stigma around them has always been that they are a bit slow, since they have to wait for the underlying geometry to evaluate and only then can they evaluate as well. And even though that is still the case, it seems that since parallel evaluation was introduced the performance has increased significantly.

It is worth trying to simplify and clean rivets up, considering how handy they are for rigging setups like:
– twist distributing ribbons
– bendy/curvy limbs
– sticking objects to geometries after squash and stretch
– sticking controls to geometries
– driving joints sliding on surfaces

and others.

When I refer to the classic rivet or the aimConstraint rivet, it is this one that I am talking about. I have seen it used by many riggers and lots of lighters as well.

The purpose of this approach is to get rid of the aimConstraint that is driving the rotation of the rivet. Additionally, I have seen a pointConstraint used as well, in order to account for the parent inverse matrix, which would also be replaced by this setup. Even though we are stripping constraints, the performance increase is not very large, so the major benefit of the matrix rivet is a cleaner graph.

TL;DR: We are going to plug the information from a pointOnSurfaceInfo node directly into a fourByFourMatrix node, in an attempt to remove constraints from our rigs.

Disclaimer: Bear in mind, I will be only looking at riveting an object to a NURBS surface. Riveting to poly geo would need to be done through the same old loft setup.

Limitations: Since we are extracting our final transform values using a decomposeMatrix node, we do not have the option to use any rotation order other than XYZ, as at the moment the decomposeMatrix node does not support other orders. A way around it, though, is taking the outputQuat attribute and plugging it into a quatToEuler node, which actually supports different rotate orders.
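As a rough sketch of that workaround, assuming a decomposeMatrix1 and a riveted rivet1 transform already exist (the quatToEuler node comes from the quatNodes plugin, which is loaded by default):

import maya.cmds as mc

# a rough sketch of the rotate order workaround, assuming decomposeMatrix1 and rivet1 exist
quatToEuler = mc.createNode("quatToEuler")
mc.connectAttr("decomposeMatrix1.outputQuat", quatToEuler + ".inputQuat")
mc.setAttr(quatToEuler + ".inputRotateOrder", 3)  # 3 = xzy, for example
mc.connectAttr(quatToEuler + ".outputRotate", "rivet1.rotate", force=True)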

Difference between follicle and aimConstraint rivet

Matrix rivet - follicle and classical rivet differences

Matrix rivet - follicle and classic rivet graph

The locator is riveted using an aimConstraint. You can see there is a small difference in the rotations of the follicle and the locator. Why is that?

The classic rivet setup connects the tangentV and normal attributes of a pointOnSurfaceInfo to the aimConstraint. The third axis is then the cross product of these two. But it seems like the follicle is actually using the tangentU vector for its calculations, which is why we get this difference between the two setups.

Choosing to plug the tangentU into the aimConstraint, instead of tangentV, results in the same behaviour as a follicle. To be honest, I am not sure which one would be preferable. In the construction of our matrix rivet, though, we have full control over that.

Why not follicles?

As I already said, in parallel, follicles are fast! Honestly, for most of my riveting needs I wouldn't mind using a follicle. The one aspect of follicles I really dislike, though, is the fact that they operate through a shape node. I understand it was not meant for rigging, and having the objects clearly recognizable both in the outliner and the viewport is important, but in my case it just adds to the clutter. Ideally, I like avoiding unnecessary DAG nodes, since they only get in the way.

Additionally, have you had a look at the follicle shape node? I mean, there are so many hair related attributes, it is a shame to use it just for parameterU and parameterV.

Therefore, if we could use a non-DAG network of simple nodes to do the same job without any added overhead, why should we clutter our rigs?

Constructing the matrix rivet

So, the way matrices work in Maya is that the first three rows of the matrix describe the X, Y and Z axes and the fourth row is the position. Since this is an oversimplification, I would strongly suggest having a look at some matrix math resources and definitely watching the Cult of Rig streams, if you would like to learn more about matrices.
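As a rough illustration (using Maya's row vector convention), a transform matrix looks like this, where the X, Y and Z rows are the axes and T is the position:

Xx Xy Xz 0
Yx Yy Yz 0
Zx Zy Zz 0
Tx Ty Tz 1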

What this means for us, though, is that if we have two vectors and a position we can always construct a matrix out of them, since the cross product of the two vectors will give us a third one. So here is what our matrix construction looks like in the graph.

Matrix rivet - constructing the matrix

So, as you can see, we are utilizing the fourByFourMatrix node to construct a matrix. Additionally, we use a vectorProduct node set to Cross Product to construct our third axis out of the normal and the chosen tangent – in this case tangentV, which gives us the same result as the classic aimConstraint rivet. If we chose to use the tangentU instead, we would get the follicle's behaviour. Then, obviously, we decompose the matrix and plug it into our riveted transform.
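In script form, the whole construction could look something like the sketch below. The matrixRivet name and its arguments are just for illustration, and the exact child plug names of the tangent attributes (e.g. tangentVx) are worth double-checking in the Node Editor.

import maya.cmds as mc

def matrixRivet(surfaceShape, rivet, u=0.5, v=0.5):
    # a minimal sketch, assuming surfaceShape is a NURBS surface shape node
    # and rivet is the transform we want to stick to it
    posi = mc.createNode("pointOnSurfaceInfo")
    cross = mc.createNode("vectorProduct")
    matrix = mc.createNode("fourByFourMatrix")
    decompose = mc.createNode("decomposeMatrix")

    mc.connectAttr(surfaceShape + ".worldSpace[0]", posi + ".inputSurface")
    mc.setAttr(posi + ".parameterU", u)
    mc.setAttr(posi + ".parameterV", v)

    # third axis - the cross product of the normal and the chosen tangent
    mc.setAttr(cross + ".operation", 2)  # 2 = Cross Product
    mc.connectAttr(posi + ".normal", cross + ".input1")
    mc.connectAttr(posi + ".tangentV", cross + ".input2")  # swap to tangentU for the follicle behaviour

    # rows of the matrix - normal, tangentV, cross product and position
    rows = [(posi, ["normalX", "normalY", "normalZ"]),
            (posi, ["tangentVx", "tangentVy", "tangentVz"]),
            (cross, ["outputX", "outputY", "outputZ"]),
            (posi, ["positionX", "positionY", "positionZ"])]
    for row, (node, attrs) in enumerate(rows):
        for col, attr in enumerate(attrs):
            mc.connectAttr("{0}.{1}".format(node, attr),
                           "{0}.in{1}{2}".format(matrix, row, col))

    mc.connectAttr(matrix + ".output", decompose + ".inputMatrix")
    mc.connectAttr(decompose + ".outputTranslate", rivet + ".translate")
    mc.connectAttr(decompose + ".outputRotate", rivet + ".rotate")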

Optionally, similar to the first post in this series, we can use the multMatrix node to inverse the parent's transform, if we need to. What I usually do, though, is parent the rivets underneath a transform that has its inheritTransform attribute turned off, so we can plug the world transforms in directly.

It is important to note that in this case we are absolutely sure that the output matrix is orthogonal, since we know that the normal is perpendicular to both tangents. Thus, crossing it with either of the tangents will result in a third perpendicular vector.

Skipping the vector product

Initially, when I thought of building rivets like this, I plugged the normal, tangentU and tangentV directly from the pointOnSurfaceInfo into the fourByFourMatrix. What this means is that we get a matrix that is not necessarily orthogonal, since the tangents might very well not be perpendicular to each other. This results in a shearing matrix. That being said, it was still giving me proper results.

Matrix rivet - skipping the vectorProduct

Then, I added it to my modular system to test it on a couple of characters and it kept giving me steadily good results – 1 to 1 with the behaviour of a follicle or aimConstraint rivet, depending on the order I plug the tangents in.

What this means, then, is that the decomposeMatrix node separates all the shearing from the matrix and thus returns the proper rotation, as if the matrix were actually orthogonal.

If that is the case, then we can safely skip the vectorProduct and still have a working rivet, as long as we completely disregard the outputShear attribute of the decomposeMatrix.

Since I do not fully understand how that shearing is being extracted, though, I will be keeping an eye on the behaviour of the rivets in my rigs, to see if there is anything dodgy about it. So far, it has proved to be as stable as anything else.

Conclusion

If you are anything like me, you will really like the simplicity of the graph, as we are literally taking care of the full matrix construction ourselves. What is more, there are no constraints, nor follicle shapes in the outliner, which, again, I find much nicer to look at.

This matrix series has been loads of fun for me to write, so I will definitely be trying to come up with other interesting functions we could use matrices for.

This post is a part of a three-post series, where I will try to implement popular rigging functionalities by only using Maya's native matrix nodes.

Following the Cult of Rig lately, I realized I have been very wasteful in my rigs in terms of constraints. I have always known that they are slower than direct connections and parenting, but I thought they were the only way to do broken hierarchy rigs. Even though I did matrix math at university, I never used it in Maya, as I weirdly thought the matrix nodes were broken or limited. There was always the option of writing my own nodes, but since I would like to make it as easy as possible for people to use my rigs, I would rather keep everything in vanilla Maya.

Therefore, when Raffaele used the multMatrix and decomposeMatrix nodes to reparent a transform, I was very pleasantly inspired. Since then, I have tried applying the concept to a couple of other rigging functionalities, such as the twist calculation and rivets, and it has been giving me steadily good results. So, in this post we will have a look at how we can use the technique he showed in the stream to simulate a parent + scale constraint, without the performance overhead of constraints, effectively creating a node based matrix constraint.

Limitations

There are some limitations to using this approach, though. Some of them are not complex to get around, but the issue is that doing so adds extra nodes to the graph, which in turn leads to performance overhead and clutter. That being said, constraints add to the outliner clutter, so I suppose it might be a matter of preference.

Joints

Constraining a joint with jointOrient values will not work, as the jointOrient matrix is applied on top of the rotation. There is a way to get around this, but it involves creating a number of other nodes, which add some overhead and, for me, make it unreasonable to use the setup instead of an orient constraint.

If you want to see how we get around the jointOrient issue just out of curiosity, have a look at the joint orient section.

Weights and multiple targets

Weights and multiple targets are also not entirely suitable for this approach. Again, it is definitely not impossible, since we can always blend the output values of the matrix decomposition, but that would also involve an additional blendColors node for each of the transform attributes we need – translate, rotate and scale. And similarly to the previous limitation, that means extra overhead and more node graph clutter. If there were an easy way to blend matrices with Maya's native nodes, that would be great.

Rotate order

Weirdly, even though the decomposeMatrix has a rotateOrder attribute, it does not seem to do anything, so this method will work only with the xyz rotate order. Last week I received an email from the maya_he3d mailing list about that issue, and it seems like it has been flagged to Autodesk for fixing, which is great.

Construction

The construction of such a node based matrix constraint is fairly simple, both in terms of nodes and in terms of the math. We will be constructing the graph as shown in the Cult of Rig stream, so feel free to have a look at it for a more visual approach. The only addition I will make to it is supporting a maintainOffset functionality. Also, Raffaele talks a lot about the math in his other videos as well, so have a look at them, too.

Node based matrix constraint

All the math is happening inside the multMatrix node. Essentially, we are taking the worldMatrix of a target object and converting it to relative space by multiplying it by the parentInverseMatrix of the constrained object. The decomposeMatrix after that is there to break the matrix into attributes which we can actually connect to a transform – translate, rotate, scale and shear. It would be great if we could connect directly to an input matrix attribute, but that would probably create its own set of problems.
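In script form, the basic setup boils down to a handful of connections – a minimal sketch, assuming hypothetical target1 and pCube1 transforms:

import maya.cmds as mc

# a minimal sketch of the basic setup, assuming target1 drives pCube1
mult = mc.createNode("multMatrix")
decompose = mc.createNode("decomposeMatrix")

# the target's world matrix, brought into the constrained object's parent space
mc.connectAttr("target1.worldMatrix[0]", mult + ".matrixIn[0]")
mc.connectAttr("pCube1.parentInverseMatrix[0]", mult + ".matrixIn[1]")

mc.connectAttr(mult + ".matrixSum", decompose + ".inputMatrix")
for attr in ["translate", "rotate", "scale", "shear"]:
    mc.connectAttr("{0}.output{1}".format(decompose, attr.capitalize()),
                   "pCube1.{0}".format(attr))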

That’s the basic node based matrix constraint. How about maintaining the offset, though?

Maintain offset

In order to maintain the offset, we just need to calculate it first and then put it in the multMatrix node before the other two matrices.

Node based matrix constraint - maintain offset

Calculating offset

The way we calculate the local matrix offset is by multiplying the worldMatrix of the object by the worldInverseMatrix of the parent (the object we are constraining relative to). The result is the local matrix offset.

Using the multMatrix node

It is entirely possible to do this using another multMatrix node, then doing a getAttr on its output and setting it in the main multMatrix with a setAttr using the type flag set to "matrix". The local multMatrix is then free to be deleted. The reason we get and set the attribute, instead of connecting it, is that connecting it would create a cycle.
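A rough sketch of that approach, assuming multMatrix1 is the main constraint node from the graph above, target1 is the target and pCube1 is the constrained object:

import maya.cmds as mc

# a rough sketch, assuming multMatrix1 is the main constraint node from the graph above
temp = mc.createNode("multMatrix")
mc.connectAttr("pCube1.worldMatrix[0]", temp + ".matrixIn[0]")
mc.connectAttr("target1.worldInverseMatrix[0]", temp + ".matrixIn[1]")

# get the local offset and set it as a value, rather than connecting it, to avoid a cycle
offset = mc.getAttr(temp + ".matrixSum")
mc.setAttr("multMatrix1.matrixIn[0]", offset, type="matrix")
mc.delete(temp)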

Node based matrix constraint - local matrix offset

Using the Maya API

What I prefer doing, though, is getting the local offset via the API, as it does not involve creating nodes and then deleting them, which is much nicer when you need to code it. Let’s have a look.

import maya.OpenMaya as om

def getDagPath(node=None):
    sel = om.MSelectionList()
    sel.add(node)
    d = om.MDagPath()
    sel.getDagPath(0, d)
    return d

def getLocalOffset(parent, child):
    parentWorldMatrix = getDagPath(parent).inclusiveMatrix()
    childWorldMatrix = getDagPath(child).inclusiveMatrix()

    return childWorldMatrix * parentWorldMatrix.inverse()

The getDagPath function is just there to give us a reference to an MDagPath instance of the passed object. Then, inside getLocalOffset we get the inclusiveMatrix of each object, which is the full world matrix, equivalent to the worldMatrix attribute. In the end we return the local offset as an MMatrix instance.

Then, all we need to do is set the multMatrix.matrixIn[0] attribute to our local offset matrix. The way we do that is by using the MMatrix's () operator, which returns the element of the matrix specified by the row and column index. So, we can write it like this.

localOffset = getLocalOffset(parent, child)
mc.setAttr("multMatrix1.matrixIn[0]", [localOffset(i, j) for i in range(4) for j in range(4)], type="matrix")

Essentially, we are calculating the offset between the parent and child objects and applying it before the other two matrices in the multMatrix node, in order to implement the maintainOffset functionality in our own node based matrix constraint.

Joint orient

Lastly, let us have a look at how we can get around the joint orientation issue I mentioned in the Limitations section.

What we need to do is account for the jointOrient attribute on joints. The difficulty comes from the fact that the jointOrient is a separate matrix that is applied after the rotation matrix. That means that all we need to do is, at the end of our matrix chain, rotate by the inverse of the jointOrient. I tried doing it a couple of times via matrices, but I could not get it to work. Then I resolved to write a node and test how I would do it from within. It is really simple to do via the API, as all we need to do is use the rotateBy function of the MTransformationMatrix class, with the inverse of the jointOrient attribute taken as an MQuaternion.

Then, I thought that this should not be too hard to implement in vanilla Maya too, since there are the quaternion nodes as well. And yes, there is a way, but honestly, I do not think that graph looks nice at all. Have a look.

Node based matrix constraint - joint orient

As you can see, what we do is create a quaternion from the joint orientation, then invert it and apply it to the calculated output matrix of the multMatrix. The way we apply it is by doing a quaternion product. All we do after that is convert it to euler and connect it to the rotation of the joint. Bear in mind, the quatToEuler node supports rotate orders, so it is quite useful.
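A rough sketch of that graph, assuming the constraint's decomposeMatrix1 already exists, joint1 is the hypothetical constrained joint and the quatNodes plugin is loaded (the quatProd input order may need swapping depending on your setup):

import maya.cmds as mc

# a rough sketch, assuming decomposeMatrix1 and the hypothetical joint1 already exist
eulerToQuat = mc.createNode("eulerToQuat")
quatInvert = mc.createNode("quatInvert")
quatProd = mc.createNode("quatProd")
quatToEuler = mc.createNode("quatToEuler")

# quaternion from the joint's orientation, inverted
mc.connectAttr("joint1.jointOrient", eulerToQuat + ".inputRotate")
mc.connectAttr(eulerToQuat + ".outputQuat", quatInvert + ".inputQuat")

# apply the inverted orient on top of the constraint rotation via a quaternion product
mc.connectAttr("decomposeMatrix1.outputQuat", quatProd + ".input1Quat")
mc.connectAttr(quatInvert + ".outputQuat", quatProd + ".input2Quat")

# back to euler and into the joint - quatToEuler supports rotate orders
mc.connectAttr(quatProd + ".outputQuat", quatToEuler + ".inputQuat")
mc.connectAttr(quatToEuler + ".outputRotate", "joint1.rotate", force=True)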

Of course, you can still use the maintainOffset functionality with this method. As I said, though, comparing this to just an orient constraint, the orient constraint was performing faster every time, so I see no reason for doing this other than keeping the outliner cleaner.

Additionally, I am assuming that there is probably an easier way of doing this, but I could not find it. If you have something in mind, give me a shout.

Conclusion

Using this node based constraint I was able to remove parent, point and orient constraints from my body rig, making it perform much faster than before, and also making the outliner much nicer to look at. Stay tuned for parts 2 and 3 of this matrix series, where I will look at creating a twist calculator and a rivet by using just matrix nodes.

Rigging in maya tends to be a very straightforward and no bullshit process, so getting a grasp of the fundamentals will pretty much be enough to let you build your knowledge further by experimenting with different approaches and techniques. Therefore, in this post I'll describe all the essential building blocks of a rig.

Disclaimer: This post is going to be a continuous work in progress, as I will keep adding rigging concepts as we go.

Table of contents

  1. Nodes
  2. Joints
  3. Skinning
  4. Other deformers
  5. Weight painting

Nodes

Now, even though the dependency graph is not essential knowledge for a beginner rigger who just wants to build rigs, it is absolutely crucial for grasping the fundamentals of how rigging in maya works. As such, it is vital for understanding the following points of this article.

In its broadest sense, the dependency graph is the place where the scene is described in terms of its building blocks – nodes. A node is essentially similar to the concept of a function or method in traditional programming. If you are not into programming, then you can think of a node as a sort of calculator, which takes in some data in the form of an input, modifies it and returns the output data. Nodes are used in a lot of the major packages in the 3D industry – Maya, Houdini, Nuke, Cinema 4D, etc., so it is a concept which is useful to understand.

Every object in a maya scene is described by one or more nodes. And when I say object I do not only mean geometries, but also locators, cameras, deformers, modeling history (polyExtrude commands, polyMerge, etc.) and a lot of other stuff we don’t need to worry about. If you are curious about everything that’s going on in a scene, you can uncheck the Show DAG objects only checkbox in the outliner.

A simple example is a piece of geometry. You would expect that it would be described by one node containing all the information, but there actually are two. One is a transform node, which contains all the transformation data of the object – translations, rotations and scale – and places it in the scene. The other is the shape node, which contains all the vertex data needed to build the actual geometry in the object's local space, but we don't need to worry about this one just yet.
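You can see that for yourself with a quick sketch (the myCube name is just an example):

import maya.cmds as mc

# creating a cube gives us a transform node and a shape node
transform = mc.polyCube(name="myCube")[0]
shape = mc.listRelatives(transform, shapes=True)[0]

print(mc.nodeType(transform))  # transform
print(mc.nodeType(shape))      # mesh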

A very cool thing about nodes is that we can actually create our own. The Maya API allows us to write custom nodes in Python and C++.

Further reading: Autodesk User Guide: Node types

Joints

In a non-technical aspect a joint is incredibly similar to a bone of a skeleton. A number of them are chained together in order to create limbs, spines, tails, etc., which in turn together form exactly that, a skeleton. These joints then are told to influence the body of a character or any other object and that’s how you get a very basic rig.

In a more technical approach to describing a joint, we can start off by saying it is a transform node. Therefore it has all the usual attributes – translation, rotation, scale, etc. – but there are also some new ones like joint orientation and segmentScaleCompensate.

The segmentScaleCompensate essentially prevents the child joints in the hierarchy from inheriting the scale of the parent joints. And it is generally something we don't need to think about at all, unless you are rigging for games. If you are, you will need to turn this off and work around it in your rigs.

The joint orientation, on the other hand, is absolutely crucial for rigging. It is such an extensive subject that I am considering writing a whole post just about it. Here is a good post on the topic. Describing it briefly, the joint orientation defines the way the joint is going to rotate. That is the reason it is called orientation (I think), because in a sense it shows us which way is forward, which is up and which is to the side.
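For example, a common way to re-orient a whole chain is through the joint command – a small sketch, assuming joint1 is the root of a hypothetical chain:

import maya.cmds as mc

# a sketch of re-orienting a chain, assuming joint1 is the root of a hypothetical chain
mc.joint("joint1", edit=True, orientJoint="xyz", secondaryAxisOrient="yup",
         children=True, zeroScaleOrient=True)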

So to summarize, a joint is a transform node with extra attributes, used to create hierarchies (joint chains) which in turn form a skeleton, used to drive the deformations of a piece of geometry.

Skinning

In non-technical terms skinning is like creating a sock puppet. You stick your arm (the joint chain) inside the sock (the geometry) and by moving your hand and fingers the sock is being deformed as well.

In a more technical manner, skinning is the process of binding a geometry to a joint chain. In maya node terms, this action is described by a skinCluster node. Essentially, when skinning a piece of geo, maya calculates all the vertex positions of the mesh relative to the joint they are influenced by and stores that data in the skinCluster. Then, when the joint is transformed in any way – moved, rotated or scaled – the vertices of the mesh follow that transformation, by applying their relative positions on top of the transformed positions of the joint.
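In script form, binding is a single command – a minimal sketch, assuming hypothetical joint1, joint2 and pCylinder1 objects already exist:

import maya.cmds as mc

# a minimal sketch, assuming joint1, joint2 and pCylinder1 already exist
skin = mc.skinCluster("joint1", "joint2", "pCylinder1", toSelectedBones=True)[0]
print(mc.nodeType(skin))  # skinCluster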

Other deformers

A skinCluster is what maya calls a deformer. It makes quite a lot of sense if you think about it, as a skinCluster does exactly that – it deforms the geometry using the joints. Now, there are other deformers as well, and even though they work in many different ways, essentially what they all do is take a version of the geometry as an input (they can take the output of other deformers as input, which allows us to chain deformations), deform it and output the new deformed points into the shape node. Some good simple examples are the nonlinear deformers.
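For instance, adding a bend deformer takes a single command – a sketch, assuming a hypothetical pCylinder1 mesh:

import maya.cmds as mc

# a small sketch, assuming pCylinder1 already exists - the bend deformer is added
# on top of whatever deformation history the mesh already has
bendNode, bendHandle = mc.nonLinear("pCylinder1", type="bend", curvature=45)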

Just like with nodes (deformers are essentially nodes themselves) we can write our own custom ones, which is very useful. For example I have written a collision deformer which makes geometries collide and prevents intersections.

Weight painting

Remember how in the skinning section I said “maya calculates all the vertex positions of the mesh relative to the joint they are influenced by“? Well how does maya know which joint a vertex is influenced by? You see, each component of the geometry can be associated with one or more joints. On creating the skinCluster maya will assign some joints to the components, but more often than not you won’t be satisfied with that assignment. Therefore, you need to tell maya which joints should be driving a specific component. The way you do that is by a process called weight painting.

When we say weight, we basically mean amount of influence.

It is called painting because, by doing so, we are creating weight maps for each joint. As with many other aspects of CGI, these maps range from 0 to 1, although we can choose to go above and below if we want to. I wouldn't advise doing so, though, just because it's much harder to keep control of your weights that way. So a value of 0 means that this specific region of the geometry will not be deformed at all by the current joint. Obviously then, 1 means that the region will be deformed entirely by the current joint. Values in between mean that the joint will influence the component partially, but not entirely. It's important to note that every component needs to have a total weight of 1 (again, if we have chosen to work with normalized weights, which, as I said, is the suggested approach), therefore if you lower the influence of one joint over a specific component, another joint or a combination of multiple other ones will receive that remaining influence.
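Weights can also be set and queried through scripting – a small sketch, assuming a hypothetical skinCluster1 binding pCylinder1 to joint1 and joint2:

import maya.cmds as mc

# a sketch, assuming skinCluster1 binds pCylinder1 to joint1 and joint2
mc.skinPercent("skinCluster1", "pCylinder1.vtx[0]",
               transformValue=[("joint1", 0.75), ("joint2", 0.25)])

# with normalized weights the influences always add up to 1
print(mc.skinPercent("skinCluster1", "pCylinder1.vtx[0]", query=True, value=True))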

Additionally, all deformers can have their weights painted. SkinClusters seem to be the most complex ones to paint because they have multiple influences, but generally with most other deformers – nonlinears, blendShapes, deltaMush, etc. – there is a single weight per vertex describing how much influence the deformer has over that specific vertex.

Some very useful features of weights are that they can be mirrored and copied. Additionally, they can be stored and loaded at a later point, though maya’s native options for doing that have been very very limited, but this article gives some good tips about it.