When it comes to rigging in a production, or across many productions, the name of the game is reusability and automation. In that sense rigging is very similar to traditional software development, as we also have to build systems that are easily maintainable, extensible and reusable. There are a few ways to achieve that in rigging and today I am going to look at some of them. The reason I wanted to write this is that rigging systems are one of the big things I wish I had learned about earlier.

Disclaimer: This post will probably come out very opinionated and those opinions are going to be based on my own experience, which, really, is not that extensive. Additionally, even though there are going to be informative bits, I am writing this more as a way to share my thoughts than to teach anything, as, again, I am not really qualified to do that.

Also, please bear in mind that a large portion of this post is going to be very speculative, since I am talking about tools that I have not really used myself.

A rigging system

I just want to briefly go over what I mean by rigging systems. Essentially, anything that takes a model and produces a rig out of it is a rigging system to me. That means if you rig an entire thing manually, you are the rigging system. If you use a tool similar to Maya’s HumanIK, that is your rigging system. In other words, any system that, by following a set of instructions, can produce a rig on its own is a rigging system. Other than just building the actual node networks, rigging systems should provide easy ways to save and load a bunch of properties and settings such as deformer weights, blendshapes, control shapes, etc.

Rigging system types

With the definition out of the way we can have a look at the types of rigging systems out there.

Auto rigging tool

Disclaimer: This is going to be the most speculative portion of this post, as I have never used one of these solutions, other than having a brief look at them.

The auto rigging tool is a rigging system which takes care of everything we talked about above by providing you with some sort of a guiding system to define the proportions and often the type of a rig (biped, quadruped, wings, etc.) and then, using these guides, it builds node networks which become rig components.

Some examples of auto rigging tools are Maya’s HumanIK, which I mentioned above, and a popular non-Autodesk one is Rapid Rig.

There are a lot of them online and they are usually a big part of rigging showreels, as it seems that every rigger who starts learning scripting goes for an auto rigging solution at some point. Including me.

Now, the problem that I have with this kind of rigging system is the lack of extensibility. I have not seen an auto rigging tool with any sort of an API so far. That means that for every rig you build you have only a limited number of available components (that number might be large, but it is still limited). For example, there is probably only one arm component available, and even though there might be many options on how to build that arm, there is a chance you are not going to find what you are looking for and would like to insert your own logic somewhere in that component, but there is no way to do that.

Mentioning the many options brings me to the next issue I see with auto rigging tools – performance and clutter. The way I see it, the more options you want to support inside a single component, the more clutter you introduce into the logic of that component in order to accommodate them. Additionally, if everything happens behind the scenes, I have no way of knowing what the tool creates other than opening the node editor and having a look at the networks, which, as you can imagine, is not going to be fun on large rigs. That opaqueness scares me, as I would not know about potential node clutter introduced in my scene.

That brings me to my next point about auto rigging tools, which is the fact that everything is stored baked down in the scene. What I mean by that is, the auto rigging tool might give you some options for rebuilding parts of the rig after they have been created, but ultimately everything we store is baked into node networks. Yes we can save our weights and maybe some properties outside of the scene file, but these would be things that go on top of the auto rig product. There is no way to store how we actually constructed that rig. Then if I need to change the position/orientation of a joint, how do I go about that? What if, god forbid, the proportions of the model have changed? Do I delete everything and rebuild it? And if I have to do that, what happens with the parts of the rig that I have added on top of the auto rig, do I need to manually rebuild them as well?

The last thing I want to mention about auto rigging tools is UI. I mean, it is usually bloody horrible. I think it is probably all down to the native UI tools that Maya gives us, which all feel very clunky. They just don’t seem to work for anything as complex as a rigging system. All of the auto rigging tools I have seen make extensive use of tightly packed buttons, checkboxes and text fields in loads of collapsible sections or tabs, which just doesn’t seem to cut it in 2017. Again, I think the main issue there is that Maya’s native tools are just not enough to build anything more intuitive. That being said, PySide has been available for a while.

Maya Human IK rigging system UI

So, if you are going to be building an auto rigging tool, please keep in mind that the following ideas can improve working with that rigging system a lot

  • creating some sort of an API for easily extending/modifying the functionality of the tool, mainly by creating new or editing existing components
  • storing information about the actual building of the rig instead of just the baked down version of it, in order to enable you to easily rebuild it when changes need to happen
  • creating a more intuitive UI than what is already out there. A node graph, maybe?

Rigging framework

Going beyond auto rigging tools, we have rigging frameworks. Those are systems which do not necessarily have any high-level components such as arms/legs/spines built into them, but instead provide you with the tools to create such components and save them for later use. The only system of this type that I know of is mGear. Incidentally, it does actually provide a high-level modular system called Shifter that gives you everything an auto rigging tool would. The good thing here, though, is that using the actual framework you can build your own modular rigging system and extend and modify it a lot.

Now, since I have never actually used it, take everything I say with a grain of salt, but from what I understand, similarly to an auto rigging tool, you would build everything in the viewport and then save it as a baked down version. I do not know how easy or difficult rebuilding components is, but anything that has been built on top of them would also have to be manually rebuilt.

What I really like about mGear, though, is the open source aspect of it. The fact that you can grab it and build a rigging system out of it that suits your needs perfectly is amazing.

Guidable modules

Now, this one, I think, is the only system that you can create yourself without any scripting. Even though it might be a bit slower to work with, I think in terms of results you would be able to get everything you get out of an auto rigging tool.

So, what I mean by guidable modules is, say, storing a rigged IK chain with all the controls on it in a file, and then, when you need an IK chain in a rig, bringing in the one from the file. The rigged IK chain would have some sort of guides (usually just locators) that can reposition the joints and stretch the chain without actually introducing any values on your controls. The stretch values would also be recalculated so there is no actual stretch on the chain; instead, the modified version becomes the default state.
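As a rough illustration of that rebasing idea (all names here are hypothetical, and a real module would of course drive Maya nodes rather than plain numbers), the guided lengths simply become the new rest lengths, so the guided pose reads as a stretch of exactly 1.0:

```python
class GuidableIKChain(object):
    """Toy sketch of a guidable IK chain module (hypothetical names).

    Moving the guides changes the chain's segment lengths, but instead
    of showing up as stretch on the controls, the guided lengths are
    rebased to become the new default state.
    """

    def __init__(self, restLengths):
        self.restLengths = list(restLengths)

    def stretchFactors(self, currentLengths):
        # What the rig's stretch attributes would evaluate to.
        return [c / r for c, r in zip(currentLengths, self.restLengths)]

    def applyGuides(self, guidedLengths):
        # Rebase: the guided pose becomes the default, so no stretch
        # values appear on the controls.
        self.restLengths = list(guidedLengths)


chain = GuidableIKChain([2.0, 2.0])
print(chain.stretchFactors([3.0, 2.0]))  # [1.5, 1.0] - visible stretch
chain.applyGuides([3.0, 2.0])
print(chain.stretchFactors([3.0, 2.0]))  # [1.0, 1.0] - the new default
```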

I know that sounds a bit weird and it probably won’t sit well with many of you, since referencing/importing files is often considered dirty because of all the stuff that gets carried across. That being said, if you clean up after yourself, that should not be an issue.

Additionally, if you are referencing the components in the scene, you can modify the components themselves and the changes would be carried across all of your rigs utilizing that component.

What is more, the same idea can easily be applied to a rigging framework or an auto rigging tool you built yourself, so it removes the issue which arises when changes need to be made.

Rigging framework with modular rigging system through an API

Now, the last rigging system I want to talk about is one that combines aspects of all the previously mentioned ones, but in a way where extensibility, maintainability and reusability are all taken care of. That comes at the expense of not having a UI, having rules and conventions, and requiring solid programming knowledge.

The way a user interacts with this rigging system is entirely through an API. The rigs are stored as actual building instructions rather than a baked down version, which means that every time you open the rig it is built on the spot, making changes incredibly easy and non-destructive. And the components are created as actual Python classes, with all the benefits (and, unfortunately, drawbacks) of object oriented programming.

So, the rigging process with this sort of a system is going to be writing code. Of course, we still need to paint weights and store control shapes, but these are easily saved and loaded back in. Here is an example of what a rig might look like in this system.


loadGuides() # Brings in the guides file containing our guide chains
initializeRiggingStructure() # Creates the boilerplate for a rig - top/god node, etc.

spine = bodyCommands.spine(spineChain)
arm = bodyCommands.arm(armChain, spine.end)

... # Build all the components you would want


The loadGuides() function refers to a file which contains all our guides, similar to the previous rigging system. In this one, though, it is up to you what sort of guides you use. For example, for an arm you would just draw out your arm chain and the module will take it from there.

This is the rigging system that I like using. It feels much more intuitive to me as I do not feel any restrictions from anything, be that UI, pre-built components that I don’t have access to, etc. If I want a slightly different module I can just inherit from the old one and make my changes. If there is a model change I just need to reposition my guides and run my code.
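To sketch that inheritance idea (class and method names here are made up, and a real component would create node networks instead of returning names), a variant module only overrides the bits it changes and reuses everything else from its parent:

```python
class ArmComponent(object):
    """Hypothetical base arm component of a code-driven rigging system."""

    def __init__(self, chain):
        self.chain = chain

    def build(self):
        # A real build() would create the node networks; here we just
        # report the controls the component would produce.
        return ["%s_ctl" % joint for joint in self.chain]


class BendyArmComponent(ArmComponent):
    """A slightly different arm: reuse the base build and add to it."""

    def build(self):
        controls = super(BendyArmComponent, self).build()
        controls += ["%s_bendy_ctl" % joint for joint in self.chain]
        return controls


arm = BendyArmComponent(["shoulder", "elbow", "wrist"])
print(arm.build())
```

The base component never needs to know about its variants, which is exactly what keeps this approach extensible.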

The main downside of it is that it might take a while for a new person to get used to such a system. Having nice documentation and examples would help a lot. Another thing people might feel uncertain about is the complete lack of UI, but, again, for me it is liberating to not be constrained by buttons, text fields, etc.


I am very happy with the current rigging system I am using, described in the previous section. That being said, though, I cannot help but think of things I would like to see in a rigging system.

For starters, let us go back to UI. Even though I feel great about being able to do whatever I want with the code, having a UI for certain things would be much quicker. Ideally, I would like to have both working at the same time. Whatever I write needs to be reflected in the UI and, the harder bit, whatever I do in the UI needs to be reflected somewhere in my build file, so the next time I build the rig it comes with the changes made from the UI as well. Having a UI modify my code, though, does not sound amazing, so we need a different way of handling that, which could potentially be metadata. The one issue I have with relying too much on metadata is that it is not immediately obvious what is going on.

Another thing I would really like to see at some point is some sort of a rigging system standard where riggers around the world can exchange components and general bits from the rigs with each other. To be honest, though, I am both excited and worried about something like this, as introducing a standard might significantly hinder innovation.

The big thing that lies in the future, though, is getting a higher level of interactivity while rigging. Complex builds using the rigging system from the last section can take minutes, which means that for every single change I make in the guides file or the build I need to wait a while to actually see the result. That makes the process a lot more obscure when you just need to keep changing values in order to hit the right ones. Imagine, though, that all that building happened in real-time. Say I have the guides open in one window, the build file in my text editor and the product in another window. Ideally, moving something in my guides file would trigger a rebuild of the “dirtied” portion of the build, with the changes applied in my third window without actually deforming my model.
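A minimal sketch of that dirty-propagation idea (an entirely hypothetical structure — a real implementation would hook into guide callbacks and build actual node networks): each build step declares what it depends on, so touching one guide only re-runs the steps downstream of it:

```python
class BuildGraph(object):
    """Toy dependency graph for partial rig rebuilds."""

    def __init__(self):
        self.steps = {}   # name -> build function
        self.deps = {}    # name -> names it depends on
        self.dirty = set()

    def addStep(self, name, func, deps=()):
        self.steps[name] = func
        self.deps[name] = tuple(deps)
        self.dirty.add(name)  # new steps always need a first build

    def markDirty(self, name):
        # Dirty this step and, recursively, everything downstream of it.
        if name in self.dirty:
            return
        self.dirty.add(name)
        for other, deps in self.deps.items():
            if name in deps:
                self.markDirty(other)

    def rebuild(self):
        # Run only the dirty steps, dependencies first.
        built = []
        pending = [n for n in self.steps if n in self.dirty]
        while pending:
            for name in list(pending):
                if all(d not in pending for d in self.deps[name]):
                    self.steps[name]()
                    built.append(name)
                    pending.remove(name)
        self.dirty.clear()
        return built


g = BuildGraph()
g.addStep("guides", lambda: None)
g.addStep("spine", lambda: None, deps=("guides",))
g.addStep("arm", lambda: None, deps=("spine",))
g.rebuild()           # the first build runs everything
g.markDirty("spine")  # e.g. a spine guide moved
print(g.rebuild())    # ['spine', 'arm'] - guides are left untouched
```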

I am saying this lies in the future, though aspects of it are already taken care of by the guidable components method described above. That being said, that full level of interactivity is what I would ideally like to achieve.

As I said in the beginning, a lot of these are just my own speculations, which means that I am still trying to figure most of this out. That is why I would love to hear your tips, opinions and ideas on rigging systems, so please do share them!

If you have been reading bindpose for a while and have seen my marking menu posts you probably know that I am very keen on getting my workflow as optimized as possible. I am a big fan of handy shelves, marking menus, hotkeys, custom widgets, etc. The way I see it, the closer and easier our tools are to access, the quicker we can push the rig through. That is why today we are having a look at using PySide to install a global hotkey in Maya (one that works in all windows and panels) in a way where we do not break any of the existing functionality of that hotkey (hopefully).

If you have not used PySide before, do not worry, our interaction with it will be brief and pretty straightforward. I myself am very new to it. That being said, I think it is a great library to learn and much nicer and more flexible than the native Maya UI toolset.

Disclaimer: The way I do this is very hacky and dirty and I am sure there must be a nicer way of doing this, so if you have suggestions please do let me know, so I can add it both to my workflow and to this post.

What we want to achieve

So, essentially, all I want to do here is install a global hotkey (PySide calls them shortcuts) on the CTRL + H combination that works in all of Maya’s windows and panels as you would expect it to – Hide selected – but clears the history if we are inside the Script editor.

Some of you might think that we can easily do this without PySide, just using Maya’s hotkeys, but the tricky bit is that Maya’s hotkeys do not function when your last click was inside the Script editor’s text field or history field. That means the hotkey only gets triggered if you last clicked somewhere on the frames of the Script editor, which obviously is not nice at all.

Achieving it

So, let us have a look at the full code first and then we will break it apart.

from functools import partial
from maya import OpenMayaUI as omui, cmds as mc

try:
    from PySide2.QtCore import *
    from PySide2.QtGui import *
    from PySide2.QtWidgets import *
    from shiboken2 import wrapInstance
except ImportError:
    from PySide.QtCore import *
    from PySide.QtGui import *
    from shiboken import wrapInstance

def _getMainMayaWindow():
    mayaMainWindowPtr = omui.MQtUtil.mainWindow()
    mayaMainWindow = wrapInstance(long(mayaMainWindowPtr), QWidget)
    return mayaMainWindow

def shortcutActivated(shortcut):
    if "scriptEditor" in mc.getPanel(wf=1):
        mc.scriptEditorInfo(clearHistory=True)
    else:
        shortcut.setEnabled(0)
        e = QKeyEvent(QEvent.KeyPress, Qt.Key_H, Qt.CTRL)
        QCoreApplication.postEvent(_getMainMayaWindow(), e)
        mc.evalDeferred(partial(shortcut.setEnabled, 1))

def initShortcut():
    shortcut = QShortcut(QKeySequence(Qt.CTRL + Qt.Key_H), _getMainMayaWindow())
    shortcut.setContext(Qt.ApplicationShortcut)
    shortcut.activated.connect(partial(shortcutActivated, shortcut))


Okay, let us go through it bit by bit.


We start with a simple import of partial, which is used to create a callable reference to a function including arguments. Then from maya we import the usual cmds, but also OpenMayaUI, which we use to get a PySide reference to Maya’s window.

Then the PySide import might look a bit confusing with that try and except block, but the only reason it is there is that between Maya 2016 and Maya 2017 the PySide version switched, and the imports had to change as well. So, what we do is try to import from PySide2 (Maya 2017) and, if it cannot be found, do the imports from PySide (Maya 2016).

Getting Maya’s main window

Even though Maya’s UI is built entirely with Qt (PySide is a Python wrapper around Qt), the native elements are not directly usable with PySide functions. In order to interact with these native bits we need to find a PySide reference to them. For hotkeys we need only the main window, but depending on what you are trying to do you might have to iterate through children in order to find the UI element you are looking for. Therefore this _getMainMayaWindow function has become boilerplate code for me and I always copy and paste it together with the imports.

The way it works is: using Maya’s API we get a pointer to the memory address where Maya’s main window is stored. That is the omui.MQtUtil.mainWindow() function. Then, using that pointer and the wrapInstance function, we create a PySide QWidget instance of our window. That means we can run any QWidget methods on Maya’s main window. In our hotkey example, though, we only need it to bind the hotkey to.

The logic of the hotkey

The shortcutActivated function is the one that is going to get called every time we press the hotkey. It takes a QShortcut object as an argument, but we will not worry about it just yet. All we need to know is that this object is what calls our shortcutActivated function.

It is worth mentioning that this function is going to get called without giving Maya a chance to handle the event itself. So, that means that if we have nothing inside this function, pressing CTRL + H will do nothing. Therefore, we need to make sure we implement whatever functionality we want inside of this function.

So, having a look at the if statement, you can see that we are just checking if the current panel with focus – mc.getPanel(wf=1) – is the Script editor. That will return True if we have last clicked either on the frames of the Script editor windows or anywhere inside of it.

Then, obviously, if that is the case we just clear the Script editor history.

If it returns False, though, it means that we are outside of the Script editor so we need to let Maya handle the key combination as there might be something bound to it (In the case of CTRL+H we have the hiding functionality which we want to maintain). So, let us pass it to Maya then.

As I said earlier, Maya does not get a chance to handle this hotkey at all, it is entirely handled by PySide’s shortcut. So in order to pass it back to Maya, we disable our shortcut and simulate the key combination again, so Maya can do its thing. Once that is done, we re-enable our shortcut so it is ready for the next time we press the key combination. That is what the following snippet does.

shortcut.setEnabled(0)
e = QKeyEvent(QEvent.KeyPress, Qt.Key_H, Qt.CTRL)
QCoreApplication.postEvent(_getMainMayaWindow(), e)
mc.evalDeferred(partial(shortcut.setEnabled, 1))

Notice we are using evalDeferred as we are updating a shortcut from within itself.

Binding the function to the hotkey

Now that we have all the functionality ready, we need to bind it all to the key combination of our choice – CTRL + H in our example. So, we create a new QShortcut instance, which receives a QKeySequence and a parent QWidget as arguments. Essentially, we are saying we want this key combination to exist as a shortcut in this widget. The widget we are using is the main Maya window we talked about earlier.

Then, we use the setContext method of the shortcut to extend its functionality across the whole application, using Qt.ApplicationShortcut as an argument. Now the shortcut is activated whenever we press the key combination while we have our focus in any of the Maya windows.

Lastly, we just need to specify what we want to happen when the user has activated the shortcut. That is where we use the activated signal of the shortcut (more info on signals and slots) and we connect it to our own shortcutActivated function. Notice that we are using partial to create a callable version of our function with the shortcut itself passed in as an argument.

And that’s it!


Hotkeys, marking menus, shelves, custom widgets and everything else of the sort is always a great way to boost your workflow and be a bit more efficient. Spending some time to build them for yourself in a way where you can easily reproduce them in the next version of Maya or on your next machine is going to pay off in the long run.

I hope this post has shown you how you can override Maya’s default hotkeys in some cases where it would be useful, while still maintaining the default functionality in the rest of the UI.

If you know of a nicer way of doing this, please do share it!

Today, I am going to share a really quick tip for achieving uniform spacing along a curve.

Disclaimer: If you are not familiar with using the API, worry not, we are looking at a very simple example and I will try to explain everything, but it also might be a good idea to get some understanding of how it all functions. A good place to start is Chad Vernon’s Introduction to the API.

Very often in rigging we need to use curves. In quite a lot of these cases we need to get uniformly distributed positions along that curve. A simple example is creating controls along a curve. Chances are you would want them to be as uniformly distributed as possible, but in order to get that using only the parameter along the curve, you would need a perfectly uniform curve that also matches the actual curvature. To get that you would need to do a lot of rebuilding, inserting knots and tweaking.

For another tip on rigging with curves have a look at my post about getting a stable end joint when working with IK splines.

I suppose that if you are doing it by hand then you can easily tweak the position along the curve and eyeball the distances between them to be roughly equal, but it sounds like too much hassle to me and also, more often than not, you would want to have that automated as I could imagine it being integral to a lot of rig components.

Let us have a look then!

The issue

So, I am sure everyone has run into the situation where they’ve wanted to create a few objects positioned uniformly along a nurbsCurve or a nurbsSurface, but they get this.

Getting an uniform space along a curve - example of non-uniform spacing on a nurbsSurface

Notice how much larger the gap is between the joints on the left-hand side than on the right. The reason is that the distance between the isoparms is not equal throughout the surface, but the parameter difference is. What that means is, no matter how much we stretch and deform the surface, the parameter difference between the spans is always going to be the same – .25 in our example (1.0 / spansU).

Getting an uniform space along a curve - example of non-uniform spacing on a nurbsSurface with drawover

That discrepancy between the parameter space and the 3D space is what causes these non-uniform positions.

Getting uniform positions along a curve

So now that we know that, we can figure out that the way to get a reliable position is to find a relationship between the 3D space and the parameter space. That is where the API’s MFnNurbsCurve comes in handy.

The 3D space information that we are going to be using is the length of the curve, as we know that is an accurate representation of distance along the curve. If you have a look at the available methods in the MFnNurbsCurve class, you will find the following one: findParamFromLength. Given a distance along the curve, this function will give us a parameter.
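To get an intuition for what findParamFromLength does, here is a rough pure-Python analogue (an approximation working on sampled points, not the API’s actual algorithm): we accumulate chord lengths to approximate the arc length, then invert that mapping to turn a distance into a parameter:

```python
import math

def paramFromLength(points, targetLength):
    """Approximate a curve's distance-to-parameter mapping from points
    sampled at uniform parameter steps between 0.0 and 1.0."""
    # Cumulative chord lengths approximate the arc length.
    lengths = [0.0]
    for a, b in zip(points, points[1:]):
        lengths.append(lengths[-1] + math.dist(a, b))
    # Find the segment containing targetLength and interpolate within it.
    step = 1.0 / (len(points) - 1)
    for i in range(1, len(lengths)):
        if targetLength <= lengths[i]:
            t = (targetLength - lengths[i - 1]) / (lengths[i] - lengths[i - 1])
            return (i - 1 + t) * step
    return 1.0

# Points bunched up on the left, like non-uniform CVs: the halfway
# distance (2.0 out of 4.0) sits at parameter ~0.667, not at 0.5.
pts = [(0, 0, 0), (1, 0, 0), (4, 0, 0)]
print(paramFromLength(pts, 2.0))
```

That discrepancy between the halfway distance and parameter 0.5 is exactly the non-uniformity we are trying to correct for.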


Let us consider the following curve.

Getting an uniform spacing along a curve - example curve with non-uniform CVs

Let us position some joints along the curve using distances only based on the parameter.

for i in range(11):
    pci = mc.createNode("pointOnCurveInfo")
    mc.connectAttr("curve1.worldSpace", pci + ".inputCurve")
    mc.setAttr(pci + ".parameter", i * .1)
    jnt = mc.createNode("joint")
    mc.connectAttr(pci + ".position", jnt + ".translate")

All we do here is iterate 11 times and create a joint on the curve at the position of parameter equal to iterator * step where the step is 1.0 / (numberOfJoints - 1), which is .1 in our example.

Getting an uniform spacing along a curve - Example of non-uniform spacing on a curve using just the parameter

As expected, the non-uniform distance between the CVs results in an also non-uniform spacing of the joints.

Let us try a different approach then. We will get a reference to an API instance of our curve and, using the above mentioned function, get parameters based on the actual distance along the curve, hence getting a uniform distribution.

from maya import OpenMaya as om

def getDagPath(node=None):
    sel = om.MSelectionList()
    sel.add(node)
    d = om.MDagPath()
    sel.getDagPath(0, d)
    return d

crvFn = om.MFnNurbsCurve(getDagPath("curveShape4"))

for i in range(11):
    parameter = crvFn.findParamFromLength(crvFn.length() * .1 * i)
    point = om.MPoint()
    crvFn.getPointAtParam(parameter, point, om.MSpace.kWorld)
    jnt = mc.createNode("joint")
    mc.setAttr(jnt + ".t", point.x, point.y, point.z)

So, the getDagPath function takes a name of a node and returns an MDagPath instance of that node, which we need in order to create the MFnNurbsCurve instance. The MDagPath is used for many other things in the API, so it is always a good idea to have that getDagPath function somewhere where you can easily access it.

Notice we are passing the curve shape node; if we were to use the curve4 transform we would not be able to create the MFnNurbsCurve instance.

Having that MFnNurbsCurve, we iterate 11 times and following the same logic for getting a position along the curve as before – iterator * step – we get the parameter at that position, using the findParamFromLength method.

Now that we know the parameter we could still use the pointOnCurveInfo as we did before, but considering we are already working in the API we might as well get all the data from there. So, using the getPointAtParam method we can get a world space position of the point on the curve at that parameter.

Notice however that we are first creating an MPoint and we are then passing it to the getPointAtParam function to populate it.

And here is the result.

Getting an uniform spacing along a curve - example of uniform spaced joints along a curve using the mfnNurbsCurve from the Maya API

Using the same approach to get uniform positions on a surface

So, all that nurbsCurve business is great, but how can we apply the same logic to a nurbsSurface? Unfortunately, the MFnNurbsSurface does not have any method resembling findParamFromLength, but luckily we can always create a curve from a surface.

So in order to get uniform spacing along a nurbsSurface what I usually would do is create a nurbsCurve from that surface using the curveFromSurfaceIso node and using the described method find the accurate parameters and use those on the surface itself.

While writing this I realized that maybe the same approach can be used to actually get a uniform representation of the surface, by getting curves from the surface and using them to calculate new, uniformly spaced CVs for the surface. It seems like we might lose a lot of the curvature of the surface, but it also seems promising, so I will definitely look into it.


Using curves and surfaces is something that I did not do a lot of in the beginning of my rigging path, but obviously they are such an integral part of rigging that it is very important to be able to work with them in a reliable and predictable fashion. This tip has helped me a lot when building bits of my rigging system and I really hope you find it valuable in your work as well.

Additionally, I would like to reiterate how powerful a tool the API is, and I would definitely suggest that anyone who is not really familiar with it take the plunge and start learning it by using it. The major benefits are not only functional ones (like the one described in this post), but also performance ones, as the API is incredibly fast compared to anything to do with maya.cmds.

So, painting skin weights. It is a major part of our rigging lives and sadly one of the few bits, together with joint positioning, that we cannot yet automate, though in the long run machine learning will probably get us 99% there. Until then, though, I thought I would share some of my tips for painting skin weights with Maya’s native tools, since whenever I would learn one of these I felt stupid for not finding it out earlier as, more often than not, it was just so simple.

I am sure a lot of you are familiar with these, but even if you learn just a single new idea about them today, it might boost your workflow quite a bit. Additionally, I know that a lot of you are probably using ngSkinTools, and literally everyone I know who works with it says they cannot imagine going back. So I am sure that some of the things I am going to mention are probably already taken care of by ngSkinTools, but if you, like me, have not had the chance to adopt it yet, you might find these helpful.

I am going to list these in no particular order, but here is a table of contents.


  1. Simplifying geometries with thickness and copying the weights
  2. Using simple proxy geometry to achieve very smooth weights interpolation quickly
  3. Duplicate the geometry to get maya default bind on different parts
  4. Copy and paste vertex weights
  5. Use Post as normalization method when smoothing
  6. Move skinned joints tool
  7. Reveal selected joint in the influence list
  8. Some handy hotkeys
  9. Average weights
  10. Copy and paste multiple vertex weights with search and replace
  11. Print weights

So with that out of the way, let us get on with it.

Simplifying geometries with thickness and copying the weights

This one comes in very handy when we are dealing with complex double-sided geometries (ones that have thickness). The issue with them is that when you are painting one side, the other one is left unaffected, so as soon as an influence object transforms the two sides intersect like crazy. That is often the case with clothes and wearables in general.

The really easy way to get around this is to
1. Make a copy of the geometry
2. Remove the thickness from it (when having a good topology it is as simple as selecting the row of faces which creates the thickness and deleting it together with one side of the geo)
3. Paint the weights on that one
4. Copy the weights back to the original geometry

Painting skin weights tips - Using a one sided proxy geometry when working with thickness

Now, a really cool thing that I had not thought of until recently is that even if I have started painting some weights on the double sided geometry to begin with, I can also maintain them by copying the weights from the original one to the simplified one before painting it, so I have a working base.

That means, that if I have managed to paint some weights on a double sided geometry that kind of work, but the two sides are not behaving 1 to 1, I can create a simplified geo, copy the weights from the original one to the simplified and then copy them back to get the 1 to 1 behaviour I am looking for.

Using simple proxy geometry to achieve very smooth weights interpolation quickly

This one is very similar to the first one, but I use it all the time and not only on double-sided geometries.

Very often there are geometries with some sort of detail modeled into them that makes it hard to paint smooth weights around it.

Consider the following example. Let us suppose that we need this geometry to be able to stretch smoothly when using the .translateX of the end joint.

Tips for painting skin weights in maya - Using a simple geometry to copy weights to models which are hard to smooth weights for.

It doesn’t look great with the default skinning, and even if I try to block in some weights and smooth them, it is likely that maya won’t be able to interpolate them nicely. To get around it, I’d create a simple plane with no subdivisions, so I can have a very nice smooth interpolation from one edge to the other.

Tips for painting skin weights - Using a simple plane without subdivisions to achieve a smooth weights interpolation for copying to complex geometries.

Copying this back to the initial geometry gives us this.

Tips for painting skin weights - Smooth skinned complex geometry using weights from a simple plane.

Very handy for mechanical bits that have some detail in them and also need to be stretched (happens very often in cartoon animation).

Duplicate the geometry to get maya default bind on different parts

So, very often I have to paint the weights on a part of a geometry to a bunch of new joints while I still need to maintain the existing weights on the rest of it. More often than not, I would be satisfied with maya’s default weights after a bind, but obviously if I do that it will obliterate my existing weights.

What I do in such cases is make a copy of the geometry and smooth bind it to only the new joints. Then I select the vertices on the original geometry that comprise the part I want the new influences on, and use Copy skin weights from the duplicate onto the selected vertices. If the part is actually separate from the rest of the geometry, that should do it, but if it is more of an organic shape, there is going to be some blending of the new weights with the ones surrounding them.

I could imagine, though, that having the ability to have layers and masks on your skin weights would make this one trivial.

Copy and paste vertex weights

I am guilty of writing my own version of this tool simply because I did not know it existed. Basically, you can select a vertex, use the Copy vertex weights command, then select another one (or more) and use the Paste vertex weights command to paste them. It works across geometries as well.

A cool thing about the tool that I wrote is that I added a search and replace feature that applies the weights to the renamed joints. For example, if I am copying a vert from the left arm and I want to paste it on the right, I would set my replacement flags to change “L_” to “R_”.
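The renaming part of that idea is easy to sketch outside of Maya. This is just an illustration with made-up influence names and weights – in a real tool the mapping would come from and go back to the skinCluster:

```python
def replace_influences(weights, search, replace):
    """Return a copy of an influence -> weight mapping with the
    influence names renamed, e.g. "L_shoulder" -> "R_shoulder"."""
    return {name.replace(search, replace): value
            for name, value in weights.items()}

# Weights copied from a vertex on the left arm...
left = {"L_shoulder": 0.7, "L_elbow": 0.3}

# ...become weights ready to paste onto the mirrored right arm joints.
print(replace_influences(left, "L_", "R_"))  # {'R_shoulder': 0.7, 'R_elbow': 0.3}
```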

Use Post as normalization method when smoothing

So, I have met both people who love Post and people who hate it. I think the main reason people dislike it is that they do not feel comfortable with their weights being able to go above 1.0, but I have to say that sometimes it is very handy, especially for smoothing. Everyone knows how unpredictable maya’s interactive smoothing is, and that is understandable, since in a lot of cases it is not immediately obvious where the remaining weights should go.

Smoothing on Post is 100% predictable, which I think is the big benefit. The way it works is that it smooths a joint’s influence by itself, without touching any of the other weights. That means the weights are not normalized to 1.0, but instead of verts shooting off into oblivion, Post normalizes them for the preview. That is also why it is not recommended to leave skinClusters on Post, as the weights would then be normalized at deformation time, which is slower.

So more often than not my workflow for painting weights would be to block in some harsh rigid weights, then switch to Post and go through the influences one by one flooding them with the Smooth paint operation once or twice.

Move skinned joints tool

I am not sure which version of maya this tool came in, but I learned of it very recently. Essentially, you select a piece of geo (or a joint) and run the Move skinned joints tool, and then you can transform the joint however you like, or change the inputs going into it, without affecting the geometry. You have to be careful, though, not to change the tool or the selection, as that exits the Move skinned joints tool. Ideally, any changes other than just moving/rotating the joints about should be prepared in advance, so they are ready to be run from the script editor.

I would not recommend using this for anything other than testing out different pivot points. Doing it for actual positioning in the final skinCluster feels dirty to me.

Reveal selected joint in the Paint skin weights tool influence list

Only recently I found out what this button does.

Paint skin weights tool - reveal selected joint in influence list

It scrolls the list of influences to reveal the joint that we have selected, which is absolutely brilliant! Previously, I hated how, whenever I had to get out of the Paint skin weights tool and then get back into it, the treeView was always scrolled to the top of the list. Considering that the last selection is maintained, pressing that button will always get you back to where you left off. Even better, echoing all commands gives us the following line of MEL that we can bind to a hotkey.

artSkinRevealSelected artAttrSkinPaintCtx;

Some handy hotkeys

I have learned about some of these way too late, which is a shame, but since then I’ve been using them constantly and the speed increase is immense. I hate navigating my mouse to the tool options just to change a setting or value.

  • CTRL + ALT + C – Copy vertex weights
  • CTRL + ALT + V – Paste vertex weights
  • N + LMB (drag) – Adjust the value you are painting with
  • U + LMB – Marking menu to change the current paint operation (Replace, Add, etc.)
  • ALT + F – Flood surfaces with current values

For more of these, head over to the Hotkey editor, choose Other items in the Edit hotkeys for combobox and open up the Artisan dropdown.

From here on, I have added some of the functionality that I have written for myself, but sadly the code is too messy to be shared. Luckily, it is not hard at all to write your own (and it will probably be much better than mine), but if you are interested, do let me know and I can clean it up and share it at some point.

Average weights

This one I use a lot. What it does is go through a selection of verts, calculate the average weights for all influences, then go through the selection once more and apply that averaged weight. Essentially, this results in a rigidly transforming collection of verts. Stupidly simple, but very useful when rigging mechanical bits, which should not deform. I have also used it in the past on different types of tubing and rope where there are bits (rings, leaves, etc.) that need to follow the main deformation but not deform themselves.
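The averaging itself is trivial. Here is a Maya-free sketch of the idea, where each vertex is represented as an influence -> weight dict – in an actual tool these would be read and written with skinPercent:

```python
def average_weights(vertex_weights):
    """Average a list of per-vertex influence -> weight dicts into a
    single dict, which can then be applied back to every vertex in the
    selection so they all transform rigidly together."""
    totals = {}
    for weights in vertex_weights:
        for influence, value in weights.items():
            totals[influence] = totals.get(influence, 0.0) + value
    count = float(len(vertex_weights))
    return {influence: value / count for influence, value in totals.items()}

verts = [
    {"joint1": 1.0},
    {"joint1": 0.5, "joint2": 0.5},
]
print(average_weights(verts))  # {'joint1': 0.75, 'joint2': 0.25}
```

Since every influence's total is divided by the same vertex count, the result stays normalized as long as the inputs were.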

Copy and paste multiple vertex weights with search and replace

In addition to the above mentioned copy and paste vertex weights, I have written a simple function that copies a bunch of vertex weights and then pastes them onto the same vertex IDs on a new mesh. It is not very often that we have geometries that are exact copies of each other, but when we do, this tool saves me a lot of time, because I can skin just one of them, copy the weights for all verts and then paste them onto the other geometry, using the search and replace to adjust for the new influences.

Comes in particularly handy for radially positioned geometries where mirroring will not help us a lot.

Print weights

Quite often I’d like to debug why something is deforming incorrectly, but scrolling through the different influences can get tedious, especially if you have a lot of them. So I wrote a small function that finds the weights on the selected vertex and prints them.

This is the kind of output I get from it.

joint2 : 0.635011218122
joint1 : 0.364988781878

As I said, there is a lot of room for improvement. It works only on a single vert at the moment, but I could imagine it being really cool to see multiple ones in a printed table, similar to what you would get in the component editor.
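As a sketch of what that multi-vertex table could look like, here is a pure Python formatter with made-up vertex labels and weight values – in Maya the per-vertex dicts would come from skinPercent:

```python
def weights_table(per_vertex):
    """Format influence weights for multiple vertices as a table,
    a bit like the component editor. `per_vertex` maps a vertex
    label to an influence -> weight dict."""
    influences = sorted({i for w in per_vertex.values() for i in w})
    lines = ["vertex".ljust(10) + "".join(i.ljust(12) for i in influences)]
    for vertex in sorted(per_vertex):
        weights = per_vertex[vertex]
        lines.append(vertex.ljust(10) + "".join(
            "{:.3f}".format(weights.get(i, 0.0)).ljust(12)
            for i in influences))
    return "\n".join(lines)

print(weights_table({
    "vtx[0]": {"joint1": 0.364989, "joint2": 0.635011},
    "vtx[1]": {"joint1": 1.0},
}))
```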

What would be even cooler would be to use PySide to print them next to your mouse pointer.


Considering that we spend such a big chunk of our time painting weights, we should do our best to be as efficient and effective as possible. That is why I wanted to share these, as they have helped me improve my workflow immensely, and I hope you find some value in them as well.

IK splines are a big part of a rigger’s toolset. They come in super handy for anything that needs to behave similarly to a rope. Funnily enough, that behaviour is often desired in many parts of the body and is also often preferable to a ribbon, mainly because a ribbon’s stretch is not always desirable. Examples include spines, soft limbs, tentacles, some cases with lips and eyebrows, etc. Additionally, IK splines are a necessity in prop rigging, so we definitely need to have a stable way of setting them up. That is why today I am looking at a quick tip for getting around the issue where the end joint does not sit at the end of the spline when it is stretched or deformed a bit more extremely.

The issue

If you have ever used a spline IK you have probably noticed an annoying stability issue at the end of the chain. Basically, when the chain is stretched or deformed a lot, our joints become longer and it is harder for them to assume the proper positions and rotations along the spline in order to follow it correctly. Effectively, as we stretch the spline, it is almost as if the joint chain has a lower resolution than needed.

Here’s an example of the issue. Notice how the end joint has trouble sitting at the end of the chain.

IK splines - end joint issue

Depending on the number of joints in the chain, this issue will be more or less pronounced. I very often have two layers of control on spline setups, where the lower resolution layer drives the higher one so the animators have a lot more control than a single layer would give them, which means I also need to provide the fix for both layers. So, let us have a look at it.

The setup

Depending on the way the spline is driven you will have to adapt the setup, but I think it will be fairly straight-forward how to do that.

Essentially, all we do is create another joint chain with just 2 joints, where the base is rooted at the end of the spline (essentially driven by whatever drives the end of the spline) and aims at the second to last joint of the chain. Effectively, giving us this.

IK splines - end joint issue fix


You’ll notice that I haven’t added any stretch to that aimed joint. I have found that most of the time I really do not need it. It seems to me that in order for that to become an issue, the chain needs to be stretched quite a bit, which is not very often the case. If you know, though, that your spline IK setup would be stretched a lot, it might be a good idea to plug the distance between the end point of the spline and the second to last joint of the chain into the translateX of the tip of the aimed joint chain.

Up vector

Depending on the result you would like to see from the setup, you have a few different choices for the up vector of the aimConstraint. If you want it to behave exactly like the rest of the chain, you can use the up axis of the last joint in the chain as the up vector. I would usually suggest going that way, as then however you decide to twist the chain, the additional joint will always follow it. Other options include the joint we are aiming at, the base of the chain (if we do not want any twist) or whatever drives the end of the chain, so we get the full twist out of it.

Additional potential issue

If you have another look at any of the GIFs above, you’ll notice that at a certain pose of the CV, not only does the last joint fly off, but the second to last one also goes past the end of the spline. That is caused by the exact same issue I mentioned above. Our fix will not behave amazingly when this happens, as the aimed joint will have to pop in order to aim in the opposite direction.

To be honest, similarly to the stretching bit I mentioned above, I haven’t had issues with this mainly because the chains are rarely stretched or deformed that much. That being said, there is a potential solution, which seems quite heavy, but I suppose if the functionality is needed the cost is irrelevant.

What you would do to completely get around this issue is have a second spline IK with the exact same joint chain but in reverse. You would also need to run the curve through a reverseCurve node before the ikHandle. Essentially, we are duplicating the setup in reverse, so the problematic area is no longer covered only by the end of the initial chain; it is also covered by the base of the new one, and we know that the base of an IK spline behaves correctly. Therefore, all we need to do after that is paint the weights using both joint chains and smoothly blend them somewhere in the middle.

I have to say that I have never actually used this setup, I have only tested it out, so whether you manage to get it to work or not, I would be happy to hear about it.


I really like how in rigging there is almost always a solution and coming up with these solutions is always so much fun. By no means is this fix bulletproof, but most of the time it would do the job. I hope it helps you with building your own spline IK setups, since they are just so useful.

This post is a part of a three post series, where I implement popular rigging functionalities with just using maya’s native matrix nodes.

Calculating twist is a popular rigging necessity, as often we would rather smoothly interpolate it along a joint chain instead of just applying it at the end. The classical example is limbs, where we need some twist in the forearm/shin area to support the rotation of the wrist or foot. Some popular implementations utilize ik handles or aim constraints, but I find them a bit of an overkill for the task. Therefore, today we will have a look at creating a matrix twist calculator that is both clean and quick to evaluate.

Other than matrix nodes I will be using a couple of quaternion ones, but I promise it will be quite simple, as even I myself am not really used to working with them.

tl;dr: We will get the matrix offset between two objects – relative matrix, then extract the quaternion of that matrix and get only the X and W components, which when converted to an euler angle, will result in the twist between the two matrices along the desired axis.

Desired behaviour

Matrix twist calculator - desired behaviour
Please excuse the skinning, I have just done a geodesic voxel bind

As you can see, what we are doing is calculating the twist amount (often also called roll, from the yaw, pitch and roll notation) between two objects. That is, the rotation difference on the axis aiming down the joint chain.


An undesirable effect you can notice is the flip when the angle reaches 180 degrees. As far as I am aware, there is no reasonable solution to this problem that does not involve some sort of caching of the previous rotation. I believe that is what the No flip interpType on constraints does. There was one approach, using an orient constraint between a no roll joint and the rolling joint and then multiplying the resulting angle by 2, which worked in simple cases, but I found it a bit unintuitive and not always predictable. Additionally, most animators are familiar with the issue, and are reasonable about it. In the rare cases where this issue will be a pain in your production, you can always add control over the twisting matrices, so the animators can tweak them.

Something else to keep in mind is to always calculate the twist in the first axis of the rotate order, since the other ones might flip at 90 degrees instead of 180. That is why I will be looking at calculating the X twist, as the default rotate order is XYZ.

With that out of the way, let us have a look at the setup.

Matrix twist calculator

I will be looking at the simple case of extracting the twist between two cubes oriented in the same way. Now, you might think that is too simple of an example, but in fact this is exactly what I do in my rigs. I create two locators, which are oriented with the X axis being aligned with the axis I am interested in. Then I parent them to the two objects I want to find the twist between, respectively. This, means that finding the twist on that axis of the locators, will give me the twist between the two objects.

Matrix twist calculator

Granted, I do not use actual locators or cubes, but just create matrices to represent them, so I keep my outliner cleaner. But, that is not important at the moment.

The relative matrix

Now, since we are going to be comparing two matrices to get the twist angle between them, we need to start by getting one of them in the relative space of the other one. If you have had a look at my Node based matrix constraint post or are already familiar with matrices, you will know that we can do that with a simple multiplication of the child matrix by the inverse of the parent matrix. That will give us the matrix of the child object relative to that of the parent one.

The reason we need that is that the relative matrix now holds all the differences in transformation between the two objects, and we are interested in exactly that: the difference on the aim axis.

Here is how that would look in the graph.

Matrix twist calculator - relative matrix

The quaternion

So, if we have the relative matrix, we can proceed to extracting the rotation out of it. The thing with rotations in 3D space is that they seem a bit messy, mainly because we usually think of them in terms of Euler angles, as that is what maya gives us in the .rotation attributes of transforms. There is a thing called a quaternion, though, which also represents a rotation in 3D space, and dare I say it, is much nicer to work with. Nicer, mainly because we do not care about rotate order, when working with quaternions, since they represent just a single rotation. What this gives us is a reliable representation of an angle along just one axis.

In practical terms, this means, that taking the X and W components of the quaternion, and zeroing out the Y and Z ones, will give us the desired rotation only in the X axis.

In maya terms, we will make use of the decomposeMatrix to get the quaternion out of a matrix and then use the quatToEuler node to convert that quaternion to an euler rotation, which will hold the twist between the matrices.
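The math behind that node pair is easy to sanity-check outside of Maya. This is a pure Python sketch (function names are mine, purely illustrative): build a rotation as a quaternion, keep only the X and W components, renormalize, and recover the twist angle, which is what the decomposeMatrix + quatToEuler combination gives us:

```python
import math

def quat_mult(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def axis_quat(axis, angle):
    """Quaternion for a rotation of `angle` radians around a unit axis."""
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0]*s, axis[1]*s, axis[2]*s)

def twist_x(q):
    """Keep only the W and X components, renormalize, and convert the
    resulting single-axis quaternion back to an angle in radians."""
    w, x = q[0], q[1]
    length = math.hypot(w, x)
    return 2.0 * math.atan2(x / length, w / length)

# 60 degrees of twist around X, followed by a 25 degree swing around Y...
q = quat_mult(axis_quat((0.0, 1.0, 0.0), math.radians(25)),
              axis_quat((1.0, 0.0, 0.0), math.radians(60)))
# ...still decomposes cleanly to the 60 degree twist.
print(round(math.degrees(twist_x(q)), 3))  # 60.0
```

Note how the swing around Y does not pollute the extracted twist, which is exactly why this decomposition is so handy for forearm twist joints.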

Here is the full graph, where the .outputRotateX of the quatToEuler node is the actual twist value.

Matrix twist calculator - full graph


And that is it! As you can see, it is a stupidly simple procedure, but it has proven to give stable results, which in fact are 100% the same as using an ik handle or an aim constraint, but with little to no overhead, since matrix and quaternion nodes are computationally very efficient.

Stay tuned for part 3 from this matrix series, where I will look at creating a rivet by using just matrix nodes.

This post is a part of a three post series, where I will try to implement popular rigging functionalities by only using maya’s native matrix nodes.

Following the Cult of rig lately, I realized I have been very wasteful in my rigs in terms of constraints. I have always known that they are slower than direct connections and parenting, but I thought that was the only way to do broken hierarchy rigs. Even though I did matrix math at university, I never used it in maya, as I weirdly thought the matrix nodes were broken or limited. There was always the option of writing my own nodes, but since I would like to make my rigs as easy as possible for people to use, I would rather keep everything in vanilla maya.

Therefore, when Raffaele used the multMatrix and decomposeMatrix nodes to reparent a transform, I was very pleasantly inspired. Since then, I have tried applying the concept to a couple of other rigging functionalities, such as the twist calculation and rivets, and it has been giving me steadily good results. So, in this post we will have a look at how we can use the technique he showed in the stream to simulate a parent + scale constraint, without the performance overhead of constraints, effectively creating a node based matrix constraint.


Limitations

There are some limitations with using this approach, though. Some of them are not complex to get around, but doing so adds extra nodes to the graph, which in turn leads to performance overhead and clutter. That being said, constraints add to the outliner clutter, so I suppose it might be a matter of preference.


Joint orient

Constraining a joint with jointOrient values will not work, as the jointOrient matrix is applied after the rotation. There is a way to get around this, but it involves creating a number of extra nodes, which add some overhead and, for me, make it unreasonable to use this setup instead of an orient constraint.

If you want to see how we go around the jointOrient issue just out of curiosity, have a look at the joint orient section.

Weights and multiple targets

Weights and multiple targets are also not entirely suitable for this approach. Again, it is definitely not impossible, since we can always blend the output values of the matrix decomposition, but that will also involve an additional blendColors node for each of the transform attributes we need – translate, rotate and scale. And similarly to the previous one, that means extra overhead and more node graph clutter. If there was an easy way to blend matrices with maya’s native nodes, that would be great.

Rotate order

Weirdly, even though the decompose matrix has a rotateOrder attribute, it does not seem to do anything, so this method will work with only the xyz rotate order. Last week I received an email from the maya_he3d mailing list, about that issue and it seems like it has been flagged to Autodesk for fixing, which is great.


The construction of such a node based matrix constraint is fairly simple both in terms of nodes and the math. We will be constructing the graph as shown in the Cult of Rig stream, so feel free to have a look at it for a more visual approach. The only addition I will make to it is supporting a maintainOffset functionality. Also, Raffaele talks a lot about math in his other videos as well, so have a look at them, too.

Node based matrix constraint

All the math is happening inside the multMatrix node. Essentially, we are taking the worldMatrix of the target object and converting it to relative space by multiplying by the parentInverseMatrix of the constrained object. The decomposeMatrix after that is there to break the matrix into attributes which we can actually connect to a transform – translate, rotate, scale and shear. It would be great if we could connect directly to an input matrix attribute, but that would probably create its own set of problems.

That’s the basic node based matrix constraint. How about maintaining the offset, though?

Maintain offset

In order to be able to maintain the offset, we need to just calculate it first and then put it in the multMatrix node before the other two matrices.

Node based matrix constraint - maintain offset

Calculating offset

The way we calculate the local matrix offset is by multiplying the worldMatrix of the object by the worldInverseMatrix of the parent (object relative to). The result is the local matrix offset.
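Here is a Maya-free sanity check of that calculation, using 4x4 row-major matrices in Maya's row-vector convention (so the offset satisfies offset * parentWorld = childWorld). The helper names are mine, and the rigid inverse assumes no scale or shear:

```python
def mat_mult(a, b):
    """Multiply two 4x4 row-major matrices (Maya's row-vector convention)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rigid_inverse(m):
    """Invert a rigid transform (orthonormal rotation, no scale/shear):
    transpose the rotation block and rotate the negated translation."""
    rt = [[m[j][i] for j in range(3)] for i in range(3)]
    t = [-sum(m[3][k] * rt[k][j] for k in range(3)) for j in range(3)]
    return [rt[0] + [0.0], rt[1] + [0.0], rt[2] + [0.0], t + [1.0]]

def translation(x, y, z):
    """A 4x4 matrix holding just a translation, standing in for a worldMatrix."""
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [x, y, z, 1]]

parent_world = translation(1.0, 2.0, 3.0)
child_world = translation(4.0, 6.0, 8.0)

# localOffset = childWorld * parentWorldInverse
offset = mat_mult(child_world, rigid_inverse(parent_world))

# Reapplying the parent's world matrix reproduces the child's world matrix,
# which is exactly what the maintainOffset setup relies on.
print(mat_mult(offset, parent_world) == child_world)  # True
```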

Using the multMatrix node

It is entirely possible to do this using another multMatrix node, doing a getAttr of its output and setting it in the main multMatrix with a setAttr with the type flag set to "matrix". The local multMatrix is then free to be deleted. The reason we get and set the attribute, instead of connecting it, is that otherwise we would create a cycle.

Node based matrix constraint - local matrix offset

Using the Maya API

What I prefer doing, though, is getting the local offset via the API, as it does not involve creating nodes and then deleting them, which is much nicer when you need to code it. Let’s have a look.

import maya.OpenMaya as om

def getDagPath(node=None):
    sel = om.MSelectionList()
    sel.add(node)
    d = om.MDagPath()
    sel.getDagPath(0, d)
    return d

def getLocalOffset(parent, child):
    parentWorldMatrix = getDagPath(parent).inclusiveMatrix()
    childWorldMatrix = getDagPath(child).inclusiveMatrix()

    return childWorldMatrix * parentWorldMatrix.inverse()

The getDagPath function is just there to give us a reference to an MDagPath instance of the passed object. Then, inside the getLocalOffset we get the inclusiveMatrix of the object, which is the full world matrix equivalent to the worldMatrix attribute. And in the end we return the local offset as an MMatrix instance.

Then, all we need to do is set the multMatrix.matrixIn[0] attribute to our local offset matrix. The way we do that is by using the MMatrix's () operator, which returns the element of the matrix specified by the row and column index. So, we can write it like this.

localOffset = getLocalOffset(parent, child)
mc.setAttr("multMatrix1.matrixIn[0]", [localOffset(i, j) for i in range(4) for j in range(4)], type="matrix")

Essentially, we are calculating the difference between the parent and child objects and we are applying it before the other two matrices in the multMatrix node in order to implement the maintainOffset functionality in our own node based matrix constraint.

Joint orient

Lastly, let us have a look at how we can go around the joint orientation issue I mentioned in the Limitations section.

What we need to do is account for the jointOrient attribute on joints. The difficulty comes from the fact that the jointOrient is a separate matrix that is applied after the rotation matrix. That means that all we need to do is rotate, at the end of our matrix chain, by the inverse of the jointOrient. I tried doing it a couple of times via matrices, but I could not get it to work. Then I resorted to writing a node to test how I would do it from within. It is really simple to do via the API, as all we need to do is use the rotateBy function of the MTransformationMatrix class with the inverse of the jointOrient attribute taken as an MQuaternion.

Then, I thought that this should not be too hard to implement in vanilla maya too, since there are the quaternion nodes as well. And yeah there is, but honestly, I do not think that graph looks nice at all. Have a look.

Node based matrix constraint - joint orient

As you can see, what we do is, we create a quaternion from the joint orientation, then we invert it and apply it to the calculated output matrix of the multMatrix. The way we apply it is by doing a quaternion product. All we do after that is just convert it to euler and connect it to the rotation of the joint. Bear in mind, the quatToEuler node supports rotate orders, so it is quite useful.
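The quaternion part of that graph can be sanity-checked with plain Python. In this sketch (illustrative helper names, rotations kept on the X axis only so the product order is immaterial), multiplying the desired rotation by the inverted jointOrient quaternion leaves exactly the value the joint's .rotate should receive:

```python
import math

def quat_mult(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def x_quat(degrees):
    """Quaternion for a rotation of `degrees` around X."""
    half = math.radians(degrees) / 2.0
    return (math.cos(half), math.sin(half), 0.0, 0.0)

def inverse(q):
    """Inverse of a unit quaternion is its conjugate."""
    w, x, y, z = q
    return (w, -x, -y, -z)

joint_orient = x_quat(30.0)   # the joint's jointOrient, as a quaternion
target = x_quat(90.0)         # the rotation the joint should end up with

# Cancel the jointOrient out of the target rotation; what is left
# is the local rotation to feed into the joint's .rotate.
rotate = quat_mult(inverse(joint_orient), target)
print(round(math.degrees(2.0 * math.atan2(rotate[1], rotate[0])), 3))  # 60.0
```

In the actual graph the quatInvert and quatProd nodes do the same cancellation, and quatToEuler converts the result for the .rotate attribute.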

Of course, you can still use the maintainOffset functionality with this method. As I said, though, compared to this setup, a plain orient constraint was performing faster every time, so I see no reason to do this other than keeping the outliner cleaner.

Additionally, I am assuming that there is probably an easier way of doing this, but I could not find it. If you have something in mind, give me a shout.


Using this node based constraint I was able to remove parent, point and orient constraints from my body rig, making it perform much faster than before; the outliner is also much nicer to look at. Stay tuned for parts 2 and 3 from this matrix series, where I will look at creating a twist calculator and a rivet by using just matrix nodes.

The classical rivet was a really popular rigging thing a few years ago (and long before that, it seems). I am by no means a seasoned rigger, but whenever I would look for facial rigging techniques the rivet would keep coming up. What is more, people rarely, if ever, suggested using a follicle to achieve the result, generally because the classical rivet was said to evaluate faster. So, I thought I’d do a maya performance test to compare them.



I will be looking into the performance of a follicle and a classical rivet, both on a NURBS sphere and on a poly sphere. NURBS because I tend to use a lot of ribbons and poly, because it’s a popular feature for attaching objects to meshes.

I will be using Maya 2017’s Evaluation Toolkit to run the performance test, as it gives nice output for each evaluation method, even though I cannot imagine using anything but parallel.

The way the tests are going to work is, I will create two files, each containing the same geometry with 10 rivets. In one file I will use follicles and in the other the classical setup. The deformation on the geometry will just be keyed vertices and it will be identical for each setup, so we can be sure that the only difference between the two files is the riveting setup.

Then, the test will be done in a new scene where I will reference the test file 100 times. For each setup I will run the evaluation manager’s performance test and compare the results.

Okay, let us have a look then.


Classical rivet setup

So, the way this one works is, I just loop from 1 to 10 and create a pointOnSurfaceInfo node with parameterU set to iterator * .1 and parameterV set to .5. Then I plug the output position directly into a locator’s translate attribute. Additionally, the output position, normal vector and tangent vector go into an aimConstraint which constrains the rotation of the locator.

Follicle setup

This one is fairly straightforward, I just created 10 follicles, with parameterU set to iterator * .1 and V to .5.


Bear in mind, EMS refers to serial evaluation and EMP is parallel.

NURBS surface
Classical Rivet
Playback Speeds
    DG  = 13.1694 fps
    EMS = 11.1359 fps
    EMP = 20.7469 fps
Follicle
Playback Speeds
    DG  = 11.3208 fps
    EMS = 12.6263 fps
    EMP = 27.8293 fps

Even though I expected the follicle to be faster, I was surprised by how much. It is important to note that we have 10 * 100 = 1000 rivets in the scene, which is obviously a big number. Therefore, in a more realistic example the difference is going to be smaller, but still, about 7 fps is quite a bit.

What is also quite interesting is that in DG the follicle is slower than the classical rivet. So, the old claim that the classical rivet is faster seems to have been deserved, but parallel changes everything.


Classical rivet setup

So, when it comes to polys, the classical rivet gets a bit more complicated, which I would imagine results in a larger slowdown as well. The way this setup works is, we grab 10 pairs of edges, which in turn produce 10 surfaces through loft nodes. With history maintained, the NURBS surfaces will follow the poly geometry, so we can perform the same rivet setup as before on them.

Follicle setup

On a mesh with proper UVs the follicles are again trivial to set up. We just loop 10 times and create a follicle with the appropriate U and V parameters.

Polygon geometry
Follicle
Playback Speeds
    DG  = 1.7313 fps
    EMS = 3.32005 fps
    EMP = 9.79112 fps
Classical rivet
Playback Speeds
    DG  = 1.05775 fps
    EMS = 1.52022 fps
    EMP = 3.31053 fps

As expected, the follicles are again quite a bit faster. I am saying as expected, as not only do we have the same riveting setup as in the NURBS case, but there are also the edges and the loft which add to the slowdown. I am assuming that is why the classical rivet is slower even in DG.


So, the conclusion is pretty clear – follicle rivets are much faster than classical rivets in the latest maya versions which include the parallel evaluation method.

So, it seems like I have been going crazy with marking menus lately. I am really trying to get the most out of them, and that would not be much if we could only use them in the viewports, so today we are going to look at how we can construct custom marking menus in maya editors.

Custom marking menus in maya editors - node editor

tl;dr: We can query the current panel popup menu parent in maya with the findPanelPopupParent MEL function, and we can use it as a parent to our popupMenu.

So, there are a couple of scenarios that we need to have a look at, as they should be approached differently. Although not strictly necessary, I would suggest you have a look at my previous marking menu posts – Custom marking menu with Python and Custom hotkey marking menu – as I will try not to repeat myself.

Okay, let us crack on. Here are the two different situations for custom marking menus in maya editors we are going to look at.

Modifiers + click trigger

In the viewport these are definitely the easier ones to set up, as all we need to do is create a popupMenu with the specified modifiers – sh, ctl and alt – the chosen button and viewPanes as the parent. When it comes to the different editors, though, it gets a bit trickier.

Let us take the node editor as an example.

If we are to create a marking menu in the node editor, it is a fairly simple process. We do exactly the same as before, but we pass "nodeEditorPanel1" as the parent argument. If you have a node editor open when you run the popupMenu command, you will be able to use your marking menu in there. The catch, though, is that once you close the node editor the marking menu is deleted, so it is not available the next time you open the node editor.

Unfortunately, I do not have a great solution to this problem. In fact, it is a terrible solution, but I wanted to get it out there, so someone can see it, be appalled and correct me.

The second method – Custom hotkey trigger – is much nicer to work with. So, you might want to skip to that one.

What I do is, I create a hotkey for a command that invokes the specific editor (I only have marking menus in the node editor and the viewport) and runs the marking menu code after that. So, for example, here is my node editor hotkey (Alt+Q) runTimeCommand.


import maya.cmds as mc
import maya.mel as mel

# open the node editor first, so there is something to parent to
mel.eval("NodeEditorWindow;")

if mc.popupMenu("vsNodeMarkingMenu", ex=1):
    # the menu already exists, so just clear its items before rebuilding
    mc.popupMenu("vsNodeMarkingMenu", dai=1, e=1)
else:
    mc.popupMenu("vsNodeMarkingMenu", p="nodeEditorPanel1Window", b=2, ctl=1, alt=1, mm=1)

mc.setParent("vsNodeMarkingMenu", m=1)

from vsRigging.markingMenus import vsNodeMarkingMenu

That means that every time I open the node editor with my hotkey I also create the marking menu in there, ready for me to use. As I said, it is not a solution, but more of a workaround at this point. In my case, though, I never open the node editor with anything other than a hotkey, so it kind of works for me.

Then the vsRigging.markingMenus.vsNodeMarkingMenu file is as simple as listing the menuItems.

import maya.cmds as mc

mc.menuItem(l="multiplyDivide", c="mc.createNode('multiplyDivide')", rp="N", i="multiplyDivide.svg")
mc.menuItem(l="multDoubleLinear", c="mc.createNode('multDoubleLinear')", rp="NE", i="multDoubleLinear.svg")
mc.menuItem(l="plusMinusAverage", c="mc.createNode('plusMinusAverage')", rp="S", i="plusMinusAverage.svg")
mc.menuItem(l="condition", c="mc.createNode('condition')", rp="W", i="condition.svg")
mc.menuItem(l="blendColors", c="mc.createNode('blendColors')", rp="NW", i="blendColors.svg")
mc.menuItem(l="remapValue", c="mc.createNode('remapValue')", rp="SW", i="remapValue.svg")

A proper way of doing this would be to have a callback, so every time the node editor gets built we can run our code. I have not found a way to do that, though, other than of course breaking apart maya’s internal code and overwriting it, which I wouldn’t go for.

Luckily, creating a custom marking menu bound to a custom hotkey actually works properly and is fairly easy. In fact, it is very similar to the Custom hotkey marking menu post. Let us have a look.

Custom hotkey trigger

Now, when we are working with custom hotkeys we actually run the initialization of the popupMenu every time we press the hotkey. This means we have the ability to run code before we create the marking menu. Therefore, we can query the current panel and build our popupMenu according to it. Here is an example runTimeCommand, which is bound to a hotkey.

import maya.cmds as mc
import maya.mel as mel

name = "exampleMarkingMenu"

# start clean by deleting any existing version of the marking menu
if mc.popupMenu(name, ex=1):
    mc.deleteUI(name)

# query the parent we should bind our popupMenu to
parent = mel.eval("findPanelPopupParent")

if "nodeEditor" in parent:
    popup = mc.popupMenu(name, b=1, sh=1, alt=0, ctl=0, aob=1, p=parent, mm=1)

    from markingMenus import exampleMarkingMenu
else:
    popup = mc.popupMenu(name, b=1, sh=1, alt=0, ctl=0, aob=1, p=parent, mm=1)

    from markingMenus import fallbackMarkingMenu

So, what we do here is, we start by cleaning up any existing version of the marking menu. Then, we use the very handy findPanelPopupParent MEL function to give us the parent to which we should bind our popupMenus. Having that, we check if the editor we want appears in the name of the parent. I could also compare it directly to a string, but the actual panel has a number at the end and I prefer just checking the base name. Then, depending on which panel I am working in at the moment, I build the appropriate custom marking menu.
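That base-name check is trivial to factor out. Here is a tiny sketch of the decision being made – the function name is mine and the module names are the hypothetical ones from the example above:

```python
def pickMarkingMenu(parent):
    """Pick which marking menu module to import for a given popup parent.

    findPanelPopupParent returns names like "nodeEditorPanel1Window",
    so we match on the base editor name rather than the exact string.
    """
    if "nodeEditor" in parent:
        return "exampleMarkingMenu"
    return "fallbackMarkingMenu"

print(pickMarkingMenu("nodeEditorPanel1Window"))  # exampleMarkingMenu
print(pickMarkingMenu("modelPanel4"))             # fallbackMarkingMenu
```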

Don’t forget that you need to create a release command as well, to delete the marking menu so it does not get in the way when you are not pressing the hotkey. It is a really simple command that I went over in my previous marking menu post.

The obvious limitation here is that we have a hotkey defined and we can’t just do Ctrl+Alt+MMB, for example.


So, yeah, these tend to be a bit trickier than just creating ones in the viewport, but also I think there is more to be desired from some of maya’s editors *cough* node editor *cough*, and custom marking menus help a lot.

So, recently I stumbled upon a djx blog post about custom hotkeys and marking menus in different editors in maya. I had been thinking about having a custom hotkey marking menu, but was never really sure how to approach this, so after reading that post I thought I’d give it a go and share my process.

tl;dr: We can create a runtime command which builds our marking menu and have a hotkey to call that command. Thus, giving us the option to invoke custom marking menus with our own custom hotkeys, such as Shift+W or T for example, and a mouse click.

Disclaimer: I have been having a super annoying issue with this setup, where the “release” command does not always get called, so the marking menu is not always deleted. What this means is that if you are using a modifier like Shift, Control or Alt, sometimes your marking menu will still be bound to it after it has been closed. Therefore, if you are using something like Shift+H+LMB, just pressing Shift+LMB will open it up, so you lose the usual add to selection functionality. Sure, to fix it you just have to press and release your hotkey again, but it definitely gets on your nerves after a while.

If anyone has a solution, please let me know.

I have written about building custom marking menus in Maya previously, so feel free to have a look as I will try to not repeat myself here. There I also talked about why I prefer to script my marking menus, instead of using the Marking menu editor, and that’s valid here as well.

So, let us have a look then.


The first thing we need to do is define a runTimeCommand, so we can run it with a hotkey. That is what happens if you do it through the Marking menu editor and set Use marking menu in to Hotkey Editor, as well.

There are a couple of ways we can do that.

Hotkey Editor

On the right hand side of the hotkey editor there is a tab called Runtime Command Editor. If you go to that one, you can create and edit runTime commands.

Scripting it in Python

If you have multiple marking menus that you want to create, the hotkey editor might seem like a bit of a slow solution. Additionally, when changes need to be made I always find it more intuitive to look at code in my favourite text editor (which is sublime by the way).

To create a runTime command we run the runTimeCommand function, which for some reason does not appear in the Python docs, but I have been using maya.cmds.runTimeCommand successfully.

All we need to provide is a name for the command, some annotation – ann, a string with some code – c and a language – cl.

Here is an example

commandString = "print('Hello from a runTime command')"
mc.runTimeCommand("exampleRunTimeCommand", ann="Example runTime command", c=commandString, cl="python")

Something we need to keep in mind when working with runTime commands is that we cannot pass external functions to them. We can import modules and use them once inside, but we cannot pass a reference to an actual function to the c flag, as we would do with menuItems, for example. That means that we need to pass our code as a string.
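Since the c flag only accepts a string, one way to keep that string readable is to assemble it from a list of lines in plain Python. This is just a sketch – the command body here is a hypothetical press command:

```python
# the c flag of runTimeCommand only accepts a string, so we build the
# whole command body as one, rather than passing a function reference
pressLines = [
    "import maya.cmds as mc",
    "name = 'mmWeightPainting'",
    "if mc.popupMenu(name, ex=1):",
    "    mc.deleteUI(name)",
    "mc.popupMenu(name, b=1, sh=1, aob=1, p='viewPanes', mm=1)",
    "import mmWeightPainting",
]
commandString = "\n".join(pressLines)

# commandString is what would then get passed on to:
# mc.runTimeCommand("mmWeightPainting_Press", c=commandString, cl="python")
print(commandString)
```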

Press and release

Now, that we know how to create the runTimeCommands let us see what we need these commands for.

As I mentioned, they are needed so we can access them by a hotkey. What that hotkey should do is initialize our marking menu, but once we release the key it should get rid of it, so it does not interfere with other functions. Therefore we need two of them – Press and Release.

Let us say we are building a custom hotkey marking menu for weight painting. In that case we will have something similar to the following.

  • mmWeightPainting_Press runTimeCommand – to initialize our marking menu
  • mmWeightPainting_Release runTimeCommand – to delete our marking menu

The way we bind the release command to the release of a hotkey is by pressing the small arrow to the side of the hotkey field.

Custom hotkey marking menu - release command hotkey

The Press command

## mmWeightPainting_Press runTimeCommand
import maya.cmds as mc # Optional if it is already imported

name = "mmWeightPainting"

# delete any old version of the marking menu, so we can rebuild it
if mc.popupMenu(name, ex=1):
    mc.deleteUI(name)

popup = mc.popupMenu(name, b=1, sh=1, alt=0, ctl=0, aob=1, p="viewPanes", mm=1)

import mmWeightPainting

So, essentially what we do is every time we press our hotkey, we delete our old marking menu and rebuild it. We do this, because we want to make sure that our latest changes are applied.

Now, the lower part of the command is where it gets cool, I think. We can store our whole marking menu build – all menuItems – inside a file somewhere in our MAYA_SCRIPT_PATH and then just import it from the runTimeCommand as in this piece of code. What this gives us, is again, the ability to really easily update stuff (not that it is a big deal with marking menus once you set them up). Additionally, I quite like the modularity, as it means we can have very simple runTimeCommands not cluttered with the actual marking menu build. This is the way that creating through the Marking menu editor works as well, but obviously it loads a MEL file instead.

So, literally that mmWeightPainting file is as simple as creating all our marking menu items.

import maya.cmds as mc

mc.menuItem(l="first item")
mc.menuItem(l="second item")
mc.menuItem(l="North radial position", rp="N")

And that takes care of building our marking menu when we press our hotkey plus the specified modifiers and mouse button. What we do not yet have is deleting it on release, so it does not interfere with the other functionality tied to that modifier + click combo. That is where the mmWeightPainting_Release runTimeCommand comes in.

The Release command

## mmWeightPainting_Release runTimeCommand
name = "mmWeightPainting"

if mc.popupMenu(name, ex=1):
    mc.deleteUI(name)
Yep, it is a really simple one. We just delete the marking menu, so it does not interfere with anything else. Essentially, the idea is we have it available only while the hotkey is pressed.


All that is left to be done is to assign a hotkey to the commands. There are a couple of things to have in mind.

If you are using modifiers for the popupMenu command – sh, ctl or alt – then the same modifiers need to be present in your hotkey, as otherwise, even though the runTimeCommand will run successfully, the popupMenu will not be triggered.

In the above example

mc.popupMenu(name, b=1, sh=1, alt=0, ctl=0, aob=1, p="viewPanes", mm=1)

we have specified the sh modifier. Therefore, the Shift key needs to be present in our hotkey.

Also, obviously be careful which hotkeys you overwrite, so you do not end up causing yourself more harm than good.


That’s it, it really is quite simple, but it helps a lot once you get used to your menus. Honestly, trying to do stuff without them feels so tedious afterwards.