September 2017

When it comes to rigging in production, the name of the game is reusability and automation. In that sense rigging is very similar to traditional software development, as we also have to build systems that are easily maintainable, extensible and reusable. There are a few ways to achieve that in rigging, and today I am going to look at some of them. The reason I wanted to write this is that rigging systems are one of the big things I wish I had learned about earlier.

Disclaimer: This post will probably come out very opinionated, and those opinions are based on my own experience, which, really, is not that extensive. Additionally, even though there are going to be informative bits, I am writing this more as a way to share my thoughts than to try to teach anything, as again, I am not really qualified in any way to do that.

Also, please bear in mind that the larger portion of this post is going to be very speculative since I am talking about tools that I have not really used myself.

A rigging system

I just want to briefly go over what I mean by a rigging system. Essentially, anything that takes a model and produces a rig out of it is a rigging system to me. That means if you rig an entire thing manually, you are the rigging system. If you use a tool similar to maya’s HumanIK, that is your rigging system. In other words, any system that, by following a set of instructions, can produce a rig on its own is a rigging system. Beyond just building the actual node networks, a rigging system should also provide easy ways to save and load a bunch of properties and settings such as deformer weights, blendshapes, control shapes, etc.

Rigging system types

With the definition out of the way we can have a look at the types of rigging systems out there.

Auto rigging tool

Disclaimer: This is going to be the most speculative portion of this post, as I have never used one of these solutions, other than having had a brief look at them.

The auto rigging tool is a rigging system which takes care of everything we talked about above by providing you with some sort of a guiding system to define the proportions and often the type of a rig (biped, quadruped, wings, etc.), and then using these guides it builds node networks which become rig components.

Some examples of auto rigging tools are maya’s HumanIK, which I mentioned above, and a popular non-Autodesk one, Rapid Rig.

There are a lot of them online, and they are usually a big part of rigging showreels, as it seems that every rigger who starts learning scripting goes for an auto rigging solution at some point – me included.

Now, the problem I have with this kind of rigging system is the lack of extensibility. I have not yet seen an auto rigging tool with any sort of API. That means that for every rig you build you have only a limited number of available components (that number might be large, but it is still limited). For example, there is probably only one arm component available, and even though there might be many options for how to build that arm, there is a chance you will not find what you are looking for and would like to insert your own logic somewhere in that component – but there is no way to do that.

Mentioning the many options brings me to the next issue I see with auto rigging tools – performance and clutter. The way I see it, the more options you want to support inside a single component, the more clutter you introduce into that component’s logic in order to accommodate them. Additionally, if everything happens behind the scenes, I have no way of knowing what the tool creates other than opening the node editor and having a look at the networks, which, as you can imagine, is not going to be fun on large rigs. That opaqueness scares me, as I would not know about potential node clutter introduced into my scene.

That brings me to my next point about auto rigging tools – the fact that everything is stored baked down in the scene. The auto rigging tool might give you some options for rebuilding parts of the rig after they have been created, but ultimately everything we store is baked into node networks. Yes, we can save our weights and maybe some properties outside of the scene file, but those are things that go on top of the auto rig product; there is no way to store how we actually constructed the rig. So if I need to change the position or orientation of a joint, how do I go about that? What if, god forbid, the proportions of the model have changed? Do I delete everything and rebuild it? And if I have to do that, what happens to the parts of the rig I have added on top of the auto rig – do I need to manually rebuild them as well?

The last thing I want to mention about auto rigging tools is UI. I mean, it is usually bloody horrible. I think it is probably all down to the native UI tools that Maya gives us, which all feel very clunky. They just don’t seem to work for anything as complex as a rigging system. All of the auto rigging tools I have seen make extensive use of tightly packed buttons, checkboxes and text fields in loads of collapsible sections or tabs, which just doesn’t seem to cut it in 2017. Again, I think the main issue is that Maya’s native tools are just not enough to build anything more intuitive. That being said, PySide has been available for a while.

Maya Human IK rigging system UI

So, if you are going to be building an auto rigging tool, please keep the following ideas in mind, as they can improve working with that rigging system a lot:

  • creating some sort of an API for easily extending/modifying the functionality of the tool, mainly by creating new or editing existing components
  • storing information about the actual building of the rig instead of just the baked down version of it, in order to enable you to easily rebuild it when changes need to happen
  • creating a more intuitive UI than what is already out there. A node graph, maybe?
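To make the first of those ideas a bit more concrete, here is a minimal, purely hypothetical sketch of what a component API could look like – none of these names come from an existing tool:

```python
# Hypothetical sketch of a component API for an auto rigging tool.
# All names here are made up for illustration.

COMPONENT_REGISTRY = {}

def register(cls):
    """Class decorator that exposes a component to the build system."""
    COMPONENT_REGISTRY[cls.name] = cls
    return cls

class Component(object):
    """Base class that every rig component derives from."""
    name = "base"

    def __init__(self, guides):
        self.guides = guides

    def build(self):
        """Override this to create the actual node networks."""
        raise NotImplementedError

@register
class Arm(Component):
    name = "arm"

    def build(self):
        # A real implementation would create joints, IK/FK setups, etc.
        # Here we just return the names we would have created.
        return ["%s_jnt" % g for g in self.guides]

arm = COMPONENT_REGISTRY["arm"](["shoulder", "elbow", "wrist"])
print(arm.build())  # ['shoulder_jnt', 'elbow_jnt', 'wrist_jnt']
```

With something like this in place, a studio could register its own components, or subclass the shipped ones, without ever touching the tool itself.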

Rigging framework

Going beyond auto rigging tools, we have rigging frameworks. These are systems which do not necessarily have any high-level components such as arms/legs/spines built into them, but instead provide you with the tools to create such components and save them for later use. The only system of this type that I know of is mGear. Incidentally, it does provide a high-level modular system called Shifter, which gives you everything an auto rigging tool would. The good thing here, though, is that using the actual framework you can build your own modular rigging system and extend and modify it a lot.

Now, since I have never actually used it, take everything I say with a grain of salt, but from what I understand, similarly to an auto rigging tool, you would build everything in the viewport and then save it as a baked down version. I do not know how easy or difficult rebuilding components is, but anything that has been built on top of them would have to be manually rebuilt as well.

What I really like about mGear, though, is the open source aspect of it. The fact that you can grab it and build a rigging system out of it that suits your needs perfectly is amazing.

Guidable modules

Now, this one, I think, is the only system that you can create yourself without any scripting. Even though it might be a bit slower to work with, I think in terms of results you would be able to get everything you would get out of an auto rigging tool.

So, what I mean by guidable modules is, say, storing a rigged IK chain with all of its controls in a file, and bringing it in from that file whenever you need an IK chain in a rig. The rigged chain would have some sort of guides (usually just locators) which can reposition the joints and stretch the chain without actually introducing any values on your controls. The stretch values would also be recalculated so there is no actual stretch on the chain; instead, the modified version becomes the default state.

I know that sounds a bit weird and it probably won’t sit well with many of you, since referencing/importing files is often considered dirty because of all the stuff that gets carried across. That being said, if you clean up after yourself, that would not be an issue.

Additionally, if you are referencing the components in the scene, you can modify the components themselves and the changes would be carried across all of your rigs utilizing that component.

What is more, the same idea can easily be applied to a rigging framework or an auto rigging tool you built yourself, so it removes the issue which arises when changes need to be made.

Rigging framework with modular rigging system through an API

Now, the last rigging system I want to talk about is one that combines aspects of all the previously mentioned ones, but in a way where extensibility, maintainability and reusability are all taken care of. That comes at the expense of not having a UI, having to follow rules and conventions, and requiring thorough programming knowledge.

The user interacts with this rigging system entirely through an API. The rigs are stored as actual building information rather than a baked down version, which means that every time you open the rig it is built on the spot, making changes incredibly easy and non-destructive. And the components are created as actual Python classes, gaining all the benefits (and, unfortunately, the drawbacks) of object oriented programming.

So, the rigging process with this sort of a system is going to be writing code. Of course, we still need to paint weights and store control shapes, but those are easily saved and loaded back in. Here is an example of what a rig build might look like in this system.

loadModel()
loadGuides()

initializeRiggingStructure() # Creates the boilerplate for a rig - top/god node, etc.

spine = bodyCommands.spine(spineChain)
arm = bodyCommands.arm(armChain, spine.end)

... # Build all the components you would want

loadWeights()
loadControlShapes()

The loadGuides() function refers to a file which contains all our guides, similar to the previous rigging system. In this one, though, it is up to you what sort of guides you use. For example, for an arm you would just draw out your arm chain and the module will take it from there.

This is the rigging system that I like using. It feels much more intuitive to me, as I do not feel restricted by anything, be that a UI, pre-built components that I do not have access to, etc. If I want a slightly different module, I can just inherit from the old one and make my changes. If there is a model change, I just need to reposition my guides and run my code.
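As a tiny, hypothetical illustration of that inheritance point (the class and node names here are made up):

```python
# Hypothetical sketch: getting a slightly different module by inheriting
# from an existing one and overriding only the bit that changes.
class Arm(object):
    def build(self):
        return self.chain() + self.controls()

    def chain(self):
        return ["shoulder_jnt", "elbow_jnt", "wrist_jnt"]

    def controls(self):
        return ["arm_ik_ctl"]

class BendyArm(Arm):
    """The same arm, with extra bendy controls layered on top."""
    def controls(self):
        return super(BendyArm, self).controls() + ["upper_bend_ctl", "lower_bend_ctl"]

print(BendyArm().build())
# ['shoulder_jnt', 'elbow_jnt', 'wrist_jnt', 'arm_ik_ctl', 'upper_bend_ctl', 'lower_bend_ctl']
```

The derived module reuses everything from the original and only overrides the one method it cares about, which is exactly why this approach scales so well across many rigs.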

The main downside is that it might take a while for a new person to get used to such a system. Having good documentation and examples would help a lot. Another thing people might feel uncertain about is the complete lack of UI, but again, for me it is liberating not to be constrained by buttons, text fields, etc.

Conclusion

I am very happy with the current rigging system I am using described in the previous section. That being said, though, I cannot help but think of things I would like to see in a rigging system.

For starters, let us go back to UI. Even though I feel great about being able to do whatever I want with the code, having a UI for certain things would be much quicker. Ideally, I would like to have both working at the same time. Whatever I write needs to be reflected in the UI and – the harder bit – whatever I do in the UI needs to be reflected somewhere in my build file, so the next time I build the rig it comes with the changes made from the UI as well. Having a UI modify my code, though, does not sound amazing, so we need a different way of handling that, which could potentially be metadata. The one issue I have with relying too much on metadata is that it is not immediately obvious what is going on.
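One way the metadata idea could work – and this is purely a sketch with made up names – is to have the UI write its edits into a small overrides file that the build applies as a last step, instead of the UI touching the code at all:

```python
import json

# Hypothetical sketch: the coded build produces rig data, and UI edits
# are stored as metadata that gets re-applied on top on every rebuild.
def build_rig():
    # Stand-in for the coded build; returns some per-node settings.
    return {"arm_ik_ctl": {"size": 1.0, "color": "blue"}}

def apply_overrides(rig, overrides):
    for node, attrs in overrides.items():
        rig.setdefault(node, {}).update(attrs)
    return rig

# The UI would write this file out; the build reads it back on rebuild.
ui_metadata = json.loads('{"arm_ik_ctl": {"size": 1.5}}')
rig = apply_overrides(build_rig(), ui_metadata)
print(rig["arm_ik_ctl"])  # {'size': 1.5, 'color': 'blue'}
```

The build file stays the single source of truth for structure, while the metadata file only layers tweaks on top – which is also exactly where the "not immediately obvious what is going on" concern comes from.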

Another thing I would really like to see at some point is some sort of rigging system standard, where riggers around the world could exchange components and general bits of their rigs with each other. To be honest, though, I am both excited and worried about something like this, as introducing a standard might significantly hinder innovation.

The big thing that lies in the future, though, is getting a higher level of interactivity while rigging. Complex builds using the rigging system from the last section can take minutes, which means that for every single change I make in the guides file or the build script, I need to wait a while to actually see the result. That makes the process a lot more obscure when you just need to keep changing values in order to hit the right ones. Imagine, though, that all of that building happened in real time. Say I have the guides open in one window, the build file in my text editor, and the product in another window. Ideally, moving something in my guides file would trigger a rebuild of the “dirtied” portion of the build, with the changes applied in the third window without actually deforming my model.

I am saying this lies in the future, though aspects of it are already taken care of by the guidable components method described above. That being said, that full level of interactivity is what I would ideally like to achieve.
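The “dirtied” rebuild idea can be sketched in plain Python as a tiny dependency graph (the step names here are invented for illustration):

```python
# Hypothetical sketch: each build step lists what it depends on, and a
# change only marks the affected steps for rebuilding.
DEPENDENCIES = {
    "spine": ["guides"],
    "arm": ["guides", "spine"],
    "weights": ["arm", "spine"],
}

def dirtied(changed):
    """Return every step that needs a rebuild after `changed` is touched."""
    dirty = {changed}
    grew = True
    while grew:
        grew = False
        for step, deps in DEPENDENCIES.items():
            if step not in dirty and dirty.intersection(deps):
                dirty.add(step)
                grew = True
    return dirty

# Moving a spine guide would rebuild the spine, the arm attached to it
# and the weights, but nothing else.
print(sorted(dirtied("spine")))  # ['arm', 'spine', 'weights']
```

A real-time version would simply run the build functions for exactly this dirty set whenever a guide moves, instead of rebuilding the whole rig.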

As I said in the beginning, a lot of these are just my own speculations, which means that I am still trying to figure most of this out. That is why I would love to hear your tips, opinions and ideas on rigging systems, so please do share them!

If you have been reading bindpose for a while and have seen my marking menu posts, you probably know that I am very keen on getting my workflow as optimized as possible. I am a big fan of handy shelves, marking menus, hotkeys, custom widgets, etc. The way I see it, the closer and easier to access our tools are, the quicker we can push the rig through. That is why today we are having a look at using PySide to install a global hotkey in Maya (one that works in all windows and panels) in a way that does not break any of the existing functionality of that hotkey (hopefully).

If you have not used PySide before, do not worry – our interaction with it will be brief and pretty straightforward. I am very new to it myself. That being said, I think it is a great library to learn, and much nicer and more flexible than the native Maya UI toolset.

Disclaimer: The way I do this is very hacky and dirty and I am sure there must be a nicer way of doing this, so if you have suggestions please do let me know, so I can add it both to my workflow and to this post.

What we want to achieve

So, essentially, all I want to do here is install a global hotkey (PySide calls them shortcuts) on the CTRL + H combination that would work in all of Maya’s windows and panels as you would expect it to – Hide selected – but if we are inside the Script editor it would clear the history instead.

Some of you might think we can easily do this without PySide, just using Maya’s hotkeys, but the tricky bit is that Maya’s hotkeys do not function when your last click was inside the Script editor’s text field or history field. That means the hotkey gets triggered only if you click somewhere on the frame of the Script editor, which obviously is not nice at all.

Achieving it

So, let us have a look at the full code first and then we will break it apart.

from functools import partial
from maya import OpenMayaUI as omui, cmds as mc

try:
    from PySide2.QtCore import *
    from PySide2.QtGui import *
    from PySide2.QtWidgets import *
    from shiboken2 import wrapInstance
except ImportError:
    from PySide.QtCore import *
    from PySide.QtGui import *
    from shiboken import wrapInstance


def _getMainMayaWindow():
    mayaMainWindowPtr = omui.MQtUtil.mainWindow()
    # Note: long() exists only in Python 2; in Python 3 based versions of Maya use int() instead
    mayaMainWindow = wrapInstance(long(mayaMainWindowPtr), QWidget)
    return mayaMainWindow


def shortcutActivated(shortcut):
    if "scriptEditor" in mc.getPanel(wf=1):
        mc.scriptEditorInfo(clearHistory=1)
    else:
        shortcut.setEnabled(0)
        e = QKeyEvent(QEvent.KeyPress, Qt.Key_H, Qt.CTRL)
        QCoreApplication.postEvent(_getMainMayaWindow(), e)
        mc.evalDeferred(partial(shortcut.setEnabled, 1))


def initShortcut():
    shortcut = QShortcut(QKeySequence(Qt.CTRL + Qt.Key_H), _getMainMayaWindow())
    shortcut.setContext(Qt.ApplicationShortcut)
    shortcut.activated.connect(partial(shortcutActivated, shortcut))

initShortcut()

Okay, let us go through it bit by bit.

Imports

We start with a simple import of partial, which is used to create a callable reference to a function, including arguments. Then from maya we import the usual cmds, but also OpenMayaUI, which we use to get a PySide reference to Maya’s window.

Then the PySide import might look a bit confusing with that try/except block, but the only reason it is there is that between Maya 2016 and Maya 2017 the PySide version was switched, so the imports had to change as well. What we do is try to import from PySide2 (Maya 2017), and if it cannot be found we fall back to the imports from PySide (Maya 2016).

Getting Maya’s main window

Even though Maya’s UI is built entirely with Qt (PySide is a Python wrapper around Qt), the native elements are not directly usable with PySide functions. In order to interact with these native bits we need to find a PySide reference to them. For our hotkey example we need only the main window, but depending on what you are trying to do you might have to iterate through children to find the UI element you are looking for. This _getMainMayaWindow function has become boilerplate code for me, and I always copy and paste it together with the imports.

The way it works is, using Maya’s API we get a pointer to the address where Maya’s main window lives in memory – that is the omui.MQtUtil.mainWindow() function. Then, using that pointer and the wrapInstance function, we create a PySide QWidget instance of our window. That means we can run any QWidget method on Maya’s main window. In our hotkey example, though, we only need it to bind the hotkey to.

The logic of the hotkey

The shortcutActivated function is the one that is going to get called every time we press the hotkey. It takes a QShortcut object as an argument, but we will not worry about it just yet. All we need to know is that this object is what calls our shortcutActivated function.

It is worth mentioning that this function is going to get called without giving Maya a chance to handle the event itself. So, that means that if we have nothing inside this function, pressing CTRL + H will do nothing. Therefore, we need to make sure we implement whatever functionality we want inside of this function.

So, having a look at the if statement, you can see that we are just checking if the current panel with focus – mc.getPanel(wf=1) – is the Script editor. That will return True if we have last clicked either on the frames of the Script editor windows or anywhere inside of it.

Then, obviously, if that is the case we just clear the Script editor history.

If it returns False, though, it means that we are outside of the Script editor so we need to let Maya handle the key combination as there might be something bound to it (In the case of CTRL+H we have the hiding functionality which we want to maintain). So, let us pass it to Maya then.

As I said earlier, Maya does not get a chance to handle this hotkey at all; it is entirely handled by PySide’s shortcut. So in order to pass it back to Maya, we disable our shortcut and simulate the key combination again, so Maya can do its thing. Once that is done, we re-enable our shortcut so it is ready for the next time we press the key combination. That is what the following snippet does.

shortcut.setEnabled(0)
e = QKeyEvent(QEvent.KeyPress, Qt.Key_H, Qt.CTRL)
QCoreApplication.postEvent(_getMainMayaWindow(), e)
mc.evalDeferred(partial(shortcut.setEnabled, 1))

Notice we are using evalDeferred as we are updating a shortcut from within itself.

Binding the function to the hotkey

Now that we have all the functionality ready, we need to bind it all to the key combination of our choice – CTRL + H in our example. So, we create a new QShortcut instance, which receives a QKeySequence and parent QWidget as arguments. Essentially, we are saying we want this key combination to exist as a shortcut in this widget. The widget we are using is the main maya window we talked about earlier.

Then, we use the setContext method of the shortcut to extend its functionality across the whole application, using Qt.ApplicationShortcut as an argument. Now the shortcut is activated whenever we press the key combination while our focus is in any of Maya’s windows.

Lastly, we just need to specify what we want to happen when the user has activated the shortcut. That is where we use the activated signal of the shortcut (more info on signals and slots) and we connect it to our own shortcutActivated function. Notice that we are using partial to create a callable version of our function with the shortcut itself passed in as an argument.

And that’s it!

Conclusion

Hotkeys, marking menus, shelves, custom widgets and everything else of the sort is always a great way to boost your workflow and be a bit more efficient. Spending some time to build them for yourself in a way where you can easily reproduce them in the next version of Maya or on your next machine is going to pay off in the long run.

I hope this post has shown you how you can override maya’s default hotkeys in some cases where it would be useful, while still maintaining the default functionality in the rest of the UI.

If you know of a nicer way of doing this, please do share it!

Today, I am going to share a really quick tip for achieving uniform spacing along a curve.

Disclaimer: If you are not familiar with using the API, worry not, we are looking at a very simple example and I will try to explain everything, but it also might be a good idea to get some understanding of how it all functions. A good place to start is Chad Vernon’s Introduction to the API.

Very often in rigging we need to use curves, and in quite a lot of those cases we need uniformly distributed positions along them. A simple example is creating controls along a curve. Chances are you would want them to be as uniformly distributed as possible, but to get that using only the parameter along the curve, you would need a perfectly uniform curve that also matches the actual curvature. Achieving that would take a lot of rebuilding, knot inserting and tweaking.

For another tip on rigging with curves have a look at my post about getting a stable end joint when working with IK splines.

I suppose that if you are doing it by hand then you can easily tweak the position along the curve and eyeball the distances between them to be roughly equal, but it sounds like too much hassle to me and also, more often than not, you would want to have that automated as I could imagine it being integral to a lot of rig components.

Let us have a look then!

The issue

So, I am sure everyone has run into the situation where they’ve wanted to create a few objects positioned uniformly along a nurbsCurve or a nurbsSurface, but they get this.

Getting a uniform space along a curve - example of non-uniform spacing on a nurbsSurface

Notice how much larger the gap between the joints is on the left-hand side than on the right. The reason is that the distance between the isoparms is not equal throughout the surface, but the parameter difference is. That means, no matter how much we stretch and deform the surface, the parameter difference between the spans is always going to be the same – .25 in our example (1.0 / spansU).

Getting a uniform space along a curve - example of non-uniform spacing on a nurbsSurface with drawover

That discrepancy between the parameter space and the 3D space is what causes these non-uniform positions.

Getting uniform positions along a curve

So now that we know that, we can figure out that the way to get a reliable position is to find a relationship between the 3D space and the parameter space. That is where the API’s MFnNurbsCurve comes in handy.

The 3D space information that we are going to use is the length of the curve, as we know that is an accurate representation of distance along the curve. If you have a look at the available methods in the MFnNurbsCurve class, you will find findParamFromLength. Given a distance along the curve, this function gives us the parameter at that distance.

Example

Let us consider the following curve.

Getting a uniform spacing along a curve - example curve with non-uniform CVs

Let us position some joints along the curve using distances only based on the parameter.

from maya import cmds as mc

for i in range(11):
    pci = mc.createNode("pointOnCurveInfo")
    mc.connectAttr("curve1.worldSpace", pci + ".inputCurve")
    mc.setAttr(pci + ".parameter", i * .1)
    jnt = mc.createNode("joint")
    mc.connectAttr(pci + ".position", jnt + ".t")

All we do here is iterate 11 times and create a joint on the curve at the position of the parameter equal to iterator * step, where the step is 1.0 / (numberOfJoints - 1) – .1 in our example.

Getting a uniform spacing along a curve - Example of non-uniform spacing on a curve using just the parameter

As expected, the non-uniform distance between the CVs results in an also non-uniform spacing of the joints.

Let us try a different approach then. We will get a reference to an API instance of our curve, and using the above-mentioned function we will get parameters based on actual distance along the curve, hence getting a uniform distribution.

from maya import OpenMaya as om, cmds as mc

def getDagPath(node=None):
    sel = om.MSelectionList()
    sel.add(node)
    d = om.MDagPath()
    sel.getDagPath(0, d)
    return d

crvFn = om.MFnNurbsCurve(getDagPath("curveShape4"))

for i in range(11):
    parameter = crvFn.findParamFromLength(crvFn.length() * .1 * i)
    point = om.MPoint()
    # Ask for the point in world space (the default is object space)
    crvFn.getPointAtParam(parameter, point, om.MSpace.kWorld)
    jnt = mc.createNode("joint")
    mc.xform(jnt, t=[point.x, point.y, point.z])

So, the getDagPath function takes a name of a node and returns an MDagPath instance of that node, which we need in order to create the MFnNurbsCurve instance. The MDagPath is used for many other things in the API, so it is always a good idea to have that getDagPath function somewhere where you can easily access it.

Notice we are passing the curve shape node, because if we use the curve4 transform we will not be able to create the MFnNurbsCurve instance.

Having that MFnNurbsCurve, we iterate 11 times and following the same logic for getting a position along the curve as before – iterator * step – we get the parameter at that position, using the findParamFromLength method.

Now that we know the parameter we could still use the pointOnCurveInfo as we did before, but considering we are already working in the API we might as well get all the data from there. So, using the getPointAtParam method we can get a world space position of the point on the curve at that parameter.

Notice however that we are first creating an MPoint and we are then passing it to the getPointAtParam function to populate it.

And here is the result.

Getting a uniform spacing along a curve - example of uniformly spaced joints along a curve using the MFnNurbsCurve from the Maya API

Using the same approach to get uniform positions on a surface

So, all that nurbsCurve business is great, but how can we apply the same logic to a nurbsSurface? Unfortunately, MFnNurbsSurface does not have any method resembling findParamFromLength, but luckily we can always create a curve from a surface.

So in order to get uniform spacing along a nurbsSurface, what I usually do is create a nurbsCurve from that surface using the curveFromSurfaceIso node, find the accurate parameters with the method described above, and use those on the surface itself.

While writing this, I realized the same approach could maybe be used to get a uniform representation of the surface itself, by extracting curves from the surface and using them to calculate new, uniformly spaced CVs for it. It seems like we might lose a lot of the curvature of the surface, but it also seems promising, so I will definitely look into it.

Conclusion

Using curves and surfaces is something that I did not do a lot of in the beginning of my rigging path, but obviously they are such an integral part of rigging, that it is very important to be able to work with them in a reliable and predictable fashion. Thus, this tip has helped me a lot when building bits of my rigging system and I really hope you find it valuable in your work as well.

Additionally, I would like to reiterate how powerful a tool the API is, and I would definitely suggest that anyone not really familiar with it take the plunge and start learning it by using it. The major benefits are not only functional ones (like the one described in this post), but also performance ones, as the API is much faster than anything done through maya.cmds.

So, painting skin weights. It is a major part of our rigging lives and sadly one of the few bits, together with joint positioning, that we cannot yet automate – though in the long run machine learning will probably get us 99% of the way there. Until then, though, I thought I would share some of my tips for painting skin weights with Maya’s native tools, since whenever I learned one of these I felt stupid for not finding it out earlier as, more often than not, it was just so simple.

I am sure a lot of you are familiar with these, but even if you learn just a single new idea today, it might boost your workflow quite a bit. Additionally, I know that a lot of you are probably using ngSkinTools, and literally everyone I know who works with it says they cannot imagine going back. So I am sure that some of the things I am going to mention are probably already taken care of by ngSkinTools, but if you, like me, have not had the chance to adopt it yet, you might find these helpful.

I am going to list these in no particular order, but here is a table of contents.

Contents

  1. Simplifying geometries with thickness and copying the weights
  2. Using simple proxy geometry to achieve very smooth weights interpolation quickly
  3. Duplicate the geometry to get maya default bind on different parts
  4. Copy and paste vertex weights
  5. Use Post as normalization method when smoothing
  6. Move skinned joints tool
  7. Reveal selected joint in the influence list
  8. Some handy hotkeys
  9. Average weights
  10. Copy and paste multiple vertex weights with search and replace
  11. Print weights

So with that out of the way, let us get on with it.

Simplifying geometries with thickness and copying the weights

This one comes in very handy when we are dealing with complex double-sided geometries (ones that have thickness). The issue with them is that when you are painting one side, the other one is left unaffected, so as soon as an influence object transforms the two sides intersect like crazy. That is often the case with clothes and wearables in general.

The really easy way to get around this is to
1. Make a copy of the geometry
2. Remove the thickness from it (with good topology it is as simple as selecting the row of faces which creates the thickness and deleting it together with one side of the geo)
3. Paint the weights on that one
4. Copy the weights back to the original geometry

Painting skin weights tips - Using a one sided proxy geometry when working with thickness

Now, a really cool thing that I had not thought of until recently is that even if I have started painting some weights on the double-sided geometry to begin with, I can maintain them by copying the weights from the original geometry to the simplified one before painting on it, so I have a working base.

That means that if I have managed to paint some weights on a double-sided geometry that kind of work, but the two sides are not behaving 1 to 1, I can create a simplified geo, copy the weights from the original to the simplified one and then copy them back to get the 1 to 1 behaviour I am looking for.
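For anyone curious what the closest point copy is doing conceptually, here is a rough sketch in plain Python. The flat position list and the per-vertex influence dicts are my own hypothetical layout for illustration, not maya's actual data structures:

```python
import math

def closest_point_copy(src_positions, src_weights, dst_positions):
    """For each destination vertex, copy the weights of the nearest
    source vertex.  src_weights maps vertex index -> {influence: value}."""
    dst_weights = {}
    for i, pos in enumerate(dst_positions):
        # Find the source vertex closest to this destination vertex.
        nearest = min(
            range(len(src_positions)),
            key=lambda j: math.dist(pos, src_positions[j]),
        )
        dst_weights[i] = dict(src_weights[nearest])
    return dst_weights
```

This is why the trick works in both directions: copying original to proxy gives you a working base, and copying proxy to original snaps both sides of the thickness to the same nearby weights.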

Using simple proxy geometry to achieve very smooth weights interpolation quickly

This one is very similar to the first one, but I use it all the time and not only on double-sided geometries.

Very often there are geometries that have some sort of detail modeled into them which makes it hard to paint smooth weights around it.

Consider the following example. Let us suppose that we need this geometry to be able to stretch smoothly when using the .translateX of the end joint.

Tips for painting skin weights in maya - Using a simple geometry to copy weights to models which are hard to smooth weights for.

It doesn’t look great with default skinning, and even if I try to block in some weights and smooth them, it is likely that maya won’t be able to interpolate them nicely. To get around it, I’d create a simple plane with no subdivisions so I can have a very nice smooth interpolation from one edge to the other.

Tips for painting skin weights - Using a simple plane without subdivisions to achieve a smooth weights interpolation for copying to complex geometries.

Copying this back to the initial geometry gives us this.

Tips for painting skin weights - Smooth skinned complex geometry using weights from a simple plane.

Very handy for mechanical bits that have some detail in them and also need to be stretched (happens very often in cartoon animation).

Duplicate the geometry to get maya default bind on different parts

So, very often I have to paint the weights on part of a geometry to a bunch of new joints while maintaining the existing weights on the rest of it. More often than not, I would be satisfied with maya’s default weights after a bind, but obviously if I rebind the whole thing it will obliterate my existing weights.

What I do in such cases is make a copy of the geometry and smooth bind it to only the new joints. Then I select the vertices on the original geometry that comprise the part I want the new influences in, and I use Copy skin weights to copy from the duplicated geometry to the selected vertices. If the part is actually separated from the rest of the geometry, that should do it, but if it is more of an organic shape, there is going to be some blending of the new weights with the ones surrounding them.

I could imagine, though, that having the ability to have layers and masks on your skin weights would make this one trivial.

Copy and paste vertex weights

I am guilty of writing my own version of this tool simply because I did not know it already exists. Basically, what you can do is select a vertex, use the Copy vertex weights command, then select another one (or more than one) and use the Paste vertex weights command to paste them. It works across geometries as well.

A cool thing about the tool that I wrote to do this is that I added a search and replace feature that applies the weights to the correspondingly renamed joints. For example, if I am copying a vert from the left arm and I want to paste it on the right, I would set my replacement flags to swap “L_” with “R_”.
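The search and replace idea is simple to sketch in plain Python. The weight dict layout (influence name to value) is a hypothetical one I am using for illustration:

```python
def remap_weights(weights, search, replace):
    """Return a copy of a vertex's weights with influence names remapped,
    e.g. search='L_', replace='R_' to paste left-side weights onto the
    mirrored joints on the right."""
    return {name.replace(search, replace): value
            for name, value in weights.items()}
```

Influences whose names do not contain the search string (a spine joint, say) pass through untouched, which is exactly what you want when mirroring.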

Use Post as normalization method when smoothing

So, I have met both people who love and people who hate Post. I think the main reason people dislike it is that they do not feel comfortable with their weights being able to go above 1.0, but I have to say that sometimes it is very handy, especially for smoothing. Everyone knows how unpredictable maya’s interactive smoothing is, and that is understandable, since in a lot of cases it is not immediately obvious where the remaining weight should go.

Smoothing on Post is 100% predictable, which I think is the big benefit. The way it works is that it smooths out a joint’s influence by itself, without touching any of the other weights. That means the weights no longer sum to 1.0, but instead of verts shooting off into oblivion, Post normalizes them for the preview. That is also why it is not recommended to leave skinClusters on Post: the weights would be normalized at deformation time, which is slower.
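My understanding of why this is predictable, sketched in plain Python (the per-vertex weight dicts and the neighbour map are hypothetical stand-ins for maya's internal data): the smooth touches exactly one influence, and normalization is deferred to display time.

```python
def smooth_influence(weights, neighbours, influence):
    """Average one influence's weight with its neighbours' values,
    leaving every other influence untouched (so per-vertex sums may
    drift away from 1.0 -- that is what Post allows)."""
    smoothed = {}
    for vtx, w in weights.items():
        vals = [weights[n].get(influence, 0.0) for n in neighbours[vtx]]
        vals.append(w.get(influence, 0.0))
        smoothed[vtx] = dict(w, **{influence: sum(vals) / len(vals)})
    return smoothed

def post_normalized(vertex_weights):
    """What Post shows you: weights scaled so they sum to 1.0."""
    total = sum(vertex_weights.values())
    if not total:
        return dict(vertex_weights)
    return {inf: v / total for inf, v in vertex_weights.items()}
```

Because `smooth_influence` never redistributes weight between joints, flooding one influence can never disturb the others, which is where the predictability comes from.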

So more often than not, my workflow for painting weights is to block in some harsh rigid weights, then switch to Post and go through the influences one by one, flooding them with the Smooth paint operation once or twice.

Move skinned joints tool

I am not sure which version of maya this tool came in, but I learned of it only recently. Essentially, you can select a piece of geo (or a joint) and run the Move skinned joints tool; then you can transform the joint however you like, or even change the inputs going into it, without affecting the geometry. You do have to be careful not to change the tool or the selection, though, as that would exit the Move skinned joints tool. Ideally, any changes other than just moving/rotating the joints about should be prepared in the script editor beforehand, ready to run.

I would not recommend using this for anything other than testing out different pivot points. Using it for actual positioning in the final skinCluster feels dirty to me.

Reveal selected joint in the Paint skin weights tool influence list

Only recently I found out what this button does.

Paint skin weights tool - reveal selected joint in influence list

It scrolls the list of influences to reveal the joint that we have selected, which is absolutely brilliant! Previously, I hated how, whenever I got out of the Paint skin weights tool and then back into it, the treeView was always scrolled to the top of the list. Since the last selection is maintained, pressing that button will always get you back to where you left off. Even better, echoing all commands gives us the following line of MEL, which we can bind to a hotkey.

artSkinRevealSelected artAttrSkinPaintCtx;

Some handy hotkeys

I learned about some of these way too late, which is a shame, because since then I have been using them constantly and the speed increase is immense. I hate navigating my mouse to the tool options just to change a setting or a value.

  • CTRL + ALT + C – Copy vertex weights
  • CTRL + ALT + V – Paste vertex weights
  • N + LMB (drag) – Adjust the value you are painting with
  • U + LMB – Marking menu to change the current paint operation (Replace, Add, etc.)
  • ALT + F – Flood surfaces with current values

For more of these, head over to the Hotkey editor, then in the Edit hotkeys for combobox choose Other items and open up the Artisan dropdown.

From here on, I have included some of the functionalities that I have written for myself, but sadly the code is too messy to be shared. Luckily, it is not hard at all to write your own (and it will probably be much better than mine), but if you are interested, do let me know and I can clean it up and share it at some point.

Average weights

This one I use a lot. It goes through a selection of verts, calculates the average weight for all influences, then goes through the selection once more and applies that averaged weight. Essentially, this results in a rigidly transforming collection of verts. Stupidly simple, but very useful when rigging mechanical bits, which should not deform. I have also used it in the past on different types of tubing and ropes where there are bits (rings, leaves, etc.) that need to follow the main deformation but not deform themselves.
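The averaging itself is a few lines of plain Python. As before, the vertex-to-weights dict layout is a hypothetical one for illustration:

```python
def average_weights(selected):
    """Average the weights of a selection of verts across all influences,
    then assign that same averaged weight back to every vert, so the
    whole selection transforms rigidly.
    `selected` maps vertex index -> {influence: value}."""
    count = len(selected)
    avg = {}
    for w in selected.values():
        for inf, value in w.items():
            avg[inf] = avg.get(inf, 0.0) + value / count
    # Every selected vert gets an identical copy of the average.
    return {vtx: dict(avg) for vtx in selected}
```

Since every vert ends up with identical weights, the selection can only move as one rigid piece, which is exactly the behaviour you want for rings on a rope or panels on a mechanical part.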

Copy and paste multiple vertex weights with search and replace

In addition to the above-mentioned copy and paste vertex weights, I have written a simple function that copies a bunch of vertex weights and then pastes them onto the same vertex IDs on a new mesh. It is not very often that we have geometries that are exact copies of each other, but when we do, this tool saves me a lot of time, because I can skin just one of them, copy the weights for all verts and then paste them onto the other geometry, using the search and replace to adjust for the new influences.

Comes in particularly handy for radially positioned geometries where mirroring will not help us a lot.
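A minimal sketch of the batch version, assuming weights are stored per mesh as vertex-ID to influence-dict mappings (again a hypothetical layout, not maya's own):

```python
def paste_by_vertex_id(src_mesh_weights, search, replace):
    """Produce weights for a duplicate mesh: the same vertex IDs get the
    same values, with influence names remapped on the way
    (e.g. 'front_' -> 'back_' for radially placed copies)."""
    return {
        vtx: {name.replace(search, replace): value
              for name, value in weights.items()}
        for vtx, weights in src_mesh_weights.items()
    }
```

Because the verts are matched purely by ID, this only works on true duplicates with identical topology, which is precisely the radial-copy case where mirroring falls short.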

Print weights

Quite often I’d like to debug why something is deforming incorrectly, but scrolling through the different influences can get tedious, especially if you have a lot of them. So I wrote a small function that finds the weights on the selected vertex and prints them for me.

This is the kind of output I get from it.

#####################################
joint2 : 0.635011218122
joint1 : 0.364988781878
#####################################
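A function producing that kind of output could look something like this sketch, taking an already-gathered influence-to-value dict (how you query it out of the skinCluster is up to you):

```python
def print_weights(vertex_weights):
    """Print a vertex's non-zero influences, largest weight first,
    between two separator lines."""
    print("#" * 37)
    for influence, value in sorted(vertex_weights.items(),
                                   key=lambda item: -item[1]):
        if value > 0.0:
            print("{} : {}".format(influence, value))
    print("#" * 37)
```

Sorting by weight and dropping the zeros keeps the printout readable even on verts with many influences.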

As I said, there is a lot of room for improvement. It works only on a single vert at the moment, but I could imagine it being really cool to see multiple verts in a printed table, similar to what you would get in the component editor.

What would be even cooler would be to use PySide to print them next to your mouse pointer.

Conclusion

Considering that we spend such a big chunk of our time painting weights, we should do our best to be as efficient and effective as possible. That is why I wanted to share these tips, as they have helped me improve my workflow immensely, and I hope you find some value in them as well.