As someone who's done a lot of Blender work, as well as Maya and integrations with Unity and Unreal Engine, I still think there's quite a long way to go before Blender is a good option for game development. It's free and it mostly works, but it's very painful when it comes to animation, texturing, and detail work.
So yeah, the last three are paid and not cheap. Blender can do all three, but not as well. I hope that with this additional money the Blender Foundation can hire more people to close these gaps. In 2.8 they've improved on several fronts, but they are still quite behind in those departments.
Last but not least, they should put more effort into their keybindings. They've been monkey-patching them in 2.8 with the "industry-compatible" keymap, but when you use it, a bunch of other things stop working.
Datapoint: I've had/have a really good experience with Blender. There are countless open-source resources to fill any gaps in the export pipeline, not to mention the relatively easy Python API that gives you access to all of the scene's data.
Blender's Python API, although cleaner and better organized than Maya's, only offers a sliver of Maya's functionality. Maya has a standard maya.cmds API for scripting, which works similarly to Blender's Python API. Maya also has a separate library, maya.api.OpenMaya, which is a wrapper around its C++ plugin API. Blender doesn't offer anything like this.
I have the same setup as you, except rather than use Maya for animation I use the free UE4Tools plugin for Blender which works with 2.79.
I had pretty good success with it, and wrote a Python addon which builds on top to retarget mocap data from the Perception Neuron suit to the UE4 Skeleton.
The addon script is ue4-neuron-mapper.py. Only tested with 2.7.
You need to enable the addon as per usual along with UE4 Tools, and then once you have a model loaded and parented to the UE4 rig, and an Axis Neuron animation imported, it should be fairly easy to use. There is a "Bake Animation" control in there somewhere that should bake the animation out to an animation strip. I think you have to specify the name of the source rig (ie. the raw skeleton data imported from Axis Neuron), and there are buttons to enable/disable transferring finger/foot data as well.
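For anyone curious what the core of name-based retargeting boils down to, here's a minimal sketch in plain Python. The bone names, the mapping table, and the `retarget` helper are all hypothetical stand-ins, not the addon's actual code:

```python
# Minimal sketch of name-based retargeting: copy each source bone's
# rotation onto the matching target bone. All bone names here are
# hypothetical; the real addon's mapping table is different.
SOURCE_TO_UE4 = {
    "Hips": "pelvis",
    "Spine": "spine_01",
    "LeftHand": "hand_l",
    "RightHand": "hand_r",
}

def retarget(source_pose, transfer_fingers=True):
    """source_pose: {bone_name: rotation}; returns a pose for the UE4 rig."""
    target_pose = {}
    for src, rot in source_pose.items():
        if not transfer_fingers and "Hand" in src:
            continue  # mirrors the addon's enable/disable toggles
        dst = SOURCE_TO_UE4.get(src)
        if dst is not None:  # bones with no mapping are skipped
            target_pose[dst] = rot
    return target_pose

pose = {"Hips": (0.0, 0.1, 0.0), "LeftHand": (0.2, 0.0, 0.0)}
print(retarget(pose))                          # both bones mapped
print(retarget(pose, transfer_fingers=False))  # hand bones skipped
```

The real thing of course deals with rest-pose differences and bone-roll conventions, which is where most of the accuracy problems (hands especially) come from.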
I haven't used it in a couple of years however, so if it doesn't work feel free to hit me up with questions. It was just a quick hack, very rough and ready. No warranty :-)
Also any patches which improve the accuracy of the retargeting (hands can still be a problem in general) are very welcome!
This quick test clip I did the weekend I wrote it demonstrates the current quality of the retargeting, although it is rendered in Blender and not UE4:
EDIT: it's also worth noting that I believe the Perception Neuron people released a new version of their UE4 plugin a week or two ago. It might be worth checking whether they've improved the workflow since the previous version. I wanted the ability to edit the animations directly in Blender, which is why I wrote this plugin.
I'm curious why you would use Blender over Maya for modeling if you have both?
I tried Blender a decade ago, but since I had access to Maya & 3D Studio Max I didn't give it much of a chance; the UI seemed more confusing & the learning material available was lacking.
I also prefer blender over maya for hard surface modeling (sculpting and animation still needs work IMO, and rigging has a better plugin ecosystem in Maya). Here’s my reasoning:
0) Blender is much more stable than Maya. Also it loads more quickly and is less than 10% the filesize. Although it is quite powerful, Maya is still very bloated with decades of technical debt and legacy baked in. Also blender is OSS which makes me prefer it on principle if I can use it.
1) Blender’s interface has a higher “skill ceiling” — it reminds me of Vim in that most common operations are a few single-key presses away. For example, “rotate the selected object 45 degrees around the x axis” can be executed via R->4->5->X->Enter. Many of these kinds of basic operations require the use of a mouse in Maya or a MEL expression. Of course, this also raises the “skill floor” of blender because you have to remember more key combos. However these days the UI is pretty discoverable (even before Blender 2.8).
2) For basic hard surface UV unwrapping, I like the workflow in blender much better. The interface is the same in 2D as in 3D, and the key combos all carry over, which I really like. Maya has more automatic UV unwrapping options, but I rarely reach for those anyway.
3) 100% of blender’s interface is scriptable. You can literally hover over any button and the corresponding blender Python code to execute that button will show in a tooltip. This makes it easy to automate basically anything and has empowered a great community of plugin authors. Maya also has this with MEL and, more recently, Maya Python, but those solutions feel tacked on, whereas blender’s UX was built from the ground up to support Python.
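As a quick sanity check of the shortcut in point 1, here's what a 45-degree rotation about the X axis actually computes, in plain Python (for a single point, standing in for what Blender applies to the selected object):

```python
import math

def rotate_x(point, degrees):
    """Rotate a 3D point about the X axis -- the numeric effect of
    R -> 4 -> 5 -> X -> Enter, applied here to one point."""
    t = math.radians(degrees)
    x, y, z = point
    return (x,
            y * math.cos(t) - z * math.sin(t),
            y * math.sin(t) + z * math.cos(t))

p = rotate_x((1.0, 1.0, 0.0), 45)
print(p)  # x is untouched; y and z each end up at ~0.7071
```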
I should also mention that blender has already gone through a major UI refresh, so if you haven’t tried it in a while it would be worth trying again. The 2.8 refresh is as big a jump in accessibility as the previous one was. So I’d give blender a second chance if you haven’t already!
Blender had a complete UX overhaul some years ago. They even recently switched the default mouse button actions to match the defaults in virtually every other modeling package. It's worth giving it another go.
Not OP, but before 2.8 and the depsgraph refactor, the viewport was really slow even on not-so-complex scenes. The NLA (Non-Linear Animation) system needed some love, since mixing animations wasn't smooth (I can't recall the details at the moment). The animation library/database, like all of Blender's object system, was a little confusing, and it was easy to lose your work (e.g. if an animation datablock was not assigned to an object, you would lose it when you restarted Blender). The Collections refactor was a great step in the right direction.
> if an animation datablock was not assigned to an object, you would lose it when you restarted Blender
That's what the little "F" button on datablocks is for. It adds a "fake user" to the datablock, which means that it won't be garbage-collected away if nothing is using it when you close Blender.
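To make the lifetime rule concrete, here's a toy model in plain Python. This is illustrative only, not Blender's internals; the class and function names are made up:

```python
# Toy model of Blender's datablock lifetime rule (illustrative only):
# a datablock survives closing Blender if it has real users, or if the
# "fake user" flag (the little "F" button) is set.
class Datablock:
    def __init__(self, name):
        self.name = name
        self.users = 0               # things referencing this datablock
        self.use_fake_user = False   # the "F" button

def purge_unused(datablocks):
    """Keep only the datablocks that would survive a restart."""
    return [db for db in datablocks if db.users > 0 or db.use_fake_user]

anim = Datablock("WalkCycle")   # an action not assigned to any object
print([d.name for d in purge_unused([anim])])  # [] -- lost on restart

anim.use_fake_user = True       # click the "F" button
print([d.name for d in purge_unused([anim])])  # ['WalkCycle'] -- kept
```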
> e.g. if an animation datablock was not assigned to an object, you would lose it when you restarted Blender
Really? I didn't know about that. Thanks! The user count on datablocks has always confused me. I've used Maya a lot, and I can't help but wonder why Blender doesn't just copy the good bits of other software when it comes to usability improvements.
I was wondering that too; Sintel, Big Buck Bunny, and Operation Barbershop all have pretty decent animation, so it's clearly capable of doing some good stuff.