Hi.

Welcome to Blender Secrets.
A place to level up your Blender skills.

Artist interview: Ascalon (aVersionOfReality) on BlenderNPR, Geometry Nodes, AI and more

Hi Ascalon! Could you briefly introduce yourself to the readers?

Hello! Most will know me through YouTube and Twitter as aVersionOfReality. I use the name Ascalon on forums and chat servers. I have been tinkering around with Blender for 11 years with a focus on Non-Photorealistic Rendering (NPR) to get anime styles working properly in 3D (first for illustration, and now Avatars) and to make the needed tools to produce it efficiently at a high quality level.

I'm some sort of full-stack technical anime character artist at this point. I am familiar with all aspects of creating and rigging characters, shaders and rendering, procedural and parametric modeling, tool creation and production pipeline optimization with Python and Geonodes, and more. But I studied it all within the context of creating anime girls and getting them to look stylistically correct. So there are gaps in my knowledge when it comes to other applications (although a lot of the technical side of this stuff is universal.)

The elements of a convincing BlenderNPR render

You recently made some very interesting tutorials for Blender where you use GeoNodes to do retopology of clothing meshes, as well as for baking textures in Eevee. How did you figure out how GeoNodes works, and do you have tips for how to learn this? How did you even know these two applications of GN were possible?

Geonodes are typically used for scattering, particles, and procedural effects. But there's a huge amount they can also do to help with modeling and project management. They are also the closest thing we have to a Vertex Shader, and can be used to make all sorts of adjustments to vertex level mesh data before rendering (realizing this led me to many less common uses of it.)
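
Since node setups don't translate directly to text, here is the same idea expressed as a minimal Blender Python sketch (assuming only that the active object is a mesh). It does destructively what a Geometry Nodes Set Position setup would do non-destructively: adjust vertex-level data before render, just like a vertex shader would.

```python
import bpy
import math

# Vertex-shader-style tweak, written destructively in Python for clarity.
# A Geometry Nodes version would feed Position/Normal inputs into a
# Set Position node, evaluated per vertex without touching the base mesh.
obj = bpy.context.active_object   # assumes a mesh object is active
mesh = obj.data

for v in mesh.vertices:
    ripple = 0.02 * math.sin(v.co.z * 10.0)  # small offset varying with Z
    v.co += v.normal * ripple                # push along the vertex normal

mesh.update()
```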

I learned Geonodes before there was much information available. It was not very difficult for me as I was already familiar with Shader Nodes and the Animation Nodes addon. Geonodes fall in between conceptually. Animation Nodes could be used to perform mesh operations, but being Python based, they allowed more flexibility than Geonodes, which are more like Shaders in that they process everything in parallel (except per mesh element, instead of per pixel.)

So I was already familiar with how mesh data works and can be manipulated from using Animation Nodes (which I learned mostly through reading Stack Exchange questions), and then learned the details of Geonodes mostly by deconstructing Erindale's node groups and asking questions on their Discord server. There are more tutorials available these days, but stepping through node groups one node at a time to see how they work and asking questions is still the best way to learn. You can read the documentation to know what a node does, but it won't tell you how to actually go about doing things.

On the topic of your YouTube channel, what's your most popular video, and what do you think is the reason it stands out?

My most popular video is on Flat Modeling Anime Hair (see below). Most mesh hair is modeled using curves, but this leads to horrible intersections/clipping. That doesn't matter for alpha card hair, but for solid mesh anime hair it makes a mess of the Normals and UVs. Anime hair is supposed to be a clean surface. So to solve this, I model it flat as a sheet, and then wrap it into the proper shape using modifiers. It is popular because the method solves many frustrating issues with creating hair (it is unintuitive to some though.) That video is 4 years old now. Since creating it, I've made better versions of the setup using Geometry Nodes. Hopefully I'll have a chance to finish them and put out an updated video soon.
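
The video covers the full workflow, but as a rough, hypothetical sketch of the "wrap it into shape with modifiers" step, here is one way to bend a flat strand along a guide curve in Blender Python (the object names are assumptions for illustration):

```python
import bpy

# Hypothetical names: a flat strand sheet and a Bezier guide curve.
strand = bpy.data.objects["HairStrand_Flat"]
guide = bpy.data.objects["HairGuideCurve"]

# A Curve modifier wraps the flat sheet along the guide, so the Normals
# and UVs stay as clean as they were when the strand was modeled flat.
mod = strand.modifiers.new(name="WrapToGuide", type='CURVE')
mod.object = guide
mod.deform_axis = 'POS_X'  # deform along the strand's long axis
```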

Why did you gravitate towards Blender NPR? Was it sparked by a particular project?

I was into creative writing when I was younger, but I'm a very visual thinker so I wanted to illustrate stories instead of pure text. But I don't really enjoy drawing, as it feels weird to translate 3D things in my head into 2D things on paper. So I originally got into 3D with the hopes of using it to illustrate my own comics (I had done several years of Cinema 4D when I was a kid, so I knew a bit about it). I had seen a bunch of toon shaders and tools, and assumed there were established ways to do toon style illustration. How wrong I was! It turns out that existing tools and workflows were not up to the task of efficiently producing illustration quality characters and renders. And so I have ended up developing all sorts of technical skills in the attempt to develop the production pipeline and rendering style myself. But I accidentally learned enough to start getting professional work. So now I have a job, but I'm years behind on actually producing any artwork!

Do you remember which version of Blender is the one you started with?

I started in the spring of 2013 with Blender 2.66. We've come a long way!

I understand that a lot of the stuff you do is under NDA, but is there something you can tell us about your work? What kind of Blender NPR work do you do professionally?

I worked for many years making and managing assets to support the artist Yuumei with her comics. But it has only been the last couple of years that I've been getting professional work to the point I can call it a career (I previously worked in hardware and electronics as a lab tech). My first break was doing shaders and Geometry Nodes for The SPA Studios, makers of Klaus. For the last 1.5 years I have been working on Avatars for AI powered characters at a couple different companies.

AI Avatars are a fairly new field that has increased interest in NPR models. Previously, NPR work was limited to some games and animation studios and avatars for vtubers. But in recent years vtubers have been proliferating (increasing demand and interest), and now we also have much more advanced AI chatbots which can be paired with a virtual body. Not all AI Avatars are going to use NPR models of course, but there's a big demand for anime characters, and the style dominates vtubing. It has opened a new field, and I only expect interest to increase.

NPR suffers from a lack of established and standardized tools and workflows that produce high quality results. There are some extremely popular NPR games like Genshin Impact. But even though that has been out for years, and the art is highly acclaimed, it hasn't caught on as much as one might expect. This is because it is difficult to do, and most existing tools are aimed at Physically Based Rendering (PBR.) So studios are not doing NPR because it is difficult, and the tools aren't being expanded because there isn't more demand, and there isn't more demand because it is difficult since the tools don't exist. It's a catch-22!

Similar issues exist within vtubing. The artists making the models are mostly independent and self-taught. Workflows are not standardized. And the various programs used for vtubing only support a few formats that don't allow more advanced features. Even if you can make a fancier shader or rig, the programs people use the avatar in won't necessarily support it. And it's hard to implement new things in all the different programs. It's another case of poor incentives and catch-22 (although more recent programs, such as Warudo, are opening up more options.)

So the field of NPR characters has been a bit stuck. There are many well-known problems that were theoretically solved years ago, but these improvements have not been widely implemented (for example, Shader based lines instead of Inverted Hull to solve artifacts, and better ways of handling Custom Normals to improve shading quality.) It is hard for small vtubers, independent devs, and hobbyists to get everything put together and implemented, especially with limited budgets, and often working on these problems on the side while focusing on work that pays the bills. But these AI Avatars are mostly business-to-business contracts. There's a lot more money involved. I'm hoping that someone will see the value of implementing more of these next-gen features to push the whole field forward. We could have an anime and comic style workflow that's as well supported and accessible as PBR. It just needs to be built. (Another path would be an AAA game wanting to do this style, or Disney/Pixar doing a movie with proper toon shading and line art.)

What do you think is key to a good NPR shaded model?

Normals! A lot goes into a good looking character of course, like proportions and rigging. But what makes NPR fundamentally different from PBR is that it usually uses some sort of Toon Shader and Line Art (certainly for Anime and comic styles.) A toon shader is made by ramping or thresholding a regular shader, which creates high contrast that immediately reveals all the problems with the mesh and Normals that soft shading doesn't show. Many things need to be done differently to clean these problems up. Existing Normals tools, like regular Tangent Normal Maps, are usually used to add detail to a surface. But for NPR styles, you want to remove detail instead, and you run into all sorts of issues (see this article on my website for more info).
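
As a concrete example of "ramping or thresholding a regular shader", here is a minimal two-tone setup built in Blender Python, using the common Eevee pattern of Diffuse BSDF into Shader to RGB into a Color Ramp set to Constant. This is a sketch of the general technique, not Ascalon's production shader; the threshold value is just an example.

```python
import bpy

# Minimal toon shader: hard-step the soft diffuse falloff into two tones.
mat = bpy.data.materials.new("ToonSketch")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

diffuse = nodes.new('ShaderNodeBsdfDiffuse')
to_rgb = nodes.new('ShaderNodeShaderToRGB')    # Eevee-only node
ramp = nodes.new('ShaderNodeValToRGB')         # Color Ramp
ramp.color_ramp.interpolation = 'CONSTANT'     # threshold instead of blend
ramp.color_ramp.elements[1].position = 0.5     # light/shadow boundary
output = nodes.new('ShaderNodeOutputMaterial')

links.new(diffuse.outputs['BSDF'], to_rgb.inputs['Shader'])
links.new(to_rgb.outputs['Color'], ramp.inputs['Fac'])
links.new(ramp.outputs['Color'], output.inputs['Surface'])
```

With constant interpolation, every problem in the mesh and Normals shows up as a jagged light/shadow boundary, which is exactly why NPR work is so sensitive to them.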

And then you also have stylistic differences. Not only do we need clean shading, but we often need things to shade as if they were a much simpler, lower detail shape than their mesh actually is. And you specifically need it to look the way these styles are drawn. These 2D styles often break rules of light, or even behave differently based on whether you are viewing a character's face from the front or the side. There's a huge list of caveats and exceptions you need to understand if you want a 3D model to look true to 2D drawn styles, and many relate to getting the Normals/Shading right.

Unfortunately, the tools to really take control of Normals and Shading are lacking, especially in real time engines. Most real time models you see in this style will make major compromises like not using dynamic light at all and instead painting shading into the textures. When they do use dynamic light, it's usually only a single directional light, with other shading still painted. Custom Normals are often only used on the face, leaving jagged shading everywhere else. And the faces will use tricks like not recalculating Normals based on expression keys, or SDF maps, to avoid jagged shading (but this also means shading doesn't update when the expression changes, giving the effect of it stretching as the face moves.) Pleasing styles can still be achieved, but all of these things are compromises with their own downsides. It'd be much better to solve the fundamental problems leading to these issues! Many are theoretically solved in offline rendering and could be in real time rendering too. But then we run into the implementation problems mentioned before.
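
For readers who want to see what a Custom Normals fix looks like in practice, here is a small Blender Python sketch of one common trick from the list above: making a face shade like a simple sphere by aiming every vertex normal away from a chosen center. The mesh name and center point are assumptions for illustration.

```python
import bpy
from mathutils import Vector

# "Spherize" the face normals: each vertex normal points away from a
# chosen center, so the toon threshold sweeps across the face smoothly.
obj = bpy.data.objects["FaceMesh"]     # hypothetical face mesh
center = Vector((0.0, 0.15, 1.55))     # hypothetical sphere center

mesh = obj.data
normals = [(v.co - center).normalized() for v in mesh.vertices]
mesh.normals_split_custom_set_from_vertices(normals)
# On Blender versions before 4.1, custom normals also require:
# mesh.use_auto_smooth = True
```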

Your tutorials must be very helpful for the VTuber community. Can you explain to our readers what VTubing is, since it was also a new discovery for me and I think it may not be well known to many readers, even though it's a big thing online?

The short description is that vtubing is like other streaming or youtubing, but instead of showing your own face via a webcam, you have a digital avatar. Usually this avatar is animated to follow your facial expressions and movements through some form of tracking/motion capture. Vtubing does tend to differ in content and performance from other streaming though. Many vtubers are putting on an in-character performance. And then there's the anime. Not all vtubers use an anime style avatar, but it's a huge amount. Vtubing has roots in this style in terms of the internet culture, the aesthetics, and also the tech. For example, the virtual character Hatsune Miku isn't a vtuber but has had a huge influence aesthetically. And a lot of 3D vtuber tech is downstream of MikuMikuDance (MMD), a 3D animation program originally created for animating Miku.

What is your take on the whole AI hype we're currently going through? Do you think ultimately we can take advantage of this as 3D artists, or is it just something that makes people feel worried about the future, until it's replaced by the next tech hype cycle? Personally, I'm still waiting for those perfect UV unwrapping and retopologizing AIs...

The AI hype cycle has been a rollercoaster. It's hard to make predictions with how quickly things are changing, but we all still have impressions. I am also still waiting for UV and retopology tools. The only AI tool I use actually predates the current hype cycle: the Cycles Denoiser!

We've been watching a lot of AI tool and workflow demos this past year. Hands are getting better, and so is temporal coherency. We have a lot more options to keep control of things over multiple frames. But what we're also seeing is that using all this stuff makes things a lot more complicated. My question is, at what point is our prompt more complex than just building something the old-fashioned way? At some point, the juice isn't worth the squeeze. Are we going to end up reinventing the wheel? AI is obviously very efficient for certain types of things. But once you want proper artistic control, it gets complicated fast.

For example, it is very easy to have an AI generate you a picture of any anime girl. But that's rarely what we want. We want a specific anime girl (that we can design), in a specific style, doing a specific thing. And we want to be able to make changes between different scenes, like a different outfit or hair color, without it altering any other aspect. All of these things add a lot of complexity. Wrangling the AI into giving the correct result may not be more efficient than other options. Or it may be more efficient than existing workflows in some areas, but cut off creativity. Or be less efficient than what 3D workflows could be (more on that below.)

We are also seeing lots of cases of AI tools that do impressive things, but that require high quality inputs that don't have an easy solution themselves. We're seeing demos of tools that take a high quality image of one sort and generate another from it. Like taking line art and outputting a full colored illustration. But it's only as good as the line art, and has very little in the way of controls. It's not a proper illustration tool (yet.)

At this point, I'm fairly dubious that generative AI is going to come along and obsolete 3D artists (especially for real time applications.) There are obviously lots of ways it could improve 3D workflows and asset creation. But right now, everyone seems to be focused on generating completed things from scratch. I suspect that the better move would be to focus on solving problems in existing workflows. The Cycles denoiser is a good example of a very powerful and useful tool. But it's not very glamorous. We could use similar tools for all sorts of things from mesh cleanup to tessellation to fixing mesh and texture artifacts. I'm not seeing any of that discussed.

There's also something big that seems to be left out of the discussion around AI tools and workflows: the current terrible rate of adoption for non-AI advances in 3D tools. My impression is that 3D workflows are horribly stagnant. There are lots of breakthroughs being made all the time, but almost nothing gets incorporated at scale because of pipeline and import/export problems. Even before all this AI stuff, there were lots of breakthroughs in procedural generation of assets. A lot of that still seems to barely be in use. It's hard to even do basic things like corrective shapekeys in most game engines. Those have always existed, but it's not easy to set them up in practice because the tools are so rigid, or they just aren't implemented. I can't even use current tools properly because of these issues; should I be worried about AI? Even if magical tools get made, it does no good if nobody implements them into the wider software and game engines.
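
For context on what a corrective shapekey setup even looks like, here is a hedged Blender Python sketch of one: a shape key driven by the bend angle between two bones. All object, armature, bone, and key names are hypothetical; the pipeline pain is that a driver like this rarely survives export into a game engine.

```python
import bpy

# Hypothetical setup: a corrective shape key on a character mesh,
# driven by the bend angle between two arm bones.
obj = bpy.data.objects["Character"]
arm = bpy.data.objects["Armature"]
key = obj.data.shape_keys.key_blocks["ElbowCorrective"]

drv = key.driver_add("value").driver
drv.type = 'SCRIPTED'

var = drv.variables.new()
var.name = "bend"
var.type = 'ROTATION_DIFF'        # angle between the two bone targets
var.targets[0].id = arm
var.targets[0].bone_target = "upper_arm.L"
var.targets[1].id = arm
var.targets[1].bone_target = "forearm.L"

drv.expression = "bend / 2.6"     # ramp to 1.0 over ~150 degrees of bend
```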

My fear right now isn't that AI is going to replace 3D artists. My fear is that it's going to degrade our options even more. A good example of this is the removal of Blender's old Internal Render engine when 2.8 came out. Blender Internal was obsolete in many ways. But it was accessible and had a lot of options. PBR style engines like Eevee are more efficient and optimized, but have less flexibility. And they are inflexible at an engine level. It's not a matter of just writing a custom shader. You need to modify the entire engine, how it handles passes, etc. When we lost BI, it was a huge hit to the NPR community that took years to recover from.

I fear we'll see similar things with AI tools. We'll have one click solutions to common problems perhaps, but at the cost of them being rigid in ways that we cannot control or work around. We'll be able to make certain types of things efficiently, but it'll be the same stuff everyone else is making. And worse, people would likely abandon current efforts to improve 3D tools and make them more flexible and powerful, because AI would be seen as the better investment. I think this already happened in regular real time rendering when PBR came along (the problem I described earlier of people not wanting to make non-PBR games, because those are less standardized and thus more risky.) So AI could bring us a new future where we mix the best of AI and 3D options. But it could also gut 3D, and then not properly replace it either.

Where should people follow you online?

Here are my links. I am also available for work if it's related to Non-Photorealistic Rendering, Blender, or 3D anime!

twitter.com/AversionReality
youtube.com/c/aVersionOfReality
aversionofreality.com

Creating VDM Brushes

How to bake the Line Art modifier
