Building the Open Metaverse

Populating the Metaverse with High-Quality Content

Patrick Cozzi (Cesium) and Marc Petit (Epic Games) are joined by Brian Karis, an Engineering Fellow at Epic Games, to talk about Nanite. This virtualized geometry system was developed in Unreal Engine 5 and has completely transformed the level of quality and detail expected in real-time experiences and games.

Guests

Brian Karis
Engineering Fellow, Epic Games


Announcer:

Today on Building the Open Metaverse.

Brian Karis:

Artists have lived with those sorts of budgets for their entire lives expecting, okay, you give me this number, and I have to optimize my content in this particular way. And now that that's no longer true, it's very surprising to them.

Announcer:

Welcome to Building the Open Metaverse, where technology experts discuss how the community is building the open metaverse together. Hosted by Patrick Cozzi from Cesium and Marc Petit from Epic Games.

Marc Petit:

My name is Marc Petit, I'm from Epic Games, and my co-host is Patrick Cozzi from Cesium. Patrick, how are you?

Patrick Cozzi:

Hey, Marc. I'm doing well; I'm really looking forward to today's episode.

As you know, I really love computer graphics, but within computer graphics, my favorite topic is massive models and everything that goes into doing that efficiently. I know we're going to learn a ton today.

Marc Petit:

Because our guest today is a colleague of mine, Brian Karis, who is an engineering fellow at Epic Games. He has worked on the development of graphics technology for Epic, including Nanite.

And he's also known for his work on real-time rendering and physically-based shading techniques, which have helped to advance the state of the art in computer graphics.

Brian, we're happy to have you with us. Welcome.

Brian Karis:

Hello. Nice to be here.

Patrick Cozzi:

Brian, we love to kick off the podcast where we ask our guests about their journey to the metaverse. So, could you tell us about your background?

Brian Karis:

I’ve been interested in computer graphics for a long time. I wanted to get into that, and specifically real-time graphics, coming out of school. The best place to do that is in video games, so I got into the game industry almost 20 years ago now. Worked at a company called Human Head Studios for a number of years. Worked on a game, Prey, and some following projects, and then came to Epic. I've been at Epic for a bit over 10 years now, working on Unreal Engine, specifically on the Nanite front.

 

It's been a long journey toward a particular goal, a particular dream, that has spanned a good amount of those years: wanting to give artists more freedom from the budgets that have constrained them throughout all of my experience working in video games. I've seen, working on many different games over multiple generations of hardware, that no matter what new hardware comes, there's always this worry about how much memory you have and how much compute power you have to be able to render the geometry.

Each time, it's just this incremental improvement in the number of triangles, the number of draw calls, that sort of thing. It seemed like it shouldn't be necessary, given some other technologies, other virtualization that existed for other things. So, it had been a goal of mine for a number of years to try to tackle that in the specific domain of geometry. The culmination of that work is Nanite, which shipped with UE5.

Marc Petit:

Can you explain to our audience what a virtualized geometry system is and why people feel like it's magic?

Brian Karis:

The reason I call it virtual geometry, it's a lot of things. There are a lot of different systems there that end up solving different problems. But the reason why I wrap it all up under the virtual geometry moniker is keying off of some tech in the past and its properties. It builds off of the idea of virtual memory, which is this concept of being able to have a very large memory space, bigger than the amount of RAM that you have in your machine, and being able to move bits of memory, page them in and out between physical memory and disk, so that you can simulate there being much more than there is actually physical space for.

My personal experience with that being really powerful came from my experience with virtual texturing. I had written a virtual texturing system back before I joined Epic. I also wrote a component of the virtual texturing system that's in UE5 right now. And I'd seen the impact that had on artists' work in the various projects that I'd been on with that, especially ones that are older that we had much more constrained hardware at the time.

Just seeing artists be able to make these massive textures, to create and ship 4K textures on all sorts of things, back in the Xbox 360 and PS3 console generation, with that virtual texturing system enabling it. The freedom that the artists get, the loss of those constraints, just how much that frees them to be more creative and to get at the thing that they are really envisioning.

Seeing those sorts of properties and trying to translate them over to geometry, where those constraints still existed. Artists were able to make giant textures, but they were still stuck with specific budgets in the geometry realm, specifically poly count and draw calls.

So, trying to bring that experience over. Not necessarily the particular technology that enabled it, because that doesn't translate all that well, but trying to translate over that experience from the artist's point of view of not having to worry about those budgets.

What that means in practice is that, again in the virtual memory analogy, there is this paging of data coming off the disk, such that only the set of triangles and data you need for the particular view you're rendering at that moment, in that frame, needs to be in memory. The part that doesn't really translate from the virtual memory analogy is that you also only need to draw the set of triangles that is actually visible, or a close enough set to that, a good estimation of what is actually needed to render those final pixels.

Because of that property, if you can get it right, or approximate enough, it means that the performance for rendering the frame scales not with the scene complexity but with the number of pixels that you have on screen, which basically means it's somewhat of a constant from the artist's point of view.

They can add whatever they want into the world with whatever complexity and richness, and it still costs approximately the same at 1080p or 4K, or whatever resolution they're rendering at. They can add in one object, 1,000 objects, a million objects. Or that one mesh can have 1,000 triangles or a million triangles, and it all comes out to basically the same cost.

That's kind of it in a nutshell.
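To make the paging half of that analogy concrete, here is a minimal sketch of view-driven residency, assuming a simple LRU cache keyed by cluster ID. The names (ClusterId, ResidencyCache, loadFromDisk) are illustrative, not Nanite's actual API.

```cpp
// Minimal sketch of view-driven paging: only clusters needed for the current
// view are requested from disk; everything else becomes a candidate for eviction.
#include <cstdint>
#include <list>
#include <unordered_map>
#include <vector>

using ClusterId = uint64_t;

class ResidencyCache {
public:
    explicit ResidencyCache(size_t capacity) : capacity_(capacity) {}

    // Called once per frame with the clusters the culling pass decided it needs.
    void requestFrame(const std::vector<ClusterId>& visible) {
        for (ClusterId id : visible) {
            auto it = lru_.find(id);
            if (it != lru_.end()) {
                order_.splice(order_.begin(), order_, it->second);  // touch: mark most recent
            } else {
                loadFromDisk(id);                                   // asynchronous in practice
                order_.push_front(id);
                lru_[id] = order_.begin();
                if (order_.size() > capacity_) {                    // evict least recently used
                    lru_.erase(order_.back());
                    order_.pop_back();
                }
            }
        }
    }

private:
    void loadFromDisk(ClusterId) { /* issue streaming request */ }

    size_t capacity_;
    std::list<ClusterId> order_;                                    // front = most recently used
    std::unordered_map<ClusterId, std::list<ClusterId>::iterator> lru_;
};
```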

Marc Petit:

Wonderful and incredibly liberating to be able to throw all the complexity that you want and still get the performance out of the system.

Brian Karis:

To answer that question of why it seems like magic, I guess it's just because artists have lived with those sorts of budgets for their entire lives, their whole careers. They're just expecting, okay, you give me this number, and I have to optimize my content in this particular way. Now that that's no longer true, it's very surprising to them. It's actually been kind of funny to see some of those habits still linger, even with people that have used it for a little while. They still ask questions like, oh, well, should I be doing this thing or that thing? Or they see that their particular scene isn't hitting the frame rate that they want to ship at, so they start attacking it in the typical ways that they would optimize things.

Then it turns out that doesn't really help; some of the typical ways that they're used to optimizing are just different now. It just has different properties.

I think a bit of that feeling of magic is just a foreign feeling, because what they have grown up with, what they have been trained to do, has now been turned on its head.

Marc Petit:

Over the past year, you had a front-row seat watching the Fortnite team use the technology. Was there any use of the technology that was unexpected and surprised you?

Brian Karis:

I guess yes and no.

No, in the sense that I was involved in the planning of it, so there were a lot of uses that were planned for and developed for, as well as places where we were trying to get the team to use it. I had been on multiple demos that had already shipped using Nanite, and that team hadn't used it yet. A lot of it was training them in the experience that we had already gained shipping demos. A good amount of it was telling them the practices that we had already learned: do these things.

There were areas that we pushed on. It's hard to say that I was really surprised, because some of them were just part of the plans. But the performance that it ended up delivering was a bit surprising.

In particular, what I'm referring to there is the use in foliage. I said from the get-go, in our very first talk and documentation about Nanite, that it should not, just from a theoretical point of view, work well for foliage. I knew for a fact that it was not going to work as well, and I was expecting it to not really work well at all.

A big use case in Fortnite Chapter Four was in foliage for trees and grass. That was a thing that just the team had experimented with, and they were wanting to push. So, it wasn't a surprise that they did it because I knew that was part of their plans. But it was a surprise that it worked out.

We've seen further examples of that in the Opal demo; I'm not sure what we called it externally, the Rivian demo that we just showed off at GDC. It's still an area that you need to be very careful in, because it's not as automatic as how I was explaining things for other types of geometry. Specifically, this aggregate stuff that's like Swiss cheese filled with holes, or a bunch of independent elements that aren't connected to one another, like blades of grass or leaves in a tree.

Those don't work as well. But it has surprised me that they work well enough that we were able to ship a 60-hertz game and a demo with these massive forests. It still means that the particular demo was kind of crazy in pushing the complexity.

Patrick Cozzi:

Brian, maybe even stepping back a bit just on geometry, and in general, you spoke about the inspiration from virtual texturing and now virtual geometry.

Do you want to speak a bit about the unique challenges that geometry has compared to textures?

Brian Karis:

For textures, the problem is purely just a memory one.

The hardware is already good at level of detail there. Textures have had MIP maps for pretty much as long as we have had hardware acceleration, and MIP mapping a texture is a fairly trivial operation. Downsampling a texture to the next power of two is a trivial thing to do, and the hardware picking which one of those MIP maps to sample for any pixel is a very easy operation for it to do. That level of detail, and the cost of the data being used, is not something that really impacts the frame rate at all.
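As a small illustration of why texture LOD is comparatively trivial, here is what generating the next mip level amounts to: a 2x2 box-filter average. A generic sketch for a single-channel image, not engine code.

```cpp
// Sketch of why texture LOD is trivial: the next mip level is just a 2x2 box-filter
// average of the level above it. Single-channel floats for brevity.
#include <vector>

std::vector<float> downsampleMip(const std::vector<float>& src, int width, int height) {
    const int w2 = width / 2, h2 = height / 2;
    std::vector<float> dst(static_cast<size_t>(w2) * h2);
    for (int y = 0; y < h2; ++y) {
        for (int x = 0; x < w2; ++x) {
            dst[y * w2 + x] = 0.25f * (src[(2 * y)     * width + 2 * x]     +
                                       src[(2 * y)     * width + 2 * x + 1] +
                                       src[(2 * y + 1) * width + 2 * x]     +
                                       src[(2 * y + 1) * width + 2 * x + 1]);
        }
    }
    return dst;
}
```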

Really, it's just a memory management problem. On the geometry side of things, that's not true. All of those various properties, well, not all of them, but the things that I was just talking about, those properties are just not really true for geometry.

It's not trivial to create levels of detail for meshes, or whatever form your geometry is in. Voxels are easier, but still not trivial, and they don't solve all of it. Meshes in particular are complex to create levels of detail for, far more complex than just downsampling a texture would be.

Then, to draw that bit of geometry on screen has a performance impact. Having a higher-resolution texture doesn't drop your frame rate; it just consumes more memory. But with a higher-poly mesh, if you just draw it naively, all of the triangles, the cost scales linearly with the number of triangles.

Trying to turn that into something with nonlinear scaling, either a log N sort of scaling or constant scaling with the resolution, which, like I was saying, is the goal of virtualization, is just not a direct translation for meshes.

You need to have this level of detail concept, but not something that happens globally, where the entire thing changes at once. It has to change in relation to what you're seeing on screen.

Say, for example, you've got a giant, giant mesh, which is massive in the scale of a mountain. You don't want to change the level of detail of the entire mountain at once. You want to have the stuff that's close to you be at a higher density than the stuff that's further away from you, so that once it gets projected on screen, they result in approximately the same size triangles.

To have those things at different levels of detail means you want to do it in a continuous and seamless manner, so that you're not causing any cracks as one part switches and another stays at a lower resolution. There are a lot of problems that show up in the geometry area that just aren't really problems with textures. If a texture has two different resolutions right next to one another, it's not like you see through anything; there's no break. You can have those textures at two different resolutions right next to one another, and it doesn't cause any problems.
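A common way to drive that distance-dependent selection is to project a level's geometric error into screen space and keep refining while it exceeds a pixel threshold. A hedged sketch, assuming a symmetric perspective projection; the function names are illustrative.

```cpp
// Sketch of distance-dependent LOD selection: project a level's geometric error into
// screen space and refine while it exceeds a pixel threshold.
#include <cmath>

float projectedErrorPixels(float geometricError, float distance,
                           float verticalFovRadians, float screenHeightPixels) {
    // How many pixels the world-space error covers at this distance.
    return geometricError * screenHeightPixels /
           (2.0f * distance * std::tan(verticalFovRadians * 0.5f));
}

bool needsRefinement(float geometricError, float distance, float verticalFovRadians,
                     float screenHeightPixels, float maxErrorPixels = 1.0f) {
    return projectedErrorPixels(geometricError, distance, verticalFovRadians,
                                screenHeightPixels) > maxErrorPixels;
}
```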

Patrick Cozzi:

Marc, just stop me if I'm getting too geeky, but Brian, is it using a hierarchical level of detail approach? Or is it doing like a compute shader-based continuous LOD?

Brian Karis:

It is compute shader-based, but yes, there is a hierarchical level of detail in the basic structure. There is a hierarchy of triangle clusters, and the clusters are just chunks of triangles; in Nanite in particular, they are 128-triangle clusters. It creates a hierarchy out of those clusters and picks which cut of that hierarchy it wants to draw in any particular frame. That's also the granularity at which it streams data from disk into memory.

When it's drawing those on screen, it's picking not just the level of detail but also what is actually visible. It'll do frustum and occlusion culling to draw just the stuff that you can see.
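Putting those pieces together, a simplified, serial version of picking a "cut" through a cluster hierarchy might look like the following. The real system does this in parallel on the GPU with a more sophisticated per-node error metric; ClusterNode and selectCut are hypothetical names, and frustum/occlusion culling is only noted in a comment.

```cpp
// Serial sketch of picking a "cut" through a cluster hierarchy: descend while a node's
// projected error is still too large, otherwise emit its clusters for drawing.
#include <cmath>
#include <vector>

struct ClusterNode {
    float geometricError = 0.0f;   // simplification error introduced at this node
    std::vector<int> children;     // indices into the node array; empty means leaf
    std::vector<int> clusters;     // ~128-triangle clusters owned by this node
};

static bool tooCoarse(float geometricError, float distance, float fovY,
                      float screenHeight, float maxErrorPixels = 1.0f) {
    float pixels = geometricError * screenHeight / (2.0f * distance * std::tan(fovY * 0.5f));
    return pixels > maxErrorPixels;
}

void selectCut(const std::vector<ClusterNode>& nodes, int nodeIndex, float distance,
               float fovY, float screenHeight, std::vector<int>& clustersToDraw) {
    const ClusterNode& node = nodes[nodeIndex];
    if (tooCoarse(node.geometricError, distance, fovY, screenHeight) && !node.children.empty()) {
        for (int child : node.children)
            selectCut(nodes, child, distance, fovY, screenHeight, clustersToDraw);
    } else {
        // Fine enough (or a leaf): draw these clusters, subject to frustum and
        // occlusion culling in the real renderer.
        clustersToDraw.insert(clustersToDraw.end(), node.clusters.begin(), node.clusters.end());
    }
}
```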

Marc Petit:

Because I remember when I joined Epic in 2016, Kim Libreri was showing me around, and he pointed to your office, I think it was next to his on the second floor. No windows, if I remember well. And saying, "You need to leave these guys alone because if they succeed, this is going to change everything." It was 2016.

How long did you think about that, and what was the process to solve what looks to me like a huge technical problem, a huge technical innovation?

Brian Karis:

If I wanted to be very generous, I could trace back some of the thinking and goals that I've had since some of the earliest days that I tried making computer graphics prototypes. But if I was more realistic, I would still trace it back going well over a decade of work, and not just to this point.

I've been working at Epic for over a decade now, and Nanite has only been out for a few years. By the point where Nanite was actually a usable prototype, it was still something like 12 years of thought. It wasn't actually work; I didn't have the opportunity throughout all that time to work on these ideas, because there were other responsibilities, other priorities of things to do.

While at Epic, I worked on physically-based shading; I worked on character shading, skin shaders, eyes, hair, that sort of stuff. I worked on temporal anti-aliasing, which is a pretty important component of UE4. There were other things that needed to be done, other priorities that came up over time, which meant that I wasn't always able to work on this idea.

It was a very consistent idea for me going back, off the top of my head, I don't remember exactly where it started, but maybe 15 years, where I really got interested, in my free time, in researching and ideating on how it could be done. Those ideas weren't what Nanite actually turned into.

The approach that Nanite takes is not where I started. It's not like I had this one brilliant idea 15 years ago or something, and it was just like if only I could have done work on this. No, it was I want to solve this virtual geometry problem. It was a goal that was consistent over all that time. I had different ideas throughout that period, and I would think, I think this could work.

I would explore what it would mean, read all sorts of research papers, and just think in my free time about what I felt would be a good solution, and then realize that there were some really core flaws with that approach. Then think, okay, there's a different approach that doesn't have those flaws; think about that for a while; then find more holes in that and shift to a new version of it.

The core framework that matches how Nanite currently works is one that lasted longer than some of those other ideas, but it was not the one that existed throughout all that time. It was the goal that existed throughout, not a single idea, approach, or algorithm. But yeah, there was a point back in 2016, probably earlier than that.

Actually, I know it was earlier than that, when I had an idea that was pretty similar to what Nanite turned into, minus a few important components. The hierarchy of triangle clusters, it's hard to date exactly when some of those base ideas landed, but probably in that timeframe it was already a pretty well-formed concept in my head.

But there were other things that were not at that time, like the approach that we take for rasterizing those very small triangles highly efficiently. We take the software approach where we have a compute shader that can rasterize these triangles faster than the hardware rasterizers can do it. That was something that only really came in later in the process after I explored some alternative approaches that were totally different.

Even after I had this idea that I was pretty confident in, I spent a while, for reasons of doubt, exploring a voxel direction that had some advantages I thought were worth looking into. Even though it wasn't something that I really believed in at the time, I just thought it was necessary to explore it so that I could really be confident that this other direction, this triangle mesh-based direction, was actually the best one.

I thought I had to do the due diligence and check out this alternative that I didn't end up going with. But in that process, I discovered an approach for faster triangle rasterization and then brought that back to the scheme that I had in mind before.

Marc Petit:

So, you did take the time to explore an avenue that you did not believe in?

Brian Karis:

At the time, it felt stupid. I spent a good six months exploring voxels, and I came out at the end of it feeling like a failure. I went in not thinking it was going to... I wasn't really believing in it, so I was trying to do due diligence just to say, all right, well, there are other alternatives. I want to rule them out. I thought I was going to do that fast. I was trying to take this approach of let's just prove that it's not that, and I'll go back to doing the thing that I was doing.

I found that it is incredibly difficult to try to prove that an idea can't work. Not that a particular version of it doesn't work, but that it can't work. Because there's always the next follow-up of, like, oh, but if I change this thing, maybe. Or a, well, but what if that? It's never-ending.

Each one of them, even if you get to, well, that's not really working, but if I tweak this thing, maybe it could. At every single step, it was, well, do I believe that it will? Not really, but maybe. So yeah, it came to the point of just, I need to cut this off. This doesn't seem to be going in a direction that's working out.

After that time that I felt I had wasted, I felt bad about it. I felt bad trying to defend that time to my bosses and leads at that point. I had just burned six months with nothing as a result. I tried to hold up some of these other discoveries, like, yeah, but I came away with this thing. It wasn't what I was looking for, but I came away with something.

I felt bad at the time, but in retrospect, you need to go through experiences like that. You need to have that wandering through the desert to find things.

Also, I think it's important for an inventor, a researcher, an engineer in general, but really anyone who's trying to explore the boundaries of technology, anyone whose job it is to do that: you need to work on frivolous things at times. You need to explore ideas that seem interesting to you but don't necessarily have a use yet, ones where your own intuition says, this seems interesting; maybe this is useful for something in the future. Because you never know where that stuff ends up coming up. You have to plant those seeds to get trees in the end.

Marc Petit:

I think it's a very interesting lesson if you are a CEO like Patrick, or running technical teams: you can't dictate innovation. You have to let the process unfold, and you have to wander through a number of things.

Patrick Cozzi:

Yeah, it's not time wasted; it's just time invested.

Ultimately, Brian, it seems like you leapfrogged research, and you did it at a production scale of Fortnite and Unreal Engine, which is pretty cool.

Brian Karis:

To give credit, though, there was a good amount of time that it wasn't in production. The point at which it actually was able to make those big leaps forward was the time that I was left alone to work on this for a year or two. That was the point where it really went from, hey, I've got these ideas in my head, I'd really like to work on this, but we've got to ship Paragon. I'd really like to work on this, but we've got this GDC demo that's got some photorealistic character in it, and we need to work on that. Et cetera.

There's a whole line of those happening at Epic, and before that at the game studios I worked at before Epic; there's always the short-term priority. When you want to work on something that's a long-term vision, it's very difficult to do long-term tasks interleaved with other stuff.

It happened at a fairly production-focused company, but I finally got the opportunity at Epic due to a window that opened up because of other circumstances. That was the lucky moment when I got to explore those ideas and had enough time to build up a prototype to prove the potential value. Once we had that, it was very easy to defend spending more time working on it, because I had something to show. Here's the potential of it, and it's not just an idea that I'm telling you about; look, I've got something working, we should build that out. But to get from the idea in conversation to a prototype, that does need some amount of protected time.

Patrick Cozzi:

Brian, I wanted to ask you a bit more about the geometry representation. This season on the podcast, Marc and I have been learning a lot about NeRFs from some of our guests. Then also Marc and I are involved in the Metaverse Standards Forum, and folks talk a lot about different types of geometry representation, like subdivision surfaces.

I know you said for Nanite, you're using triangles right now. As you look towards the future, do you think that you would mix and match any other representations?

Brian Karis:

Of course, it's hard to predict the future. It's hard to commit; it's not just that somebody might do something. Even with what I'm interested in doing, it's hard to predict exactly which of those ideas are going to land, referring back to that whole wandering-through-the-desert thing.

The ones that you've listed have some very nice properties for other purposes. The ones that I'm specifically interested in exploring, I'm working on support for tessellation and displacement mapping on top of Nanite. I'm working on that now.

I think displacement mapping is a very core tool for artists to get that geometry quickly, and to do so in a procedural way that's not really possible with Nanite right now. If they wanted to create, say, a brick wall where the bricks are popping out of the wall, they can make a model of that wall with those bricks right now, and then maybe tile that wall section by tiling the mesh itself.

But if they then wanted to put that brick on a cylindrical tower and have it curve, they would need to make another mesh for that curved section. Whereas if it's a displacement map, you can have that same displacement map on the flat wall, and then you throw it on a curved wall, and you can throw it on anything.

That same piece of data that sits on this, the same amount of work from the artist, can be applied to a bunch of different things very easily. You can do so in ways that are procedural in the same way that people do procedural texturing. The same sort of shader logic applies here, too. I think that's a really useful thing.
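The core of displacement mapping is small enough to sketch: sample a height map at each vertex's UV and push the vertex out along its normal, so the same height data works on a flat wall or a curved tower alike. A minimal CPU-side illustration with hypothetical types, not Nanite's actual tessellation path.

```cpp
// Minimal sketch of displacement mapping: offset each vertex along its normal by a
// height sampled from a texture at the vertex's UV.
#include <vector>

struct Vec3 { float x, y, z; };

struct Vertex {
    Vec3 position;
    Vec3 normal;   // assumed normalized
    float u, v;    // coordinates into the height map, in [0, 1]
};

static float sampleHeight(const std::vector<float>& heightMap, int width, int height,
                          float u, float v) {
    int x = static_cast<int>(u * (width - 1));
    int y = static_cast<int>(v * (height - 1));
    return heightMap[y * width + x];           // nearest-neighbor lookup for brevity
}

void displace(std::vector<Vertex>& mesh, const std::vector<float>& heightMap,
              int width, int height, float scale) {
    for (Vertex& vtx : mesh) {
        float d = scale * sampleHeight(heightMap, width, height, vtx.u, vtx.v);
        vtx.position.x += vtx.normal.x * d;    // push the vertex out along its normal
        vtx.position.y += vtx.normal.y * d;
        vtx.position.z += vtx.normal.z * d;
    }
}
```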

I'm also really interested in going back to the voxel experiments that, at the time, I considered to be failures. This is kind of what I was alluding to: not just that it's necessary to explore ideas that fail, although failure is an important component of research and invention, but that you plant those seeds without always knowing what something is going to be useful for when you go into it. You rely on your experience and the intuition that comes from that experience.

I've been doing this for a long time. It's really hard to quantify why am I interested in that thing. Sometimes it's very easy to argue, oh, well, it was because of this, because I need it for this. But in other cases, it's just curiosity. There's a thing there that seems like it could have value, although I'm not exactly sure with what yet.

Or I might have an idea of what I think it would be useful for at that time, but it turns out it's useful for something else. There was something there that attracted my attention; that's where that hard-to-quantify intuition comes from. I was exploring it then for one reason, and now I've got some other uses that voxels might be useful for in the future.

Marc Petit:

Another thing we've heard on the Metaverse Standards Forum is there's a lot of appetite and a lot of usage of subdivision surfaces.

How could you combine the benefits of the two workflows, Nanite and subdivs? Did you think about that?

Brian Karis:

I have thought about it. I don't have a good answer yet, unfortunately. I'm hoping that the tessellation work I'm doing now to support displacement maps can end up being extended to provide tessellation for subdivision surfaces, or just higher-order smooth surfaces in general, whether they're subdivision-based or something else.

In particular, subdivision is difficult. It is difficult to parallelize on the GPU. There are schemes to make that work, but at its heart, it's a recursive operation. By definition, it's a surface that is formed through recursive subdivision.

Patrick Cozzi:

Brian, I wanted to ask you about scaling up to even bigger worlds. Just my interest in graphics and the work we're doing at Cesium, we do full globes; you can zoom in, you can see the terrain for the US. You zoom into Philadelphia, and you see the buildings. You go inside the building, and you can see a box on a table. It's all based on a hierarchical level of detail that enables frustum culling and occlusion culling, as you mentioned earlier.

We do a lot of mesh decimation to pre-compute those things and use geometric error to drive the LOD selection. It seems like there's just so much synergy, and maybe opportunity, for Nanite to scale to that, and maybe even use some of the spatial data structures that we've worked on.

Have you thought about this, and how would you approach it?

Brian Karis:

Some. That scale of things hasn't been the goal overall. But yeah, it should, in theory, be able to scale. There would have to be some tweaks for sure, just to ramp up to the amount of precision needed at those bigger scales. We have a fixed bit precision for positions and such that's based on the bounds of the mesh from the starting point. We would need to add some changes to support that sort of thing.
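The fixed-bit precision he mentions can be illustrated with simple bounds-relative quantization: each coordinate is stored as an integer fraction of the mesh's bounding box, so absolute precision shrinks as the bounds grow. A generic sketch, not Nanite's actual encoding; the bit count and names are illustrative.

```cpp
// Bounds-relative position quantization: store a coordinate as a fixed-bit integer
// fraction of the bounding box on that axis.
#include <algorithm>
#include <cstdint>

uint32_t quantize(float value, float boundsMin, float boundsMax, int bits) {
    float t = (value - boundsMin) / (boundsMax - boundsMin);   // 0..1 inside the bounds
    t = std::clamp(t, 0.0f, 1.0f);
    uint32_t maxCode = (1u << bits) - 1u;
    return static_cast<uint32_t>(t * static_cast<float>(maxCode) + 0.5f);
}

float dequantize(uint32_t code, float boundsMin, float boundsMax, int bits) {
    uint32_t maxCode = (1u << bits) - 1u;
    return boundsMin + (boundsMax - boundsMin) * (static_cast<float>(code) / maxCode);
}
```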

Another key area that doesn't fit with our current direction is that Nanite is built around the workflow of instancing right now, because that's just how our artists work in building out video game worlds. They'll take one mesh and instance it all over the place and do a lot of kitbashing. We support very high instance counts, into the millions; that's something we can do that's pretty unique in the game world. But that doesn't scale up forever.

There's a point where, even in some of the things that we have done and shipped already, like The Matrix demo that we did, the total number of instances in that whole city environment was at least eight million, if not more; I don't remember exactly what the numbers were in the final form, but I think we had seven to eight million. Scaling up to that extent is past where Nanite is, at least currently, good to scale to.

We've got a feature in Unreal that predates Nanite that was trying to deal with that problem before the counts were even anywhere near the millions; when they were in the tens of thousands, it was trying to deal with them. That is, funnily enough, called HLOD in the engine, which stands for hierarchical level of detail, really just a super generic name for lots of algorithms that end up working like this. But it's a somewhat more traditional scheme down that route. The key difference is that it aggregates many objects together to form a unique mesh for some chunk of the world that replaces the many instances that came before it. It'll take all the instances, mash them all together, and try to make a single mesh out of them.

Doing so with materials and shaders and such needs an aggregation of that stuff, too. It does a process of baking down those textures and those shaders and all of that into a unique texture over that unique bit of geometry. Once you get to that point, that's when you can start scaling up to the bigger things. Treating it as a bunch of individual objects just doesn't scale that far.
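The merge step of that HLOD idea can be sketched as baking each instance's transform into its vertices and concatenating the results into one mesh, with simplification and texture baking following as separate passes. All struct layouts and names below are hypothetical.

```cpp
// Sketch of the merge half of an HLOD build: bake each instance's transform into its
// vertices and concatenate everything into one mesh.
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };
struct Mesh { std::vector<Vec3> positions; std::vector<uint32_t> indices; };

struct Instance {
    const Mesh* mesh;
    float m[12];   // row-major 3x4 affine transform
};

static Vec3 transformPoint(const float m[12], const Vec3& p) {
    return { m[0] * p.x + m[1] * p.y + m[2]  * p.z + m[3],
             m[4] * p.x + m[5] * p.y + m[6]  * p.z + m[7],
             m[8] * p.x + m[9] * p.y + m[10] * p.z + m[11] };
}

Mesh mergeInstances(const std::vector<Instance>& instances) {
    Mesh merged;
    for (const Instance& inst : instances) {
        uint32_t base = static_cast<uint32_t>(merged.positions.size());
        for (const Vec3& p : inst.mesh->positions)
            merged.positions.push_back(transformPoint(inst.m, p));
        for (uint32_t i : inst.mesh->indices)
            merged.indices.push_back(base + i);    // re-index into the merged buffers
    }
    // A real HLOD build would now simplify the merged mesh to a budget and bake the
    // source materials down into a single texture over its new UV layout.
    return merged;
}
```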

The example that I've given to try to demonstrate that is, well, clearly, there needs to be some point where this process happens. If you're flying in a plane over New York City and then you're looking down at some skyscraper, you're not going to want to draw the coffee cup sitting on someone's desk that happens to be in some skyscraper in a city that's got tens of thousands of buildings. The actual object count that you get at that point is, yeah, billions, trillions of things. It's just not going to scale.

But if you can aggregate them, then you can get the memory and the data scaling in a similar way to how I was explaining, where you want it to scale with your pixel count. That's something that's actually practical to do when they're not separate instances; they're just one big chunk of data, like Google Earth does, where you can fly around and see stuff chunked. Or Cesium has a similar concept.

Switching over to that mode is something that Nanite could do. We could make a mesh in that aggregated way, and we do that with the HLOD feature. When we shipped The Matrix demo, for example, the meshes that it created, those unique meshes with its baked textures, those were Nanite meshes as well. We could make all of it just one big Nanite mesh, and that whole hierarchy could be managed just by Nanite itself. It's not currently.

Each one of those chunks of the city creates a Nanite mesh. There's not one Nanite mesh of all of them merged together just because it wasn't important. There's not enough of them that merging them together, having that hierarchy go all within Nanite, was useful. But if we scaled up to that scale, it would be necessary.

We are looking into scaling up Nanite for other use cases from a modeling perspective, an art creation perspective, that are interesting. We're looking at whether it would make sense to have terrain geometry be Nanite meshes and actually author it as meshes.

We'll see where that stuff goes, but that would be very large-scale. It wouldn't be planet-scale necessarily.

Marc Petit:

One of the topics that Patrick and I think a lot about and care a lot about is standardization. A few questions about exploring the potential relationship of Nanite with standards.

First of all, have you come across other groups or individuals proposing alternative solutions for virtualized geometry or their implementations?

Brian Karis:

There's work done into standardizing model interchange formats, but those aren't virtualized. I think the only one that has happened that I'm aware of recently is NVIDIA's micro-mesh work. It's not standardized yet. The spec isn't even released.

Marc Petit:

Can you see similarities in the approach? Are we starting to see a convergence of some ideas that seem to be good solutions?

Brian Karis:

Definitely not. Convergence, definitely not; the micro-mesh approach from NVIDIA is more of a displacement approach. It's not the irregular meshes that are kind of the fundamental structure of Nanite. So, no, definitely no convergence there.

I expected that there were going to be more competitors, or copycats, I guess you'd say; just people following. Because I've done presentations now on how Nanite works, and it's been out for a few years now, so people can look at it and try to dissect it and reverse engineer it and that sort of thing.

I expected that there were going to be more people trying to replicate what it does and how it does it, especially because the info on how it works has been in SIGGRAPH presentations and such. I expected more people to be replicating the work, but I've yet to really see that in practice.

We'll see. Maybe they're incoming any day now, but they haven't arrived. Or at least I'm not aware of any that have shipped yet in any sort of video game product or major engine.

Marc Petit:

Those super high-quality assets, as you mentioned, every artist wants them, even Patrick, for GIS workflows.

Is there a path to see something like Nanite incorporated in standards like glTF, 3D Tiles, or USD? What would be your recommendation there?

Brian Karis:

The source geometry for Nanite to build things off of can very well be standardized in the same form, glTF or USD or whatever interchange formats for the source geometry. All of that's fine. But trying to standardize the compressed representation, for lack of a better word, would be the difference between standardizing a TARGA file or something like that versus standardizing JPEG, which, obviously, is standard.

The tricky bit for Nanite is there are, yeah, a lot of different facets and different aspects of it that make it more difficult.

For one, it's in rapid development. There are things that are still changing about Nanite pretty regularly, such that we bump the version number on Nanite probably almost monthly at this phase; certainly, every Unreal Engine release at this point, I'm sure, has had a version bump in it, but we bump it even more often than that internally because we've made some sort of format change.

Now, some of those changes don't actually need to be binary format-breaking. They're instead just changes to how we build the data. You can think about it like there is the encoding of JPEG, and then there is a better way to generate that same encoding that looks higher quality but would be compliant in any JPEG viewer. It wouldn't result in the same binary data, obviously, because otherwise you wouldn't have changed anything, but it comes up with a different JPEG file that looks better and is still viewable by the same viewer.

There are changes that we make where we bump the version number for reasons I won't go into, some of which we wouldn't strictly need to bump for. But there are still ones that are actually compatibility-breaking changes, and they are good changes for us to make.

They result in things either compressing better, being smaller data, rendering faster, or supporting the addition of new features, or new features being added that can only be performant if they come with those changes. Some of that would be problematic if we locked down the particular format that it's at. The next step there would seem to be, well, okay, the solution is you have a version one, and you've got a version two, and you've got a version three, and each one of those could be standardized.

For some file formats, that's fine that you could have this versioning and backwards compatibility and all that. That's much harder, if not impossible, in the current form of Nanite because of how GPUs work and the particular way that the compute shader rendering works with Nanite.

We have a single shader that runs this. It gets very nerdy here, but it runs in this persistent-threads, single-dispatch mode to be efficient. That one dispatch does the level of detail selection and occlusion culling of all of the clusters that Nanite will draw for the entire scene. Breaking that up into multiple different shaders and different compute dispatches would be far less efficient, and trying to get the logic for many different versions and decodings of the geometry all into one of these single shaders would also make that shader far less efficient.
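The persistent-threads pattern he describes keeps one pool of workers alive that repeatedly pulls nodes from a shared queue and pushes children back, instead of launching a new dispatch per hierarchy level. Here is a CPU-side analogue of that shape, purely illustrative and much simpler than the real single-dispatch GPU implementation.

```cpp
// CPU-side analogue of the persistent-threads idea: a fixed pool of workers keeps pulling
// hierarchy nodes from one shared queue and pushes their children back in.
#include <atomic>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

struct Node { std::vector<int> children; };

void processHierarchy(const std::vector<Node>& nodes, int root, unsigned workerCount) {
    std::deque<int> queue{ root };
    std::mutex queueMutex;
    std::atomic<int> inFlight{ 1 };                      // nodes queued or being processed

    auto worker = [&] {
        while (inFlight.load() > 0) {
            int index = -1;
            {
                std::lock_guard<std::mutex> lock(queueMutex);
                if (!queue.empty()) { index = queue.front(); queue.pop_front(); }
            }
            if (index < 0) { std::this_thread::yield(); continue; }

            // ...LOD-test and cull the node here, then enqueue the surviving children...
            inFlight.fetch_add(static_cast<int>(nodes[index].children.size()));
            {
                std::lock_guard<std::mutex> lock(queueMutex);
                for (int child : nodes[index].children) queue.push_back(child);
            }
            inFlight.fetch_sub(1);                       // this node is done
        }
    };

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < workerCount; ++i) pool.emplace_back(worker);
    for (auto& t : pool) t.join();
}
```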

It would be possible, but it would also be damaging to Nanite's performance and, in some cases, functionality, to try to get many meshes or meshes with different versions in the same scene at the same time.

If all of it could be one version, or all of it could be another, it'd be fine, but if you want to mix and match, the performance drops. Those are all considerations in what it is right now. As it gets more and more stable over time and the development slows, which will inevitably happen, it's not happening right now, but inevitably the pace of these changes will slow down, then it could start becoming more realistic.

I think there's a possible thing that could happen here that could make it happen in a somewhat shorter timeframe; that would be if there was a reason to implement what Nanite is doing in hardware.

There are a variety of reasons why that might be a thing to do, one of which would just be, well, we're going to make fixed-function hardware, an ASIC sort of thing, to make exactly what Nanite is doing way faster than what we'd run in software shaders, but still rasterizing and doing the same thing that it's currently doing.

I think a more likely future for this to happen, if it happens, would be to implement Nanite's data structure in a way that is fast to ray trace, which is not currently the case. If that happened, say NVIDIA, AMD, Intel, whoever, wants to make fixed-function hardware for this encoded triangle cluster format, with the ability to change which clusters are picked based on some level-of-detail properties.

I'm being very hand-wavey right now because it's hard to say exactly how this would go down.

If something like that happened, where a specific format was defined, not exactly our encoded format, but an encoded format that would get at the goals that Nanite is trying to achieve and that would be very fast to ray trace against, we would be very compelled to support specifically that format with Nanite. It would mean that for our ray tracing data, we could ray trace against the same data. It would probably be silly for us to have the ray tracing version and our current encoding be just slightly different from one another without a big benefit.

Even if that other form is slightly bigger from a compression point of view than what we currently have. Just the fact that we'd only have one of them instead of two parallel data types would be compelling enough for us to support specifically that hardware version, and then for that to really be something that would have a big impact across the industry.

Everybody, all parties involved, would have good reason to try to standardize what that format was. Otherwise, it probably wouldn't be used if it was only on a single vendor's hardware, unless it was a console. But in the PC space, there are different GPUs out there, and standardization through DirectX or Vulkan or whatever is important to get the rollout of features like that. I could see that possibly accelerating the standardization of a Nanite-like data format, but it's hard to say.

Patrick Cozzi:

Brian, it's been a ton of fun chatting with you. I love these deep technical episodes, so thank you so much. We'd love to wrap up if you want to give a shout-out to an individual or organization.

Brian Karis:

I guess I just say shout out to Epic Games. Thanks for enabling me to do this work.

I'd call out, in particular, the rest of my team, Graham Wihlidal, Rune Stubbe, and Jamie Hayes, who are working with me on Nanite. Also, Andrew and Ola, who are working on the shadow side of the Nanite work.

I've got a great team of folks that I'm working with.

Marc Petit:

So, Brian Karis, you are a geek, you are a nerd, you're a mathematician, and you are probably an elite programmer, but I want to conclude the podcast with a quote from you, which I think is very interesting and very telling and very unique.

You said in a presentation recently, and I'm going to read it: "In my opinion, there is no bigger problem in my field than making 3D content cheaper to create, making the art of creating it more accessible to more creators, and knocking down every last impediment left to artists manifesting their vision."

I want to congratulate you and thank you for that, because of that focus on artists; you also said that graphics is all about helping artists, or something like that. I think it's a very unique and very interesting point of view, and thank you.

On behalf of everybody, thank you very much for your contribution, and thank you very much for being with us today.

Brian Karis:

Well, thank you for having me.

Marc Petit:

Patrick, thank you, too. We'll see you in the next episode of Building the Open Metaverse.

Patrick Cozzi:

Looking forward to it. Thanks, everybody.