"LODs" of scenes

Disclaimer: I’m just thinking aloud and maybe (surely) you have already thought about this in some way or another.

Are there ways to deliver e.g. high and low versions (regarding vertex count, texture size) of the same scene to different devices (depending on some yet-to-be-defined metric)?

I know there are LOD groups and deferred texture loading, which are steps in this direction. But I would not want an old device to even download the “high” versions, so the different LODs would have to be split into different files, I guess.

We could of course manually author high and low versions of a scene, but I’d rather have something dynamic where I don’t have to rewire components multiple times and keep them in sync when something changes.

Original Post on Discord

by user 395602247196737546

Hello there – we have an experimental AI bot :robot: that might be able to help you with your question. Would you like to try this out?

How it works:
Reply to the messages of the bot that end with a thread_id to interact with it and ask follow-up questions. You can also mention the bot directly in this channel to get help.


Here is what I found for you :brain:

Hello Krizzl0r,

Yes, the concept you are describing is similar to the idea of Level of Detail (LOD), where multiple versions of a model or scene with differing complexities are used and switched between, based on certain conditions like distance to the camera or device capabilities. This approach helps in optimizing performance by using lower-detail assets when high detail is unnecessary.

Needle Engine supports the LODGroup component, which can be leveraged to swap between different versions of models at runtime based on the camera’s distance. However, in terms of delivering different asset quality levels to various devices without having them download all versions, Needle Engine would typically require manual setup for each scene variation.

There are a few strategies you could consider:

  1. Device Detection: Use the DeviceFlag component to alter the content dynamically based on the device type. You can load different scenes or assets based on whether the user is on a mobile or desktop device.

  2. Dynamic Asset Loading: You can implement a system that checks device performance at runtime and then loads the appropriate asset quality. You could use the AssetReference type and dynamically call its instantiate or loadAssetAsync method to load the correct version of your assets.

  3. Splitting Assets: Split your assets into multiple files, and then only load the ones appropriate for the user’s device. This might require separate builds or setups for each target group.

  4. Backend Service: You could offload the decision logic to a backend service that, upon recognizing the device type and performance metrics, delivers a URL to the correct asset version to be loaded.
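
Strategies 1 and 2 could be sketched roughly like this on the client. The thresholds and file names are made up for illustration, and the Needle Engine calls in the trailing comment are only indicative, not verified API usage:

```typescript
// Sketch: choose an asset-quality tier from coarse device signals.
// Thresholds and file names are illustrative assumptions.

type Tier = "low" | "high";

export function pickTier(deviceMemoryGB: number | undefined, isMobile: boolean): Tier {
  // navigator.deviceMemory is only exposed in some browsers and is
  // rounded/capped at 8; treat "unknown" as a mid-range device.
  const mem = deviceMemoryGB ?? 4;
  return isMobile || mem < 4 ? "low" : "high";
}

export function assetUrlFor(tier: Tier): string {
  // Assumes the build produced e.g. scene-low.glb and scene-high.glb.
  return `assets/scene-${tier}.glb`;
}

// Possible browser usage (not executed here):
// const tier = pickTier((navigator as any).deviceMemory, /Mobi/i.test(navigator.userAgent));
// const ref = AssetReference.getOrCreate(assetUrlFor(tier));
// await ref.instantiate(context.scene);
```

A real metric could also probe WebGL limits (e.g. `gl.getParameter(gl.MAX_TEXTURE_SIZE)`) instead of relying on the user agent alone.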


I think one current approach would be to build the same scene twice with different optimization settings – you’d end up with two separate builds, and you could then have a needle-engine element where, depending on some device metric, you wire one or the other build into the “src” attribute.
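
That wiring could look roughly like this; the output folder names and the capability check are assumptions, not actual build output paths:

```typescript
// Sketch: wire one of two builds into the <needle-engine> element.
// "dist-low/" and "dist-high/" are hypothetical output folders from
// building the same scene twice with different optimization settings.

export function buildSrc(lowEnd: boolean): string {
  return lowEnd ? "dist-low/scene.glb" : "dist-high/scene.glb";
}

// In an inline <script> on the page (browser only, not executed here):
// const el = document.querySelector("needle-engine");
// const lowEnd = /Mobi|Android/i.test(navigator.userAgent);
// el?.setAttribute("src", buildSrc(lowEnd));
```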

As part of the build settings you could override settings like the max texture resolution for deferred loading or the max vertex count per object, etc.

As part of a repeatable pipeline this could likely also be added to the bundling step (outside Unity) directly, so that multiple versions of files are generated and there’s a callback for that choice

And/or what nowadays is called edge computing: basically, the request to “give me the glb file” would return different things based on what the requesting device is.
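
A minimal sketch of that edge-side decision, assuming lower-cased header keys and a made-up file layout (the `Sec-CH-UA-Mobile` header comes from the Client Hints spec):

```typescript
// Sketch: decide which glb variant to serve based on request hints.
// The file names are illustrative; a real handler would rewrite the
// request URL to the chosen variant.

export function variantFromHeaders(headers: Record<string, string>): string {
  // Sec-CH-UA-Mobile is "?1" on mobile when Client Hints are sent;
  // fall back to a crude User-Agent sniff otherwise.
  const chMobile = headers["sec-ch-ua-mobile"];
  const mobile = chMobile !== undefined
    ? chMobile === "?1"
    : /Mobi|Android/i.test(headers["user-agent"] ?? "");
  return mobile ? "scene-low.glb" : "scene-high.glb";
}

// An edge worker would then do something like:
// fetch(`${ASSET_ORIGIN}/${variantFromHeaders(requestHeaders)}`)
```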

Regarding the textures: Using progressive loading there’s only a “preview” and “full” version of the textures generated, right? Nothing in between.

by user 395602247196737546

My thinking was that if there are multiple texture versions I would not even need different versions of the scene as I could just modify the max resolution right on scene load.

by user 395602247196737546
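
If the bundler did emit several texture sizes, picking the right one at scene load could be as simple as the helper below; the size list and device cap are illustrative values, not anything the engine currently exposes:

```typescript
// Sketch: pick the largest available texture size that fits a
// per-device cap, falling back to the smallest if none fit.

export function pickTextureSize(available: number[], maxOnDevice: number): number {
  const fitting = available.filter(s => s <= maxOnDevice);
  return fitting.length > 0 ? Math.max(...fitting) : Math.min(...available);
}
```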

Yep, that’s kind of what I mean above, having the bundler generate multiple versions and then choosing based on some condition.

The current implementation of deferred textures is mainly designed to improve load times and to ease memory strain. It was always the plan to improve upon that later with multiple texture LOD levels, and likely also mesh LOD levels, that behave more like “streaming” – similar to what you describe.

Yeah streaming, that term did not come to mind. But I guess that’s the core idea of what I’m contemplating about :slightly_smiling_face:

by user 395602247196737546

As Felix said: currently there’s no option exposed to enable/disable texture LODs per device (although we could possibly add this relatively easily) – and yes, you don’t have an in-between version, as we only have two levels right now. If I remember correctly, the code is set up to potentially support multiple levels, and yes, this is something we have thought about and would like to add at some point – as well as the same kind of deferred loading/dynamic resolution for meshes.

That’d be great. But it’s not on the roadmap so far I take it?

by user 395602247196737546

Hmm, but on the other hand – and don’t get me wrong here – if all that is crammed into glTF extensions, what is the point of using glTF anymore? (I saw @herbst🌵’s comments on the glXF repo regarding scene composition, and this would be going in the same direction, I guess.)

by user 395602247196737546

Sorry, I’m kind of split here. On the one hand I need new features but on the other hand I’m saying maybe slow your horses ;D

by user 395602247196737546

It’s not a priority right now. Note that it’s possible to fund certain features

Certainly good to know, I’ll keep it in mind. Do I have a vote with a pro subscription? :wink:

by user 395602247196737546

The point of using glTF is that it allows this flexibility, in a way where files are still very compatible, without requiring a gigabyte of custom libraries like USD does :slightly_smiling_face: