So I am building an AR solution for mobile, including iOS. It is important in this use case that the load time for the experience is very low.
I want to start AR experiences from a QR code in a video.
I had the issue that it took over a minute to launch the AR scene, and I can't really find out what matters for optimization.
What are some key things to keep in mind to keep the build time low?
I was using Glitch for build deployment.
Probably:
- low-poly models
- Which 3D file format is best for converting into USDZ?
- Can I use USDZ files in Unity for both iOS and Android mobile applications?
- Should I delete all unused assets in the scene hierarchy?
- How do interactions and components affect build time?
USDZ files are generated on the edge, i.e. on the user's device. This has the advantage that you don't need to ship multiple files (e.g. glb and usdz), and also that there is no additional download, since the file is just created in memory and passed to QuickLook.
It is almost always much faster than shipping a pre-created USDZ file.
If it takes a minute from “clicking the QuickLook button” to “the scene opening in QuickLook” then that usually means your scene is very large, has giant textures or meshes, and requires a lot of processing to be generated dynamically.
If you can share the Glitch URL I can also take a look at what may be affecting it and whether we can optimize it.
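For illustration only: this is roughly what on-device USDZ generation looks like in plain three.js, using its USDZExporter addon. It's a minimal sketch of the general technique, not Needle Engine's actual implementation, and the element id and poster image are made up.

```ts
import { Scene } from "three";
import { USDZExporter } from "three/addons/exporters/USDZExporter.js";

// Build the USDZ in memory from the live scene and hand it to QuickLook via an
// <a rel="ar"> element (Apple's trigger for AR QuickLook in iOS Safari).
async function openInQuickLook(scene: Scene): Promise<void> {
  const exporter = new USDZExporter();
  // Older three.js releases: `await exporter.parse(scene)` instead.
  const arraybuffer = await exporter.parseAsync(scene);

  const blob = new Blob([arraybuffer], { type: "model/vnd.usdz+zip" });

  // Assumed to exist in the page markup:
  // <a id="ar-link" rel="ar" download="scene.usdz"><img src="poster.png"></a>
  const link = document.getElementById("ar-link") as HTMLAnchorElement;
  link.href = URL.createObjectURL(blob);
  link.click(); // call from a user gesture handler; no extra network download happens
}
```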
Using USDZ files inside Unity
If you're using a USDZ file inside Unity (e.g. imported with the unity-usd package), the data will be converted to glTF for web viewing by Needle Engine, and then converted again when a USDZ is actually needed for iOS AR. If you already have a USDZ file, you can put it into the “Custom USDZ File” slot on the USDZExporter component.
Some general notes
- Unused assets should generally either be deleted or set to EditorOnly; disabled objects are still exported, since we can't know what you want to enable at runtime.
- Interactions and components don't really have an effect on USDZ generation speed.
- Models do not need to be low-poly (you shouldn't have to compromise on quality), but of course they need to be real-time capable for the intended devices. For lower-end iOS and Android phones that typically means max. 50 objects, max. 2k textures, max. 100k vertices.
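If you want a quick sanity check against those budgets, a small three.js traversal can count meshes and vertices. This is just a sketch, assuming you have a reference to the loaded scene or GLB root:

```ts
import { Mesh, Object3D } from "three";

// Rough budget check: count meshes and vertices under a scene/GLB root so you can
// compare against the ~50 objects / ~100k vertices guideline for lower-end phones.
function sceneStats(root: Object3D): { meshes: number; vertices: number } {
  let meshes = 0;
  let vertices = 0;
  root.traverse((object) => {
    const mesh = object as Mesh;
    if (mesh.isMesh) {
      meshes++;
      const position = mesh.geometry.getAttribute("position");
      if (position) vertices += position.count;
    }
  });
  return { meshes, vertices };
}
```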
> USDZ files are generated on the edge, i.e. on the user's device. This has the advantage that you don't need to ship multiple files (e.g. glb and usdz), and also that there is no additional download, since the file is just created in memory and passed to QuickLook.
> It is almost always much faster than shipping a pre-created USDZ file.
Is this statement correct?
So the Needle workflow of generating USDZ files on the edge is much faster compared to other iOS XR web applications that use multiple USDZ files etc., because everything is condensed into one generated file?
And is this statement correct?
> It doesn't matter which files you use in Unity (OBJ, FBX, or whatever), because before USDZ generation it will always be converted to glTF first. So it will be (multiple) OBJ → (one) glTF → (one) USDZ.
It will not matter, yes.
More precisely, if you export from Unity, for example, it will be GLB → three.js → USDZ. This gives you a lot of flexibility which wouldn't be possible if the USDZ was generated beforehand (for example, if you have Text components in your scene that change dynamically, they will display the correct text in QuickLook).
Typically, you’d have e.g. a 2 MB glTF file + 8 MB textures (exported + compressed with Needle)
So on, say, a 20 Mbps connection that takes roughly 0.5 s to load, plus about 3.5 s until all high-quality textures have loaded.
Now, if someone pressing the “QuickLook” button first had to download a 20 MB USDZ (USDZ can't use supercompressed textures), that would be 8 s of download alone, plus the time for the file to be passed to QuickLook.
With generation on the edge we can build that file in memory, which should just take a few seconds, with zero additional download time.
Typically that is overall much faster, especially on a mobile connection, and also way more flexible, as the actual state of your scene will be exported, not some pre-produced state.
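To make the back-of-the-envelope numbers above easy to reproduce (idealized: no protocol overhead and no parallel loading, so the exact figures will differ a bit in practice):

```ts
// Idealized download time: size in megabytes over a link speed in megabits per second.
const downloadSeconds = (sizeMB: number, linkMbps: number): number => (sizeMB * 8) / linkMbps;

console.log(downloadSeconds(2, 20));  // ≈ 0.8 s for the ~2 MB glTF
console.log(downloadSeconds(8, 20));  // ≈ 3.2 s for ~8 MB of compressed textures
console.log(downloadSeconds(20, 20)); // ≈ 8 s for a ~20 MB pre-made USDZ
```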