Questions on WebAR workflow and best practices

Attached is my mapped-out workflow for my current (and probably future) projects. I have a few specific questions on specific images. Again, sorry if these are basic questions, but most of my dev work has been all Unity or all HTML, so working with TS and Unity together is still new to me.

Site Loaded:

  • What is the best way to test WebXR compatibility? TS or JS?
  • What is the best way to add UI for redirecting users and a usage guide? This will probably be HTML since it is screen space, but again, how do I go about this with TS?
  • How do I control when the AR button shows, or replace its command with the redirects or opening the guide?

Acquire Pose:

  • What is the best way to acquire the pose once the screen is tapped?
  • How do I adjust the model position after the offsets have been calculated?

Model Placed:

  • How do I determine that the model has been placed?
  • Everything after "Icons ready" is already done.

Thanks for any info.



Original Post on Discord

by user 943936853348855838

  • Compatibility: the WebXR type has static getters for IsARSupported and IsVRSupported (see the TypeScript sketch after this list)
  • UI with TS: many possible pathways, there are libraries to do this and add databinding etc (e.g. https://lit.dev/)
  • Buttons: You can disable automatic button creation on the WebXR component and then create the buttons yourself (again, the WebXR component exposes the methods for that, e.g. WebXR.createVRButton). Either add them to the DOM where you want, or build your own buttons and simply invoke the “click” event on the button elements you get from WebXR.
  • Pose: you can get that from the XR Session (you get the session via this.context.xrSession)
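A minimal TypeScript sketch of the points above, assuming the APIs named in this thread (WebXR.IsARSupported, WebXR.createVRButton, this.context.xrSession). GameObject.findObjectOfType, the createARButton counterpart, and the HTML element ids are my own assumptions, so treat this as a starting point rather than exact Needle Engine API:

```ts
import { Behaviour, GameObject, WebXR } from "@needle-tools/engine";

export class CustomAREntry extends Behaviour {

    async start() {
        // Compatibility: static getter on the WebXR type
        // (await is harmless whether this returns a boolean or a Promise<boolean>).
        const arSupported = await WebXR.IsARSupported;
        if (!arSupported) {
            // Not supported: show your own screen-space HTML (redirect / usage guide)
            // instead of an AR button. "ar-unsupported-ui" is a hypothetical element id.
            document.getElementById("ar-unsupported-ui")?.classList.remove("hidden");
            return;
        }

        // Buttons: with automatic button creation disabled on the WebXR component,
        // build the button yourself and add it to the DOM wherever you want.
        // createARButton is assumed here by analogy with the createVRButton mentioned above.
        const webxr = GameObject.findObjectOfType(WebXR);
        if (webxr) {
            const arButton = WebXR.createARButton(webxr);
            document.getElementById("my-ar-button-container")?.appendChild(arButton);
        }
    }

    // Pose: once an AR session is running, read the viewer pose from the XR session
    // using standard WebXR calls (requestReferenceSpace / getViewerPose).
    onScreenTap() {
        const session = this.context.xrSession;
        if (!session) return;
        session.requestReferenceSpace("local").then(refSpace => {
            session.requestAnimationFrame((_time, frame) => {
                const pose = frame.getViewerPose(refSpace);
                if (pose) console.log("Viewer position:", pose.transform.position);
            });
        });
    }
}
```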

Model Placement:

  • Determine if it has been placed: there’s currently no good API for it. But you can poll whether the WebXRSessionRoot is invisible: if yes, it’s still placing; if no, the scene has been placed. A rough sketch of that polling approach follows below.
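A rough sketch of that idea written as a small component. WebXRSessionRoot and this.context.xrSession come from this thread; the PlacementWatcher name, GameObject.findObjectOfType, and reading gameObject.visible are my assumptions about how to express the check:

```ts
import { Behaviour, GameObject, WebXRSessionRoot } from "@needle-tools/engine";

export class PlacementWatcher extends Behaviour {
    private sessionRoot: WebXRSessionRoot | null = null;
    private placed = false;

    start() {
        this.sessionRoot = GameObject.findObjectOfType(WebXRSessionRoot);
    }

    update() {
        // Only meaningful while an XR session is running.
        if (this.placed || !this.sessionRoot || !this.context.xrSession) return;
        // Invisible -> still placing; visible -> the scene has been placed.
        if (this.sessionRoot.gameObject.visible) {
            this.placed = true;
            console.log("Model placed: enable your post-placement UI here");
        }
    }
}
```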

Worth noting that, depending on the experience you’re trying to build, there’s another path on iOS: dynamically generating a USDZ right out of Needle Engine. That works in AR on all iOS browsers, but has limited interactive capabilities (not none, though; you can still click buttons to play animations and sounds, and so on).

Also, not sure if you did, but it’s worth looking at the overall UX of https://modelviewer.dev, which has a good amount of Google research and user testing behind it. It’s not perfect (you’re gonna find me in the Issues there complaining about a number of UX details), but worth taking a look at.
(it does use USDZ auto-conversion by default for iOS)

@herbst🌵 As you can see from the images, I have specific buttons and actions being called, so I don’t think iOS Quick Look is an option.

Thanks for all the info. I will look into it all and let you know if I have any questions.

by user 943936853348855838

Is there documentation on how to export from Needle for Quick Look?

by user 103054507105067008

You mostly just create a three.js USDZExporter (look at their sample) and are ready to go; a rough sketch is below.
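For illustration, a minimal sketch following the three.js USDZExporter sample. The openInQuickLook name and the rel="ar" anchor handling are assumptions rather than Needle Engine API, and depending on your three.js version the exporter call is `await exporter.parse(scene)` instead of `parseAsync`:

```ts
import { Scene } from "three";
import { USDZExporter } from "three/examples/jsm/exporters/USDZExporter.js";

async function openInQuickLook(scene: Scene) {
    // Export the scene to a .usdz blob (see the three.js misc_exporter_usdz sample).
    const exporter = new USDZExporter();
    const arraybuffer = await exporter.parseAsync(scene);
    const blob = new Blob([arraybuffer], { type: "model/vnd.usdz+zip" });
    const url = URL.createObjectURL(blob);

    // On iOS Safari, a rel="ar" anchor containing an <img> child opens Quick Look.
    const anchor = document.createElement("a");
    anchor.rel = "ar";
    anchor.href = url;
    anchor.appendChild(document.createElement("img"));
    document.body.appendChild(anchor);
    anchor.click();
}
```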

We’re planning to integrate that better soon (so that e.g. it’s just an option on the WebXR component), but for now that’s how you can do it 🙂

Some cool things are already implemented on our end (like animation support and the ability to do simple interactions), but they’re not production-ready yet.