How can I add Additional UI to WebXR Scenes?

Hello! I’m trying to add some additional feedback to a scene in AR that I’m currently building.
The idea is that once you click Enter AR and enter the mode, an image/animation with instructions is shown; then, once a surface is detected, I want to swap it for a "Touch to place object" image/animation.

I’m not quite sure where to start with this, but so far I’ve figured out that these events exist:
xrSession.onselectstart, xrSession.onselect, xrSession.onselectend.
Yet I have no idea how to get access to the current xrSession object. I looked at the source code for the WebXR class, but apparently it doesn’t have a property for the current session.

Any suggestions?

Original Post on Discord

by user 368204675922657280

Hello there – we have an experimental AI bot :robot: that might be able to help you with your question. Would you like to try this out?

How it works:
Reply to the messages of the bot that end with a thread_id to interact with it and ask follow-up questions. You can also mention the bot directly in this channel to get help.


Here is what I found for you :brain:

Hello De!

To implement the additional feedback and instructions for your AR scene, you can subscribe to various events from the WebXR component, which manages the session and interaction in AR. The WebXR component provides an API for entering and exiting XR sessions and handles the lifecycle events internally.

Here’s a high-level overview of how you might implement this:

  • Add a WebXR component to your scene if not already present.
  • Listen for the ARSessionStart and ARSessionEnd events provided by the WebXR component to handle the AR session lifecycle.
  • When the AR session starts (ARSessionStart event), display the instruction image/animation.
  • Implement a surface detection mechanism to determine when a suitable surface has been detected for object placement.
  • Upon surface detection, swap the instruction image/animation with the “Touch to place object” feedback.
  • Listen for user interaction events (like touch or click) to place the object at the desired location.
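Independent of the engine API, the swap logic in the steps above can be sketched as a small state machine. This is plain TypeScript; the state and event names are illustrative, not Needle Engine API:

```typescript
// The four UI states the overview describes, and the events that move
// between them. This is engine-agnostic: wire the events up to whatever
// session/hit-test callbacks your engine version provides.

type ARFeedbackState = "idle" | "showInstructions" | "touchToPlace" | "placed";

type ARFeedbackEvent = "sessionStart" | "surfaceDetected" | "userTapped" | "sessionEnd";

function nextState(state: ARFeedbackState, event: ARFeedbackEvent): ARFeedbackState {
    switch (event) {
        case "sessionStart":
            // entering AR: show the intro instruction image/animation
            return "showInstructions";
        case "surfaceDetected":
            // only swap to "touch to place" if we are still showing the intro
            return state === "showInstructions" ? "touchToPlace" : state;
        case "userTapped":
            // place the object only while the placement prompt is visible
            return state === "touchToPlace" ? "placed" : state;
        case "sessionEnd":
            // leaving AR: hide all overlays
            return "idle";
    }
}
```

Each UI element (intro instruction, placement prompt) is then simply shown or hidden based on the current state, which keeps the feedback logic testable outside of an XR session.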

Now let’s write some code for this. We’ll create a component that subscribes to the necessary events and updates the UI feedback accordingly.


import { Behaviour, WebXRSessionStartEvent, WebXR } from '@needle-tools/engine';
import { Sprite } from '@needle-tools/engine'; // Import additional types as necessary

export class ARInstructionFeedback extends Behaviour {

    initialInstruction?: Sprite;

    touchToPlaceInstruction?: Sprite;

    private xrComponent?: WebXR; // Reference to the WebXR component

    awake() {
        this.xrComponent = this.gameObject.getComponent(WebXR) ?? undefined;
        if (!this.xrComponent) {
            console.error("WebXR component not found");
            return;
        }
        // Subscribe to WebXR events. Note: the exact event identifiers
        // differ between Needle Engine versions -- check the WebXR
        // component source for the names your version exposes.
        this.xrComponent.addEventListener(WebXRSessionStartEvent, this.onSessionStart.bind(this));
        this.xrComponent.addEventListener(WebXR.ARSessionEnd, this.onSessionEnd.bind(this));
        // Implement additional event subscriptions as necessary
    }

    private onSessionStart(_event) {
        // Show the initial instructions
        if (this.initialInstruction) this.initialInstruction.enabled = true;
        if (this.touchToPlaceInstruction) this.touchToPlaceInstruction.enabled = false;
        // Implement surface detection and show the "touch to place"
        // instruction once a surface has been found
    }

    private onSessionEnd(_event) {
        // Hide all instructions
        if (this.initialInstruction) this.initialInstruction.enabled = false;
        if (this.touchToPlaceInstruction) this.touchToPlaceInstruction.enabled = false;
    }

    // Implement additional methods for surface detection and UI updating
}

Make sure to add the Sprite components for initialInstruction and touchToPlaceInstruction and assign them in your scene.

Since Needle Engine doesn’t provide direct access to xrSession like the raw WebXR API does, the interactions are managed within the WebXR component, which abstracts the session handling. If you need more specific control over the WebXR session, you would typically need to extend the existing WebXR component or create a new one that exposes the xrSession instance.

For additional information on WebXR interactions, you can review the “Virtual and Augmented Reality” and “WebXR” sections in the documentation.

This is just a starting point. You’ll need to implement the surface detection logic and any additional user interaction logic for your specific use case.


Can you describe a bit more what you’re missing? You can interact with WebXR scenes basically the same way as with a screen (e.g. use buttons on objects or UI, use animations, …).
If you need session access then yes, you can get the current session from the context, from the renderer, and via XR-specific callbacks (multiple ways).
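As a sketch of that “multiple ways, first available source wins” idea, here is a small plain-TypeScript helper. The helper itself is illustrative, and the commented property names in the usage are assumptions to be checked against your Needle Engine version, not confirmed API:

```typescript
// Try several accessors in order and return the first session found.
// Accessors that throw (e.g. because the renderer has no active XR
// session yet) are skipped.

function resolveSession<T>(getters: Array<() => T | null | undefined>): T | null {
    for (const get of getters) {
        try {
            const session = get();
            if (session) return session;
        } catch {
            // this accessor is not available right now; try the next one
        }
    }
    return null;
}

// Hypothetical usage inside a Behaviour (property names are assumptions,
// check your engine version -- three.js itself does expose
// renderer.xr.getSession()):
//
// const session = resolveSession([
//     () => this.context.xr?.session,            // from the Needle context
//     () => this.context.renderer.xr.getSession(), // from the three.js renderer
// ]);
```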

Well, let’s go step by step: say I want to add a button (HTML + CSS) inside the AR context (basically an overlay). What is the Needle Engine-friendly approach to this?

by user 368204675922657280

PS: I’m currently using Vue as the frontend, so I’m a little confused about how to combine the two, or whether it’s even possible.

by user 368204675922657280

Hey :wave: have you read this section of the docs regarding HTML Overlays for AR?

For an overlay you can simply nest it inside the HTML, as described on the docs page:

    <div class="desktop ar">
        <button>I'm in AR!</button>
    </div>

If you’re using Vue a common approach is to nest inside a “NeedleEngine” vue component and then put a slot inside the component, similar to this pattern:


  <needle-engine ...>
    <div class="desktop ar">
      <slot></slot>
    </div>
  </needle-engine>

and then, wherever you use that component, it would look like this:

  <button>I'm in AR!</button>

Under the hood we’re handling the WebXR dom-overlay feature, and when it’s available that content will be visible on the overlay. Keep in mind dom-overlay is typically only available on screen-based AR systems (Android WebXR, the iOS Mozilla WebXR Viewer).
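For context on what “handling the dom-overlay feature” involves at the raw WebXR level, here is a sketch of how the session options would be assembled before calling navigator.xr.requestSession("immersive-ar", init). Only the "dom-overlay" feature string and the { domOverlay: { root } } shape come from the WebXR spec; the helper, the local interface, and the choice of "hit-test" as a required feature are illustrative assumptions:

```typescript
// Minimal local stand-in for the relevant part of XRSessionInit,
// so this sketch is self-contained.
interface ARSessionInit {
    requiredFeatures: string[];
    optionalFeatures: string[];
    domOverlay?: { root: unknown };
}

// Build the options object for an immersive-ar session request.
// overlayRoot would be the DOM element holding your AR UI.
function buildSessionInit(overlayRoot: unknown): ARSessionInit {
    const init: ARSessionInit = {
        requiredFeatures: ["hit-test"], // surface detection for placement
        optionalFeatures: [],
    };
    if (overlayRoot) {
        // Request dom-overlay only as an *optional* feature: screen-based
        // AR (e.g. Android Chrome) supports it, headset browsers generally
        // do not, and a required feature would make the request fail there.
        init.optionalFeatures.push("dom-overlay");
        init.domOverlay = { root: overlayRoot };
    }
    return init;
}
```

After the session starts you can check session.domOverlayState to see whether the overlay was actually granted, and fall back to in-scene UI if it was not.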