Hello! I'm trying to add some additional feedback to a scene in AR that I'm currently building.
The idea is that, once you click Enter AR and enter the mode, an image/animation with instructions is shown; then, when a surface is detected, I want to swap to a "Touch to place object" image/animation.
I'm not quite sure where to start with this, but so far I figured out that these events exist: xrSession.onselectstart, xrSession.onselect, xrSession.onselectend.
Yet I have no idea how to get access to the current xrSession object. I looked at the source code for the WebXR class, but apparently it doesn't have a property for the current session.
To implement the additional feedback and instructions for your AR scene, you can subscribe to various events from the WebXR component, which manages the session and interaction in AR. The WebXR component provides an API for entering and exiting XR sessions and handles the lifecycle events internally.
Here's a high-level overview of how you might implement this:
1. Add a WebXR component to your scene if one is not already present.
2. Listen for the ARSessionStart and ARSessionEnd events provided by the WebXR component to handle the AR session lifecycle.
3. When the AR session starts (ARSessionStart event), display the instruction image/animation.
4. Implement a surface-detection mechanism to determine when a suitable surface has been detected for object placement.
5. Upon surface detection, swap the instruction image/animation for the "Touch to place object" feedback.
6. Listen for user interaction events (like touch or click) to place the object at the desired location.
Now let's write some code for this. We'll create a component that subscribes to the necessary events and updates the UI feedback accordingly.
Make sure to add the Sprite components for initialInstruction and touchToPlaceInstruction and assign them in your scene.
Since Needle Engine doesn't provide direct access to xrSession the way the raw WebXR API does, the interactions are managed within the WebXR component, which abstracts the session handling. If you need more specific control over the WebXR session, you would typically extend the existing WebXR component or create a new one that exposes the xrSession instance.
For additional information on WebXR interactions, you can review the "Virtual and Augmented Reality" and "WebXR" sections in the documentation.
This is just a starting point. You'll need to implement the surface detection logic and any additional user interaction logic for your specific use case.
Can you describe a bit more what you're missing? You can interact with WebXR scenes basically the same as on screen (e.g. use Buttons on objects or UI, use animations, …);
if you need session access then yes, you can get the current session from the context, from the renderer, and via XR-specific callbacks (multiple ways)
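The renderer route can be sketched like this. `renderer.xr.getSession()` is the standard three.js WebXRManager API, and Needle exposes the renderer on a component's `context`; the helper below is typed loosely (no three.js import) just to show the shape, so treat the property names as something to verify against your engine version:

```typescript
// Sketch: reaching the active XRSession via the three.js renderer that
// Needle Engine manages. Typed structurally so this compiles standalone;
// the shape mirrors three.js's WebGLRenderer.xr (WebXRManager).
type XRManagerLike = { getSession(): object | null };
type ContextLike = { renderer: { xr: XRManagerLike } };

// Inside a Behaviour you would call this as: getXRSession(this.context)
// Returns null when no XR session is currently active.
export function getXRSession(context: ContextLike): object | null {
  return context.renderer.xr.getSession();
}
```

Once you have the session you can attach the raw select handlers from the original question (onselectstart, onselect, onselectend) directly to it.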
Well, let's go part by part… let's say I want to add a button (HTML + CSS) inside the AR context (basically an overlay). What is the Needle Engine-friendly approach to this?
If you're using Vue, a common approach is to nest your markup inside a "NeedleEngine" Vue component and put it in a slot inside that component, similar to this pattern:
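A minimal sketch of that pattern (the component name follows the reply above; the `src` path and click handler are illustrative, so check the Needle Engine Vue sample for the exact setup):

```html
<template>
  <!-- Sketch: overlay markup nested in a slot of the NeedleEngine component.
       When the WebXR dom-overlay feature is active, this slot content is
       shown on top of the AR view; otherwise it renders as regular HTML. -->
  <NeedleEngine src="assets/scene.glb">
    <button class="ar-overlay-button" @click="onPlace">
      Place object
    </button>
  </NeedleEngine>
</template>
```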
Under the hood we're handling the WebXR dom-overlay feature, and when it's available that content will be visible on the overlay. Keep in mind dom-overlay is typically only available on screen-based AR systems (Android WebXR, the iOS Mozilla WebXR Viewer).