Hello, we have a printed 16-page brochure with augmented elements for each page; it works with MindAR (WebGL), and we have a QR code for each page, but it could be better. The idea: we would use Needle to build a collaborative AR brochure with multi-image tracking. Is that possible? I didn't find a way searching here or in the documentation. We tried to combine elements of the collaborative sandbox sample with the image tracking sample, but I don't think that works.
For example, one page could show a city plan with buildings, and when colleagues have the same brochure open and scanned in AR, they could join a room and drag and drop buildings on the AR map. I hope you can picture what I mean; it sounds complicated, but this would be a game changer.
Maybe multi-image tracking on Quest 3 and AVP would be nice too.
Hello Michael Herrmann! Your concept for a collaborative AR brochure using Needle Engine sounds exciting and feasible. Implementing multi-image tracking and collaborative features is indeed a game changer for AR applications.
To achieve this, here’s a high-level plan:
1. Multi-image tracking:
   - Use Needle Engine's image tracking capabilities to recognize multiple images in the brochure. You can have one tracker for each page, allowing users to interact with the respective AR content by scanning the QR codes.
2. Collaborative features:
   - Implement a real-time communication channel (such as websockets or a similar service) so multiple users can share their state and actions in the AR environment, for example positioning buildings on a city plan.
3. User interface:
   - Design a simple UI that gives users options for tracking and placing buildings. This can be built from components that react to AR events triggered by image detection.
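The collaborative part above boils down to a small message protocol that every client agrees on. As a minimal sketch (the `placeBuilding` message type and its field names are assumptions for illustration, not an existing Needle Engine or MindAR API), each client could encode a placement event before broadcasting it:

```javascript
// Encode a "user placed a building" event for sending over a websocket.
// The message shape is an assumed convention, not a library API.
function encodePlacement(page, buildingId, position) {
  return JSON.stringify({
    type: "placeBuilding",
    page,       // which brochure page / tracked image
    buildingId, // which building model was placed
    position,   // { x, y, z } relative to the tracked image
  });
}

// Decode an incoming message; returns null for unrelated message types.
function decodePlacement(raw) {
  const msg = JSON.parse(raw);
  return msg.type === "placeBuilding" ? msg : null;
}

// Usage: socket.send(encodePlacement("Page1", "townhall", { x: 0.1, y: 0, z: 0.2 }));
```

Keeping encode/decode as pure functions makes the sync logic testable without a running AR session or socket.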
Here’s a rough pseudocode outline:
1. Initialize AR Session
2. Load multi-image trackables linked to the brochure pages
3. On QR code detection:
- Display the relevant AR content
- Enable collaborative features (websocket connection)
4. User places buildings:
- Share the position data with other users via the websocket
- Update AR scene accordingly for all users
5. Allow users to interact with the AR elements collectively
Here's a basic snippet to illustrate the flow of multi-image tracking. Note that `AR` and `MultiImageTracker` are placeholder names used for illustration here, not confirmed exports of `@needle-tools/engine`, so treat this as pseudocode:

```javascript
// Illustrative only: these class names are placeholders for the overall flow.
import { AR, MultiImageTracker } from "@needle-tools/engine";

// Initialize an AR session
const arSession = new AR();

// One marker per brochure page
const markers = [
  { image: "marker1.png", id: "Page1" },
  { image: "marker2.png", id: "Page2" },
  // Add more images as needed
];

const tracker = new MultiImageTracker(markers);
arSession.addTracker(tracker);

// Handle events when a marker is detected
tracker.on("markerFound", (markerId) => {
  console.log(`Detected marker: ${markerId}`);
  // Logic for displaying AR content or enabling interactions
});

// Establish a websocket connection for collaboration
const socket = new WebSocket("ws://your-websocket-url");
socket.onmessage = (event) => {
  // Update the AR environment based on received data
};
```
You will need to refine the above code based on your exact requirements, but this should give you a starting point.
For further integration, check out the collaborative sandbox sample and ensure the interactions are synchronized across users.
If you need specific details on integrating features or have particular questions, please let me know!
Hi,
This should totally work if you have an AR scene running. You could join rooms depending on the detected image. Our core image tracking currently uses the WebXR incubations, but you can also use MindAR instead with your own components.
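Joining a room per detected image can be as simple as deriving a deterministic room name from the image id, so everyone who scans the same page lands in the same session. A minimal sketch (the naming scheme is just an assumed convention, not a Needle Engine API):

```javascript
// Derive a shared room name from a project id and a detected image id.
// Deterministic, so all clients scanning the same page compute the same name.
function roomNameForImage(projectId, imageId) {
  return `${projectId}-${imageId}`.toLowerCase();
}

// Usage: pass the result to your networking layer as the room to join.
```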
It would be great if you could share some more details about what you tried.
Btw, search should be available in the top bar.
Yes, thanks, I did search already. I looked for combinations of AR and collab but didn't find solutions. Okay, so you're saying it is possible to combine the collab sandbox components with AR image tracking?
Yes, the main components that make up the collab sandbox are DragControls, Duplicatable, SyncedTransform and the SyncedRoom.
Try this: first without networking; we can add that later once the single-player experience works. It's always easier to break a project down first and then build on it, so you reach your final goal step by step. Adding networking on top is not the issue.
- Create a prefab for testing with 2 tracked images
- Add some objects to the prefab and add a DragControls component to whatever you want to drag around
- Test the app with your images in AR
You might want to add MindAR at this point; you can follow their documentation to add it to Needle Engine. Feel free to post if you need assistance there.
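Following MindAR's documentation, its three.js flavor exposes one anchor per tracked image, addressed by index in the compiled target file. Below is a hedged sketch of the wiring: the container id, target file path, and page order are assumptions, and the MindAR setup is kept inside a function you would call from your app once the DOM is ready (only the small page-lookup helper runs here):

```javascript
// Map a brochure page name to its anchor index in the compiled
// MindAR target file (the page order is an assumed convention).
const pages = ["Page1", "Page2"];
function anchorIndexForPage(pageName) {
  return pages.indexOf(pageName); // -1 if unknown
}

// Sketch of MindAR image tracking setup (three.js flavor).
// Not invoked here; call startMindAR() from your app when the page is ready.
async function startMindAR() {
  const { MindARThree } = await import("mind-ar/dist/mindar-image-three.prod.js");
  const mindarThree = new MindARThree({
    container: document.querySelector("#ar-container"), // assumed element id
    imageTargetSrc: "targets.mind",                     // compiled target file
  });
  const { renderer, scene, camera } = mindarThree;
  // One anchor per tracked image; attach your page content to anchor.group.
  const anchor = mindarThree.addAnchor(anchorIndexForPage("Page1"));
  anchor.onTargetFound = () => console.log("Page1 found");
  await mindarThree.start();
  renderer.setAnimationLoop(() => renderer.render(scene, camera));
}
```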
Thanks. I tried with one custom image and standard shapes and it worked, but when I added a second image tracking and tracked image object it doesn't. The website shows the rotated object, but when I tap "View in AR" on iPhone it says "Objekt konnte nicht geöffnet werden" (the object could not be opened).
Where should I insert and customize this code?
Hello Michael Herrmann! You should insert and customize this code within your Needle Engine project, specifically in the section where you initialize your AR application. Here’s a step-by-step guide on how to do that:
1. Set up your project: ensure your Needle Engine project is correctly set up and that you have the necessary components for AR.
2. File structure: locate the main JavaScript or TypeScript file where the AR logic is implemented. This is typically where you set up the AR session and any required components.
3. Insert the code: at the beginning of this file, import the necessary classes from Needle Engine:
   import { AR, MultiImageTracker } from "@needle-tools/engine";
4. Initialize the AR session: right after your existing AR initialization code, insert the provided tracking code. Replace the placeholder image paths with the actual paths to your images (e.g., 'marker1.png', 'marker2.png').
5. Customize the interaction logic: customize the event handler in the tracker.on('markerFound') callback to define what happens when each marker is detected. This is where you implement your custom AR content display logic.
6. WebSocket integration: if you don't have existing websocket logic, implement the websocket connection setup where you handle user interactions and synchronize the states.
Here's a more tailored example of where to place the code (using the same illustrative class names as above):

```javascript
// Main file (e.g., main.js or app.js)
import { AR, MultiImageTracker } from "@needle-tools/engine";

// Initialize an AR session
const arSession = new AR();

// Define your markers
const markers = [
  { image: "assets/marker1.png", id: "Page1" },
  { image: "assets/marker2.png", id: "Page2" },
  // Add more as needed
];

// Create the multi-image tracker
const tracker = new MultiImageTracker(markers);
arSession.addTracker(tracker);

// Marker found event
tracker.on("markerFound", (markerId) => {
  console.log(`Detected marker: ${markerId}`);
  // Customize your AR logic here
});

// Websocket setup for collaboration
const socket = new WebSocket("ws://your-websocket-url");
socket.onmessage = (event) => {
  // Logic to update AR content based on incoming messages
};

// Continue with your AR application logic...
```
Make sure to test your implementation thoroughly. If you need further assistance on specific features or encounter any issues, feel free to ask!
Hi Michael,
To clarify: did you add two ImageTracking components, or two entries to the TrackedImages array? The screenshot looks like you might have multiple WebXR components in the scene.
For iOS AR, currently only the first image can be tracked; that's a limitation of Apple's QuickLook, unfortunately. (This limitation wouldn't be present with something like MindAR, but tracking is much more stable with the native solution.) Which image is tracked can be controlled by modifying the order of the array, but I'm aware that this might not be a workable solution for the project you described.
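Since only the first array entry is tracked on iOS, reordering the tracked-images array before starting is one possible workaround. A minimal sketch (a plain array helper; the entry shape and how the array reaches the tracking component depend on your setup and are assumptions here):

```javascript
// Move the image that should be tracked on iOS to the front of the
// tracked-images array. Pure helper: returns a new array, leaves the
// input untouched. Entry shape { name } is an assumed convention.
function prioritizeImage(trackedImages, imageName) {
  const index = trackedImages.findIndex((img) => img.name === imageName);
  if (index <= 0) return trackedImages.slice(); // not found, or already first
  const copy = trackedImages.slice();
  const [chosen] = copy.splice(index, 1);
  copy.unshift(chosen);
  return copy;
}

// Usage: prioritizeImage([{ name: "Page1" }, { name: "Page2" }], "Page2")
```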
Thanks. I added two ImageTracking components, where I attached the two images of pages one and two, and tracked image 1 and 2 as objects.
There seems to be a bit of a mixup of concepts here, so some clarifications:
1. The built-in image tracking in Needle has platform-specific limitations, which you can also read about in our docs.
   - On Android, WebXR image tracking is not yet enabled by default in Chrome (a decision by Google, not us); users have to turn on a browser flag. So from our perspective it's great when you have control of the devices, but not for general usage.
   - On iOS, we're converting scenes to interactive USDZ, as Apple does not support WebXR on iOS. Interactive USDZ is limited to a single tracked image and does not support scripting (so, no networking and so on).
2. There are other browser-based solutions for image tracking that require additional user permissions, like camera access, but also have more features.
   - For example, using something like MindAR or 8thwall as the tracking solution uses the in-browser camera and does not use WebXR.
   - The downside is that users have to go through an additional permission dialogue to allow camera access, and image tracking quality is usually lower than with the built-in solution.
For your use case, it sounds like the built-in image tracking (WebXRImageTracking) is not sufficient. You will have to use a solution like MindAR to support multi-image tracking and networking on both Android and iOS.
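If you go the MindAR route with your own networking, the shared state can stay very small: a map from page to placed buildings, updated by every incoming message, so all clients converge on the same scene. A minimal sketch (the structure and field names are assumptions for illustration):

```javascript
// Keep one list of placed buildings per brochure page. Every client
// applies the same messages in order, so all views converge.
function createPlacementState() {
  const byPage = new Map();
  return {
    // msg: { page, buildingId, position } - assumed message shape
    apply(msg) {
      if (!byPage.has(msg.page)) byPage.set(msg.page, []);
      byPage.get(msg.page).push({ id: msg.buildingId, position: msg.position });
    },
    // All buildings placed on a given page so far.
    buildingsOn(page) {
      return byPage.get(page) ?? [];
    },
  };
}

// Usage: call state.apply(...) from your websocket onmessage handler,
// then spawn/update objects under the matching tracked-image anchor.
```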