"Collaborative Sandbox" - VR avatar

We have been testing the “Collaborative Sandbox” example. When one user enters the project through the Meta Quest 2 browser and another through the desktop browser, the VR avatar does not appear synchronized – it does not render at all. In the Castle Builder example project, the VR avatars are perfectly synchronized. Is the Castle Builder project available in Unity?

Original Post on Discord

by user 285541785730547714

Yes, it’s available when you have a commercial use license
The avatars should definitely work in the Sandbox as well – if the avatar doesn’t show are you sure you’re in the same room? Does everything else sync correctly?

We have tested both our own project and the example project on your website. When one user connects to the project with a Meta Quest 2 and another with desktop or mobile, in the same room, the avatar is not displayed in the scene. If you move an element, it is synchronized, but the avatar does not appear for the other users. However, in the Castle Builder scene everything works fine.

by user 285541785730547714

Ok, we are interested in buying a license to use in our project. We have been using Unity for a long time and believe it is the best way forward. Our project is based on building training experiences using immersive technology that works on most VR devices and is compatible with mobiles, tablets, etc. – very similar to your philosophy. I hope you can help us in our adventure.

by user 285541785730547714

Are you planning to add functionality such as hand gesture gripping? For us this part is very important.

by user 285541785730547714

We’ll check out the Avatar visibility issue, thanks for reporting. Glad that you’re considering a license! You can get seats directly from https://needle.tools, let me know if there are any questions.

Regarding hand gesture gripping: we have lots of improvements planned for VR and AR scenarios. With gesture gripping, do you mean:

  1. predefined gestures that are used when grabbing specific objects (e.g. holding a cup with the controllers), or
  2. automatic gesture handling and collision for hand tracking?

For us, the predefined gestures used when grabbing specific objects (e.g., holding a cup with the controllers) are important. But I understand that the second option could also be beneficial for everyone.

by user 285541785730547714