While this is more of a community question, I was wondering who else has started to create their games using Needle and how well it scales the larger they become. I’ve been following this project for quite a while, and the remarkable work from marcel, herbst, and others convinced me to start a project here. You’re both welcome to join the convo too!
Is Needle for now still best suited to creating mostly small to slightly bigger experiences, or is something large in scale just as viable (or even better), compared to developing in plain three.js or in WebGL/WebGPU (w/ fallback) engines more aimed at games currently? The last part would be great to discuss in private if possible (although threads are a thing).
Hope to hear from this growing community what you guys are up to!
Right on! Thanks for sharing. Have you published anything from the C# times?
I’m also incorporating web3, AI, and WebXR features in my projects! (However, I have not dabbled with VR for a while, still on the Rift v1 haha.)
So performance-wise you should be in a very similar ballpark to using plain three.js.
Obviously implementing your own engine is the most flexible way, but also the most expensive one: you can fit the structure to your needs, but it takes a lot of time.
It really depends on what you mean by “at scale”. Is it an open world? Is it a racing sim? Multiplayer focused?
But if the question is whether it scales at all: Needle mainly utilises Unity as its editor and mimics the majority of the workflows Unity has. This means you are able to split your assets into granular pieces, load them asynchronously, and use some of Unity’s tools, like the animation system or Shader Graph (with limitations). You can also make or add custom tools to the Unity editor.
You will have to implement or apply optimization techniques the same way as in Unity, Unreal, or Godot.
Thanks! But I knew this already; I originally meant larger scenes, my bad.
although this is new for me:
“But if the question is whether it scales at all: Needle mainly utilises Unity as its editor and mimics the majority of the workflows Unity has. This means you are able to split your assets into granular pieces, load them asynchronously, and use some of Unity’s tools, like the animation system or Shader Graph (with limitations). You can also make or add custom tools to the Unity editor.”
Cool! Yeah I’ve been picking up some things about the shaders, but still a lot to learn in that regard
Just to add about Needle Engine and the Unity editor etc.: it is also possible to use the whole pipeline with any custom engine / vanilla three setup (without using Needle Engine at runtime; just use the exporter capabilities to build glbs with data or settings in either Unity or Blender, plus compression).
At least we both know this combination you (guys) made is still in its infancy, if only judging by its growth. It’s a matter of time before this Discord is a few times the size it is now; wondering what the trigger could be!
From my perspective, a core strength of Needle Engine (a powerful, fast runtime on top of existing editors) is that managing complexity becomes much more feasible than with “traditional” webdev means, on par with or better than “traditional” Unity gamedev workflows.
Why it’s better than “traditional” game dev workflows in Unity/Unreal:
For example, things like AssetBundles/Addressables from Unity land are “added on top” of a regular game engine that usually tries to load everything up front. In contrast, in Needle Engine things are built for dynamic loading and everything can be fetched when needed with minimal (or zero, in many cases) code.
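The on-demand loading pattern described above can be sketched in a few lines. This is a generic illustration in plain TypeScript, not Needle Engine’s actual API: the `LazyAssets` helper and its loader are hypothetical names, and in a real three.js-based runtime the loader would be something like `GLTFLoader.loadAsync`.

```typescript
// Sketch of on-demand asset loading with promise caching (hypothetical
// helper, not Needle Engine's actual API).
type Loader<T> = (url: string) => Promise<T>;

class LazyAssets<T> {
  private cache = new Map<string, Promise<T>>();
  constructor(private load: Loader<T>) {}

  // Nothing is fetched until the first get(); repeated calls share one promise.
  get(url: string): Promise<T> {
    let pending = this.cache.get(url);
    if (!pending) {
      pending = this.load(url);
      this.cache.set(url, pending);
    }
    return pending;
  }
}

// Usage: the loader below is a stand-in for an actual network fetch + parse.
let fetches = 0;
const assets = new LazyAssets(async (url: string) => {
  fetches++;
  return `model data for ${url}`; // stand-in for a parsed glb
});

async function main() {
  await assets.get("castle/tower.glb");
  await assets.get("castle/tower.glb"); // cache hit, no second fetch
  console.log(fetches); // logs 1
}
main();
```

The point is the inversion: nothing is loaded up front, and the “minimal or zero code” claim maps to the engine doing this bookkeeping for you.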
Why it’s better than “traditional” webdev workflows like in R3F:
In webdev land, people consider scenes with hundreds of elements big, e.g. in React Three Fiber, whereas even projects that we consider very simple (like Castle Builder) have thousands or tens of thousands of elements. Those would be hard to manage “in text” but are super easy to manage and extend with our approaches. Projects like Songs of Cultures easily have ~3000 files, thousands of individual animations, and initial load times of under 1 second. This would be hard to manage without a proper visual UI and build tools that are made for the web.
You just answered the question I’ve had ever since I started using this but couldn’t find an answer to, wow, thank you! Coming from traditional three.js/R3F projects with React, I always had my doubts but hadn’t tested it to see how it compared.
Dynamic loading, GPU instancing, dynamic texturing, animation LOD, render surface maps, zone culling, and alpha distance fade are great to have, for example, if you decide to create a game at scale. Something like cell-based frustum culling is also amazing: you put your entities in a grid (whose cell size you can change), so frustum culling runs per cell instead of per entity. That, together with hardware-accelerated occlusion culling, is among the perks of those game engines, and since Needle is not really marketed as a game engine for the web, this kept me wondering what to expect and whether someone else had done something at a larger scale yet! I enjoy Needle so much and trust it with “small” experiences, but right now I can’t afford to choose the wrong engine/workflow!
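For anyone unfamiliar with the cell-based culling idea mentioned above, here is a toy sketch of it. This is not a Needle or three.js API; it is plain TypeScript with the frustum simplified to a 2D axis-aligned rectangle on the ground plane, and all names (`CellGrid`, `visible`) are made up for illustration.

```typescript
// Toy cell-based frustum culling: entities are bucketed into grid cells,
// and visibility is tested once per cell instead of once per entity.
type Vec2 = { x: number; z: number };
type Entity = { id: number; pos: Vec2 };
type Rect = { minX: number; maxX: number; minZ: number; maxZ: number };

class CellGrid {
  private cells = new Map<string, Entity[]>();
  constructor(public cellSize: number) {}

  add(e: Entity) {
    const cx = Math.floor(e.pos.x / this.cellSize);
    const cz = Math.floor(e.pos.z / this.cellSize);
    const k = `${cx},${cz}`;
    if (!this.cells.has(k)) this.cells.set(k, []);
    this.cells.get(k)!.push(e);
  }

  // One rect-vs-rect test per occupied cell; whole cells are skipped at once.
  visible(frustum: Rect): Entity[] {
    const out: Entity[] = [];
    for (const [k, entities] of this.cells) {
      const [cx, cz] = k.split(",").map(Number);
      const minX = cx * this.cellSize, minZ = cz * this.cellSize;
      const maxX = minX + this.cellSize, maxZ = minZ + this.cellSize;
      const overlaps =
        minX <= frustum.maxX && maxX >= frustum.minX &&
        minZ <= frustum.maxZ && maxZ >= frustum.minZ;
      if (overlaps) out.push(...entities);
    }
    return out;
  }
}

const grid = new CellGrid(10);
grid.add({ id: 1, pos: { x: 2, z: 3 } });   // lands in cell (0,0)
grid.add({ id: 2, pos: { x: 55, z: 70 } }); // lands in cell (5,7), far away
const seen = grid.visible({ minX: -5, maxX: 12, minZ: -5, maxZ: 12 });
console.log(seen.map(e => e.id)); // logs [ 1 ]
```

With thousands of entities per cell, the per-frame cost drops from one test per entity to one test per occupied cell, which is exactly why the grid size knob matters.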
Thanks again for your explanation, it definitely deserves to be told somewhere in the Needle docs! People around me still use Unity with its WebGL output, I’ve noticed (where optimizing seems like a ton of wasted time).
Given that I’m already running Chrome inside Unity for all the desktop computers, reimplementing this as Needle might actually be feasible: https://cornonthecob.games/
This is a regular Unity app though, not three.js, unfortunately (but I hope we can do something similar with WebGPU one day; the stuff in the video is all custom compute rendering too).
I came here to ask this same question. If I have a full-on VR app made in Unity that is somewhat hefty, can I recreate the same project in Needle Tools in order to have a web version of it? Are there limitations? For my project, I’d be wondering about stuff like this:
Can I have a basic VR player with a VR camera?
Can I have raycasts that come out of the VR player’s controllers and let the player interact in various ways with buttons and other unique interactables?
Right now the project uses Unity’s XR Interaction Toolkit, which does not support WebGL, so all of that would have to be replaced somehow.
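On the controller-raycast question: the kernel of that interaction pattern is just a ray cast from the controller pose against interactables. In a three.js-based runtime you would normally use `THREE.Raycaster` rather than writing this yourself; the sketch below is plain TypeScript with made-up names, only to show that the math being replaced is small.

```typescript
// Minimal ray-vs-sphere pick, the kernel behind controller raycasting.
// (Illustrative only; a three.js runtime would use THREE.Raycaster instead.)
type Vec3 = { x: number; y: number; z: number };

const sub = (a: Vec3, b: Vec3): Vec3 =>
  ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const dot = (a: Vec3, b: Vec3) => a.x * b.x + a.y * b.y + a.z * b.z;

// Returns distance t along the (unit-length) ray to the first hit, or null.
function raySphere(origin: Vec3, dir: Vec3, center: Vec3, radius: number): number | null {
  const oc = sub(origin, center);
  const b = 2 * dot(oc, dir);
  const c = dot(oc, oc) - radius * radius;
  const disc = b * b - 4 * c;          // quadratic with a = 1 for unit dir
  if (disc < 0) return null;           // ray misses the sphere
  const t = (-b - Math.sqrt(disc)) / 2;
  return t >= 0 ? t : null;            // ignore hits behind the controller
}

// A "button" 5 m in front of the controller, radius 1:
const t = raySphere(
  { x: 0, y: 0, z: 0 },   // controller position
  { x: 0, y: 0, z: -1 },  // controller forward direction
  { x: 0, y: 0, z: -5 },
  1
);
console.log(t); // logs 4
```

So while the XR Interaction Toolkit components themselves would have to be replaced, the underlying interaction model (pose, ray, hit test, callback) maps over fairly directly.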