I’m trying to use a MediaPipe script to detect specific landmarks on hands. Is it possible to do that?
Yes, Saad, it’s possible to use the MediaPipe library to detect specific landmarks on hands. The Needle Engine samples include a MediapipeHands component that you can use to control a 3D scene with hand gestures; you can find a live example of this integration in the documentation. It requires the @mediapipe/tasks-vision package, pinned to a specific version in your project (see the documentation).
For detecting specific landmarks, you’d typically access the detected hand landmarks through the Mediapipe API and then apply any logic you need based on these landmarks (like tracking pointers, detecting gestures, etc.) using JavaScript or a similar language supported by your development environment.
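As a sketch of that logic, here is what accessing individual landmarks and deriving a simple gesture could look like. The landmark indices (4 = thumb tip, 8 = index fingertip) are fixed by the MediaPipe hand model; the `isPinching` helper and its threshold are hypothetical, not part of any library API:

```javascript
// Landmarks would come from @mediapipe/tasks-vision's HandLandmarker, e.g.:
//   const result = handLandmarker.detectForVideo(videoElement, performance.now());
//   const landmarks = result.landmarks[0]; // 21 points for the first detected hand
// Each landmark is { x, y, z } with x/y normalized to [0, 1].

// Landmark indices defined by the MediaPipe hand model:
const THUMB_TIP = 4;
const INDEX_TIP = 8;

// Hypothetical helper: detect a "pinch" by measuring the distance
// between the thumb tip and the index fingertip in normalized coordinates.
function isPinching(landmarks, threshold = 0.05) {
  const a = landmarks[THUMB_TIP];
  const b = landmarks[INDEX_TIP];
  const dist = Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
  return dist < threshold;
}

// Example with synthetic landmarks (only the two used indices populated):
const fake = [];
fake[THUMB_TIP] = { x: 0.5, y: 0.5, z: 0 };
fake[INDEX_TIP] = { x: 0.52, y: 0.51, z: 0 };
console.log(isPinching(fake)); // distance ≈ 0.022 < 0.05 → true
```

You could run a check like this every frame and trigger whatever scene logic you need when the gesture state changes.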
If you need more specific guidance on integrating Mediapipe with Needle Engine or have more questions, feel free to ask!
Hi, did you check out the sample here? MediaPipe Hands | Needle Engine
It’s not using landmarks on hands to display objects but could still be useful in getting started.
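If you do want to attach objects to landmarks, one common approach is to map a landmark's normalized video coordinates into a coordinate space your 3D engine understands. The conversion helper below is a hypothetical sketch (not a Needle Engine API), assuming MediaPipe's convention of x/y in [0, 1] with the origin at the top-left of the frame:

```javascript
// Hypothetical helper: convert a MediaPipe landmark (x/y normalized to
// [0, 1], origin top-left of the video frame) into NDC-style coordinates
// (-1..1, y up) that a 3D engine can unproject or position objects with.
function landmarkToNdc(landmark) {
  return {
    x: landmark.x * 2 - 1,
    y: (1 - landmark.y) * 2 - 1, // flip y: video is top-down, NDC is bottom-up
  };
}

// E.g., place an object at the index fingertip (landmark index 8):
const tip = { x: 0.75, y: 0.25, z: 0 };
const ndc = landmarkToNdc(tip);
console.log(ndc); // { x: 0.5, y: 0.5 }
```

From there you would use your engine's camera to unproject the NDC point to a world position each frame.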
Also a sidenote: the facefilter package / code is now fully open source on GitHub (needle-engine/facefilter) and can be installed from npm ("npm i @needle-tools/facefilter") with the latest Needle Engine beta version.
I saw that, but I also need a camera renderer like the facefilter has.
Can facefilter be adjusted to achieve this? Is it also made with MediaPipe?
It’s also using mediapipe, yes