【Manuka】 Manuka Face Tracking Module
- Digital: 1,200 JPY

A VRCFT (5.0) face tracking module for Manuka (v1.02) built with Modular Avatar. It works with a variety of face-tracking-capable VR headsets through the VRCFaceTracking (VRCFT) software, and also comes with a lively "Live2D"-style blink animation, expression-activated ear and tail movements, Modular Avatar support, and a few expression activators~!

(NOTE) Different headsets support different features. Only tested on Quest Pro. Message me on Booth or Twitter for support or issues.

((Manuka not included (^. ̫ .^) *:・゚☆ ))

Please buy the avatar here first: jingo1016.booth.pm/items/5058077
Prerequisites
- Manuka (v1.02): https://jingo1016.booth.pm/items/5058077
- Modular Avatar: https://modular-avatar.nadena.dev/
- A VR headset/device with face tracking support
- VRCFT software: https://github.com/benaclejames/VRCFaceTracking/releases
How to
1. Make sure the original FBX files are untouched and in their original location.
2. Click the "naeruru" tab at the top of Unity and choose "Face Tracking Patcher".
3. Choose Manuka from the dropdown. The FBX field should prepopulate if the original FBX is in place.
4. Choose "Patch FBX" (see the sketch below these steps for what this step does conceptually).
5. If the patch succeeds, you will see "patch completed successfully."
6. Use the provided **Modular Avatar** prefabs in Assets/naeruru/Manuka/prefabs. You can also drag and drop the "FaceTracking" prefab onto any existing avatar!
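The credits list hdiff patcher (HDiffPatch), so "Patch FBX" most likely applies a binary diff to your untouched original FBX and writes out a patched copy. The sketch below is only an illustration of that idea using HDiffPatch's hpatchz command-line tool; it is not the module's actual patcher code, and every file path and patch file name in it is hypothetical.

```python
# Illustration only: applying a binary patch with HDiffPatch's hpatchz CLI.
# The real "Patch FBX" button runs inside the Unity editor; the paths and
# the .hdiff file name below are hypothetical.
import subprocess
from pathlib import Path

original_fbx = Path("Assets/Manuka/Manuka.fbx")            # untouched original FBX (hypothetical path)
patch_file   = Path("Assets/naeruru/Manuka/manuka.hdiff")  # diff shipped with the module (hypothetical)
patched_fbx  = Path("Assets/naeruru/Manuka/Manuka_FT.fbx") # patched output (hypothetical)

# hpatchz usage: hpatchz <oldFile> <diffFile> <outNewFile>
result = subprocess.run(
    ["hpatchz", str(original_fbx), str(patch_file), str(patched_fbx)],
    capture_output=True,
    text=True,
)

if result.returncode == 0:
    print("patch completed successfully")   # mirrors the message shown in Unity
else:
    print("patch failed:", result.stderr)
```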
What's Included
- Full face tracking setup that works with VRChat and includes all shape keys for ARKit integration
- "Pururun" eyes (a "Live2D"-style blink animation)
- Expression-activated ear and tail movements
- Extra expressions toggle (adds tears to lip suck, sad eyebrows on frown, straightened eyebrows on squint, ...)
- Modular Avatar support
- Gesture toggle
- Ear movement toggle
- Eyelid toggle
- Mouth tracking toggle

Tracked VRChat Expressions:
- EyeLeftX/Y
- EyeRightX/Y
- EyeLidRight/Left
- EyeSquintRight/Left
- BrowExpressionRight/Left
- BrowPinchRight/Left
- MouthUpperUp
- MouthLowererDown
- MouthClosed
- MouthTightener
- MouthStretch
- MouthRaiser
- LipPucker
- LipFunnel
- LipSuckUpper/Lower
- SmileFrown
- JawOpen
- JawX
- MouthX
- NoseSneerLeft/Right
- CheekPuff
- TongueOut
- TongueX/Y
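If you want to sanity-check that one of these parameters actually drives the avatar (for example, before putting the headset on), you can send values to VRChat over OSC. The sketch below is a minimal test script assuming the python-osc package and VRChat's default OSC input port 9000; the parameter address is an assumption based on the list above, so check the avatar's expression parameters for the exact name your setup uses (VRCFT output templates may add a prefix such as "v2/").

```python
# Minimal sketch: manually sweep a face-tracking float parameter over OSC to
# check that the avatar responds in VRChat. Assumes `pip install python-osc`
# and that VRChat is listening on its default OSC input port (9000).
# The parameter address below is an assumption; verify the real name in the
# avatar's expression parameters before relying on it.
import math
import time

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # VRChat's default OSC input

# Sweep JawOpen between 0 and 1 for a few seconds so the jaw visibly moves.
start = time.time()
while time.time() - start < 5.0:
    value = (math.sin((time.time() - start) * 2.0) + 1.0) / 2.0
    client.send_message("/avatar/parameters/JawOpen", float(value))
    time.sleep(0.05)

client.send_message("/avatar/parameters/JawOpen", 0.0)  # reset when done
```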
Updates
※ 2/19/2025: v1.00:
- First release!

※ 7/6/2025: v1.01:
- Fix issue with new versions of Modular Avatar not properly turning off gestures during face tracking

※ 7/9/2025: v1.02:
- Fix issue where eyes would jitter for a split second when doing hand gestures

※ 7/28/2025: v1.03:
- Fix issue where, in some cases, visemes would not work as well as they should
Credits
- Model: https://ponderogen.booth.pm/items/6106863
- VRCFT: https://github.com/benaclejames/VRCFaceTracking
- OSCmooth: https://github.com/regzo2/OSCmooth
- hdiff patcher: https://github.com/sisong/HDiffPatch
- Modular Avatar: https://modular-avatar.nadena.dev/
- Song in video: https://dova-s.jp/bgm/play2558.html
FAQ
- If I don't reply on Twitter, I probably didn't see it. Try again on Booth!