Voice-controlled action launcher

A different way of launching actions while immersed. You can control home automation such as your Philips Hue lighting. Role - Experimental Developer

© Image by Mika

As you can see, the device is inspired by the Rabbit.tech r1.
I made several versions using the Meta XR SDKs, including the Interaction SDK, Depth API, and Voice SDK.

The basic version uses Conduit actions triggered by voice commands. Here you can see me controlling my lighting.
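
As a rough illustration, a Conduit callback for this might look like the sketch below. This is a minimal sketch, assuming the Voice SDK's [MatchIntent] attribute routes resolved intents to annotated methods (with "Use Conduit" enabled on the Wit configuration). The intent names, bridge IP, Hue API key, and light ID are placeholders, not values from the project; the light itself is switched through the Hue bridge's standard local REST API.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;
using Meta.WitAi; // Voice SDK namespace providing the MatchIntent attribute

public class LightVoiceActions : MonoBehaviour
{
    // Hypothetical values -- replace with your bridge address and Hue API key.
    [SerializeField] private string bridgeIp = "192.168.1.2";
    [SerializeField] private string hueUser = "YOUR_HUE_API_KEY";
    [SerializeField] private int lightId = 1;

    // Conduit dispatches the resolved "turn_on_lights" intent to this method.
    [MatchIntent("turn_on_lights")]
    public void TurnOnLights() => StartCoroutine(SetLightState(true));

    [MatchIntent("turn_off_lights")]
    public void TurnOffLights() => StartCoroutine(SetLightState(false));

    // The Hue bridge exposes a local REST API:
    // PUT http://<bridge-ip>/api/<key>/lights/<id>/state with a body like {"on": true}.
    private IEnumerator SetLightState(bool on)
    {
        string url = $"http://{bridgeIp}/api/{hueUser}/lights/{lightId}/state";
        byte[] body = System.Text.Encoding.UTF8.GetBytes(
            $"{{\"on\": {(on ? "true" : "false")}}}");

        using (var req = new UnityWebRequest(url, "PUT"))
        {
            req.uploadHandler = new UploadHandlerRaw(body);
            req.downloadHandler = new DownloadHandlerBuffer();
            req.SetRequestHeader("Content-Type", "application/json");
            yield return req.SendWebRequest();

            if (req.result != UnityWebRequest.Result.Success)
                Debug.LogWarning($"Hue request failed: {req.error}");
        }
    }
}
```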

I used scene understanding and scene anchors to get the locations of my lights. This unlocks the ability to trigger lighting based on your in-game / spatial movements.
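
A minimal sketch of that proximity idea follows, assuming the lamp transforms have already been collected from scene anchors (for example, OVRSceneAnchor objects whose semantic classification carries the "LAMP" label). ToggleLight() stands in for the Hue call shown earlier, and the 1.5 m radius is an arbitrary choice, not the project's actual tuning.

```csharp
using System.Collections.Generic;
using UnityEngine;

public class ProximityLightTrigger : MonoBehaviour
{
    [SerializeField] private Transform head;            // e.g. the OVRCameraRig centre-eye anchor
    [SerializeField] private float triggerRadius = 1.5f;
    [SerializeField] private List<Transform> lampAnchors = new List<Transform>();

    private readonly HashSet<Transform> litLamps = new HashSet<Transform>();

    private void Update()
    {
        foreach (var lamp in lampAnchors)
        {
            bool inRange = Vector3.Distance(head.position, lamp.position) < triggerRadius;

            if (inRange && litLamps.Add(lamp))
                ToggleLight(lamp, true);    // entered the radius: switch on
            else if (!inRange && litLamps.Remove(lamp))
                ToggleLight(lamp, false);   // left the radius: switch off
        }
    }

    private void ToggleLight(Transform lamp, bool on)
    {
        // Placeholder: map this anchor to a Hue light ID and send the REST call.
        Debug.Log($"{lamp.name} -> {(on ? "on" : "off")}");
    }
}
```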


DESK CLEANER

Using the Scene Manager, I spawned a grid of sprites on all desk surfaces. A hand collider cleans the sprites away. This can be seen here:
https://x.com/augmentedcamel/status/1754192592961466441?s=20
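
A minimal sketch of the desk-cleaner idea, assuming a desk plane's transform and half-extents are supplied (the Scene Manager side of finding TABLE/DESK surfaces is omitted). The dirt prefab is a sprite with a trigger collider and a "DeskDirt" tag; both names are placeholders for illustration, and in a real project the two components would live in separate files.

```csharp
using UnityEngine;

// Tiles a desk surface with sprite "dirt" quads on a regular grid.
public class DeskDirtSpawner : MonoBehaviour
{
    [SerializeField] private GameObject dirtPrefab;   // sprite with a trigger collider, tagged "DeskDirt"
    [SerializeField] private float cellSize = 0.05f;  // 5 cm grid cells

    // Call with the desk surface's transform and its half-extents in metres.
    public void SpawnGrid(Transform surface, Vector2 halfExtents)
    {
        for (float x = -halfExtents.x; x <= halfExtents.x; x += cellSize)
        {
            for (float y = -halfExtents.y; y <= halfExtents.y; y += cellSize)
            {
                // Place each sprite in the surface's local plane, just above it.
                Vector3 local = new Vector3(x, y, -0.001f);
                Instantiate(dirtPrefab, surface.TransformPoint(local),
                            surface.rotation, surface);
            }
        }
    }
}

// Attached to a collider following the hand; wiping over a sprite removes it.
public class HandCleaner : MonoBehaviour
{
    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("DeskDirt"))
            Destroy(other.gameObject);
    }
}
```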

Later I tweaked the application to be wearable, like a watch.

This version can be seen on YouTube here.
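
A minimal sketch of that watch-style attachment, assuming a tracked hand anchor (e.g. the OVRCameraRig's left hand anchor) is assigned; the offset and rotation values are arbitrary starting points, not the ones used in the project.

```csharp
using UnityEngine;

public class WristPanel : MonoBehaviour
{
    [SerializeField] private Transform handAnchor;   // tracked wrist/hand anchor
    [SerializeField] private Vector3 localOffset = new Vector3(0f, 0.03f, -0.05f);
    [SerializeField] private Vector3 localEuler = new Vector3(90f, 0f, 0f);

    private void LateUpdate()
    {
        // Follow the hand each frame so the panel sits on the wrist like a watch face.
        transform.position = handAnchor.TransformPoint(localOffset);
        transform.rotation = handAnchor.rotation * Quaternion.Euler(localEuler);
    }
}
```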

LAM CAPABILITIES
I will restart this project when ChatGPT launches its LAM (large action model).
Another requirement is multi-app support inside Quest (perhaps Quest 4?).


Get in touch

Whether you have a question, a project idea, or just want to say hello, I'd love to hear from you. Reach out and let's start a conversation.

mikameel@outlook.com
