Rabbit launch and AI devices

Rabbit launched a new hardware device, the r1, that gives customers an AI assistant for a one-time price of $199 (no subscription required). The keynote presentation is worth watching in its entirety, and Jesse Lyu did a good job demonstrating the product. It is much more fun than the Ai Pin from Humane, which was released last year.

The device is nice and looks fun to use. Its hardware design shares a lot of similarities with the Playdate from Panic.

I don't know if I'm too old to understand these AI devices or if they are simply niche products, but I also think this could have been an app. It's still valid for these companies to build these products and innovate, though I don't think they will change everything the way the iPhone did 17 (!) years ago. The innovation here seems to be more in the software than in the hardware.

Along with the hardware announcement, they also introduced the concept of the LAM, or Large Action Model. It is a very innovative way to use AI, and I think it's super promising for future applications. From their research page:

We have developed a system that can infer and model human actions on computer applications, perform the actions reliably and quickly, and is well-suited for deployment in various AI assistants and operating systems. Our system is called the Large Action Model (LAM). Enabled by recent advances in neuro-symbolic programming, the LAM allows for the direct modeling of the structure of various applications and user actions performed on them without a transitory representation, such as text. The LAM system achieves results competitive with state-of-the-art approaches in terms of accuracy, interpretability, and speed. Engineering the LAM architecture involves overcoming both research challenges and engineering complexities, from real-time communication to virtual network computing technologies. We hope that our efforts could help shape the next generation of natural-language-driven consumer experiences.

The model can understand basically any user interface and interact with it. Since we already have millions of apps and websites that do all kinds of things, the AI can interact with these existing interfaces without developers needing to integrate anything. I haven't tested it yet, but the concept and the possibilities sound interesting to me!

I can see how useful it would be to have an AI plan a trip on Tripsy by simply learning how the interface works and doing all the work for you. A rough sketch of the idea follows.
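To make that idea more concrete, here is a purely hypothetical sketch of how an action model might represent a sequence of UI steps for a trip-planning app. Rabbit hasn't published the actual format, so the `UIAction` type and the element identifiers below are invented for illustration.

```swift
// Hypothetical sketch only: Rabbit has not published LAM's real action format.
// `UIAction` and the element identifiers are invented for illustration.
enum UIAction {
    case tap(elementID: String)
    case type(elementID: String, text: String)
    case waitFor(elementID: String)
}

// The kind of plan an action model might produce for
// "add a trip to Lisbon next month" in a trip-planning app.
let plan: [UIAction] = [
    .tap(elementID: "new_trip_button"),
    .type(elementID: "destination_field", text: "Lisbon"),
    .waitFor(elementID: "date_picker"),
    .tap(elementID: "save_button"),
]

// A runner would execute each step against the live UI,
// checking the resulting state before moving on.
for action in plan {
    print(action)
}
```

The interesting part is that the plan targets the app's existing interface, so nothing new has to be built on the app's side.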

To wrap it up, I still have many questions about these hardware devices. Does it make sense to carry another device just for that? What about taking photos and viewing them? What about consuming video and entertainment, which is a big part of our lives right now? If these devices won't do that, couldn't the iPhone have this powerful AI assistant so that we keep carrying only one device?

Can you imagine if Siri had all these capabilities, building on the existing Shortcuts integration but doing it better? That would be the ideal, in my opinion.
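Apps can already expose their actions to Siri and Shortcuts through Apple's App Intents framework, which is exactly the kind of hook such an assistant could build on. Here is a minimal sketch of what that might look like; `PlanTripIntent` and its parameters are hypothetical and not a real Tripsy API.

```swift
import AppIntents

// Minimal sketch using Apple's App Intents framework (iOS 16+).
// `PlanTripIntent` and its parameters are hypothetical, not a real Tripsy API.
struct PlanTripIntent: AppIntent {
    static var title: LocalizedStringResource = "Plan a Trip"

    @Parameter(title: "Destination")
    var destination: String

    @Parameter(title: "Start Date")
    var startDate: Date

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real implementation would create the trip in the app's data store.
        return .result(dialog: "Started planning your trip to \(destination).")
    }
}
```

Once an app ships intents like this, Siri and Shortcuts can discover and invoke them, which is roughly what a system-level assistant would need in order to do the work for you.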