Rabbit R1: Boxed AI Failure!
There’s no question that large language models and machine learning systems were around well before the dawn of consumer AI tools like ChatGPT and Google Gemini. But fundamentally, those tools were either built for a particular task or served as a portal for text-based answers to questions. Even with all the hype, AI tools have been nothing more than that: text tools. When will AI do things for us before we even think about them? When will it take meaningful action? This product might have the answer. But is it what it looks like, or a complete faff? Let’s find out…
The Good:
The R1 by the folks over at Rabbit is not the first ground-up AI-powered product for tech enthusiasts. Before it, the Humane AI Pin certainly left its mark on the custom AI hardware scene, but thanks to its limiting interface and its lack of integration with existing technologies, it was called one of the worst-reviewed products ever by famous tech reviewer Marques Brownlee. So what does the R1 change? The LAM. A LAM, or Large Action Model, is a new kind of AI model trained on recordings of humans interacting with software interfaces and technologies. In theory, it lets a user take action across various services with a quick voice prompt, something that was previously hard to achieve. That’s because an AI model traditionally acts on a service through an intermediary called an API (Application Programming Interface), and a service’s API rules limit how much interaction is possible. Rabbit built its custom LAM to sidestep that limit: it is meant to operate any existing tool or technology as if a human were using it. This is what caught people’s attention in the first place, and when Teenage Engineering came on board for the hardware design, enthusiasts were more interested than ever.
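To make that distinction concrete, here’s a rough sketch in Python of the two integration styles. The API route calls a documented endpoint (the Spotify endpoint shown is real, though the token handling is a placeholder), while the LAM-style route drives the web UI the way a human would, here using the Playwright browser-automation library as a stand-in. The selectors are hypothetical, and Rabbit hasn’t published how its LAM actually works, so treat this purely as an illustration of the idea.

```python
# Sketch: API-mediated action vs. human-like UI automation.
# The token and CSS selectors below are hypothetical placeholders.
import requests
from playwright.sync_api import sync_playwright


def play_via_api(track_uri: str, token: str) -> None:
    # API route: the service decides which actions it exposes.
    # Spotify's real Web API works roughly like this, but OAuth scopes
    # and rate limits bound what an agent is allowed to do.
    requests.put(
        "https://api.spotify.com/v1/me/player/play",
        json={"uris": [track_uri]},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )


def play_via_ui_automation(track_name: str) -> None:
    # LAM-style route: operate the web interface as a human would,
    # so no API permission model limits the action space.
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://open.spotify.com")
        # Hypothetical selectors standing in for the real page structure:
        page.fill("input[data-testid='search-input']", track_name)
        page.keyboard.press("Enter")
        page.click("button[aria-label='Play']")
        browser.close()
```

The trade-off is visible even in this toy version: the API path is stable but gated by the provider’s rules, while the UI path can reach anything a human can but breaks the moment the interface changes, which goes some way toward explaining the R1’s reliability problems described below.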
The Bad:
While getting it designed by one of the best product design studios in the world was a cool move, the R1 has issues that range from a steep learning curve to flawed fundamentals. The external design is great; it’s distinctive, and you’ll definitely recognise it from far away. The scroll wheel feels like a fun inclusion, but let me tell you, it’s not! The R1 has only one physical button, which handles the majority of the interaction, plus a camera used for visual search. Where it goes wrong is the user interaction. Although the device has a touchscreen, you’re forced to use the scroll wheel to navigate, and its scroll sensitivity is so low that you have to spin it several times just to reach the next UI element. Add the lack of haptic feedback and inertial scrolling, and things look even worse down the rabbit hole (pun intended). The only time you can use the touchscreen is in landscape, in terminal mode, for text input. And with weird interface elements clinging on top of each other, things don’t look good for a $200 product that’s pitched as a replacement for the modern-day smartphone. Moving on…
The Ugly:
Getting straight to the point: the worst thing about the R1 is how unfinished it is. While the company boasts about the LAM being the next big thing in the world of AI, the R1 still struggles to play the exact song the user asks for on Spotify, even though Rabbit has a direct integration with the service. On top of that, the visual search is so underbaked that Google Lens from 2017 would recognise a house plant more accurately. And the limited use of the touchscreen is a definite sore point. Many might hype the R1 as the next big thing, but let me tell you, it isn’t. Especially after Mishaal Rahman over at Android Authority ran its entire interface as an app on a phone.
“It could have been an app, but isn’t. It should have been a product, but isn’t.”