When Steve Jobs introduced the iPad in 2010, he made a case for why it should exist. He spoke about how there are certain things a modern smartphone (aka the iPhone) does well and certain tasks a Macintosh does exceptionally. He then argued that if a third category of device were to exist, it would have to be better than both a laptop and a smartphone at seven key tasks. The iPad came into being, and to this day it still excels at those seven key tasks (and arguably even more today).
The VisionPro is being touted as a spatial computing device. At a glance it seems to be iPadOS forked and reworked for an AR/VR environment. It makes for a great demo, and the technology behind it is without a doubt groundbreaking. The Meta Quest relies on handheld controllers to navigate the UI, whereas the VisionPro has advanced sensors that let a user interact with the UI using their eyes, fingers, and voice through Siri (whose performance is still questionable even after a decade).
However, the biggest question is: what's the use case for the VisionPro? Where does it fit into a user's workflow? Is it aimed at being a Macintosh replacement? The iPad achieved that for some users, while others still gravitate towards a Mac for the majority of their computing needs. Will the VisionPro have similar success as a supplemental device? The hardware is definitely impressive, but if it follows in the footsteps of iPadOS, the VisionPro's success might be hampered the same way the iPad's growth has been stunted.
I’m a huge iPad fan (and user). I’m writing this on an iPad Pro with a Magic Keyboard. The iPad has become my primary computing device for 90% of my tasks. The iPad has the potential to do more, but as the years go on, it feels as if iPadOS is being held back just enough so that it doesn’t overtake macOS. If the VisionPro is to succeed (and usher in a future of spatial computing), it needs to be free to overtake and potentially replace traditional computing paradigms that have relied on physical input devices and monitors.