Alfa Romeo F1 demo from WWDC Apple Vision Pro presentation

Introducing the Apple Vision Pro, unveiled at WWDC 2023: a groundbreaking device that opens up new possibilities in virtual reality innovation. This device brings spatial computing to life, offering a fully immersive 3D interface controlled seamlessly by eyes, hands, and voice. With a hefty price tag of $3,499, it is designed with developers and early adopters in mind. Its computing power and dual micro-OLED displays, packing an astonishing 23 million pixels, create a stunning visual experience. With spatial computing at its core, the Apple Vision Pro takes a different approach from devices like the Meta Quest. What new innovations will the Vision Pro unlock?

Revolutionising user interaction

Apple aims to revolutionise user interaction with technology through the Apple Vision Pro’s spatial computing capabilities. Instead of lengthy VR sessions, the device is geared towards short VR trips, communication, content viewing, and gaming. The Vision Pro lays the foundation for Apple’s future AR hardware and software offerings, underlining its commitment to the metaverse and spatial computing.

The groundbreaking headset builds on Apple’s existing ecosystem of hardware, software, and services. It differs from competitors like the Meta Quest Pro by targeting a broader audience and integrating more tightly with users’ existing devices and data. The Vision Pro’s sleek, slim, controller-free design relies on cameras and sensors to track finger and hand movements, whereas the Meta Quest Pro has a larger, boxier appearance and comes with motion controllers.

Powerful performance and display

When it comes to performance, the Apple Vision Pro utilises a dual-chip design, featuring the Apple M2 chip and a new R1 chip for sensor data processing. In comparison, the Meta Quest Pro employs Qualcomm’s Snapdragon XR2+ chip, which is built for headsets but based on smartphone technology. The Vision Pro also boasts better-than-4K resolution for each eye, plus an external display showing the wearer’s eyes, while the Quest Pro’s resolution is 1,920 x 1,800 per eye, with about 7 million total pixels and no external display.

Battery life for the two devices is comparable, with the Vision Pro lasting for 2 hours and the Quest Pro between 1 and 2 hours. The Vision Pro connects to a battery pack via a cord, while the Quest Pro has a built-in battery. The Vision Pro is also compatible with Apple Arcade games and supports productivity, movie watching, video calls, and 3D photo and video capture. The Meta Quest Pro also has a library of VR apps and games, but these are all custom-developed for the platform.

Shaping industries with spatial computing

The introduction of spatial computing in the Apple Vision Pro has the potential to impact various industries, including gaming, manufacturing, field service, medical, and industrial simulations. Its compatibility with existing iPhone and iPad games and apps, as well as the new Vision Pro App Store, allows users to enjoy a wide array of content. The device also features eye, voice, and hand operation, replacing the need for controllers. This should make it possible for remote teams to work together on a 3D model; in the WWDC presentation, Apple showed an Alfa Romeo F1 car as an example.
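To give a rough sense of what building such an experience looks like, here is a minimal sketch of a visionOS app that displays a 3D model in a volumetric window using SwiftUI and RealityKit. The asset name "F1Car" is a hypothetical placeholder, and this is not how Apple's own demo was built.

import SwiftUI
import RealityKit

// Minimal sketch: present a bundled 3D asset ("F1Car" is a hypothetical
// placeholder name) in a volumetric window on visionOS.
@main
struct ModelViewerApp: App {
    var body: some Scene {
        WindowGroup {
            Model3D(named: "F1Car") { model in
                model
                    .resizable()
                    .aspectRatio(contentMode: .fit)
            } placeholder: {
                ProgressView() // shown while the asset loads
            }
        }
        .windowStyle(.volumetric) // a bounded 3D volume in the shared space
    }
}

Actual collaborative review of such a model by a remote team would also require SharePlay or a custom networking layer, which is beyond the scope of this sketch.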

Apple’s collaboration with Zeiss for magnetically attached lenses, a separate battery for a lighter headset, and the development of visionOS as the software core further demonstrate the company’s dedication to innovation. The Vision Pro’s customisable ergonomic design, breathable Head Band, Fit Dial adjustment, Light Seal to prevent light leakage, and Digital Crown for opening Home View and controlling the level of immersion in Environments make it an appealing device for users.

Transforming communication and connection

Apple CEO Tim Cook believes that the Apple Vision Pro is “enhancing communication and connection”. The device’s Spatial Audio system, Apple Silicon two-chip design, and 2-hour external battery life (all-day use when plugged into a power source) make it a versatile and powerful tool for users. The ability to scan your face and create a digital “Persona” that mimics facial and hand movements with machine learning offers a more realistic and immersive experience for FaceTime and Zoom calls. However, the future of AR should hold much more than enhanced Zoom calls.