Breaking Down visionOS
Building Blocks of visionOS and spatial computing
Creating a visionOS application
Creating volumes and 3D objects with Reality Composer Pro
Adding Videos and Audio with RealityKit
Analyzing your surroundings with ARKit
Tips for designing visionOS UI
Tips for designing with vision and motion
Tips for designing for SharePlay
Creating and manipulating Space
Launching UIKit and SwiftUI apps in visionOS
Enhancing iPhone/iPad apps for the Shared Space
Testing RealityKit with RealityKit Trace
Using ARKit to track and perceive the visual world
How to use Reality Composer Pro
Utilizing Safari Development Features
Using Quick Look in Spatial Computing
Building Blocks of visionOS
Apple describes the following core components as the building blocks of spatial computing.
Principles of Spatial Design
Below are some rules and guidelines to keep in mind when developing spatial applications.
How to create a visionOS application?
Once the necessary Xcode version is installed, you can create a visionOS app by selecting the visionOS tab when creating a new project. After selecting visionOS, you can designate the initial scene to be either a window or a volume. In addition, you can designate the space in which that scene will appear.
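Below is a minimal sketch of what such an app entry point might look like; the scene id, sizes, and placeholder content are illustrative rather than taken from a specific template.

```swift
import SwiftUI

@main
struct MyVisionApp: App {
    var body: some Scene {
        // The initial scene can be a regular window...
        WindowGroup {
            Text("Hello, visionOS")
        }

        // ...or a volume, which gives the content 3D bounds in the Shared Space.
        WindowGroup(id: "globe-volume") {
            Text("Volume content goes here")
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)
    }
}
```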
How to create and modify 3D objects in visionOS?
Another way to display 3D content in volumes is with Model3D. If you have a USD or Reality file saved in your project, or a URL to one of those files, you can render it by passing the file's name to Model3D.
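As a rough illustration, a Model3D view can load a bundled asset by name; the "Satellite" asset name here is hypothetical.

```swift
import SwiftUI
import RealityKit

struct SatelliteView: View {
    var body: some View {
        // Loads the bundled USDZ/Reality file named "Satellite" (hypothetical asset).
        Model3D(named: "Satellite") { model in
            model
                .resizable()
                .scaledToFit()
        } placeholder: {
            ProgressView() // Shown while the asset loads.
        }
    }
}
```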
Adding windows and UI elements to complement your 3D objects using attachments
Attachments are new properties that can be added onto SwiftUI views to display additional UI elements alongside your 3D content.
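A minimal sketch of pairing an attachment with RealityKit content in a RealityView; the entity, attachment id, and label text are illustrative.

```swift
import SwiftUI
import RealityKit

struct LabeledSphereView: View {
    var body: some View {
        RealityView { content, attachments in
            // A simple sphere entity to label.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
            content.add(sphere)

            // Look up the SwiftUI attachment by its id and place it above the sphere.
            if let label = attachments.entity(for: "label") {
                label.position = [0, 0.15, 0]
                sphere.addChild(label)
            }
        } attachments: {
            Attachment(id: "label") {
                Text("A sphere")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}
```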
Features available in RealityKit
Adding Videos using RealityKit
RealityKit can be used to display video playback in your app. Using the AVKit framework, an AVPlayer, when passed the URL of an asset, can be converted into an entity. This entity can be added to a RealityView to display video.
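A rough sketch of this flow using RealityKit's VideoPlayerComponent; the "intro.mp4" asset name is hypothetical.

```swift
import SwiftUI
import RealityKit
import AVFoundation

struct VideoView: View {
    var body: some View {
        RealityView { content in
            // The bundled "intro.mp4" asset is a placeholder name.
            guard let url = Bundle.main.url(forResource: "intro", withExtension: "mp4") else { return }

            // Wrap the asset in an AVPlayer, then attach it to an entity as a VideoPlayerComponent.
            let player = AVPlayer(url: url)
            let videoEntity = Entity()
            videoEntity.components.set(VideoPlayerComponent(avPlayer: player))

            content.add(videoEntity)
            player.play()
        }
    }
}
```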
Creating Worlds and Portals using RealityKit
In RealityKit, worlds are container entities whose encapsulated child entities are only accessible in that world. In the example below, a world entity is created that contains three 3D entities: a moon, an earth, and a sky. To see these entities you need to transition into the world by way of a portal. A portal acts as a transition entity that segues users from one world into another. This ability to separate entities into different worlds showcases how large a canvas visionOS gives developers.
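A minimal sketch of a world seen through a portal; the world here is simplified to a single moon sphere rather than the full moon/earth/sky example.

```swift
import SwiftUI
import RealityKit

struct PortalView: View {
    var body: some View {
        RealityView { content in
            // A world entity whose children are only visible through a portal.
            let world = Entity()
            world.components.set(WorldComponent())

            let moon = ModelEntity(mesh: .generateSphere(radius: 0.1))
            moon.position = [0, 0, -0.5]
            world.addChild(moon)

            // The portal entity that lets the user look into the world.
            let portal = Entity()
            portal.components.set(ModelComponent(
                mesh: .generatePlane(width: 0.6, height: 0.6, cornerRadius: 0.3),
                materials: [PortalMaterial()]
            ))
            portal.components.set(PortalComponent(target: world))

            content.add(world)
            content.add(portal)
        }
    }
}
```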
Adding Audio using RealityKit
RealityKit also gives developers the ability to assign audio to components. In RealityKit, sound can be designed in one of three modes: spatial, ambient, and channel. Ambient audio emits sound from multiple sources surrounding the user, which is good for reproducing atmospheric sounds such as background music. Channel audio is good for focusing sound from a specific direction, and spatial audio works well when assigning sound to a moving object. Sound can be classified and designated on an entity, and the sound-emitting parent entities can inherit these audio sources. In the example code, a spatial audio entity is created and added to a moving 3D satellite entity.
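A minimal sketch of assigning spatial audio to a satellite entity; the "SatelliteBeep.wav" asset name is hypothetical.

```swift
import RealityKit

func addSpatialAudio(to satellite: Entity) async {
    // Mark the entity as a spatial audio source so sound is emitted from its position.
    satellite.components.set(SpatialAudioComponent())

    // Load an audio resource (placeholder file name) and play it from the satellite entity.
    if let beep = try? await AudioFileResource(named: "SatelliteBeep.wav") {
        satellite.playAudio(beep)
    }
}
```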
Tips for designing in visionOS
App icons in visionOS are 3D. When designing icons, it is recommended to use three layers.
Keep windows transparent to give users context of the outside world
Keep text white and bold for clear communication
Tips for designing with vision and motion
Provide depth cues.
Depth cues are visual cues humans perceive to get an idea of how close or far away something is. It is important to provide depth cues so your users have a realistic experience. Depth cues can be created using relative size, color, shadow, and blur.
Any content that requires reading should be kept at about arm's length. Also keep text content centered, and make it less wide if possible.
Tips for dealing with objects in motion
When objects are in motion, make them slightly transparent to communicate to the user that the objects are moving and not the user.
Avoid head-locked content. Head-locked content occurs when an object follows the user's line of sight, which can make the experience feel constrained. Make content world locked, meaning objects have static positions, or implement a lazy follow, where objects stay in the viewer's line of sight but react slowly when the viewer turns their head.
Avoid oscillating objects, as they can make a user dizzy.
Avoid having large objects repeatedly pass by the user, as this creates an uncomfortable experience.
Designing for SharePlay
Even though the app appears to be presented to all of the users at once, in the code each user is actually experiencing an individual version of the app on their own device. This is an example of shared context. Your app should include personalization configurations where users can adjust personal preferences, such as volume, without affecting the volume of the other users.
How to create and modify space in visionOS?
Launching SwiftUI and UIKit apps in visionOS
SwiftUI views and UIKit view controllers automatically appear as windows when launched in visionOS. Those windows can be scaled by dragging the corner of the window.
Enhancing iPhone/iPad Apps for the Shared Space
When launching an iPad/iOS app in the Shared Space, the features unique to xrOS must be taken into consideration to understand which new and old features will be compatible.
Hover effects can be added to views, given custom styling, disabled, or configured to encompass a specific area.
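A minimal sketch of adding a hover effect to a SwiftUI button on visionOS.

```swift
import SwiftUI

struct HoverButton: View {
    var body: some View {
        Button("Select") { }
            .padding()
            // Highlights the button when the user looks at it.
            .hoverEffect(.highlight)
    }
}
```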
Analyzing RealityKit Objects with RealityKit Trace
RealityKit Trace gives details on the frames produced by RealityKit content. A visual breakdown of the number of frames, the cost of each frame, CPU/GPU usage, and the status of frames is useful for troubleshooting slow-rendering objects. Frames are classified as early, just in time, or late based on rendering speed. Slow-rendering frames appear in red. Use the chart to evaluate the frames-per-second rate. Apple advises a frame rate of 90 frames per second.
Optimization Strategies in visionOS
Below is a list of strategies for optimizing visionOS development.
When positioning your windows, avoid overlapping translucent windows. This prevents a scenario called overdraw, which is difficult for the system to render.
Avoid unnecessary view updates and redraws. Views created with SwiftUI observe published properties to trigger an update. Write your code so that published properties only update views when necessary.
Reduce offscreen render passes by using shadows, rounded rectangles, and visual effects with consideration for their expense on the system.
Use physically based materials to optimize for lighting.
When creating RealityKit objects, use simple objects.
When displaying entities, create them in advance and hide/show them instead of constantly recreating them in the view (see the sketch after this list).
When making network calls, use async loading APIs.
When optimizing input performance, use static colliders and keep updates below 8 ms.
Optimize ARKit usage by using tracking modes to reduce anchor cost and by minimizing persistent anchors.
Optimize video display by using 24-30 Hz videos, and avoid concurrent video playback.
Optimize SharePlay by turning off any features that aren't necessary during that experience.
Optimize and test your app for adverse temperature conditions using thermal induction.
Optimize for memory by reducing UI memory allocations and reducing texture and geometry memory.
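As referenced above, here is a minimal sketch of creating an entity once and toggling its visibility instead of rebuilding it on every update; the view and entity names are illustrative.

```swift
import SwiftUI
import RealityKit

struct ToggleableSphereView: View {
    @State private var showSphere = true
    // Created once and kept in view state so it is not rebuilt on every update.
    @State private var sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))

    var body: some View {
        VStack {
            RealityView { content in
                content.add(sphere)
            } update: { _ in
                // Show or hide the existing entity instead of recreating it.
                sphere.isEnabled = showSphere
            }
            Toggle("Show sphere", isOn: $showSphere)
                .padding()
        }
    }
}
```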
Using Core Location in spatial computing
Location services must be approved by the user; the approval prompt is triggered by adding the privacy key shown below to the app's plist.
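For example (a minimal sketch; the usage-description text is a placeholder), the standard when-in-use key and authorization request look like this:

```swift
// Info.plist entry:
//   NSLocationWhenInUseUsageDescription = "This app uses your location to ..."

import CoreLocation

final class LocationAuthorizer {
    private let manager = CLLocationManager()

    func requestAuthorization() {
        // Triggers the user approval prompt backed by the plist key above.
        manager.requestWhenInUseAuthorization()
    }
}
```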
In the Shared Space, an app that sends location updates must be looked at by the user before it will emit location updates.
Using ARKit to track and perceive the visual world
ARKit has been redesigned for spatial computing to make its features easy to access. ARKit in spatial computing can be broken down into three components (a minimal usage sketch follows the list below).
Data Providers - Individual, à la carte AR API services
Anchors - Used to designate positions in the real world, for data providers and visual objects.
ARKit Session - Manages a defined group of data providers
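A minimal sketch of these pieces working together, using a world-tracking data provider:

```swift
import ARKit

func startWorldTracking() async {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    do {
        // The session manages the group of data providers passed to run.
        try await session.run([worldTracking])
    } catch {
        print("Failed to start ARKit session: \(error)")
    }
}
```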
World Tracking adds world anchors for virtual content placement
This is important for keeping objects fixed to the real world
This is important for persistently keeping an object in real space. For example, if I had a 3D virtual book and placed it on my desk, when I use visionOS the next day the virtual book would still be on my desk in real space.
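A rough sketch of pinning content like that book with a world anchor, assuming a running WorldTrackingProvider; the function and parameter names are illustrative.

```swift
import ARKit
import simd

func anchorBook(at transform: simd_float4x4, with worldTracking: WorldTrackingProvider) async {
    // A world anchor pins a position in real space and persists across sessions,
    // so the virtual book can reappear on the desk the next day.
    let anchor = WorldAnchor(originFromAnchorTransform: transform)
    do {
        try await worldTracking.addAnchor(anchor)
    } catch {
        print("Failed to add world anchor: \(error)")
    }
}
```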
Scene understanding
Scene understanding is important for integrating content with the surrounding environment.
This can be broken down into three categories
Plane Detection - allows you to detect walls, tables, floors, and other planar surfaces (see the sketch after this list)
Scene Geometry - determines physical dimensions and volumes of content
Image Tracking - detects predefined images
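A minimal sketch of consuming plane-detection updates; the printed fields are just for illustration.

```swift
import ARKit

func observePlanes() async {
    let session = ARKitSession()
    let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

    do {
        try await session.run([planeDetection])
        // Anchor updates arrive as an async sequence of added/updated/removed planes.
        for await update in planeDetection.anchorUpdates {
            print("Plane \(update.event): \(update.anchor.classification)")
        }
    } catch {
        print("Failed to run plane detection: \(error)")
    }
}
```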
Hand Tracking is used to detect hands
Useful for content placement and detecting custom gestures
Below is an example app using ARKit hand tracking.
This Sample project allows you to create 3D cubes by snapping your fingers
2nd Slide: The View. The view adds any objects generated in the RealityView to the visible environment. It also adds a tap gesture recognizer to any 3D objects, which triggers the creation of a cube when tapped. Lastly, the view kicks off the initialization of the ARKit services for scene recognition and hand tracking.
3rd Slide: The View Model
The view model contains properties representing the ARKit session and the providers for hand tracking and scene understanding. It also contains functions called from hand-tracking/scene-tracking updates and for 3D cube creation.
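A simplified sketch of what such a view model might look like; this is not the sample's actual code, and the snap-detection logic is omitted.

```swift
import ARKit
import RealityKit

@MainActor
final class HandTrackingViewModel: ObservableObject {
    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()
    private let sceneReconstruction = SceneReconstructionProvider()

    // Root entity that the view adds to its RealityView content.
    let rootEntity = Entity()

    func start() async {
        do {
            try await session.run([handTracking, sceneReconstruction])
        } catch {
            print("Failed to start ARKit session: \(error)")
        }
    }

    func processHandUpdates() async {
        for await update in handTracking.anchorUpdates {
            // Inspect the hand anchor here; the real sample detects a finger snap
            // and then calls placeCube(at:) with the hand's position.
            _ = update.anchor
        }
    }

    func placeCube(at position: SIMD3<Float>) {
        let cube = ModelEntity(mesh: .generateBox(size: 0.1))
        cube.position = position
        rootEntity.addChild(cube)
    }
}
```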
How To Use Reality Composer Pro
Reality Composer Pro can be set up in two ways.
1. To open Reality Composer Pro as a standalone app, use the Xcode menu bar: choose Open Developer Tool, then Reality Composer Pro.
2. To have a project integrated with Reality Composer Pro, create an Xcode project using a visionOS template. This creates a default Reality Composer Pro package; click on the package file and click the Open in Reality Composer Pro button.
Reality Composer Pro’s UI has 5 main components
Left sidebar for navigation, called the hierarchy panel
Right bar, used to edit properties of 3D objects
Main view, called the viewport, used to view 3D objects
Add Component section, used to add RealityKit components
Bottom panel, called the editor panel, used as a project browser
Add assets to projects in 3 different ways
1. Drag and Drop content from project browser
2. Add content from content library
3. Use Object Capture to upload images, and Reality Composer Pro will generate a 3D model
How to incorporate audio via Reality Composer Pro
Add particle emitters via RealityKit
Particle emitters can be used to create 3D objects that have visual dynamic properties.
Example, a flickering flame or a hovering cloud
Create particle emitters by clicking the (+) button in the Add Component section.
Use the Inspector panel to edit the properties of a particle emitter to your liking.
Using statistics to optimize Reality Composer Pro scenes
Statistics inside Reality Composer Pro offer metrics for various categories of a Reality Composer Pro scene.
Use these statistics to observe output values that reveal areas of a scene that are a bottleneck for resource consumption.
Utilizing Safari Development Features
Use Web Inspector to inspect and edit HTML elements created by your application
Open Web Inspector by right-clicking and choosing Inspect Element
Use Responsive Design Mode to preview a web page in different layouts. If Xcode is installed locally, previews on different iOS devices can be viewed as well
Using Quick Look For Spatial Computing
How to create 3D models for Quick Look
3D models can be created by third-party digital content creation studios, by scanning 3D objects with RealityKit, or, when capturing a room, with the RoomPlan API
Understanding Reality Composer Pro
Reality Composer Pro can be launched directly through developer tools or through a code link in Xcode. When launching via the code link, the RealityKit content will appear as a Swift package.
Tour of the Reality Composer Pro Software
Creating a scene in Reality Composer Pro