Apple announced a number of new software features on Monday at its WWDC 2023 conference for its popular devices: Mac computers, iPhone, iPad, Apple Watch, Apple TV, AirPods and the new Apple Vision Pro headset. As expected from reports and rumors leading up to the event, many of the tech giant’s new features rely on artificial intelligence (AI), or “machine learning” (ML), as Apple’s presenters were careful to call it.
In keeping with Apple’s previously announced commitments to user privacy and security, these new AI features largely avoid sending user data to the cloud, relying instead on on-device processing handled by the dedicated hardware Apple calls its “neural engine.”
Here are some of the most exciting features coming to AI-powered Apple devices.
Persona for Vision Pro
The star of Apple’s event, as has often been the case in the company’s history, was the “one more thing” revealed at the end: the Apple Vision Pro. The new augmented reality headset resembles thick ski goggles worn over the eyes, allowing the user to see digital graphics projected onto their view of the real world.
Due in early 2024 with a staggering starting price of $3,499, the new headset, which Apple calls its first “spatial computing” device, packs a long list of impressive features. These include support for many of Apple’s existing mobile apps and even the ability to pull Mac desktop interfaces into floating digital windows.
One of the key Vision Pro innovations Apple showcased, known as Persona, relies heavily on ML. The feature uses the headset’s built-in cameras to scan the user’s face and quickly create a lifelike, interactive digital doppelganger. That way, when the user puts on the device and joins a FaceTime call or other video conference, the digital twin appears in their place, rather than a face strapped into the goofy helmet, mapping their expressions and gestures in real time.
Apple said the Persona is a “digital representation” of the wearer, “created using Apple’s most advanced ML techniques.”
Better “duck” autocorrection
As iPhone users well know, Apple’s current built-in autocorrect for texting and typing can be inaccurate and unhelpful, suggesting words that are nowhere near what the user intended (“duck” instead of, well, another word that rhymes with it but starts with “f”). However, that all changes with iOS 17, at least according to Apple.
The company’s latest annual major update to the iPhone operating system includes a new autocorrect powered by a “transformer model,” the same category of AI software behind GPT-4 and Claude, specifically to improve autocorrect’s word prediction. The model runs on the device, preserving the user’s privacy as they type.
Autocorrect now also suggests entire sentences, presenting them inline as the user types, similar to the Smart Compose feature in Google’s Gmail.
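Apple hasn’t said how, or whether, third-party apps tap the new model beyond the system keyboard, but iOS 17 does let developers opt their own text fields into the new inline predictions via the UITextInputTraits protocol. A minimal sketch in UIKit; the view controller scaffolding is illustrative:

```swift
import UIKit

// A minimal sketch: opting a UIKit text field into iOS 17's inline
// predictions. Everything around the trait itself is placeholder UI.
final class ComposeViewController: UIViewController {
    private let textField = UITextField()

    override func viewDidLoad() {
        super.viewDidLoad()
        textField.borderStyle = .roundedRect
        textField.autocorrectionType = .yes   // classic word-level autocorrect

        if #available(iOS 17.0, *) {
            // Show grayed-out sentence completions inline as the user
            // types; tapping the space bar accepts them.
            textField.inlinePredictionType = .yes
        }

        textField.frame = CGRect(x: 20, y: 120, width: view.bounds.width - 40, height: 44)
        view.addSubview(textField)
    }
}
```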
Live Voicemail
One of the most useful new features Apple showed off is the new Live Voicemail for the iPhone’s default phone app. This feature comes into play when someone calls an iPhone user who can’t pick up, and the caller starts leaving a voicemail. The phone app then displays a text transcription of the voicemail in progress on the recipient’s screen, word for word, as the caller speaks. Basically, it converts audio to text, live and instantly. Apple said the feature is powered by its neural engine and “occurs entirely on the device … this information is not shared with Apple.”
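Apple hasn’t published Live Voicemail’s internals, but the same on-device, privacy-preserving transcription pattern has been available to developers for years through the Speech framework. A minimal sketch, assuming a recorded audio file on disk; authorization prompts and error handling are trimmed:

```swift
import Speech

// A minimal sketch of on-device speech-to-text with Apple's Speech
// framework: audio in, transcript out, nothing sent to a server.
// Assumes SFSpeechRecognizer.requestAuthorization has already succeeded.
func transcribeOnDevice(fileURL: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.isAvailable else {
        print("Speech recognition unavailable")
        return
    }

    let request = SFSpeechURLRecognitionRequest(url: fileURL)
    // Key line: force recognition to run entirely on the device,
    // so the audio never leaves the phone.
    if recognizer.supportsOnDeviceRecognition {
        request.requiresOnDeviceRecognition = true
    }
    request.shouldReportPartialResults = true   // stream partial text as it is recognized

    // Hold on to the returned task if you need to cancel it early.
    _ = recognizer.recognitionTask(with: request) { result, error in
        if let result {
            print(result.bestTranscription.formattedString)
        } else if let error {
            print("Recognition failed: \(error)")
        }
    }
}
```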
Improved dictation
Apple’s existing dictation feature lets users tap a small microphone icon on the iPhone’s default keyboard and speak to have their words transcribed into written text, or at least attempt to. While the feature has had a mixed success rate, Apple says iOS 17 includes a “new speech recognition model” that uses on-device ML to make dictation more accurate.
FaceTime on Apple TV and presentation mode
Apple didn’t announce a new physical Apple TV box, but it did introduce a major new feature: FaceTime for Apple TV, which uses a nearby iPhone or iPad (assuming the user has one) as its camera while showing the other participants in the FaceTime call on the user’s TV.
Another new aspect of the FaceTime experience is presentation mode. This lets users share an app or their screen during a FaceTime call while staying visible on camera. One view shrinks the presenter’s face into a small frame they can reposition around the shared material, while the other places the presenter’s head and shoulders in front of the content, allowing them to gesture as if they were a TV weatherman pointing at a digital weather map.
Apple says the new presentation mode is powered by its neural engine.
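Apple didn’t say how the effect is assembled, but the nearest public building block is the Vision framework’s person segmentation, an on-device request that produces a matte separating a person from the background, the kind of primitive such an overlay could be composited from. A minimal sketch, with the input image as a placeholder:

```swift
import Vision
import CoreVideo

// A minimal sketch: generating a person-segmentation matte with Vision.
// This is not Apple's presentation-mode pipeline, just the closest
// public analogue for separating a presenter from their background.
func personMatte(for cgImage: CGImage) throws -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced                      // .fast / .balanced / .accurate
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    // The matte: bright where the person is, dark elsewhere.
    return request.results?.first?.pixelBuffer
}
```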
Journal for iPhone
Do you keep a diary? If not, or even if you already do, Apple thinks it’s found a better way to help you “reflect and practice gratitude” powered by “on-device ML.” The new Apple Journal app in iOS 17 automatically pulls recent photos, workouts, and other activities from a user’s phone and presents them as an unfinished digital journal entry, allowing users to edit content and add text and new media as they see fit.
Importantly for app developers, Apple is also releasing a new Journaling Suggestions API that lets them surface their apps’ content as potential journal material for users. This could be especially valuable for fitness, travel, and dining apps, but it remains to be seen which companies will implement it and how gracefully they’ll do so.
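On the consuming side, a journaling app can present the system’s suggestions picker from SwiftUI. A minimal sketch, assuming the JournalingSuggestions framework as it shipped after the keynote; the framework requires a special entitlement, and the exact surface should be checked against Apple’s current docs:

```swift
import SwiftUI
import JournalingSuggestions   // requires the Journaling Suggestions entitlement

// A minimal sketch: surfacing the system journaling-suggestions picker.
// The picker UI runs out of process, so the app only ever sees the one
// suggestion the user explicitly chooses.
struct MomentPickerView: View {
    var body: some View {
        JournalingSuggestionsPicker {
            Text("Add a recent moment")
        } onCompletion: { suggestion in
            // `suggestion` bundles content (photos, workouts, etc.) for
            // one recent moment; hand it to your journaling model here.
            print("User picked a suggestion:", suggestion)
        }
    }
}
```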
Personalized volume
Apple also touted Personalized Volume, an AirPods feature that “uses ML to understand environmental conditions and listening preferences over time” and automatically adjusts media volume to match what it predicts the user wants.
Photos can now recognize your cats and dogs
Apple’s previous ML systems for the iPhone and iPad already let its default Photos app identify different people by their appearance. Say you want to see photos of yourself, your child or your spouse: pull up the iPhone Photos app, navigate to the People & Places section, and you’ll find mini-albums for each of them.
As useful and enjoyable as this feature was, it clearly left someone out: our furry companions. Well, no more. At WWDC 2023, Apple announced that, thanks to improved ML software, the same recognition now works on cats and dogs.
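The Photos app’s own pet pipeline isn’t exposed to developers, but Vision has shipped an on-device request that recognizes cats and dogs since iOS 13, which gives a feel for the primitive involved. A minimal sketch:

```swift
import Vision

// A minimal sketch: detecting cats and dogs in an image with Vision's
// on-device animal recognizer (not the Photos app's own pipeline, but
// the closest public analogue).
func findPets(in cgImage: CGImage) throws {
    let request = VNRecognizeAnimalsRequest()

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    for observation in request.results ?? [] {
        // Each observation carries a bounding box plus labels such as
        // "Cat" or "Dog" with confidence scores.
        for label in observation.labels {
            print("\(label.identifier) (\(label.confidence)) at \(observation.boundingBox)")
        }
    }
}
```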