Apple says your iPhone will soon be able to speak in your voice with 15 minutes of training

Apple has previewed a suite of new features designed for cognitive, vision, hearing, and mobility accessibility.

One of the new features is Personal Voice, which lets users create a synthesized voice that sounds like them for talking with friends and family members.

Users can create a Personal Voice by reading a set of text prompts aloud, recording a total of 15 minutes of audio on an iPhone or iPad.

The feature integrates with Live Speech, allowing users to type what they want to say and have their Personal Voice read it aloud to whomever they want to communicate with.

Personal Voice is designed for people who may lose their ability to speak. The new accessibility features will be available in iOS 17 and macOS 14.

The feature uses on-device machine learning to synthesize speech in the user's own voice and to identify who is speaking.

Users will also be able to pause GIFs in Safari and Messages, customize the rate at which Siri speaks to them, and use Voice Control for phonetic suggestions when editing text.

All of these features build upon Apple's existing accessibility features for the Mac and iPhone, which include Live Captions, the VoiceOver screen reader, Door Detection, and more.

Accessibility is an important focus for Apple, and the company says it is committed to making its products usable by everyone.