Tag Archives: AVFoundation

How to use a specific voice for text-to-speech in iOS

There are two ways of creating voices with which we can make iOS talk: creating a voice using a locale (classic), or creating a voice using a specific voice identifier (futuristic).

Let me show you both options here.

Classic and Easy

In this snippet we’re creating a voice with a certain dialect, British English in this case:

NSString *phrase = @"I'm listening.";

// create a synthesizer and an utterance for our phrase
AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:phrase];

// request a British English voice and assign it to the utterance
AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-GB"];
utterance.voice = voice;

[synthesizer speakUtterance:utterance];

While this is very straightforward, we don’t know if the voice is going to be male or female. All we can specify is the language and dialect.

Futuristic and Specific

The above works fine and probably was enough when the speech synthesiser framework was introduced in iOS 7, but since then a myriad of other voices have become available for our applications. To specify one of them, we need a voice identifier.

Here’s how to do it:

NSString *phrase = @"I'm listening.";

// create a synthesizer and an utterance for our phrase
AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:phrase];

// request a voice by its unique identifier (Karen, in this case)
AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithIdentifier:@"com.apple.ttsbundle.Karen-compact"];
utterance.voice = voice;

[synthesizer speakUtterance:utterance];

The setup is almost the same, but instead of the voiceWithLanguage method, we’re using the voiceWithIdentifier method here.
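
Note that voiceWithIdentifier returns nil if a voice with that identifier isn’t installed on the device, so it’s worth adding a fallback before assigning the voice to the utterance. A quick sketch:

// fall back to a language-based voice if the identifier is unknown
if (!voice) {
    voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-GB"];
}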

Finding Voices

To see a list of all available voices on the device, we can call the speechVoices class method on AVSpeechSynthesisVoice. This returns an array of AVSpeechSynthesisVoice objects, each of which has a name, quality and identifier property. The latter is what we’re looking for so we can create a specific voice.

Here’s a method that lists all available voices on the current device:

- (void)listVoices {

    NSArray *allVoices = [AVSpeechSynthesisVoice speechVoices];

    for (AVSpeechSynthesisVoice *voice in allVoices) {
        NSLog(@"Voice Name: %@, Identifier: %@, Quality: %ld", voice.name, voice.identifier, (long)voice.quality);
    }
}

Not all voices are installed on all devices. For example, Alex is an optional high-quality voice that the user needs to download first before it shows up in this array.

The quality property returns either 1 (AVSpeechSynthesisVoiceQualityDefault) for standard voices, or 2 (AVSpeechSynthesisVoiceQualityEnhanced) for enhanced hi-res voices. Again it is up to the user to enable the enhanced quality of a voice under Settings.
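
Putting both together, here’s a sketch that picks an enhanced British English voice if one is installed, and falls back to the standard en-GB voice otherwise:

// pick the first enhanced en-GB voice, or fall back to the default
AVSpeechSynthesisVoice *chosenVoice = nil;
for (AVSpeechSynthesisVoice *voice in [AVSpeechSynthesisVoice speechVoices]) {
    if (voice.quality == AVSpeechSynthesisVoiceQualityEnhanced && [voice.language isEqualToString:@"en-GB"]) {
        chosenVoice = voice;
        break;
    }
}
if (!chosenVoice) {
    chosenVoice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-GB"];
}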





How to play videos in iOS 9

Until iOS 8 we could use the trusty old MPMoviePlayerViewController class to play videos on our devices, but that’s been deprecated in iOS 9. From now on, Apple recommend we use the AVPlayerViewController class instead. It has many advantages and even supports picture-in-picture out of the box.

Although AVPlayerViewController is a subclass of UIViewController, instances of it cannot be presented using presentViewController – but Apple make sure not to mention this little tidbit. It’s much more “fun” to figure this out on our own.

Here’s how we can use it with a local video from the main bundle:

@import AVFoundation;
@import AVKit;

// ...

// grab a local URL to our video
NSURL *videoURL = [[NSBundle mainBundle] URLForResource:@"video" withExtension:@"mp4"];

// create an AVPlayer
AVPlayer *player = [AVPlayer playerWithURL:videoURL];

// create a player view controller
AVPlayerViewController *controller = [[AVPlayerViewController alloc] init];
controller.player = player;
[player play];

// show the view controller via view controller containment
[self addChildViewController:controller];
controller.view.frame = self.view.bounds;
[self.view addSubview:controller.view];
[controller didMoveToParentViewController:self];

First we grab a URL to either a local or remote video. Next we create an AVPlayer with this URL, and add said player to the newly created AVPlayerViewController instance. You can auto-play a video by using the player’s play method, or remove it and leave it up to the user to start the video.

Next we present the controller through view controller containment: we add it as a child view controller and add its view as a subview, making sure the view fills our own view’s bounds.
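
Since we added the controller ourselves rather than presenting it modally, removing it again is also our job. A sketch of the teardown:

// remove the player view controller when we're done with it
[controller willMoveToParentViewController:nil];
[controller.view removeFromSuperview];
[controller removeFromParentViewController];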

The class works equally well with local and remote URLs. To play a remote asset, construct the URL like this:

NSURL *videoURL = [NSURL URLWithString:@"https://github.com/versluis/Movie-Player/blob/master/Movie%20Player/video.mov?raw=true"];




How to extract a UIImage from a video in iOS 9

Here are two ways to extract the first frame of a video and turn it into a UIImage. Both methods are from the generous Stack Overflow community.

Both methods make use of the AV Foundation framework; depending on your project setup you may need to import it explicitly.

Video URLs can be obtained in many ways, for example from an image picker. You can grab the URL in its didFinishPickingMediaWithInfo delegate method:

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    
    // dismiss the picker
    [self dismissViewControllerAnimated:YES completion:nil];

    // grab the video
    NSURL *videoURL = [info objectForKey:UIImagePickerControllerMediaURL];
}
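
Note that the delegate method above only receives a video URL if the picker was configured to show movies in the first place. Here’s a minimal sketch, assuming self conforms to the picker’s delegate protocols (kUTTypeMovie comes from the MobileCoreServices framework):

@import MobileCoreServices; // for kUTTypeMovie

// ...

UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
picker.mediaTypes = @[(NSString *)kUTTypeMovie]; // show videos only
picker.delegate = self;
[self presentViewController:picker animated:YES completion:nil];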

Now that we have a video URL, let’s see how we can extract a UIImage from it.
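
One common approach uses the AVAssetImageGenerator class. Here’s a sketch of the idea (the explicit @import is there in case your project doesn’t pull in AV Foundation already):

@import AVFoundation;

// ...

// create an asset and an image generator for it
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoURL options:nil];
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.appliesPreferredTrackTransform = YES; // respect the video's orientation

// copy the frame at the very beginning of the video
NSError *error = nil;
CMTime actualTime;
CGImageRef cgImage = [generator copyCGImageAtTime:CMTimeMake(0, 600) actualTime:&actualTime error:&error];

// wrap it in a UIImage and release the Core Graphics image
UIImage *firstFrame = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);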






How to play audio in iOS

We can play back an audio file we’ve previously recorded or provided in the main bundle, again thanks to the AV Foundation Framework. Much like recording, it’s not as straightforward as “hitting play” somewhere. Here are the steps involved:

  • create an AVAudioSession
  • create an AVAudioPlayer
  • prepare for and start playing

For the method below to work we need to import and link the AVFoundation Framework to our project. We also need a property that holds the player object.
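
In a class extension that could look like this (ViewController stands in for your own class; the delegate conformance is for the callbacks below):

@import AVFoundation;

@interface ViewController () <AVAudioPlayerDelegate>

@property (strong, nonatomic) AVAudioPlayer *player;

@end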

- (IBAction)playbackButtonPressed:(id)sender {
    
    // grab a URL to an audio asset
    NSURL *documentsURL = [[[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask] lastObject];
    documentsURL = [documentsURL URLByAppendingPathComponent:@"audiofile.wav"];
    
    // create a session
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setCategory:AVAudioSessionCategoryPlayback error:nil];
    
    // create player
    self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:documentsURL error:nil];
    self.player.delegate = self;
    [self.player prepareToPlay];
    [self.player play];
}

Stop playback at any time by calling

[self.player stop];

Two delegate methods can inform us of the progress here:

- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag {
    
    // done playing
}

- (void)audioPlayerDecodeErrorDidOccur:(AVAudioPlayer *)player error:(NSError *)error {
    
    // something didn't play so well
}

There are two other delegate methods that can inform us of an interruption to the playback, such as a phone call. Here’s a sketch of those, resuming via the AVAudioSessionInterruptionOptionShouldResume flag:
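
- (void)audioPlayerBeginInterruption:(AVAudioPlayer *)player {
    
    // playback was interrupted, by a phone call for example
}

- (void)audioPlayerEndInterruption:(AVAudioPlayer *)player withOptions:(NSUInteger)flags {
    
    // the interruption is over; resume if iOS suggests it
    if (flags == AVAudioSessionInterruptionOptionShouldResume) {
        [player play];
    }
}

If you’re interested in playing an MP3 file from the music library, check out the MPMediaPickerController.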





How to record audio from the microphone in iOS

Recording audio is a complex process – for any computer. iOS makes it simple-ish with several classes courtesy of the AV Foundation Framework.

In this example we’ll record audio from the device microphone. This happens in four steps:

  • create an AVAudioSession
  • set up a dictionary with audio recorder settings
  • create an AVAudioRecorder
  • prepare for and start recording

For the method below to work you need to import and link the AVFoundation Framework to your project. We also need a property to hold our AVAudioRecorder object, otherwise it will no longer exist by the time the method reaches the end and nothing will work. Again.

Here’s a method that is called by a “Record Button”:

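The following sketch runs through the four steps above; it assumes a self.recorder property of type AVAudioRecorder and records linear PCM into audiofile.wav in the Documents directory, matching the playback example above (names and settings are placeholders you’d adapt):

- (IBAction)recordButtonPressed:(id)sender {
    
    // create a session that allows recording
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
    
    // settings dictionary: linear PCM, 44.1 kHz, mono
    NSDictionary *settings = @{AVFormatIDKey: @(kAudioFormatLinearPCM),
                               AVSampleRateKey: @44100.0,
                               AVNumberOfChannelsKey: @1};
    
    // record into audiofile.wav in the Documents directory
    NSURL *documentsURL = [[[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask] lastObject];
    documentsURL = [documentsURL URLByAppendingPathComponent:@"audiofile.wav"];
    
    // create the recorder (held in a property so it outlives this method)
    self.recorder = [[AVAudioRecorder alloc] initWithURL:documentsURL settings:settings error:nil];
    self.recorder.delegate = self;
    
    // prepare for and start recording
    [self.recorder prepareToRecord];
    [self.recorder record];
}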

Additional delegate methods are available to check for a “higher power” interruption, such as a phone call.
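
Recording is stopped again by calling [self.recorder stop], after which the delegate hears about the finished file. A sketch of that callback:

- (void)audioRecorderDidFinishRecording:(AVAudioRecorder *)recorder successfully:(BOOL)flag {
    
    // the recording was stopped and the file is complete
}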





How to use the speech synthesiser in iOS 7

New to iOS 7’s AVFoundation Framework is the speech synthesiser. It’s what Siri is using when he/she speaks to you. We can use this as well in our apps by passing a string into an AVSpeechUtterance object, which in turn we pass into an AVSpeechSynthesizer instance.

Here’s how:

- (IBAction)sayThis:(id)sender {
    
    AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
    AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:self.textField.text];
    AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-GB"];
    
    // all these values are optional
    utterance.rate = 0.2f;
    utterance.pitchMultiplier = 0.8f;
    utterance.voice = voice;
    
    [synthesizer speakUtterance:utterance];
}

To tweak the voice of the synthesiser you can specify the language (defaults to the user locale if not specified), as well as pitchMultiplier and rate values. It’s quite fun to play around with!
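
Instead of magic numbers, the framework also provides speech rate constants; a quick sketch:

// rate goes from AVSpeechUtteranceMinimumSpeechRate to AVSpeechUtteranceMaximumSpeechRate
utterance.rate = AVSpeechUtteranceDefaultSpeechRate * 0.5f; // a bit slower than the default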