Tag Archives: UIImage

How to extract a UIImage from a video in iOS 9

Here are two ways to extract the first frame of a video and turn it into a UIImage. Both methods come courtesy of the generous Stack Overflow community.

Both methods make use of the AVFoundation framework. There's no need to link it manually – just import it, and the compiler takes care of the rest for us.

Video URLs can be obtained in many ways, one of which is from an image picker. You can grab the URL in its imagePickerController:didFinishPickingMediaWithInfo: delegate method:

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    
    // dismiss the picker
    [self dismissViewControllerAnimated:YES completion:nil];

    // grab the video
    NSURL *videoURL = [info objectForKey:UIImagePickerControllerMediaURL];
}

Now that we have a video URL, let’s see how we can extract a UIImage from it.
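Here's a minimal sketch of one popular approach, using AVFoundation's AVAssetImageGenerator – note that the method name firstFrameFromVideoAtURL: is mine, not from the original answers:

#import <AVFoundation/AVFoundation.h>

- (UIImage *)firstFrameFromVideoAtURL:(NSURL *)videoURL {

    // load the video as an asset
    AVAsset *asset = [AVAsset assetWithURL:videoURL];

    // create a generator and respect the video's orientation
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    generator.appliesPreferredTrackTransform = YES;

    // grab the frame at time zero
    NSError *error = nil;
    CGImageRef cgImage = [generator copyCGImageAtTime:CMTimeMake(0, 1) actualTime:NULL error:&error];
    if (!cgImage) {
        NSLog(@"Could not extract frame: %@", error.localizedDescription);
        return nil;
    }

    // wrap the CGImage in a UIImage and release the CGImage
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}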






How to apply Blur Effects to images and views in iOS 8


In iOS 8 there’s a new class called UIVisualEffect with which you can apply blur effects to entire views. Reading the Class Reference, however, you’d never figure out how.

In a nutshell, we need to create a UIVisualEffect (either a UIBlurEffect or a UIVibrancyEffect) which is applied to a UIVisualEffectView. The latter needs the same size as the view we want to blur and is then added as a subview to the view we’d like to apply an effect to.

This sounds way more complicated than it really is. Imagine this: you have a UIView, any view will do. To it you add a subview (the UIVisualEffectView). The effect view is created with an effect.

So as soon as you apply the effect view, all other existing views below it are blurred. Anything above the effect view is not blurred. Remove the effect view and the blur goes away again.

Here’s an example:

// show image
self.imageView.image = [UIImage imageNamed:@"boat6"];

// create effect
UIBlurEffect *blur = [UIBlurEffect effectWithStyle:UIBlurEffectStyleLight];

// add effect to an effect view
UIVisualEffectView *effectView = [[UIVisualEffectView alloc] initWithEffect:blur];
effectView.frame = self.imageView.bounds;

// add the effect view to the image view
[self.imageView addSubview:effectView];

In this example we have a UIImageView which is referenced in the storyboard. To it we apply the desired effect.

The UIBlurEffect can be created with three options and no other parameters:

  • UIBlurEffectStyleLight
  • UIBlurEffectStyleExtraLight
  • UIBlurEffectStyleDark


UIVibrancyEffect

There’s an additional (disappointing) UIVibrancyEffect which cannot be used on its own, only in conjunction with a blur effect. To use the vibrancy effect, we first create a blur, then a vibrancy effect from that blur. Each effect needs its own effect view – and the vibrancy view has to be added to the blur view’s contentView, with anything we’d like rendered vibrantly going inside the vibrancy view’s own contentView.

Here’s an example:

// show image
self.imageView.image = [UIImage imageNamed:@"boat6"];

// create blur effect
UIBlurEffect *blur = [UIBlurEffect effectWithStyle:UIBlurEffectStyleLight];

// create vibrancy effect
UIVibrancyEffect *vibrancy = [UIVibrancyEffect effectForBlurEffect:blur];

// add blur to an effect view
UIVisualEffectView *effectView = [[UIVisualEffectView alloc] initWithEffect:blur];
effectView.frame = self.imageView.bounds;

// add vibrancy to a second effect view
UIVisualEffectView *vibrantView = [[UIVisualEffectView alloc] initWithEffect:vibrancy];
vibrantView.frame = effectView.bounds;

// the vibrancy view must live inside the blur view's contentView
[effectView.contentView addSubview:vibrantView];

// add the blur view (with the vibrancy view nested inside) to the image view
[self.imageView addSubview:effectView];

I know… this seems like an awful lot of trouble for an effect which isn’t even all that good. But then, “new things” aren’t always “improvements” these days (the 2014 Mac Mini springs to mind).


How to do this in iOS 7

If you’re as disappointed by the results as I am, take a look at a UIImage Category Apple demonstrated at WWDC 2013. Look for a sample project called UIImageEffects.

The category is free to use and redistribute and allows for greater control over such things as colour tint and blur radius:
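Here’s a quick sketch of how the category might be used – the applyBlurWithRadius:tintColor:saturationDeltaFactor:maskImage: method name is taken from the WWDC sample, so double-check it against your copy of the project:

#import "UIImage+ImageEffects.h"

// blur an image with full control over radius, tint and saturation
UIImage *source = [UIImage imageNamed:@"boat6"];
UIImage *blurred = [source applyBlurWithRadius:20.0
                                     tintColor:[UIColor colorWithWhite:1.0 alpha:0.3]
                         saturationDeltaFactor:1.8
                                     maskImage:nil];
self.imageView.image = blurred;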







How to take a screenshot in iOS programmatically

In iOS we can press the HOME and POWER buttons together at any time, and a screenshot of what’s currently being displayed is saved to the Camera Roll as if by magic. Doing the same thing programmatically isn’t that easy: there appears to be no iOS system method that does it for us.

We have several options; let me describe two of them. I’ll add others if I come across more convenient methods.

OPTION 1: using UIWindow

To make sure we screengrab everything on the display, we can render the contents of our topmost UIWindow into a graphics context the size of the screen. This will work no matter which class you use it in:

// create a graphics context the size of the screen
// (scale 0.0 means the device's native scale, so retina grabs stay sharp)
CGRect screenRect = [[UIScreen mainScreen] bounds];
UIGraphicsBeginImageContextWithOptions(screenRect.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[[UIColor blackColor] set];
CGContextFillRect(ctx, screenRect);

// grab reference to our window
UIWindow *window = [UIApplication sharedApplication].keyWindow;

// transfer content into our context
[window.layer renderInContext:ctx];
UIImage *screengrab = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

The advantage with this method is that even popovers and alert views are added to the resulting UIImage.

OPTION 2: using UIView

If we only need to turn a UIView into a UIImage then the next method will work fine. The disadvantage is that certain types of views (such as OpenGL and popovers) will not be captured. In addition, you need a reference to the UIView you’d like to capture. In this example I’m using my Master/Detail App’s split view controller:

// grab reference to the view you'd like to capture
UIView *wholeScreen = self.splitViewController.view;

// define the size and grab a UIImage from it
UIGraphicsBeginImageContextWithOptions(wholeScreen.bounds.size, wholeScreen.opaque, 0.0);
[wholeScreen.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screengrab = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Saving a UIImage to Camera Roll

The result of both methods above is a UIImage you can do with as you please. Let’s save it to the Camera Roll for now:

// save screengrab to Camera Roll
UIImageWriteToSavedPhotosAlbum(screengrab, nil, nil, nil);

One thing of note:
If you call either method above via a standard button, the screenshot is taken during the button’s highlight animation (the button will appear half faded, for example).
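One workaround (my own suggestion, not from the original post) is to defer the capture until the highlight animation has finished:

// wait a moment so the button can finish its highlight animation
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.3 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    [self takeScreenshot]; // hypothetical method wrapping either option above
});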






How to save a UIImage to the Camera Roll in iOS

UIKit provides a function that can save an image straight to the Camera Roll. In this example, yourImage is a UIImage:

// super simple saving
UIImageWriteToSavedPhotosAlbum(yourImage, nil, nil, nil);

That’s short and sweet – but there’s no feedback by default, no callback that tells you how the operation went.


A much better option is to make use of the selector this function can call upon success or failure, like so:

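The completion selector must have the exact signature UIKit expects – image:didFinishSavingWithError:contextInfo:. Here’s a minimal sketch of the pattern (the NSLog messages are just placeholders):

// pass a target and selector to be notified when saving finishes
UIImageWriteToSavedPhotosAlbum(yourImage, self,
    @selector(image:didFinishSavingWithError:contextInfo:), nil);

// UIKit calls this method once the save has completed
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo {

    if (error) {
        NSLog(@"Could not save image: %@", error.localizedDescription);
    } else {
        NSLog(@"Image saved to the Camera Roll");
    }
}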



How to save a UIImage in Core Data and retrieve it

Way simpler than I had thought: Core Data can save binary NSData objects – all we have to do is declare the attribute as “Binary Data” in our model.


Optionally you can choose to “Allow External Storage” for the attribute, which means the data is not stored in the database itself – Core Data keeps it in separate files next to the store and manages everything for us. Not a good choice if you plan to export the store file, or propagate database changes to other stores via iCloud.


To save an image, we’ll turn it into data and add it to our managed object (Event in my case):

self.myEvent.picture = UIImageJPEGRepresentation(chosenImage, 1);
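One thing to remember (assuming self.myEvent lives in a managed object context called self.managedObjectContext): the change isn’t persisted until the context is saved.

// persist the change
NSError *error = nil;
if (![self.managedObjectContext save:&error]) {
    NSLog(@"Could not save image to Core Data: %@", error.localizedDescription);
}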

To retrieve it from Core Data, we’ll do the opposite:

self.imageView.image = [UIImage imageWithData:self.myEvent.picture];

I have tried this successfully in iOS – it works like a charm.

In Cocoa, however, I believe an NSImage needs an NSValueTransformer for this operation.





How to test the size of a UIImage (in bytes)

We can use NSData’s length method for this. Imagine your UIImage is yourImage:

NSData *imageData = UIImageJPEGRepresentation(yourImage, 1);
NSLog(@"Size of your image is %lu bytes", (unsigned long)[imageData length]);

[imageData length] returns an NSUInteger, which will be the size of your image data in bytes.

This is useful if you’d like to save something and you’re limited in size, such as iCloud key-value storage, where a single data object may be no larger than 1 MB.
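For example, here’s a quick size check before writing to iCloud’s key-value store – NSUbiquitousKeyValueStore and its setData:forKey: method are standard, while the @"savedImage" key is just a placeholder:

// only store the image data if it fits under the 1 MB per-value limit
NSData *imageData = UIImageJPEGRepresentation(yourImage, 0.8);
if ([imageData length] < 1024 * 1024) {
    [[NSUbiquitousKeyValueStore defaultStore] setData:imageData forKey:@"savedImage"];
    [[NSUbiquitousKeyValueStore defaultStore] synchronize];
} else {
    NSLog(@"Image too large for key-value storage: %lu bytes", (unsigned long)[imageData length]);
}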