Tips for Using UIImage Objective-C Class

By Andrey Chevozerov on April 3, 2013

UIImage is one of the most commonly used classes in iOS development. It is generally used for drawing icons, backgrounds, and the like. However, not all iOS developers realize how powerful this simple class can become with the right approach. Today, I'm going to suggest several unusual ways of using UIImage: filling an area with a texture, creating a stretchable UI element, and changing and masking an image.

Creating tiled backgrounds

Let's begin with tiled backgrounds. You may thoroughly study the reference documentation for UIImage and UIImageView and still not find a way to fill an area with a tiled background. The answer might seem strange: we need UIColor, specifically its colorWithPatternImage: method. The example below shows how to use it to fill the current view's background with a texture:

UIImage *pattern = [UIImage imageNamed:@"TextureImage"];
self.view.backgroundColor = [UIColor colorWithPatternImage:pattern];

Source image:

[Image: the texture tile]

Fill result:

[Image: the view background filled with the tiled texture]
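
colorWithPatternImage: covers the common case of a view background. If you ever need to tile inside your own drawRect: instead, UIImage can do that directly. Here's a minimal sketch, assuming a custom UIView subclass and the same "TextureImage" resource:

- (void)drawRect:(CGRect)rect
{
    UIImage *pattern = [UIImage imageNamed:@"TextureImage"];
    // Tiles the image over the whole view, starting from the origin of the rect
    [pattern drawAsPatternInRect:self.bounds];
}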

Creating stretchable images

Another useful UIImage feature is stretchable images. Suppose we'd like to create several buttons of the same style but different sizes, or a badge like the one iOS shows over app icons. We don't have to create a separate image for each size: a single image can stretch itself to fit any required size. To make an image stretchable, use the following code:

UIImage *buttonBackground = [UIImage imageNamed:@"Button"];
buttonBackground = [buttonBackground stretchableImageWithLeftCapWidth:15 topCapHeight:0];
[self.logoutButton setBackgroundImage:buttonBackground forState:UIControlStateNormal];

Button image in the project resources:

[Image: the button background resource]

Result:

[Image: the stretched button in the UI]

This way of drawing images is widely used in all kinds of projects. It allows reusing the same resources, which in turn results in a smaller application.

Certain views like UIImageView expose image stretching options right in Interface Builder (look under Stretching in the View section of the Attributes inspector). These options don't affect some UI elements, e.g. buttons.
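
A side note: starting with iOS 5, UIKit offers resizableImageWithCapInsets: as the modern replacement for stretchableImageWithLeftCapWidth:topCapHeight:, which has since been deprecated. The same button example would look like this with the newer API:

UIImage *buttonBackground = [UIImage imageNamed:@"Button"];
// Insets mark the fixed edges; the middle region is tiled to fill
// (iOS 6 adds resizableImageWithCapInsets:resizingMode: to stretch instead)
buttonBackground = [buttonBackground resizableImageWithCapInsets:UIEdgeInsetsMake(0.0, 15.0, 0.0, 15.0)];
[self.logoutButton setBackgroundImage:buttonBackground forState:UIControlStateNormal];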

Changing and masking images

Time to move on to something more practical. The next example shows how to change and mask images. Let's take a user avatar and display it in a circle with a border, given that the original image is square and has no border.

We'll use three images: the avatar, the border, and the mask:

[Image: the square user avatar]

[Images: the circular border and the circular mask]

Our avatar is substantially larger than the border image, so one of the two has to be resized. Since we'll need several image-processing helpers, it's a good idea to declare a category that extends UIImage, e.g. Operations.
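
For reference, here's what a minimal header for such a category might look like (a sketch; the file name is my suggestion, and the method list simply mirrors the helpers we'll build over the course of this article):

// UIImage+Operations.h
#import <UIKit/UIKit.h>

@interface UIImage (Operations)

- (UIImage *)imageScaledToSize:(CGSize)size;
- (UIImage *)imageWithMask:(UIImage *)maskImage;
- (UIImage *)imageByAddingImage:(UIImage *)image;
- (BOOL)hasAlpha;
- (UIImage *)imageWithAlpha;

@end

Add the following scaling method to the category's implementation: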

- (UIImage *)imageScaledToSize:(CGSize)size
{
    // UIGraphicsBeginImageContextWithOptions appeared in iOS 4; the NULL check
    // keeps the code working on older systems where the weakly linked symbol is unavailable
    if (UIGraphicsBeginImageContextWithOptions != NULL) {
        // A scale of 0.0 means "use the device's screen scale",
        // so the result stays sharp on Retina displays
        UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    } else {
        UIGraphicsBeginImageContext(size);
    }

    [self drawInRect:CGRectMake(0.0, 0.0, size.width, size.height)];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return result;
}

With this method in place, we can use the following code:

UIImage *avatarImage = [UIImage imageNamed:@"UserAvatar.jpg"];
self.avatarImageView.image = [avatarImage imageScaledToSize:self.avatarImageView.frame.size];

The scaled image will look like this:

[Image: the scaled avatar]

The content mode of avatarImageView is set to Center, so the image view itself doesn't scale the image. The image is now properly resized, and it's time to cut it into a circle and draw the border.
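
By the way, if you prefer configuring the content mode in code rather than in Interface Builder, it's a single line:

self.avatarImageView.contentMode = UIViewContentModeCenter;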

In reality, we're not going to cut anything away; instead, we'll apply a mask to our image. A mask is a special black-and-white image whose shape and size match those of the original image. Please note that black-and-white is a must for a mask (the Grayscale mode in Photoshop), otherwise the trick fails. Black areas of the mask represent opaque portions of the original image, white areas represent completely transparent pixels, and shades of gray fall in between.

The other requirement for a mask image is 100% opacity, i.e. the alpha channel should be disabled; otherwise, the mask can't be applied. The safest option is to save the image in a format without alpha-channel support, such as JPEG.
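
If you want to guard against an unsuitable mask slipping in at runtime, a quick sanity check could look like this (my own sketch, not part of the original recipe; it verifies the grayscale and no-alpha requirements described above):

// Returns YES if the image satisfies the mask requirements:
// a grayscale color space and no alpha channel
- (BOOL)isUsableAsMask
{
    CGImageRef imageRef = self.CGImage;
    CGColorSpaceModel model = CGColorSpaceGetModel(CGImageGetColorSpace(imageRef));
    CGImageAlphaInfo alpha = CGImageGetAlphaInfo(imageRef);
    return (model == kCGColorSpaceModelMonochrome && alpha == kCGImageAlphaNone);
}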

To apply the mask, add the following method to our category:

- (UIImage *)imageWithMask:(UIImage *)maskImage
{
    // Build a Core Graphics image mask from the black-and-white mask image
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);

    // Apply the mask and wrap the result back into a UIImage,
    // preserving the original scale
    CGImageRef maskedImageRef = CGImageCreateWithMask(self.CGImage, mask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedImageRef scale:self.scale orientation:UIImageOrientationUp];

    CGImageRelease(mask);
    CGImageRelease(maskedImageRef);

    return maskedImage;
}

Edit the ViewController code as follows:

UIImage *avatarImage = [UIImage imageNamed:@"UserAvatar.jpg"];
avatarImage = [avatarImage imageScaledToSize:self.avatarImageView.frame.size];
avatarImage = [avatarImage imageWithMask:[UIImage imageNamed:@"AvatarMask"]];
self.avatarImageView.image = avatarImage;

The result should look like this:

[Image: the masked, circular avatar]

There's actually a quirk here that I'll talk about in a bit. Meanwhile, let's add the finishing touch: drawing a border around the image. This is implemented by the following method:

- (UIImage *)imageByAddingImage:(UIImage *)image
{
    CGSize firstSize = self.size;
    CGSize secondSize = image.size;

    // The merged canvas must fit both images, at the higher of the two scales
    CGSize mergedSize = CGSizeMake(MAX(firstSize.width, secondSize.width),
                                   MAX(firstSize.height, secondSize.height));
    CGFloat mergedScale = MAX(self.scale, image.scale);

    UIGraphicsBeginImageContextWithOptions(mergedSize, NO, mergedScale);

    // Draw the receiver first, then the second image on top of it
    [self drawInRect:CGRectMake(0.0, 0.0, self.size.width, self.size.height)];
    [image drawInRect:CGRectMake(0.0, 0.0, image.size.width, image.size.height)];

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return newImage;
}

Again we need to adjust the controller:

UIImage *avatarImage = [UIImage imageNamed:@"UserAvatar.jpg"];
avatarImage = [avatarImage imageScaledToSize:self.avatarImageView.frame.size];
avatarImage = [avatarImage imageWithMask:[UIImage imageNamed:@"AvatarMask"]];
avatarImage = [avatarImage imageByAddingImage:[UIImage imageNamed:@"AvatarFrame"]];
self.avatarImageView.image = avatarImage;

The result of all our work combined:

[Image: the final avatar with circular mask and border]

And now about the quirk I mentioned earlier. It shows up when the original image doesn't have an alpha channel. In many cases an alpha channel is not required, but processing such images may produce unwanted artifacts, such as black polygons in the transparent regions. I couldn't reproduce this bug in a test project, but I've seen it in production projects, especially on iOS 4.3 (it looks like one of the frameworks in 4.3 had a bug that was fixed in 5.0).

Anyway, to solve this problem we need to add the missing alpha channel to the image. This is a rather cumbersome procedure that should only be used when necessary, and the resulting image should be cached if possible.
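
One simple way to cache the processed result is NSCache (a sketch under my own assumptions; the article doesn't prescribe a caching strategy, and the key scheme and names here are hypothetical):

// A shared cache for processed avatars, keyed by any string you choose
// (e.g. a user ID). NSCache evicts entries automatically under memory pressure.
static NSCache *processedAvatarCache(void)
{
    static NSCache *cache = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        cache = [[NSCache alloc] init];
    });
    return cache;
}

// Usage: look up before processing, store after
UIImage *processed = [processedAvatarCache() objectForKey:userID];
if (processed == nil) {
    processed = [[avatarImage imageWithAlpha] imageScaledToSize:targetSize];
    [processedAvatarCache() setObject:processed forKey:userID];
}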

Alpha channel support requires the following methods in our category:

// Checking alpha channel availability
- (BOOL)hasAlpha
{
    CGImageAlphaInfo alpha = CGImageGetAlphaInfo(self.CGImage);
    return (alpha == kCGImageAlphaFirst ||
            alpha == kCGImageAlphaLast ||
            alpha == kCGImageAlphaPremultipliedFirst ||
            alpha == kCGImageAlphaPremultipliedLast);
}

// Applying alpha channel to image
- (UIImage *)imageWithAlpha
{
    if ([self hasAlpha]) {
        return self;
    }
    
    CGImageRef imageRef = self.CGImage;
    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    
    // The bitsPerComponent and bitmapInfo values are hard-coded to prevent
    // an "unsupported parameter combination" error
    CGContextRef offscreenContext = CGBitmapContextCreate(NULL,
                                                          width,
                                                          height,
                                                          8,
                                                          0,
                                                          CGImageGetColorSpace(imageRef),
                                                          kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
    
    CGContextDrawImage(offscreenContext, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef imageRefWithAlpha = CGBitmapContextCreateImage(offscreenContext);
    // Preserve the original scale and orientation so Retina images keep their point size
    UIImage *imageWithAlpha = [UIImage imageWithCGImage:imageRefWithAlpha scale:self.scale orientation:self.imageOrientation];
    
    CGContextRelease(offscreenContext);
    CGImageRelease(imageRefWithAlpha);
    
    return imageWithAlpha;
}
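
With these two helpers in place, the avatar pipeline can normalize the image up front. The exact call site is my own assumption; the article leaves it open:

UIImage *avatarImage = [[UIImage imageNamed:@"UserAvatar.jpg"] imageWithAlpha];
avatarImage = [avatarImage imageScaledToSize:self.avatarImageView.frame.size];
avatarImage = [avatarImage imageWithMask:[UIImage imageNamed:@"AvatarMask"]];
avatarImage = [avatarImage imageByAddingImage:[UIImage imageNamed:@"AvatarFrame"]];
self.avatarImageView.image = avatarImage;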

That pretty much sums up the hidden powers of UIImage that I've personally encountered. No doubt there are many more, and if I discover anything else, I'll share it here. Stay tuned!

Content created by Andrey Chevozerov