Remote Nib Loading for Fun (But Not Profit)

A while ago I noticed an interesting API for creating a UINib object from data:

+ (UINib *)nibWithData:(NSData *)data bundle:(NSBundle *)bundleOrNil

At the time I didn’t have a use for it, until this exchange occurred on Twitter:


The resulting exchange was very fruitful, including this gem from ex-Apple employee Michael Jurewitz:

So I wouldn’t recommend using this in a shipping application, but I wanted to see if it worked. I created a simple app that loads a nib from a website, then tries to initialize a view controller’s view using it. You can view the whole project on GitHub, but here’s the relevant code:
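In outline, it looks something like this. (This is a sketch of the approach rather than the project’s exact code; the URL, nib name, and fallback are placeholders, and a real app would fetch the data off the main thread.)

// A rough sketch, as it might appear in a view controller's -loadView.
// The URL and nib name are placeholders, not the ones from the GitHub project.
- (void)loadView
{
    NSURL *nibURL = [NSURL URLWithString:@"http://example.com/RemoteView.nib"];
    NSData *nibData = [NSData dataWithContentsOfURL:nibURL];

    if (nibData != nil) {
        UINib *nib = [UINib nibWithData:nibData bundle:nil];
        NSArray *topLevelObjects = [nib instantiateWithOwner:self options:nil];
        self.view = [topLevelObjects objectAtIndex:0];
    } else {
        // Fall back to a plain view if the download fails.
        self.view = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
    }
}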

Would I recommend using this in a shipping app? Absolutely not, given Jury’s recommendations. But it is an interesting idea for enterprise, in-house, or jailbreak apps, and I can see the possibility for some very cool stuff to come out of it.

Cocoa Touch: Working With Image Data

Images in Cocoa Touch, represented by the UIImage class, are a very important subject. Apple’s iOS platform prides itself on visual appeal, with Retina Displays, custom UI in many top apps, and a focus on photos with apps like Instagram. To that end, it behooves you as an iOS programmer to know a bit about working with images. This post won’t discuss everything you need to know about using the UIImage class, as that’s more appropriate for a book than a blog post—though maybe a series of blog posts would do—but instead will focus on one advanced topic: working with pixel data. You can find the basic stuff in the UIImage documentation, anyway.

Turning an Image Into Data

One of the first things you might want to do with a UIImage object is to save it to disk. To do that, you’ll need to save it to an image file. There are built-in functions to get properly formatted data from an image, in both PNG and JPEG formats:

  • UIImagePNGRepresentation(), which returns an NSData object formatted as a PNG image, taking a pointer to a UIImage object as its sole parameter.
  • UIImageJPEGRepresentation(), which returns an NSData object formatted as a JPEG image. Like the previous function, its first parameter is a pointer to a UIImage object, but it has a second argument: a CGFloat value representing the compression quality to use, with 0.0 representing the lowest-quality, highest-compression JPEG image possible, and 1.0 representing the highest-quality, lowest-compression image possible.

Once you have the image data represented by an NSData object, you can then save it to disk with various NSData methods, such as -writeToFile:atomically:.
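For example, a minimal save routine might look something like this (the file names are placeholders, and in a real app you’d also check the return value of -writeToFile:atomically:):

// Write a UIImage to the Documents directory as both PNG and JPEG.
- (void)saveImage:(UIImage *)image
{
    NSString *documentsDirectory = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];

    NSData *pngData = UIImagePNGRepresentation(image);
    [pngData writeToFile:[documentsDirectory stringByAppendingPathComponent:@"image.png"] atomically:YES];

    NSData *jpegData = UIImageJPEGRepresentation(image, 0.8); // fairly high quality, light compression
    [jpegData writeToFile:[documentsDirectory stringByAppendingPathComponent:@"image.jpg"] atomically:YES];
}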

Getting Raw Pixel Data

While the above functions are great for saving images, they aren’t so great for image analysis. Sometimes you need to analyze the pixel data for a given pixel, down to the values for the red, green, blue, and alpha components. To get that kind of granularity in an image, we’ll be using a lot of CoreGraphics functions. If you haven’t used CoreGraphics before, know before going in that it’s a C-based API à la CoreFoundation, so you won’t be using the Objective-C objects you know and are used to. Instead, there are opaque types (represented by CFTypeRef, which is analogous to Objective-C’s id) representing objects grafted onto C, complete with manual memory management—no ARC for you. That’s neither here nor there, however; let’s talk about pixel data.

Color Space

The color space of an image defines what the color components of each pixel are. Color spaces are represented by the CGColorSpace type; you’ll typically use either an RGB color space or a gray color space, which have red, green, and blue components or a single white component, respectively. For this example, we’ll be using the RGB color space. We can create an instance of it with the CGColorSpaceCreateDeviceRGB() function, which returns a CGColorSpaceRef type—think of it as a pointer to a CGColorSpace object.

What does using this color space get us? We now know that the pixels of our image will have three color components, and in what order. This will come in handy later on when we need to query the data.

Graphics Contexts

A graphics context, represented by the CGContext type, is analogous to a painter’s canvas—it’s what you draw into. For the purposes of drawing an image, you’ll create a bitmap context, the ideal type of context for this kind of data. You create one with the CGBitmapContextCreate() function, which returns a CGContextRef. Let’s look at the declaration of that function (from CGBitmapContext.h):

CGContextRef CGBitmapContextCreate (
    void *data,
    size_t width,
    size_t height,
    size_t bitsPerComponent,
    size_t bytesPerRow,
    CGColorSpaceRef colorspace,
    CGBitmapInfo bitmapInfo
);

So, that’s pretty simple, right? It’s actually fairly straightforward, despite its appearance. Let’s break it down into more easily digestible components. We won’t go top to bottom; instead, we’ll take the parameters in the order I think is easiest to follow.

First is the bitmapInfo parameter. The CGBitmapInfo type is a bitmask that represents two options: the alpha component, which contains transparency information, and the byte order of the data. We’ll talk about the alpha component here; byte order is another topic altogether. On iOS, only some pixel formats are supported. Looking at this chart in the documentation, we can see that, for all supported pixel formats on iOS in the RGB color space, these are the CGBitmapInfo constants we can use:

  • kCGImageAlphaNoneSkipFirst
  • kCGImageAlphaNoneSkipLast
  • kCGImageAlphaPremultipliedFirst
  • kCGImageAlphaPremultipliedLast

We can do two things with the alpha component: skip it, or use it in a premultiplied format. The premultiplied flag tells the system to multiply the individual red, green, and blue components by the alpha value when storing it. So, instead of RGBA values of 1, 1, 1, and 0.5, it’s stored as 0.5, 0.5, 0.5, and 0.5. This is a performance-saving measure on iOS devices, and is done automatically to all of your PNG images by Xcode when you build for a device.

So, for the bitmapInfo parameter, I generally pass kCGImageAlphaPremultipliedLast.

The penultimate parameter, colorspace, is a CGColorSpaceRef pointing to a color space you’ve created. This informs the context about the number of color components. Keep in mind that there’s one extra component for the alpha information if you’re not skipping it, so an RGB color space uses 4 components including alpha.

The width and height parameters are pretty simple: the number of pixels wide and high to make the context. Keep in mind that for Retina displays, you may need to double the values. You can use the scale property of the main UIScreen object as a quick “am I on a Retina device?” check.
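For example, a quick check might look like this:

CGFloat scale = [UIScreen mainScreen].scale;  // 1.0 on non-Retina screens, 2.0 on Retina screens
BOOL isRetinaDevice = (scale >= 2.0);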

Next, let’s talk about the first parameter: data. Here you have two options: pass in a pointer to a region of memory you’ve allocated for the image data, or pass NULL and have the graphics subsystem create it for you. If you’re trying to access pixel data, however, it’ll help to have a pointer to the data, so here you’d pass in memory you’ve allocated. How do you know how much is enough? Let’s look at the bitsPerComponent parameter. I usually use 8-bit components—again, see the chart linked above for valid options—so I would pass 8 for bitsPerComponent. Once you know that, and the number of components per pixel, you can determine bytesPerRow easily:

size_t numberOfComponents = 4; // red, green, blue, and alpha
size_t bytesPerRow = (bitsPerComponent * numberOfComponents * width) / 8;

And then, finally, we can determine how much memory to allocate. I use the uint8_t data type for the buffer, as it’s an unsigned 8-bit integer, a perfect match for our 8-bit components.

uint8_t *rawData = calloc((width * height) * numberOfComponents, sizeof(uint8_t));

The entire stack might look like this:
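Here’s a sketch of how the pieces fit together, assuming you start with a UIImage named image; the variable names (rawData, bytesPerPixel, bytesPerRow) are the same ones used in the lookup snippet below.

CGImageRef cgImage = image.CGImage;

// CGImageGetWidth()/CGImageGetHeight() return pixel dimensions,
// so no extra Retina scaling is needed here.
size_t width  = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);

size_t numberOfComponents = 4; // red, green, blue, and alpha
size_t bitsPerComponent   = 8;
size_t bytesPerPixel      = (bitsPerComponent * numberOfComponents) / 8;
size_t bytesPerRow        = bytesPerPixel * width;

uint8_t *rawData = calloc(width * height * numberOfComponents, sizeof(uint8_t));

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData,
                                             width,
                                             height,
                                             bitsPerComponent,
                                             bytesPerRow,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);

// Draw the image into the context, which fills rawData with pixel values.
CGContextDrawImage(context, CGRectMake(0.0, 0.0, width, height), cgImage);

// CoreGraphics objects are manually managed, so release them when you're done.
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// ...read from rawData, then free(rawData) when you no longer need it.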

The only thing in this code that we haven’t gone over so far is the call to CGContextDrawImage, which (surprisingly) draws the image. It takes three parameters: the context to draw into, a CGRect defining where to draw, and a CGImageRef for the image. You can obtain a CGImageRef from a UIImage using its -CGImage method.

Now that the image is drawn in our context, the rawData array will be filled with real, live image data! You can access it like so (modify the values of x and y as suits your needs):

int x = 0;
int y = 0;

int byteIndex = (bytesPerRow * y) + (x * bytesPerPixel);

uint8_t red   = rawData[byteIndex];
uint8_t green = rawData[byteIndex + 1];
uint8_t blue  = rawData[byteIndex + 2];
uint8_t alpha = rawData[byteIndex + 3];

And there you have it! Now that you’ve gotten the data out of your image, do whatever you want with it. Just remember the blog authors you read along the way when Facebook buys you for a billion dollars.

Note: The venerable Mike Ash published a similar article while this one was half-done in my drafts folder. I thought about scrapping it altogether, but since mine is iOS-specific, and with some prodding from a co-worker, I decided to press on. Go read Mike’s blog, too. It’s awesome.

Cocoa Touch: Circumventing UITableViewCell Redraw Issues with Multithreading

In your career as a Cocoa or Cocoa Touch developer, every now and then you’ll encounter an issue with something Apple has written. Whether it’s a full-blown bug, something that doesn’t work quite how you’d expect it to, or a minor inconvenience, it happens. When it does, naturally the first thing you do is file a bug report (right?). After that, though, you need to do something about it. These issues usually surface right when a project is due, so often you can’t wait for Apple’s engineering teams to fix the problem (or tell you that you’re wrong). This post is an example of using KVO to work around one such problem so you never have to think about it again.

The Problem: In iOS, if you create a UITableViewCell and return it to the table view in its data source’s -tableView:cellForRowAtIndexPath: method, but then come back later (say, after doing some background processing) to add an image to the cell’s imageView, you don’t see anything! Why? It looks like the image view isn’t added to the cell’s view hierarchy unless you set an image right away, or there’s some other quirk in the UITableViewCell implementation. I don’t think it’s a bug; it’s more likely a side effect of an optimization: if there’s no image, why add the image view to the cell?

So how do we fix it? Well, a simple call to -setNeedsLayout gets the cell to fix itself quite nicely. But we shouldn’t have to do that from our table view data source—that has a bit of code smell to it. Lines like that quickly get overused, with programmers calmly stating, “I don’t know why, but we always do that.” No, a better solution is to get the cell to handle this problem on its own.

We’ll create a subclass of UITableViewCell and use KVO. When we create the cell, we’ll register for KVO notifications on the image view’s image property, passing the option to include the old value in the change dictionary. When we receive a notification, we’ll look at that dictionary, and if the old value was nil, we’ll send self a -setNeedsLayout message. This avoids having to do it in other classes, and only does it when necessary. We simply set it and forget it.
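Here’s a sketch of what that subclass might look like; the class name is a placeholder, and this assumes ARC (with manual retain/release you’d also call [super dealloc] in -dealloc).

// A hypothetical cell subclass that fixes its own layout when an image arrives late.
@interface RelayoutingTableViewCell : UITableViewCell
@end

@implementation RelayoutingTableViewCell

- (id)initWithStyle:(UITableViewCellStyle)style reuseIdentifier:(NSString *)reuseIdentifier
{
    self = [super initWithStyle:style reuseIdentifier:reuseIdentifier];
    if (self) {
        // Watch the image view's image property, asking for the old value.
        [self.imageView addObserver:self
                         forKeyPath:@"image"
                            options:NSKeyValueObservingOptionOld
                            context:NULL];
    }
    return self;
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
{
    if ([keyPath isEqualToString:@"image"]) {
        // KVO represents a nil old value as NSNull in the change dictionary.
        id oldValue = [change objectForKey:NSKeyValueChangeOldKey];
        if (oldValue == nil || oldValue == [NSNull null]) {
            [self setNeedsLayout];
        }
    }
}

- (void)dealloc
{
    [self.imageView removeObserver:self forKeyPath:@"image"];
}

@end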

Ta-da.

Asynchronous Synchronous Requests: Effortless Networking Code

Today I showed a couple of people at work a technique I use to do asynchronous URL loading in iOS, and the response on Twitter was great, so I’ve written it up for everybody. If you’re used to using ASIHTTPRequest or rolling your own NSURLConnection delegates, hopefully this method will be a breath of fresh air.

The Problem: When you want to load something from the Internet, you don’t want to block your UI—especially when iOS might just kill your app for doing so—but writing delegate code is a pain. You have to remember which delegate methods get called in what order, to set yourself as the delegate (can’t tell you how many times that’s tripped me up), and handling multiple simultaneous connections with one delegate is… tricky, at best.

The Solution: Use Grand Central Dispatch. Maybe I just love GCD too much and this is me seeing everything as a nail, but let’s look at the following code for loading a URL:

- (void)loadAwesomeURL
{
    NSString *awesomeURI = @"http://www.awesomeexample.com/?output=JSON";
    NSURL *awesomeURL = [NSURL URLWithString:awesomeURI];
    NSURLRequest *awesomeRequest = [NSURLRequest requestWithURL:awesomeURL];

    NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:awesomeRequest
                                                                      delegate:self];

    [theConnection start];
}

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
{
    [myMutableData appendData:data];
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection
{
    [self processTheAwesomeness];
}

That sucks. Three methods, and I didn’t even do any error handling! There has to be a better way. NSURLConnection offers a synchronous method, but everybody knows you don’t use it… so let’s do exactly that. But since we want to make this asynchronous, we’ll use Grand Central Dispatch to wrap it in a dispatch_async() call:

- (void)loadAwesomeURL
{
    NSString *awesomeURI = @"http://www.awesomeexample.com/?output=JSON";
    NSURL *awesomeURL = [NSURL URLWithString:awesomeURI];
    NSURLRequest *awesomeRequest = [NSURLRequest requestWithURL:awesomeURL];

    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0ul);
    dispatch_async(queue, ^{
        NSURLResponse *response = nil;
        NSError *error = nil;

        NSData *receivedData = [NSURLConnection sendSynchronousRequest:awesomeRequest
                                                      returningResponse:&response
                                                                  error:&error];

        [self processTheAwesomeness];
    });
}

We can easily do error checking after the NSURLConnection call: simply check to see if receivedData is nil, cast response to an NSHTTPURLResponse and check its statusCode property, and if all else fails, check out error.
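For instance, inside the dispatch_async() block you might do something like this (the status-code handling is just illustrative):

if (receivedData == nil) {
    NSLog(@"Request failed: %@", error);
    return;
}

NSHTTPURLResponse *httpResponse = (NSHTTPURLResponse *)response;
if (httpResponse.statusCode != 200) {
    NSLog(@"Server returned status code %ld", (long)httpResponse.statusCode);
    return;
}

[self processTheAwesomeness];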

Note: I’ve received a fair amount of negative feedback on this article on Twitter, Reddit, and in the comments, and I feel like I ought to make a few points clear:

  • This is not the last networking solution you’ll ever need. Among other things, this does not support:
    1. Canceling the connection
    2. Running code when the connection is half-done
    3. Streaming data to a file for large downloads
  • This is a quick example. It’s mainly designed to illustrate dispatch_async() as a wrapper for synchronous APIs.
  • It isn’t good for multiple connections. You’ll want a custom dispatch queue for that.
  • It doesn’t run on the main thread. If you’re updating your UI, you’ll need to hop back to the main thread to do that (see the snippet below).
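Hopping back to the main thread from inside the block is one more dispatch_async() call (updateUIWithData: is a hypothetical method of your own):

dispatch_async(dispatch_get_main_queue(), ^{
    // Back on the main thread; safe to touch UIKit here.
    [self updateUIWithData:receivedData];
});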

What Every Designer Should Know About iOS

Working with designers over the years, I’ve seen a few areas where the world of a designer and the world of a developer merge very well, and a few areas where they don’t: Photoshop comps that lead to sliced assets with non-localized text on them, vertical gradients stored in 1,024 × 1,024 JPEG images, Retina Display graphics that don’t match up to their non-Retina Display versions, and other places where a little bit of knowledge about iOS would go a long way. So, I’ve prepared this piece on what every designer should know before working on an iOS project.

  1. Apple Controls Everything.
    Literally. Since there’s no getting around this fact, we might as well start with it now. When your developer works with iOS, she’s using Apple’s tools to run on Apple’s operating system. So when she tells you that, for instance, a navigation bar can accept a tint color but not a custom gradient or an image, that’s because the Apple-provided version has those restrictions. Normally this isn’t an issue, but a designer needs to be prepared to provide their art in several different formats. For a tab bar, for instance, icons need to be (around) 30 × 30 pixels and filled out in the alpha channel.
  2. The Retina Display is not for layout.
    I think the best example of my last point above is the Retina Display. When it came out, developers started asking their designers for double-sized versions of their assets. Every image you provided for the original product needed to be resized. But the important thing to note about the Retina Display for a designer is not that suddenly there are two screen sizes to worry about on the iPhone. In fact, that can lead to catastrophe. When you design for the iPhone, you still create according to a 320 × 480 point screen. The Retina Display, unlike the regular display, happens to have two pixels per point. So when you make your assets, you have to design around the smaller size, but then take your assets and make a version exactly twice as large. This needs to be exact because the developer isn’t specifying the double-sized art or layout—in fact, they don’t specify anything. The art is simply named with an @2x suffix and iOS loads it in automatically.
  3. Things Change.
    When the Retina Display came out, that was a big change for designers (and developers). Apple can do this at any time. Tomorrow morning, Apple could announce a new iPhone Nano with a smaller screen or an iPad Pro with a Retina Display screen. If the screen size changes beyond a certain threshold, developers will need to rework their applications’ UI to accommodate it. If that happens, your developer will be asking you for new assets, and he’ll want them immediately. If you saved everything in Photoshop six months ago and forgot what exactly you did to style everything, it’s going to be a long week. That’s why I recommend working in vector art for all but the most photorealistic elements (like skeuomorphic textures). If your work is in vector art and the developer suddenly needs assets at 150% of the original size, you re-export as PNG, send it to the developer, and go back to doing whatever it is designers do in their free time.
  4. Push Your Developer.
    iOS has very sophisticated drawing abilities. If you want the background of a certain UI element to have a gradient, you might generate that gradient at the size of the element, then send it to the developer. If you know that the gradient can be stretched horizontally, you might send a one-pixel-wide version of it instead. But you can also just tell the developer, “Draw a gradient from this color to this color and use it here.” This applies to more advanced drawing as well: need a circle with a dark-blue fill at 80% opacity, stroked with a 3-point-thick white line? The developer can draw it in code (there’s a rough sketch of exactly that at the end of this post). This has the advantage of working at any resolution and being extremely changeable. Decide tomorrow morning that you want the color of the circle a bit lighter? Instead of sending the developer a new image, send him a new color and have him draw it differently. I called this tip “push your developer” because not every developer is as comfortable as the next with more advanced drawing, but I firmly believe that the more drawing you can do in code, the better.
  5. Spend Time on the Icon.
    The app’s icon is the first thing a user sees when they’re browsing the App Store. A beautiful, well-conceived icon can do wonders for an app. There are plenty of resources online for iOS icon design, so if you’re not sure where to begin, just head to Google.
  6. Standards are High.
    The most successful iOS applications have a level of beauty to them that other apps just can’t match. Utilitarian layouts with spartan design can work, and they may be appropriate if your client is an enterprise looking for an internal app, but they won’t help the app succeed. Your goal should be to make the app successful because of your design, not in spite of it.
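As a footnote for the developers reading along: here’s roughly what the circle described in point 4 might look like in code, inside a UIView subclass’s -drawRect: method. The exact colors and sizes are made up.

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Inset so the 3-point stroke isn't clipped by the view's edges.
    CGRect circleRect = CGRectInset(self.bounds, 1.5, 1.5);

    // Dark-blue fill at 80% opacity (illustrative values).
    [[UIColor colorWithRed:0.05 green:0.15 blue:0.4 alpha:0.8] setFill];
    CGContextFillEllipseInRect(context, circleRect);

    // 3-point white stroke.
    [[UIColor whiteColor] setStroke];
    CGContextSetLineWidth(context, 3.0);
    CGContextStrokeEllipseInRect(context, circleRect);
}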