Aleksandar • Vacić

iOS bits and pieces

Fighting feature creep

Feature creep is an easy trap to fall into. As a developer and software vendor, you always want to please your users, so when a request comes in eloquently and nicely laid out, it's hard to resist. I experienced this so much while working on Run Mate 1.1 that it's the main reason it took me 3 months to publish it.

Never again. I hope.

I have a new weapon against it now – a wonderful thought Wil Shipley shared in an interview for the Mac Developer Network podcast:

…Technology is just being too complicated. People just don’t enjoy using it, they don’t get it, they’re not getting the most out of it, they’re not able to use the feature they have and if we radically simplify it, then people suddenly get a lot more out of it.

They actually use more features if you give them less features.

The MDN podcast is a wonderful resource. I had a large backlog of various podcasts – this Shipley interview is in MDN Show 003, from back in July 2009 – which I have recently cleared out. I never check what is in any of the shows; I like it when they surprise me. This interview with Wil is chock-full of great thoughts.

Another fantastic feature of the MDN Show is The World According to Gemmell, where Matt Gemmell picks a UX subject and really, really hammers it home.

Note: Steve Scott, the man who ran MDN, has retired that show and now has a new one called iDeveloper Live. It's more dynamic than the MDN Show (more people), plus it's recorded live and you can take part in it through the chat.

Debugging [CALayer retain]: message sent to deallocated instance

While working recently on an iPhone app, I had a subclass of UITableViewCell with a specific indicator, all defined like this:

@interface EmailListCell : UITableViewCell {
  UIImageView *indicator;
}

@property (nonatomic, retain) UIImageView *indicator;

In the implementation part – since the property is retained – I automatically added this:

@implementation EmailListCell

@synthesize indicator;

- (void)dealloc {
  [indicator release];
  indicator = nil;
  [super dealloc];
}

This is what I usually do, mostly mechanically, so that I don't forget to add the proper release. And it came back to bite me this time.

It was because of the initializing code I wrote after the code above:

- (id)initWithStyle:(UITableViewCellStyle)style reuseIdentifier:(NSString *)reuseIdentifier {

  if (self = [super initWithStyle:style reuseIdentifier:reuseIdentifier]) {
      indicator = [[UIImageView alloc] initWithImage:widgetEmpty];
      [self addSubview:indicator];
      [indicator release];
  }

  return self;

}

When testing this table view, it would - in some cases only - crash the app, with one of these messages shown:

*** -[CALayer retain]: message sent to deallocated instance 0x5c73490
modifying layer that is being finalized 0x713a170

These are thrown when you attempt to access a UIView of any sort that is in the process of being deallocated. Catching these things is very tricky, because the objects mentioned are already gone, so even setting a breakpoint on objc_exception_throw won't help you much.

So, do you see where the issue is? :)

I am releasing the indicator view twice: first when it is created, and a second time in dealloc. The latter is not needed, since all subviews of the cell will automatically be released along with it. The code I wrote would be correct if the init line used the property:

self.indicator = [[UIImageView alloc] initWithImage:widgetEmpty];

By using self, I go through the retained property's setter, so the retain count rises to 2 – one from alloc, one from the property – and two release calls are then exactly what's needed. Thus, the proper initialization code would be:

UIImageView *ind = [[UIImageView alloc] initWithImage:widgetEmpty];
self.indicator = ind;
[self addSubview:ind];
[ind release];

And then the indicator release in dealloc is mandatory (which is also the proper way to deal with retained properties).
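The ownership arithmetic above can be sketched as a toy model (plain Python rather than Objective-C, and ToyObject is a made-up stand-in for any retained object – this models just the counting, not UIKit):

```python
class ToyObject:
    """Made-up stand-in for an Objective-C object's retain count."""

    def __init__(self):
        self.retain_count = 1  # alloc returns an object you own

    def retain(self):
        self.retain_count += 1

    def release(self):
        self.retain_count -= 1
        if self.retain_count < 0:
            # Roughly what "message sent to deallocated instance" means
            raise RuntimeError("over-release: message sent to deallocated instance")

# The buggy sequence from the post:
indicator = ToyObject()   # [[UIImageView alloc] init...]   -> count 1
indicator.retain()        # [self addSubview:indicator]     -> count 2
indicator.release()       # [indicator release] in init     -> count 1
indicator.release()       # superview releases its subviews -> count 0, gone
try:
    indicator.release()   # the extra release in dealloc    -> crash
except RuntimeError as err:
    print(err)
```

With the corrected init – retaining through the property so the count starts at 2 – the release in dealloc balances out and the count never drops below zero.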

iPad 2gen prediction

When Apple bumped the iPhone 4's screen resolution to 640x960 on the same 3.5" (diagonal) form factor as previous iPhones, the magic Retina Display number turned out to be 326ppi (pixels per inch). The result is an awesome display – the best I have ever seen.

The iPad, on the other hand, has a 9.7" (diagonal) display with 1024x768 resolution, which gives 132ppi. John Siracusa said that the next iPad will most likely get the same improvement in display resolution, meaning it will have 2048x1536 – so that iOS 4's @2x API machinery works the same.

Granted, such a resolution sounds ginormous – not even Apple's latest 27" monitor is big enough to design interfaces that large. But if that really happens…

…how big would the iPad physically need to be?

The iPad today has a 1.33 (4:3) aspect ratio on a 9.7" diagonal. Lay 2048x1536 pixels out in one very long line at 326ppi and you get about 9650in; divide by 326 again and you get about 29.6in² of screen area. From there the math is easy: 1.33x * x = 29.6 means x is 4.71in, so the physical screen of a Retina Display iPad would be 4.71" x 6.28" – about 7.85" diagonal.
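The arithmetic is easy to double-check per axis (a quick Python sketch; the numbers are the ones from the text):

```python
import math

ppi = 326.0                # Retina Display pixel density
width_px, height_px = 2048, 1536

width_in = width_px / ppi                      # ~6.28 in
height_in = height_px / ppi                    # ~4.71 in
diagonal_in = math.hypot(width_in, height_in)  # ~7.85 in
area_sq_in = width_in * height_in              # ~29.6 sq in

print(round(width_in, 2), round(height_in, 2), round(diagonal_in, 2))
```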

Sounds quite possible, does it not?

Color of raw image pixel data and iPhone 4's retina display

In my Ambient Mood Lamp app, I have a color picker where you can choose the background color by simply tapping the color on an image, like this:

Color swirl

What I need from there is the actual RGB representation of the tapped pixel's color. Getting it takes quite a bit of code; a large part of it is taken from Apple's Technical Q&A QA1509.

Pixel color fetching is done inside this if block from that document:

if (data != NULL)
{
    // **** You have a pointer to the image data ****
    // **** Do stuff with the data here ****
}

I picked up the code to get the pixel color from some web page I lost track of. This is the code:

int offset = ((w*round(point.y))+round(point.x)) * 4;
int alpha = data[offset];
int red = data[offset+1];
int green = data[offset+2];
int blue = data[offset+3];
color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];

w is the width of one row of data, in pixels, and point {x,y} is where the screen was touched. The * 4 in the first line means 4 bytes of raw data per pixel. That is still true on iPhone 4's Retina Display – each pixel remains 4 bytes – but at 326ppi the backing bitmap holds twice as many pixels in each dimension, while the touch point is reported in points. The point-based offset therefore has to be mapped into that denser pixel grid, which you get by multiplying it with the screen's scale factor. The correct code now is:

int offset = ((w*round(point.y))+round(point.x)) * 4 * [[UIScreen mainScreen] scale];
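To convince myself the scale factor lands on the right byte, here is the offset arithmetic replayed in Python on a made-up 2x2-point (4x4-pixel) ARGB buffer – data, the color values, and the touch point are all invented for the sketch:

```python
scale = 2                  # [[UIScreen mainScreen] scale] on a Retina device
w_pt, h_pt = 2, 2          # image size in points
w = w_pt * scale           # w: width of one row, in pixels (4 here)

data = bytearray(w * h_pt * scale * 4)   # 4x4 pixels, 4 bytes (ARGB) each
# Plant a known color at pixel (2, 2), i.e. point (1, 1) at scale 2:
i = (w * 2 + 2) * 4
data[i:i+4] = bytes([255, 10, 20, 30])   # A, R, G, B

point_x, point_y = 1, 1                  # touch location, in points

# The post's formula: point-based offset multiplied by the screen scale
offset = ((w * round(point_y)) + round(point_x)) * 4 * scale

# Same thing, done by converting the point to pixel coordinates first
offset_px = ((w * round(point_y * scale)) + round(point_x * scale)) * 4

alpha, red, green, blue = data[offset], data[offset+1], data[offset+2], data[offset+3]
print(offset == offset_px, alpha, red, green, blue)  # True 255 10 20 30
```

For whole-point touch coordinates the two forms agree exactly; with fractional coordinates they can differ by a pixel because of where the rounding happens, which is harmless in a color picker.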

Welcome to the wonderful world of resolution-independent programming.