A Great Product is Necessary – but not Sufficient – for Success

Great Story

Yesterday, I posited that great marketing is simply great storytelling.

And great stories all begin with an interesting subject to frame a compelling narrative around.

In marketing, that subject is your product (for simplicity’s sake, I’m purposefully conflating products and services, treating both as simply the “thing” you’re trying to influence an audience to buy – or buy into).

And while a solid and innovative product is the very beginning of crafting a compelling campaign, it is not sufficient for that product’s success in the marketplace. In fact, I would claim that great products most often fail because their creators didn’t tell their “story” in a way that hit home with their audience… or because they told it at a time when their audience wasn’t ready to hear that particular story.

Let’s look at some illustrative brand examples: mobile devices.

The current mobile marketplace is dominated by Samsung (Android) and Apple (iOS). But they were by no means the first companies to market “smart” mobile devices. Who were they? Palm, Research in Motion (RIM, later renamed BlackBerry), and Microsoft (in Microsoft’s case, nearly seventeen years ago)!

So – why did Palm, Blackberry, and Microsoft fail to win – or, in the case of Palm and Blackberry, fail to keep – hearts and minds, while Apple and Samsung (Android) now rule the world?

In the case of Microsoft, they never made a cogent case for why Windows CE (their first mobile smart OS) was something consumers needed to buy. Arguably, the nascent mobile web wasn’t ready ten years ago – from a design and UX standpoint – to make CE an attractive portal for readable web sites. So in a sense, it was a combination of Microsoft not successfully pitching why the devices were needed, and the mobile web not being ready to support a new wave of mobile consumers.

In the cases of both Palm and Blackberry, you have two early market dominators who enjoyed a near monopoly – for a time – but squandered their position through poor leadership, a lack of innovation, and an inability to change their product narratives when new challengers entered their respective markets.

As a result, Palm is history (for all intents and purposes), and Blackberry is a mere shadow of its former self, all in the span of a handful of years.

So – why have Apple and Samsung (Android) succeeded (so far, at least), while these other brands stumbled so badly? Because they had a larger story to tell – one that went beyond simply reciting their products’ specs.

Apple was able to leverage a huge installed base of users in an existing ecosystem (with their captive credit card numbers in tow), tethered to their iTunes music store. They were able to tell a story of “everything just works” (true or not, it was a simple and compelling tagline).

Samsung was able to leverage the massive popularity of Android, while touting innovation over their main competitor Apple, playing heavily upon a narrative that iOS is very cool – for your parents. And they arguably have a successful narrative around doing things “years ahead” of Apple (the phablet form factor, near-field communication (NFC), contact sharing, etc.).

There are other recent examples in the mobile space of how strong brands can fail for lack of a compelling product narrative – like Nokia, a brand that once dominated the “feature phone” handset space globally and has now virtually ceased to exist as a separate mobile brand, in no small part due to idiotic “storytelling” by its CEO, Stephen Elop, in his “Burning Platform” memo.

These examples all demonstrate that great products can fail, and fail hard – either to take root, as was the case with Windows CE, or to keep market share, as was the case with Palm, RIM, and Nokia – because of the lack of a compelling narrative promoting and maintaining their brands.

And they demonstrate how a strong product story can elevate what could have been an “also-ran” product into the next-generation market leader – which is no mean feat; ask Microsoft, trying to claw its way back into mobile relevance with a very good product (Windows Phone) and a superior camera (best in breed, in my opinion) – but a decidedly muddled marketing story.

We’re about to see a similar battle engaged anew, with the announcement of the new Apple Watch. Will Apple be able to create a strong enough narrative as to why their new product is more compelling than the Samsung Gear, the Moto 360, or the Pebble, to possibly create a new market leader?

If past is prologue, I wouldn’t count them out.

Tomorrow, I’ll discuss how a brand’s voice is essential to telling a great marketing story.

Video

Apple Announces Watch, iPhone 6, and iPhone 6 Plus

Apple just announced their much anticipated Watch, plus two new iPhone models (the 6 and the 6 Plus). What do you think about the new offerings?

You’re Doing It Wrong. Obviously.

I depend upon my work computer for my livelihood. I work at home. On the road. In Starbucks. In motel rooms. Waiting for the kids in pickup line.

Which means I am constantly wrapping and unwrapping my power cord, all the live-long day.

Now, being a savvy latte-swilling, elitist dbag MacBook Pro power user, I of course wrap my power cable this way:

That is to say, until this happened:

You're Doing It Wrong

No problem, I think. I’ll just order a replacement.

HOLY COW. These bad boys are 79 US DOLLARS. What are they made of? Printer Ink?

Well. Let’s check Amazon.

Amazon Screen Shot

It seems that I can save a whopping 1% through Amazon. Yay me. And, “free” shipping (yay, Prime).

Oy. 

Apparently, Apple is more than aware of this little design flaw in their MagSafe adapters.

It’s cold comfort at this point.

For now, I’ll just have to live with stuffing my MagSafe adapter into my travel bag like a “normal.”

That, or invest in adapter futures.

iOS 7 – First Thoughts

After using iOS 7 a few days, here are some likes and dislikes. Nothing too cerebral, just my surface observations:

Likes:

  • New “physics” – bubble animations in the Messages app, zoom on app open, and depth-of-field and animated panoramas on the lock screen and background.
  • New utility panel (drag up from the bottom) – includes buttons for Airplane Mode, Wi-Fi, Bluetooth, Do Not Disturb, and Orientation Lock; a built-in flashlight (goodbye, app category); and buttons for Alarm, Calculator, and Camera. Slick.
  • Ability to have App groupings of unlimited size.
  • Multitasking screen

Dislikes:

  • Alert Panel (Pull down from the top gesture) Redesign. I can’t quite say why it’s not appealing to me. Maybe because I don’t think the descriptors at the top (Today, All, Missed) really describe what you find underneath. I mean, do I really need a separate tab for appointments I missed? Isn’t that what a smart device is supposed to prevent?
  • Bugginess of the Beta – Yes, it is developer-only beta code. It crashes a lot at this stage. Biggest dislike.
  • Stuff I Used to Know How to Do That I Had to Re-Learn – Spotlight search no longer has its own screen; instead, if you drag down on any Springboard screen (a screen with apps), you get a Spotlight search input. Took me a while to intuit that. Also, running apps are no longer killed by pressing an icon until they wiggle and “x”-ing them out – you now scroll through the multitasker and “flick” them up and off the screen, à la the Palm Pre’s webOS. Do I dislike these behaviors because they are different, or because I had to relearn them? Dunno. But they’re on my dislike list all the same.

Ambivalent Abouts:

  • New icons
  • Transparencies
  • Flat look

In short, iOS 7 is a work in progress. And works like it. Reserving further judgement until I walk around a bit more with the OS.

Apple Lion Signature Capture

Big Time Bug in Facebook Connect for the iPhone

While working on a new app for a client in Orlando today, we uncovered a significant bug in the Facebook Connect classes from Facebook for the iPhone.

What’s the bug?

Well, if you’re one of the fortunate souls with a Facebook user id too large to fit in an int, FB Connect for the iPhone, as it comes from Facebook, will blow up.

Blow up REAL good.

Oh, it may authenticate. But just wait till you check the uid in the session.

HEY! It doesn’t match my real uid!

The big bug is this – the FB Connect code uses library methods meant for integer values… all the while Facebook preaches to everyone the importance of Facebook user IDs being 64-bit values – in Objective-C parlance, long longs.

Too bad, ’cause with FB Connect for iPhone what this turns out to be is “do as I say, not as I do.”

Here’s where the busted code lives in the FBConnect classes, and what needs to be done to correct it:

FBSession.m

In the -(void)save method, replace the line that has [defaults setInteger:uid forKey:@"FBUserId"] with the following:

[defaults setObject:[NSString stringWithFormat:@"%qi", _uid] forKey:@"FBUserId"];

In the -(BOOL)resume method, replace the line that has FBUID uid = [defaults intForKey:@"FBUserId"] with the following:

long long uid = 0;
NSString *uidString = [defaults objectForKey:@"FBUserId"];
if (uidString != nil) {
    NSScanner *scanner = [NSScanner scannerWithString:uidString];
    [scanner scanLongLong:&uid];
}

For all the razzing I’m giving Facebook here, the broken parts of the classes really center on moving data in and out of the settings bundle. Yeah – the uids are integers – big-ass integers, to be sure – but the int functions for the settings bundle are no good for long longs. You’ve got to convert them into NSStrings, and then back to long longs again (%qi in sprintf-speak) when using the bundle functions.

Hope this helps those of you trying to explain why your iPhone apps work most of the time with Facebook Connect.

Sometimes, it really isn’t you – it IS the software (with apologies to Nick Burns, the Computer Guy).

In Which I Detail The Differences Between the Android and iPhone SDKs…

I’ve had the pleasure of working on two side-by-side streams of development for the same app on two different platforms since January 1 – the new TweetPhoto applications for iPhone and for Android (see screen grabs below).

TweetPhoto for the Android
TweetPhoto for iPhone

First of all, let me say that from a developer perspective, I’m one of the most platform-agnostic people you’ll find anywhere.  My philosophy is that money is green and spends the same anywhere; if someone wants to build something on their favorite platform and needs someone to develop it for them, then that platform is now my de facto “favorite development environment ever.”

Fact is, if you stick around long enough, you outlive whole sets of tools, operating systems, and methodologies.  That certainly has been true in my career; while I lovingly still have my IBM 360 yellow book and can still recognize a fair subset of Hollerith on sight, it’s almost as embarrassing as watching The Who play the Super Bowl to talk about it.

Wandering sidebar aside, the past six weeks have given me a fantastic opportunity to compare iPhone and Android, side by side, on the same problem set and see how the two environments stack up.

The verdict?

There are things about each SDK I really like… and there are things I really dislike equally.

In fairness, my experience on the iPhone SDK is much more extensive than on Android.

Even so, putting together the exact same feature set on the two platforms points out where one platform shines over the other in terms of ease of implementation and execution.

All of the opinions below are 100% subjective and are my own.  If you have a contrary opinion, fine.  I don’t care.  Call a talk show or write your own post.

Development Environment

For the iPhone, you really only have one choice for developing apps – Xcode.  For Android, I chose to use Eclipse and the Android plug-ins for Eclipse.

The Apple experience is tight, feels integrated out of the box, and just works.  The iPhone emulator is fantastic and works almost like the real thing.

Eclipse, since it is the Swiss Army knife of Java development, is not as integrated – out of the box – as Xcode with regard to its Android implementation.  The Android Eclipse plug-ins work great and are powerful.  I really dig Eclipse’s “Quick Fix” feature for including the imports I need and recommending ways to fix broken code.  What I don’t dig is starting and re-starting the various emulators when trying to debug a running Android app – which I did tens of times each day, whenever Eclipse lost its connection with the emulator I happened to be running.

Even so, I found the tools for controlling the features of the emulator (or should I say, emulators – you can configure an AVD, or Android Virtual Device, for each targeted device type and OS level you want) on Android to be much more powerful than the control one has – or doesn’t – over the iPhone emulator.

In a nutshell, someone proficient with Eclipse who is a halfway-decent open source / Java developer – and who has never developed a mobile app – will be able to jump in and crank out working code quickly.  Apple’s platform, especially to a newcomer to Objective-C, is a pretty tough slog for the beginner.

Honestly, it took me about six weeks before I really felt a mastery over Objective-C.  From a standing start to finished app, it took me under three weeks to develop a professional grade Android app.  Enough said.

Getting Apps to Devices

The whole process of getting certificates, creating provisioning profiles, and the whole Apple approval process is pretty daunting to a newcomer.  Remembering back to when I first started developing iPhone apps, and the time it took me to get up and sprinting, Apple’s setup is a pretty damn big barrier to entry.  Heck – I still run into provisioning and certificate problems switching between developer accounts when developing for multiple customers on the same development system.

Without going through the Apple App store, your only option is to use Ad Hoc provisioning, which limits you to 50 devices per Ad Hoc provisioning profile and you have to collect device UUIDs to make it work.  Factor in having to distribute new profiles each time a new device is added to keep everyone in sync and it quickly can become a real pain, especially if many QA testers are involved in a project.

For Android, you basically sign an .apk file and you’re off to the races.  You still have an approval process to get on Android Market – the analogue of Apple’s App Store – but there is absolutely nothing standing in the way of distributing your app on your own, without going through Google at all.

Advantage to Google on this.  It’s worlds easier to get an app out and onto devices under Android than iPhone.

SDK Feature Comparisons

OK.  Getting the nuts and bolts of making apps for each platform behind us, let’s take a look at how coding for each device stacks up.

Documentation: Android’s documentation is there… but it woefully lacks useful examples anywhere handy to the API call I want to make.  My least favorite sentence to read in any discussion / support forum of any ilk is “well, if you check the sample apps, you’d see…”  You know what?  I don’t have time to read every line of every sample application.  Want to see how easy-to-use documentation with ready-to-use sample code should be done?  Look at the PHP.net site.  It should be that easy to see how to use API calls.  Apple is not much better in this regard, but on the whole the iPhone is documented much better than the Android developer site.  Truth be told, though, I’m not a huge fan of either.

And while I’m screeding here (is that the proper pluperfect subjunctive?), let me just say that it’s a good thing Google exists, because I had to Google a shit-load of example code from about a hundred different places for stuff that should have been trivial to find – but wasn’t.  Like camera operation.  See the next section.

Using the Camera: The iPhone has a well-documented interface for using its camera, and for selecting photos from its library – the image picker.  Android doesn’t have a pre-packaged, all-in-one interface for doing this.  There is a bare-bones example of how one ties into the camera, and a poorly documented way – as in, I had to find the solution in the wilds of the interwebs – of browsing the camera’s gallery for photos already taken.

Let me go on the record as saying that, in terms of overall code, it took a lot less code to do camera operations under Android than it did under Apple – but that for Apple, it took me maybe two hours to get right, while under Android it took me the better part of a weekend to find the right solution, found somewhere other than the Android developer site.  The image gallery code turned out to be like two lines of Java to implement, but is almost criminally under-documented.

Screen Design: Apple – hands down – has a better screen design tool in Interface Builder than the screen design tools available in the Android SDK.  I wound up doing most of my screen design in the underlying XML for each screen anyway.  Having said that, creating screens that display properly in portrait or landscape mode is ridiculously easy under Android compared to what you have to do under iPhone.  And before someone says, “well, it just works under iPhone as well”… child, please.  Only if you finagle each design element individually in Interface Builder, subclass the tab controller, and capture device orientation for subviews that don’t respond to the “should change orientation” messages.  By a mile, this is something Android did very well compared to Apple.

Maps: this is something that Google failed miserably at, in my very humble opinion.  Under iPhone, you add a MapView and it works with just a little additional setup for scaling and setting your location.  I got my first MapView in an Apple app working in like 10 minutes.  On Android, you have to get the MD5 fingerprint of your keystore (where the certificate you sign your app with is stored) and apply for a Google Maps key.  And not just one key, but two – one for your debug keystore and one for your production keystore.  Which means you have to manually swap between the debug key (so you can see your maps on the emulator) and the production key (so you can see maps on real devices).  This is a total pain in the ass and one more thing to forget before staging an app going out the door.  All in all, it took me an hour of figuring this all out before getting my first map working on Android.

Permissions: Android has many permissions that can be set, many of which you don’t know you need until you try to run your app and have it fail inexplicably.  For example, internet access is disabled by default (on a mobile device? yeah.  I know).  While I understand the beauty of this type of control, and that they are being safe rather than sorry, having to discover by accident which permission is keeping my app from working is a not-so-pleasant experience.  The iPhone SDK really doesn’t have anything like this in terms of hobbling application capabilities on such a wholesale scale (push notifications being the one notable exception).

Application Manifest: Under Android, each screen (or Activity) has to be explicitly declared in the application manifest – more or less a map of your application’s permissions and activities – or your app will bomb when you try to invoke that screen.  I would say this is the cause of most of the accidental crapping-out that happens when putting apps together for the first time – forgetting to put a new screen into the manifest.  Under iPhone, you typically run into a similar situation by trying to load the wrong type of view controller into an improperly defined variable – but it’s not really the same thing.  Under iPhone, you know you need to load a view a certain way, and if you’ve defined the view, it should work.  It’s not at all intuitive that you have to remember to add your new screen class to the application’s manifest file.  Or else.
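A skeletal manifest makes the point.  This is an illustrative sketch (package and class names are hypothetical) showing the two declarations discussed above – the extra Activity and the INTERNET permission – both of which have to be present, or the failures described kick in:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.photoapp">
    <!-- Forget this line and every network call fails, with no obvious reason why -->
    <uses-permission android:name="android.permission.INTERNET" />
    <application android:label="@string/app_name">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
        <!-- Every additional screen must be declared, or invoking it crashes the app -->
        <activity android:name=".PhotoDetailActivity" />
    </application>
</manifest>
```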

Overall verdict: each SDK excels or fails because of the elements upon which they are built (amazing grasp of the obvious duly noted).

Because Objective-C is, at its heart of black hearts, just C with some objects bolted on, everything basically has to be built from scratch, especially tying UI elements together.  Because Android is Java-based, and most of the screen design and integration is “just done” for you, the time from zero to running application is much shorter under Android than under Apple.

My one huge complaint about Android is that a developer has to manually bring several pieces together to do anything productive, whereas with Apple it’s download and code.

In truth, if you’re a Java developer, you’re gonna be right at home with Android and curse the Apple fan boys.  If you’re an Apple fan boy, you’re gonna poo-poo the open source dweebs who will have to support all of the forks the Android environment will be challenged with as handset makers try to differentiate their features from their competitors’.

And if you’re a developer like me, who’s just trying to pay the mortgage, feed the kids, and keep tuition current, you just hope to stay even.

Video

Amigo Sorting Change

A Little More on NSDateFormatter

NSDateFormatter (as mentioned previously) is tremendously handy.

Unfortunately, it is rather sparsely documented.

I’ve had the opportunity to use it in a number of different projects; in each one, however, I’ve had to use a bit of trial and error to actually get strings to decode into a valid NSDate object – all the while wishing for better documentation.

Today, I ran across this blog post that does a good job of consolidating format strings for NSDateFormatter in one place.  I thought about adding my own comments to this post, but I believe the original does the topic enough justice.  Check it out, then see my version of a helper function which I use in the TweetPhoto iPhone App.

If you plan on writing an iPhone interface to any of the Social Media APIs (Twitter, FriendFeed, etc.), you’re gonna wind up (at some point) needing to implement some helper function for NSDateFormatter.

You Can Have It In Any Color – As Long As It’s Black

I’ve been banging my head up against a wall today over something that should be easy.

What was I trying to do?

Well, I’ve been working this week on a new app for TweetPhoto.com that will allow you to post photos from your iPhone to the TweetPhoto web service.  In fact I wrote about it here.

I thought I had one problem licked – sending geo tag information to the API call.  As it turns out, I had only HALF the problem licked.

Initially, I was using the phone’s CoreLocation services to determine location and upload that to the API.  This is totally cool when you are using the camera.

Not so much when uploading photos taken elsewhere.  Ouch.

OK.  No problem.  I know the phone stores EXIF (exchangeable image file format) information with the JPEGs it takes, so it should be a slam dunk to grab that from the image picker control, right?

Wrong, grasshopper (with props to the recently deceased David Carradine).

The image picker STRIPS all EXIF information from photos passed in from the camera roll, so you, Mr. and / or Mrs. Developer, are screwed.

OK.  I’m a fart smeller.  I should be able to figure this out.

Hey – what if I can find where the camera roll is on the iPhone, enumerate it directly and read the image files from there?

Oh.  Heavenly Days.  That is a GREAT idea.

So, I trot out this gem, feeling like I have the problem close to being solved:

-(void)getCoords:(UIImage *)image lat:(float *)latAddr lon:(float *)lonAddr {

    NSDirectoryEnumerator *enumerator = [[NSFileManager defaultManager] enumeratorAtPath:@"/var/mobile/Media/DCIM/100APPLE"];
    NSAutoreleasePool *innerPool = [[NSAutoreleasePool alloc] init];
    id curObject;

    *latAddr = 0;
    *lonAddr = 0;

    while ((curObject = [enumerator nextObject])) {
        if ([[curObject pathExtension] isEqualToString:@"JPG"]) {

            NSData *fileContents = [NSData dataWithContentsOfFile:[NSString stringWithFormat:@"/var/mobile/Media/DCIM/100APPLE/%@", curObject]];
            UIImageView *seeMe = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 480)];
            seeMe.image = [UIImage imageWithData:fileContents];

            // Scan the raw JPEG data for its EXIF block
            EXFJpeg *jpegScanner = [[EXFJpeg alloc] init];
            [jpegScanner scanImageData:fileContents];
            EXFGPSLoc *lat   = [jpegScanner.exifMetaData tagValue:[NSNumber numberWithInt:EXIF_GPSLatitude]];
            NSString *latRef = [jpegScanner.exifMetaData tagValue:[NSNumber numberWithInt:EXIF_GPSLatitudeRef]];
            EXFGPSLoc *lon   = [jpegScanner.exifMetaData tagValue:[NSNumber numberWithInt:EXIF_GPSLongitude]];
            NSString *lonRef = [jpegScanner.exifMetaData tagValue:[NSNumber numberWithInt:EXIF_GPSLongitudeRef]];

            // Degrees plus fractional minutes
            float flat = lat.degrees.numerator + ((float)lat.minutes.numerator / (float)lat.minutes.denominator) / 60.0;
            float flon = lon.degrees.numerator + ((float)lon.minutes.numerator / (float)lon.minutes.denominator) / 60.0;

            // South and West are negative
            if ([[latRef substringToIndex:1] isEqualToString:@"S"]) {
                flat = -flat;
            }
            if ([[lonRef substringToIndex:1] isEqualToString:@"W"]) {
                flon = -flon;
            }

            // Does the image match???
            if (seeMe == image) {
                *latAddr = flat;
                *lonAddr = flon;
                [jpegScanner release];
                [seeMe release];
                [innerPool release];
                return;
            }

            [jpegScanner release];
            [seeMe release];
        }

        [innerPool release];
        innerPool = [[NSAutoreleasePool alloc] init];
    }
    [innerPool release];
}

NOW I’m rolling.  This works GREAT.  I can read images, extract EXIF information… feeling good.

Until I realize that I have NO way to associate the files that I am reading directly from what the image picker returns to me.  Did I just say “shit” out loud?  Because that is what I’m swimming in.

The image picker simply hands back an UIImage with some editing information.  That’s it.

OK, OK, OK.  Maybe I can compare NSData elements… or UIImage elements against what I read from disk and what the picker sends back… so far, neither of those approaches is working.

And now I’m sitting here, realizing that I can accurately describe ANY image’s geo tagging information.  I just can’t pick a SPECIFIC image from the bunch, well, at least using Apple’s APIs.

I’m coming to the sad realization that I might have to write my own bastardized version of the image picker, at least for scrolling through the camera roll.

I ain’t skeered to do it.  I just wish I didn’t have to.

But that looks like that’s exactly what I’m gonna have to do.  Damn it.