Well, over the weekend, I made some progress on my struggle to get geo-tagged info from pictures on the iPhone.
I wound up writing my own stand-in for the camera roll picker, and it came out really nice – I was able to read pics straight from the phone’s DCIM directories, both camera pictures and screen grabs.
But more importantly, I was able to read the EXIF information from the photos, which the API picker strips from view.
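Just to illustrate what "reading the EXIF information" involves at the byte level (this is a sketch in Python for portability, not the Objective-C running on the phone): EXIF lives in a JPEG APP1 segment tagged with the ASCII string `Exif`, so you can walk the JPEG segment list and check for it.

```python
import struct

def has_exif(jpeg_bytes: bytes) -> bool:
    """Walk JPEG segments and report whether an APP1/EXIF block is present."""
    if jpeg_bytes[:2] != b"\xff\xd8":           # must start with the SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            return False                        # malformed segment stream
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                      # SOS: compressed data begins, no EXIF found
            return False
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True                         # APP1 segment holding the EXIF payload
        i += 2 + length                         # skip marker plus segment body
    return False
```

A real reader would go on to parse the TIFF structure inside that segment, but this is the gist of why a picker that re-encodes the image loses the tags: the APP1 block simply isn't written back out.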
Happy as I was, I soon found I had been deluded – any image I took with the image picker controller’s camera setting also had its EXIF information stripped when saved to disk.
After much pain (which I’ll detail in a future post) my solution was the following:
1) Write the picture taken with the camera to disk using the Apple API. This lets the API figure out what the next picture should be named (not as easy as just looking for the highest-numbered file – for example, what if there are no pics on the phone? You would think the next number would be IMG_0001.JPG, but the iPhone “knows” – somehow – what the last picture taken actually was. By calling the API, you get the phone to always tell you what the next file name should be).
2) Now that you have written a JPEG using the API (and, really, a 75×75 thumbnail JPEG alongside it), delete that image.
3) Write your version of that image, with the appropriate geo tags from the CoreLocation services.
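For step 3, the fiddly part is that EXIF GPS tags don’t store signed decimal degrees the way CoreLocation hands them to you – they store unsigned degree/minute/second rationals plus a hemisphere reference (N/S or E/W). A sketch of that conversion, in Python for brevity rather than the Objective-C used on the phone:

```python
from fractions import Fraction

def to_exif_gps(decimal_deg: float, is_latitude: bool):
    """Convert a signed decimal coordinate to the (deg, min, sec) rationals
    plus hemisphere reference letter that the EXIF GPS IFD stores."""
    if is_latitude:
        ref = "N" if decimal_deg >= 0 else "S"
    else:
        ref = "E" if decimal_deg >= 0 else "W"
    value = abs(decimal_deg)
    degrees = int(value)
    minutes_full = (value - degrees) * 60
    minutes = int(minutes_full)
    # EXIF stores each component as a rational; keep seconds to 1/100 precision
    seconds = Fraction(round((minutes_full - minutes) * 60 * 100), 100)
    return (Fraction(degrees), Fraction(minutes), seconds), ref
```

Those rationals then get packed into the GPS sub-IFD of the EXIF block you write alongside the JPEG data; the function and precision choice here are illustrative, not the exact code from this project.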
That is the 10,000-foot version of things. In reality, this little feat of mimicking the built-in camera controllers turned out to be a royal pain in the ass.
But at the end of the day, I can take pics, store geo tags, and read geo tags from the iPhone in a way that looks just like the documented UIImagePickerController interfaces do.