24 fps: Where Does It Come From?

Back in the day (turn of the last century) there were no such things as camera batteries or sound men. Men were men and cameras were hand cranked. As they had evolved from still cameras they were sort of still-camera Gatling guns, capturing still frames as fast as you cared to crank. Somewhere past 14fps something magical happened and persistence of vision started to fuse the images, so rather than a fast slide show it started to look like motion. So cameras were built to move one linear foot of film per two cranks, which meant if you cranked at “coffee grinder” speed you hit 60 feet per minute, which comes out to 16fps, just north of that 14fps effect. Cranking faster improved the persistence-of-vision thing, but producers didn’t like you blowing through all that expensive film, and besides there was really only one stock available and it was slow, about 24 ASA. Cranking faster meant less light per frame. Sometimes you cranked even less than 14fps to squeeze a bit more exposure out of it.
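
For anyone who wants to check the arithmetic: 35mm film runs 16 frames to the foot, so feet-per-minute converts to frames-per-second by multiplying by 16 and dividing by 60. A quick sketch (the helper name is mine, nothing standardized):

```python
# 35mm film has 16 frames per foot (4 perforations per frame, 64 per foot),
# so film transport speed in feet/minute converts directly to frames/second.
FRAMES_PER_FOOT = 16

def feet_per_minute_to_fps(feet_per_minute):
    return feet_per_minute * FRAMES_PER_FOOT / 60.0

for speed in (60, 80, 85, 90):
    print(f"{speed} ft/min = {feet_per_minute_to_fps(speed):.2f} fps")
# 60 ft/min = 16.00 fps, 80 = 21.33, 85 = 22.67, 90 = 24.00
```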

This is where it gets a little weird. Projectionists, who were also often hand cranking their projectors, had a habit of cranking faster. Faster meant faster turnaround in seating, which meant more $, and even better persistence of vision without annoying flicker. Sure, the action was sped up, but the whole thing was new and no one seemed to complain. In fact, by 1925 the Society of Motion Picture Engineers (now known as SMPTE) had codified it, recommending 60 feet per minute (16fps) for camera speed and 80 feet per minute (21.3fps) for projection. It seems weird now to pick a different speed for display than for capture, but to review: faster camera speeds burned through more expensive film, faster projection speeds made more money, and after all, producers were paying for everything.

proposed standard cranking and projection speed circa 1927 from SMPE

Anyway, someone decided it would be a great idea to add sound. How hard could it be? In fact, several companies tried to be the first to bring sound to the movies, hoping to capture the market. Funny thing is, they all insisted on capturing at the same frame rate they displayed at. If you didn’t, the pitch would be all wrong and everybody would sound silly. And forget about music. Some picked 80 feet per minute (the already established speed for projection), some picked 85 feet per minute, and some picked 90 feet per minute. The first one to get a working system was Warner Brothers’ Vitaphone. It was used in 1927’s “The Jazz Singer,” which was the first feature-length film with sync dialog and is considered the official start of the “Talkies.”

Western Electric’s Bell Telephone Laboratories (and their Vitaphone system) as well as other systems, listed with taking speed and projection speed (SMPE 1927)

 

The Vitaphone engineers had picked 90 feet per minute, or 24fps, as their capture and projection speed. If one of the others had been first, we easily could be shooting 21.33fps or 22.67fps as a standard today. So sometimes you get lucky.
Except the Vitaphone system was terrible. It sounded good, but that’s all that could be said about it. The sound was recorded on 16″ disk records separate from the film. They could only be played 20-30 times before they were no good, and they could break, so you had to send lots of duplicate disks with each roll of film to the projectionist. A disk only covered one reel, so at every reel change you had to cue up another record. And synchronizing the needle with the head of the roll was a pain in the ass. And if you broke the film for some reason and spliced it back, everything past that point was out of sync. During recording, the camera had to be motor powered from the mains, and the disks had to be made in a recording booth adjacent to the set. In fact it was such a bad system that it was abandoned 5 years after it was implemented. And it only lasted that long because all the theaters that wanted sound had bought into that technology and had these crazy phonograph contraptions connected to their projectors, and weren’t eager to throw them away just after having bought them. Movietone, which used technology that put the audio as an optical track on the film, had many advantages, but it was a little late out of the gate. Because Vitaphone was first, the engineers of Movietone decided to match the Vitaphone frame rate.

“Originally we recorded at a film speed of 85 feet per minute. After Affiliation with the Western Electric Company, this was changed to 90 feet per minute in order to use the controlled motors already worked out and used in the Vitaphone system.  There are a large number of both Vitaphone and Movietone installations scheduled and in operation, and sufficient apparatus is involved to make it impractical to change the present practice of sound reproducing.  In connection with the Society’s standard, I have been unable to find any New York theater which is running film at 85 feet a minute; the present normal speed is 105 feet and on Sundays often 120 feet per minute is used in order to get in an extra show”

Earl I. Sponable, Technical Director, Fox-Case Corporation, New York City (“Some Technical Aspects of the Movietone,” S.M.P.E. #31, September 1927, Page 458)

Soon enough Movietone lost ground as well, as technology changed, but all subsequent sound systems stuck with the now-established 24fps. So blame a sound man. Or thank him. Your choice.

One of the first sound men checking a Vitaphone recording with a microscope while recording. Sort of a human playback head. (page 308 from Transactions of S.M.P.E. August 1927)  It turns out this man is George Groves.

 

Postscript: Now of course, we often mean 23.976 fps when we say 24 fps.  This one we can’t blame on sound.  23.976 fps as a camera frame rate can be blamed on the introduction of color to standard-def television broadcasts in the 1950’s, and the death of film as a capture medium, and by extension the death of telecine as a post process.

When TV started, it did not match the 24 fps established by film.  This is because engineers wanted to use the 60Hz cycle of our 110V 60Hz household power to drive the frame rate.  60Hz meant 60 fields, or 30 frames per second, and was pretty easy to implement.  Once color came along in the 1950’s they wanted a standard that would be backwards compatible with black and white TVs.  Engineers could no longer use the 60Hz rate of household electricity to drive frame rates and still keep the color and luminance signals playing nice, so they settled on a very close rate of 59.94Hz.  This resulted in a frame rate of 29.97 fps, down from the previous 30 fps, something the black and white receivers would still work with.
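
The numbers work out to a simple ratio: the new rates are the old ones scaled down by 1000/1001, roughly a 0.1% slowdown. A quick sketch of the arithmetic:

```python
# NTSC color scaled the old rates down by a factor of 1000/1001 (~0.1% slower).
SLOWDOWN = 1000 / 1001

print(60 * SLOWDOWN)   # 59.9400... Hz field rate
print(30 * SLOWDOWN)   # 29.9700... fps frame rate
print(24 * SLOWDOWN)   # 23.9760... fps, the "24 fps" we shoot today
```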

Telecine: in order to get film onto TV you had to do a step called telecine.  The film was played back and captured essentially by a video camera.  Getting 24 fps to fit into 30 fps was done via a clever bit of math called 3:2 pulldown.   There are two fields to a standard-def frame, and thus 60 fields per second, and 3:2 pulldown would use one film frame to make three fields (1.5 frames) of video.  Then the second film frame made two fields (1 frame) of video, the third frame made 3 fields again, and so on.  Doing this, 24 fps fits quite nicely into 30 fps broadcast.  And anything shot at 24 fps but shown on the 29.97 fps system would look like it had been shot at 23.976 fps, even though the camera had been running at 24 fps, as anything that ran through the telecine went through a 0.1% slowdown to conform to the 29.97fps broadcast standard.  Somewhere in the transition to High Definition, 23.976 became codified as a standard, not only for broadcast but as a capture speed. As cameras were more and more digital rather than film, they would choose 23.976 as the actual camera frame rate, rather than shooting 24fps and expecting the 0.1% slowdown to happen upon transfer from film to video, as had happened in telecine rooms.  No telecine? No slowdown, which meant it had to be implemented in the actual camera speed.
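
If the cadence is hard to picture, here is a rough toy sketch (function name is mine) of how four film frames become ten video fields, or five interlaced frames; repeat the pattern and 24 film frames fill out 30 video frames every second:

```python
# 3:2 pulldown cadence: film frames alternately contribute 3 fields, then 2.
def pulldown_32(film_frames):
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    # pair the fields up into interlaced video frames
    return [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]

print(pulldown_32(["A", "B", "C", "D"]))
# [('A', 'A'), ('A', 'B'), ('B', 'C'), ('C', 'C'), ('D', 'D')]
# 4 film frames -> 10 fields -> 5 video frames, so 24 fps fits into 30 fps.
```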

So, hate 23.976 fps? Blame a sound man, color TV, the death of film, and the whole accidental way we pick our standards.

 

For those interested in reading more, I highly recommend the online records of the Journal of the Society of Motion Picture Engineers, made available by the Media History Project. http://mediahistoryproject.org/technical/

 

 

Why I hate UAV copters

Drones. UAVs. Octocopters. Call them what you want. They are the new disruptive technology in a lot of applications, but I am specifically going to talk about them as they apply to my industry, as a camera platform for dramatic, narrative, or commercial work. You can see the allure: they let you get shots that otherwise would be difficult or in some cases impossible via traditional methods. And camera movement is the best way to add production value to your shoot.
And I hate them. I hate them like I hate Steadicam. What’s that you say? Hate Steadicam? What kind of Luddite or backwards filmmaker are you? Let me explain. I do not hate the Steadicam device per se, and I completely agree that Steadicam allows for shots that could not be obtained any other way. I might even be convinced to use one some day. Here is what I hate about Steadicam: people act like it is the solution to everything and will make everything awesome. A Steadicam is not awesome sauce you get to spread all over your shoot. It has weaknesses, just like any camera platform. Let’s review. A Steadicam cannot provide a stable horizon on a static shot, especially after it has been moving, which is why you use Steadicam in the first place. There are operators who can mitigate this, but it is inherently difficult on this system, yet directors insist on designing shots completely blind to the weaknesses of the platform, and Steadicam operators struggle to make the shot work.
Another misconception is that Steadicam systems are fast. Tracks don’t need to be laid, it has the freedom of handheld, you can just go. The fact is often quite the opposite. Steadicam can cause the shoot to slow way down. First of all, the whole system of a Steadicam requires that the rig be balanced. This means a lens change, the addition of a filter, or adding a timecode box all require time out to rebalance the rig. If you are dealing with a shoot with only one camera body, going from tripod to Steadicam can be a very involved process and ties up the camera during that process.
Once you have the camera balanced and on the rig, another thing to keep in mind is that a Steadicam rig with a camera on it is quite heavy. Between the camera, the post, the counterweight, wireless transmitters, arm and vest, it can tax the best operator. This means the operator needs to park it on a stand or docking station when not actually executing the shot. This makes blocking and lighting the shot a bit more difficult as it is best done while the operator is wearing the sled, which you want to keep to a minimum to keep him or her fresh.
Also, as the camera has potentially 360 degrees of movement, lighting can be a challenge. Nowhere is safe, and lights need to either be rigged directly to the ceiling if possible, be hidden somehow, or travel with the camera. Again, all this can be done, but none of it is in the category of “fast.”
So, let’s review: I hate Steadicam because people think it is the secret sauce to make their shoot better but are completely ignorant of its weaknesses. There is one other thing I don’t like about Steadicam, and it occurs even when people understand its weaknesses, and that is the urge to do the “trick shot,” which is an exercise in “look what I can do” rather than filmmaking that drives the story. Sometimes you can do a trick shot and have it move the story at the same time, and people like me can enjoy both aspects of it. But showing off that you know how to use a tool doesn’t mean you have made a great story.
Flying camera platforms may not be Steadicams, but they might as well be. They do and will give you shots that otherwise would have been at the very least difficult, or possibly impossible. And those shots have the potential to be amazing. And just like Steadicams, people will misunderstand and assume that as long as you use a drone the shot will therefore automatically be amazing. Misunderstanding the tool you are using will result in wasted time, frustrated crew, and mediocre filmmaking, just as it always has. But with drones there are two new aspects. One is something that is generally happening with all gear in the industry, something that optimistically is called the “democratization of filmmaking” but in practical terms means that good working gear can be purchased for prices closer to a car than a house. There is good and bad in this, but one side effect is there are a lot more players in the market. Generally this shakes out the way it always has: those who have skill, or the potential to learn skills adeptly, end up on top. But without money being the gatekeeper it once was, the entry level of the market is crowded, like the beginning of a marathon.
Drones especially seem to fit this category. A few years ago the technology just wasn’t there to make a working drone at any cost. Now the parts and information are out there, and a professional rig can be built from parts ordered online at a very reasonable cost. In fact, turnkey solutions even exist under $800. Back in the ’90s a Steadicam probably cost upwards of $40,000-60,000, and that didn’t include the camera, just the platform. So a lot more people are getting into drones than were ever into Steadicam. Drones are so new that there are no “old hands” at it. Everyone is at the start of the marathon and it’s crowded.
The other new aspect to drones as a camera platform is the safety issue. This is what really makes me dislike them. Back in the day, a careless Steadicam operator could possibly hurt themselves, damage their rig and the camera, and possibly the nearest person, be that an assistant or actor, although this was quite rare. I know of no stories of this happening directly, although I always think of the emergency ripcord on the Steadicam vests of guys I would assist for, which when pulled would cause the vest to split open and fall away, allowing the operator to shed the rig in seconds in case of a catastrophic event like falling into a large body of water with 80lbs of gear strapped to them. Again, I never heard of anyone having to exercise that option, but it was there.
Drones, on the other hand, often are 20-40 pounds of flying danger, often with eight very high speed, sharp rotors driven by high-energy, high-capacity lightweight batteries, all controlled wirelessly. Often built from scratch by the operator. Some of them have fail-safes, where if wireless control is lost, they will return to the original launch site and descend. That’s great, but only if those automated systems are solid. Again, many of these things are being built from scratch, with the code written or at least tweaked by the builder. If the drone loses flight stability, be it from a large gust of wind, operator error, or hardware or software malfunction, you have a potentially lethal falling object that can kill you and others, either by plain blunt trauma from 20lbs falling on your head, by cutting you open with the eight high-velocity Ginsu knives it uses to fly, or by burning you when one of the high-capacity batteries ruptures and spews a jet of flame and energy. Look on YouTube and you will find several UAV/drone failures, often triggered by a gust of wind, and possibly complicated by navigational hazards like nearby buildings the drone can hit on its way down so that its structural integrity is compromised well before it hits you. Now imagine that the price of entry is so low, people with only a passing interest get into it. Before you know it the sky is dark with flying lawn mowers driven by mediocre do-it-yourselfers who think they have the secret sauce to awesome filmmaking.

This is an evolving topic, and the good news is that there has been some attempt to regulate them in a way I approve of. Up until recently there was a big question mark on whether all kinds of drones were illegal, and where the FAA stood on it. It was like the Wild West. It seemed like before the rules got codified it was an “anything goes” approach, which seemed very dangerous to me.
Making them illegal seemed untenable. They were so cheap and offered such allure to so many people that enforcement seemed almost impossible. Also, if they were simply illegal, there would be no regulatory control over them. Just this month the FAA has been authorizing individual companies to be certified for flight, exempting them from normally required regulations as long as they fit a certain category of flight, including flying only over a “sterile” environment, i.e. the controlled set. Licenses, permits, and special rules are the way to go. And prosecution of those who refuse to play by the rules. Individual drone operators need to apply for “certification” in order to be legal. This is because the technology is cheap, readily available, and dangerous.
Drone camera platforms need to be safe, legal, and somewhat rare. I don’t hate drones as much as I hate the idea of people flying homemade, unregulated rigs over my head because that will somehow make the shot “cool.” By making them sensibly regulated, they will (in most cases) be operated by sensible, trained operators, and only when they are the appropriate tool for the job.

P.S. Don’t get me started on MoVIs or other handheld gimbal systems.

How to Quiet a Noisy Dragon

My last post about my quick test with the new Dragon sensor had a bit of a surprise, with the Dragon footage looking noisy, especially compared to the previous non-Dragon MX chip.  RED suggested that a fix was on the way, and soon.  They hinted at around a week.  That was June 21st.  My rule of thumb for RED target dates for delivering a product is: take the stated time period, double it, and add two months.  So, in this case that would mean somewhere around the first week of September. Well, RED beat the odds, while still completely missing their target of a week, and on August 6, 2014, a mere 7 weeks later, they released the fix.   The fix is a different way to debayer the RAW footage, selectable in their new beta release of REDCINE. The feature is called “DEB” or “Dragon Enhanced Blacks,” although it could easily be called the “Anti Red Speckle Filter” as it gets rid of the red noise. It is a checkbox you select right below the Gamma settings in REDCINE.

DEB checkbox just below Gamma Settings in REDCINE.  Here it is not selected, the current default.

The good news is it is retroactive and can be applied to footage you have already shot.  This is great for my purposes as I don’t have to re-do the test.  Here are some quick screen grabs.

same shot from prior test, with DEB applied and not.

RED Epic MX on left, Lower right is Dragon with DEB applied.

So it definitely improves things. Further testing is warranted though. I hope to test it against a different camera, like Arri Alexa or Amira in the future.  RED is talking about making user swappable OPLFs but no time frame on that yet.  Once they do put a date on it, don’t forget to double it and add two months!

Epic MX vs Epic Dragon.

I just recently got my RED Epic back after RED installed the new “Dragon” chip.  I borrowed an Epic that still had the MX chip and shot side by side tests to see how much better the Dragon chip actually was.  I found a few surprises.

First off, I updated both cameras to the current release build 5.1.51 and black shaded both of them after the cameras had reached operating temperature, with the fan in adaptive mode at 65C.

The weather was a bit unsettled (as it always seems to be when I have time to do these tests), so I decided to put a 35mm RED Pro Prime on the Epic MX and a 50mm RPP on the Dragon and move the cameras a bit to mimic the same frame size in the midground, so I could roll both at the same time and exposure would be identical.  I set them for the same stop, which if I recall was something like f/8 and a third.  According to the false color overlay, the Dragon had more info in the highlights before clipping.  In fact a small patch of white sky which clipped on the MX was apparently not clipping on the Dragon. I shot myself in my garage workspace, which has diffused top light and a nice window for testing overexposure.  If I shoot in the afternoon there is a piece of an apartment building across the street that gets hit with sun, providing an excellent detail/overexposure test element.  People have complained in the past about RED’s rendition of skin tones.  I think this is wildly overblown.  I have never had an issue with it, at least under daylight color temps, which is how I shoot the majority of my stuff.  The Dragon is supposed to be much better.  I did not notice much of a difference, nor did I see any problems with the old one.

One note is that with the new chip come some new settings in REDCINE, the app that lets you “develop” the RAW footage from the camera.  Before Dragon there was “REDcolor 3” for color rendition and “REDgamma 3” for gamma.  With Dragon there now are “Dragoncolor” and “REDgamma 4.”  While I think you should use Dragoncolor for rendition off a Dragon chip, don’t be fooled into thinking REDgamma 4 is automatically better.  It tends to be a bit crunchier than REDgamma 3, which under normal circumstances helps make a punchier image, but when testing over/under exposure range it may fool you into thinking the Dragon chip has even less range than the regular Epic.  Of course, professional graders probably would just use REDlogfilm (which is flat and holds onto the most range) and make their own curve based on the scene, but in this case I wanted to do a relatively unbiased side-by-side test.

screenshot from REDcine.

Below are some interesting screen grabs:

COLOR RENDITION:

Side by Side Dragoncolor Vs REDcolor3

Dragon on the right, Epic-MX on the left. Dragon set to Dragoncolor, Epic MX set to REDcolor3.  More detail in the Dragon, and slightly less red tint to the skin.

HIGHLIGHTS:

Now here is a detail of the highlights: Dragon on the right, Epic on the left.  Both set to REDgamma3. Slightly more highlight detail on the Dragon chip, as expected.

RG3 Dragon vs MX

Don’t be fooled into using REDgamma 4 just because it is newer.

DRAGON RG4 VS EPIC RG3

Here is the same footage, but with the Dragon chip rendered at REDgamma4.  It is very hard to tell which chip has the advantage in this scenario. The brick looks about the same, maybe a slight advantage to the Dragon, but the leaves look hotter on the Dragon.

Now here is the same thing in REDlogfilm. You will see the Epic MX chip clips to magenta, showing the white is clipping.  Not so on the Dragon, as you might expect.

Epic-MX vs Dragon in REDlogfilm

BIG SURPRISE: NOISE LEVEL:

This is Epic MX on the left and Dragon on the right, both at 5600K, 800 ISO, 8:1 compression.  Dragon at 6KHD and Epic-MX at 5KHD.

 

This was the big surprise. The Dragon Epic was sharper, as was to be expected since its max resolution is higher than the MX’s (6KHD in this case vs 5KHD), but there was more noise and pattern in the Dragon footage.  This was alarming, as a year ago the Dragon chip had been advertised as being as quiet at 2000 ISO as the MX chip was at 800 ISO. What the hell? I paid over 8 grand to get MORE noise?

I went over to REDUSER.net and found someone else had done essentially the same test I had done and gotten the same results. If you are into flogging yourself, here is the link:

http://www.reduser.net/forum/showthread.php?117701-DRAGON-vs-EPIC-MX-Noise-NOT-GOOD

CONCLUSION:

Anyway, the long and short of it was this: it is the new OPLF, along with black shading, that is to blame.  When Dragons first started shipping to anybody (maybe December 2013?) they had what I will call version 1 of the OPLF (Optical Low Pass Filter), which is essentially a piece of custom glass/filter in front of the sensor proper to improve performance, reject IR, and limit moire.  All cameras have some kind of OPLF.  After logging some hours with it in the hands of users, the following characteristics were found.  It did in fact seem pretty clean at 2000 ISO.  But there were weird magenta flares, which were very visible in low light situations.  A new OPLF was designed that got rid of the magenta flare, improved highlight performance by at least a stop (I have no idea how they did that, possibly in conjunction with a new black shading algorithm?), and had even better IR performance.  Downside? Now 2000 ISO is noisy.  And the whole thing blew up on REDUSER once everyone was getting new Dragons in numbers with the new OPLF.

The good news is that RED will, if you wish, put the old OPLF back in your camera.  And they claim to have a firmware build in the works (don’t they always?) that will tweak the black shading implementation to address this problem, and that it will come out soon, possibly in a week.  But knowing RED that could mean a month.  They claim that the new OPLF brings so much to the table that they think the future is in the new OPLF with software tweaks. There even has been talk of user-replaceable OPLFs in the future. I can confirm that the IR performance of this new arrangement is quite good.  I had a shoot with an ND 1.2 and a pola (6 stops of ND, essentially) and a black dress Marine uniform in full sun, and the uniform stayed black.  Unfortunately I had shot it at 2000 ISO because I had believed that 2000 was the new 800, and that shooting at a higher ISO protected your highlights more. This had been true of the RED One, but is no longer true of the RED Dragon.

Also, I rendered both the Dragon and the MX footage to 1080 ProRes files.  The noise difference was indiscernible, which was a pleasant surprise.  I did not use any special noise-reducing options on either.  So for the moment I am content to wait for the new firmware and black shading.  Even if it makes all the testing above obsolete.  But as always: test, test, test.  Sometimes you find a surprise.

 

ProRes Camera Test: BMPCC, F3, F5, and RED Epic MX

Recently I decided to test some of the cameras I often use and/or have access to.  This included the Blackmagic Pocket Cinema Camera, a Sony F3, a Sony F5, and a RED Epic MX.  Obviously these cameras record natively in different codecs and resolutions, but I decided to even the playing field somewhat by using an external recorder (a Convergent Design Odyssey 7Q) so that everything was recorded in ProRes HQ, except for the Pocket Cinema Camera, which both records ProRes HQ natively and only outputs via HDMI (Highly Dodgy Media Interface), which I hate and prefer not to use. In all the tests I used the same lens, a RED 18-50mm short zoom, which is PL mount so it fit on all the cameras, all of which had PL mounts on.  Only for the Pocket Cinema Camera did that significantly change the field of view, as it has more of a Super 16mm sensor size compared to the 35mm sensor size of the others, which I compensated for by zooming out to approximate the same field of view.  The RED Epic of course can natively record up to 5K, but in this case I was just recording the 1080 output from a 4KHD recording, as 4KHD most closely matches the 35mm format size.

Also, I did not record LOG on the F3 or F5, or output REDLog on the Epic.  I did do both “Film” and “Video” modes on the Pocket Cinema Camera, but I did not run them through Resolve and color correct them as I might if I were actually using the camera.  I wanted to look at the footage without doing any alterations other than what I might do in camera.  For the F3 and F5, which have scene files that affect the look of the camera, I arbitrarily picked Abel Cine’s JR45Cine scene file for both, anticipating that those would be the scene files I would use were I to shoot with these cameras.  For the Epic, I usually shoot with saturation set to 1.2 rather than the default 1.0, and that is what I did here. White balance was preset 5600 for all cameras. It was a somewhat unsettled day weather-wise, so take any ability to hold detail outside the window with a grain of salt, as these shots were not all done simultaneously, since I was using one lens for all the cameras and had only two tripods in play.

Click on any below for full frame.

No surprises here.  The Epic and F5 look the best.  The F5 skin tones look a bit too red, but the whites look a little truer, I think. The Blackmagic Pocket Cinema Camera looks like it needs to be graded some whether you shoot Video or Film mode, and therefore makes me think that if I am going to have to grade it anyway to make it look its best, I would shoot in “Film” mode in the future and take it through Resolve.

Below are enlargements of the focus target. Again, no surprises.  The Sony F5 and RED Epic are the best for resolving power and lack of moire.  I was slightly surprised at the F3 moire, and not surprised but slightly disappointed in the Blackmagic Pocket Cinema Camera’s sharpness performance. Again, this is off a screen grab from the ProRes HQ 1080 recording.

If I were to pursue this test further, I would explore scene files on the F3 and F5 to see if I could get truer skin tones out of them while maintaining their range.  On the RED I might play with white balance a little bit, maybe a little lower, to get rid of some of the tones that seem a bit too warm.  The next test I had planned, though, was to test my RED Epic after it had its sensor upgraded to the “Dragon” sensor.

Christmas cards and vintage stereo cameras

My 2013 Christmas card photo. Anaglyph glasses required

My current obsession with vintage stereo cameras really caught fire because of my 2013 Christmas card.  I had been interested in stereo cameras for years, dabbling here and there. I also like to tinker and do things with my hands.  I had done a staged, lit Christmas card in 2012, but it was a very stressful experience, what with nice clothes, herding cats, and my then-pregnant wife and kid.  Everyone was grumpy for hours afterward.  I vowed 2013 would be different.  Something like an impromptu snap of the family.  But how could I still have something special about it?

Due to my interest in old stereo cameras, I had two Russian “Sputnik” medium format stereo cameras.  One working, and one not, mainly for parts.  But I hadn’t used them in a few years. I also had some expired black and white medium format film in a drawer. Snow was forecast later that week, which is unusual in mid December in the DC area. I hatched a plan. I would take a picture in stereo of us outside the house either during the snowfall or shortly after. I planned to make the Christmas card an anaglyph stereo print, which means the print would require viewing with “those funny glasses.”

There are several ways to view a stereo image, but they all boil down to one thing: sending the left image to the left eye and the right image to the right eye, while also blocking each image from the non-corresponding eye. Anaglyph is the cheapest way to do it, as I could get cheap paper glasses for 20¢ each and include them with the picture in the envelope. The anaglyph system uses color to control which images reach which eyes. The most common form today is red/cyan anaglyph. You have the red filter over your left eye and cyan (the opposite of red) over the right eye. Take the left image, destined for the left eye, and tint it so there is no cyan in it, which will make it appear red. The red filter will do nothing to the red image. The cyan filter on the right eye, however, will only let cyan through, and as there is no cyan in the left image, it will appear as a black frame to the right eye. Do the opposite to the right image, and you have an anaglyph print! It will look like a poorly registered color print without glasses, but 3D with the glasses. Anaglyph is well suited to black and white images, as it uses color to encode the stereo information. You can do it with color images, but color reproduction may suffer a little. I mainly used black and white because a) it is better for anaglyph and b) it was what I had on hand. With film losing ground as a format, I suspect medium format stock is only available via mail order now, and this whole idea hit me inside of a standard weather forecast, so ordering fresh stock seemed unlikely to work out.
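
For the digitally inclined, the same channel juggling is easy to do in code. Here is a minimal sketch, assuming Pillow and NumPy and a pair of already-aligned left/right scans (the filenames are just placeholders):

```python
from PIL import Image
import numpy as np

# Load the aligned left and right scans (placeholder filenames).
left = np.asarray(Image.open("left.png").convert("RGB"))
right = np.asarray(Image.open("right.png").convert("RGB"))

# Red/cyan anaglyph: the left image supplies the red channel (seen through
# the red filter on the left eye), the right image supplies green and blue
# (seen through the cyan filter on the right eye).
anaglyph = np.zeros_like(left)
anaglyph[..., 0] = left[..., 0]
anaglyph[..., 1] = right[..., 1]
anaglyph[..., 2] = right[..., 2]

Image.fromarray(anaglyph).save("anaglyph.png")
```

Sliding one image horizontally before combining is what sets where subjects sit relative to the frame, which is the alignment step I describe below.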

I had never made an anaglyph before, so there was a reasonable possibility this would not work out. But the deadline and a goal became an incentive to teach myself how to do it. And if it didn’t work out, I would always have one half of the stereo pair, which could serve as a “normal” picture for the Christmas card. I had the camera and film, but I would need more elements. I found a source for cheap glasses and ordered them. I investigated getting Photoshop Elements, which, shockingly, I didn’t already own. I decided not to buy it until I got a good “negative report” from my shoot, i.e. a picture actually worth printing, not one ruined by poor content or a malfunction of the camera or its operator (me!).
The day came, the snow arrived on schedule, I loaded the film, got the camera on a tripod, and waited for the best moment. We went shopping for a Christmas tree up the street. We struck out on getting a tree, but we did get a wreath. When we got home, since everyone was already dressed for outside and it was still snowing, I got everyone to stand in the front yard, trying to compose a shot that was both well framed and made use of the 3D space. This meant my daughter, who is shorter than my wife, became the foreground element. My wife, the midground. And I, with my son on my shoulders, would be the background. There were two wrinkles to this. Although I had 400 ASA Tri-X black and white film, it was pretty overcast due to the snow, and I ended up shooting at around f/5.6, which meant we couldn’t hold focus over the whole depth of the scene. I picked a point a little past my daughter as the focus point and hoped for the best.

The second problem to be overcome was the twitchiness of the Sputnik camera I was using. The viewing system takes some getting used to, the film advance system is about as analog as you can get, and worst of all, the shutter trigger and timer were very hit or miss. First you have to cock the shutter. Then pull the timer lever down. Then push the shutter release. Doesn’t sound too bad. Except the shutter timer lever doesn’t wait till you hit the shutter release to start running, a common problem in Sputnik cameras. So you pull the timer down and let go and it immediately starts running. If you don’t cock the shutter before you start it AND trip the shutter after you have started the timer, it won’t take a picture. If you hit the shutter release in the wrong order, you take a picture without benefit of the timer. The only good thing is you can always re-set the timer after you have triggered the shutter, as the shutter won’t fire until the timer runs out. So: cock shutter, start timer, trigger shutter, pull timer back to max, then run as fast as you can in wet snow with your 11-month-old on your shoulders to be in the rearmost part of the frame.

Fortunately all that went wrong was early firing of the shutter. The roll had 12 exposures, which meant 6 stereo pairs. Two were ruined by early triggering of the shutter, two were screwed up by me misunderstanding the manual advance, one was OK, and one was good. A word on the advance: it is completely manual, to the point that the only way you know you have advanced it properly is a little window on the back of the camera that lets you look at the paper backing of the film, which has a number on it when you are in the right position. If you overshoot, oh well, you just have to advance to the next number. Also, since you are shooting stereo, you have to advance it by two, at least when you don’t overshoot your target number. So if your first exposure is “1” you have to advance it to “3” to be ready for another stereo pair.

this is what happens when the shutter timer fires prematurely.

I scanned the negative, brought the shots into Photoshop Elements, aligned them, and applied the red/cyan anaglyph tints to the images.  When aligning them you want the verticals to align perfectly, but the horizontal offset determines where in the 3D space the images lie.  I put my daughter slightly forward of the frame, so she appeared to “pop” out slightly, and the rest of us fell deeper into the frame. It turned out quite well, overall.  Based on this experience, and others, my tips for medium format stereo photography are as follows:

I love medium format. The negative is 4-5 times bigger than comparable 35mm (purpose-built 35mm stereo cameras use a non-standard 35mm frame that is a bit smaller than the standard 35mm frame). This means any dirt on the negative when you scan it can be fixed relatively easily and is minor compared to 35mm.  Also, even with old Russian triplet lenses, the image has a lot of detail, and detail is important for good stereo photography.  The downside to medium format is that for any given field of view you end up using a longer focal length lens.  The Sputniks use a 75mm lens (OK, a pair of them!), which in 35mm format would be telephoto.  Not so in medium format.  The side effect is a shallower depth of field.  Normally that is a desired thing for portrait photography, but in stereo photography you want as much in focus as possible.  So ideally you want to be shooting more like f/11-22 if you can do it.  But you also don’t want a lot of noise, so you don’t want too fast a film stock.  So this generally means outdoor photography, or being willing to deal with some out-of-focus elements.
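
To put a rough number on that, here is a sketch using the standard hyperfocal/depth-of-field formulas with an assumed circle of confusion of 0.05mm for 6x6 (the function and the 7-foot subject distance are my own illustration, and the exact values depend on print size and viewing distance, so treat it as ballpark only):

```python
# Rough depth-of-field calculator for the Sputnik's 75mm lenses.
# Assumes a 0.05mm circle of confusion for 6x6 medium format (a common
# ballpark figure, not gospel). All distances in millimeters.
# Formulas assume the subject is closer than the hyperfocal distance.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.05):
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near / 1000, far / 1000  # return meters

subject = 7 * 304.8  # roughly 7 feet, about where my daughter stood
for stop in (5.6, 16):
    near, far = depth_of_field(75, stop, subject)
    print(f"f/{stop}: sharp from about {near:.2f} m to {far:.2f} m")
# At f/5.6 you get well under half a meter of sharp depth; at f/16, over a meter.
```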

Also, here are some tips for framing.  These Russian lenses are not really very good at the edges, so don’t put anything too important at the edge; generally put your most important element more towards the center.  This is also good practice for stereo photography.  For reasons I won’t get into too much here, anything you want to have pop forward of the frame should not break the edges of the frame.  There is a little flexibility with the bottom edge, but on the top and sides, if anything breaks the edge it is going to have to play deeper than what I will call the zero plane, which is where the frame sits in the 3D space. This is another reason to put anything in the foreground in the center, or near the center.

Also, contrary to what you might think, you don’t necessarily want to try and use your entire potential depth.  Just because you are outside doesn’t mean I should see everything out to infinity.  This is especially true if you want a fairly close foreground element.  Too much range in depth can put background elements so far out of alignment that the eyes have trouble fusing them back into a 3D image.  You get ghosting and loss of the 3D effect.  In this picture the house and plants define the background at maybe 30-40 feet maximum, which is good, since my daughter is maybe 7 feet from the camera.

All in all, it came out quite well.  I got my daughter to pop out a bit, and the edge violation of her legs is not bad enough to break the 3D effect. I staged my wife, myself, and my son on different planes of depth, and had the background defined by the house, limiting my total depth budget.  I used as much of the frame as possible, although if I had to do it over again I would not have framed in the gate.  It is more forward than my daughter, and is on the edge, so there is a bit of edge violation and retinal rivalry there.  Some of that can be fixed with a trick called a “floating window,” but I didn’t have the time or the inclination to learn how to actually create one in time for the card. See for yourself if you have glasses and see if the gate gives you some problems: your brain doesn’t know where to put it in the scene.  Only my daughter is in sharp focus, which is technically not so desirable in 3D, but it didn’t bother me as much as I thought it might.  Also, I had no choice in the matter, as I was limited by my film stock and ambient light as to what f-stop to shoot at.  All in all, not a bad “snap” for a family photo, and my first serious attempt at actually executing a staged stereo photo.

 

 

Using a Hole Saw On my Apple Airport Time Capsule

Base of my Airport Time Capsule and me freehand drilling a hole in it.

A few weeks ago my Apple Airport Time Capsule up and died. The little green light went out and it just stopped working. For anyone who doesn’t know what an “Airport Time Capsule” is, it is a combination wi-fi router and wireless backup device. So when it went dead, I lost wi-fi in the house. Anyway, I was tempted to buy a new one at $300 a pop, but then I did a little research. Turns out these things have had a bad track record of going belly up, and the culprit generally is the power supply cooking itself to death, mainly because these devices don’t have adequate cooling. The components get overheated and over time you eventually have a failure. So I rolled the dice and spent $15 on a “repair kit” on eBay to fix it. We had a secondary wireless router that I was able to press into service in the meantime, so it wasn’t a crisis at home.

A few days later the repair kit came in the mail, consisting of $5 in capacitors and instructions on how to effect the repair yourself. I could have researched the components myself, but this seemed a fair price for the intellectual property on how to do it. I had failed to realize when I bought the kit how deep into the guts I had to get to fix it. This was no plug-and-play repair; I had to open up the power supply and solder new capacitors onto the motherboard. No matter. I was game.

Getting My Tools Ready

Getting My Tools Ready

Sure enough after following all the directions (Do not try this without following someone’s directions, or at the very least let it sit unplugged for several days or manually discharge the capacitors so you don’t kill yourself) I found the offending capacitors. There were 4 in the replacement kit, but since my soldering skills are lackluster I opted for just replacing the ones that were clearly bulging and failed, which in this case meant the pair at the bottom of the board. I now recommend anyone doing this to just replace all four capacitors. The other two are easy to replace and even if they haven’t failed yet, I guarantee they are out of spec. But more on that later.

Motherboard of Power Supply. My finger pointing to the pair of blown capacitors

 

After some cursing I managed to get the old ones removed and new ones installed. I carefully put the whole thing back together, went upstairs, plugged it in along with all the Ethernet cables, and after a minute the light went from amber to green and, lo and behold, the system was back up and running.

Also, now that I had the system cracked open it was a simple matter to pull the existing 1TB drive in there and replace it with a 2TB drive. Literally unplug two cables and peel off a heat sensor attached with adhesive. The drive itself is not secured in any way. It just has nowhere to go, and I guess Apple assumes these things aren’t going to be moved around a lot while running. There was a Western Digital Black drive in there and I went with a new WD drive as well. You have three choices: a WD “Green” drive, which uses less power; a “Blue” drive, which is for general computing; or a “Black” drive, which is more “enterprise” class and will stand up to heavy use. It makes sense Apple would put a Black version in there. The thing is on 24/7, as it is also your wireless router, and the drive is not user serviceable. Or at least they discourage it. My feeling is that as this thing is really only a backup and not a NAS, I don’t care too much if the drive goes bad in two years or something; by then drives will be even cheaper, and I now know how to get into it, so replacing it is a minor inconvenience. So I went with Blue as a replacement. A very reasonable choice would have been Green, as the device has a known problem with heat dissipation, and a drive that conserves energy also generates less heat. Also, any performance loss from a Green drive not spinning as fast is going to be invisible due to the bottleneck of wi-fi anyway. I didn’t put in a Green because it cost $10 more and I am a cheapskate.
So now I have a new drive in the Time Capsule and a functioning power supply. Time to put it all back together. I plug everything back together and then screw the aluminum bottom back on. Now, I didn’t explain how I opened the Time Capsule, so I will review the first steps to get in. You get in from the bottom. The big featureless rubbery bottom that keeps it from sliding on whatever it sits on is glued directly to an aluminum base. With a good hair dryer or heat gun you can peel back (slowly!) the whole rubbery foot, revealing a cheese plate aluminum base with a bunch of small screws recessed in it that hold the base to the rest of the unit. So, here I am putting this thing together, thinking that all I did was fix the symptoms, not the cause. What is to prevent this thing from overheating in a year or two and me going back in to solder in new components? I really don’t want to put this suffocating rubber mat back on top of this nice thin aluminum heat sink with a bunch of ventilation holes in it. And since heat rises, I decide to just put the aluminum base on, call it the top, and set the Time Capsule upside down so the aluminum can dissipate heat off the top of the unit. A bit ugly, but hey, it works.

Can you say “heat Sink?”
Upside down and no rubber on the base

Apparently the rubber base is “Thermal Rubber” or something and unlike regular rubber which is an insulator, this rubber does conduct heat. Or at least that is Apple’s claim. Even so, it can’t be as good as aluminum with a bunch of holes in it.

Further Mods:
After feeling pretty smug, I do a little research on other people’s solutions for cooling. Well, it turns out the Time Capsule is “double insulated,” which is why the plug doesn’t need a grounding third pin. But one of the rules of double insulation is there can be no exposed metal that the user might touch. That way, even if there is an electrical fault, if say, I don’t know, some guy decides to do a homemade repair on the power supply, nobody is going to get juiced touching the outside. At this point I have visions of one of my cats stepping on the damn thing and getting full current, and at best, a dead cat, and at worst, a house burnt down. So I decide it is time to revisit my cooling technique.
After doing some research on other blogs I find that several people have been doing one of two things: removing the power supply entirely and putting in a 3rd-party external one, which improves cooling by quite a lot, or modifying the fan. I opted for the second choice.

If you are thinking of modding your time capsule, I strongly suggest you read these websites of guys who have done it before and from whom I learned a lot in prepping mine. Both of them either sell kits or will do the repair for you.
http://www.fackrell.me.uk/
https://sites.google.com/site/lapastenague/time-capsule-power-supply-repair-kits

First, let’s talk about the stock fan placement. It butts up directly against the hard drive. It appears to suck air up and then blow it to one side, directly onto one portion of the hard drive. I am no engineer, but wouldn’t you want the fan to blow on the hottest component? Which in this case is the power supply, hands down. And unlike the hard drive, which is fairly sealed, the power supply is insulated on 4 sides with two ends open, sort of like an open-ended burrito, so you could easily blow air through it. You just need to rotate the fan 90 degrees to get it to blow in the right direction. But I guess it doesn’t really matter, because the fan doesn’t even come on unless there is a near meltdown in the device. One possibility I didn’t fully explore is moving the temp sensor that is on the drive to the power supply, but that is mainly because I decided to follow the advice of other hackers who disable the MOBO’s control of the fan entirely and just make it spin at a low level all the time. Again, more on that in a little bit. But first the fan mod.
So, the plan is to rotate the fan 90 degrees. Turns out the best way to do this is to remove it from the aluminum bottom cheese plate, flip it over, and THEN rotate it so the exhaust points at the power supply. This gets a little involved. First let’s talk about the stock ventilation on the Time Capsule.

New Fan Placement: Notice exhaust now facing to the right (again, where the foam is) and that the fan had been inverted with the text now facing us.

In standard Apple procedure there appears to be no ventilation whatsoever. This is not the case. The thin groove along the upper part of the side hides the upper ventilation ports. And if you look very carefully at your rubber foot along the edge there are some holes in the aluminum base along the edge that are open to the air. So, there IS ventilation, just very minimal, and mostly passive.

the ventilation of the Airport. The holes along the side are hidden but not plugged by the rubber base that is currently removed. The exhaust ports are hidden in the seam around the device, where the black screwdriver is pointing. Apple went through great pains to hide any visible cooling elements.

 

The fan only comes on in emergencies. And the fan has no direct access to any of the vents. It just cycles air around and I guess they hope convection moves hot air out the top and in from the bottom. So mostly passive. But since it is Apple I would call it more “passive aggressive” cooling.

Since the Time Capsule has failed once due to its heat load, it’s time to put the equivalent of a hood scoop on this thing and get that air moving. Flip the fan over, point its exhaust at the power supply, cover one of the intakes of the fan with a paper “plug” to force it to draw from only one side, and cut a hole in the chassis of the Time Capsule so that the fan has access to the outside world. Plus the mod to make it spin all the time. All righty then.
The fan is held to the aluminum cheese plate by rubber insulator/suspension “feet.” Some people have suggested removing them, flipping them, removing 5mm from them, and crazy-gluing them back together to make everything fit properly, as the other side of the fan is a different thickness. Way too fiddly for my taste. I just flipped the fan, left the rubber feet on, which now contact a circuit board and keep the fan off of that, and used silicone as the glue, insulator, and vibration absorber all rolled into one to attach it to the aluminum cheese plate. This is so much easier. All I have to do is cut a hole in the chassis. I mark the center of the three points where the rubber feet go through the cheese plate, find a suitable hole saw (I was determined not to buy one for this job, so the hole was going to be “best available” size), and start drilling.

I want a matching hole through the rubber base, but I don’t want a bunch of metal shavings stuck to the remaining adhesive on it, so I put a layer of wax paper between the aluminum and rubber. I make a nice neat round hole and file the edges so it is smooth. The hole through the rubber is pretty smooth too, and cleans up nicely. And the wax paper did a great job of keeping the metal shavings off of the glue.

Is this a bad idea?

I am feeling pretty proud of myself, and my neat round hole. I then dry fit the pieces together with the fan in the new position. Did I mention I didn’t measure twice, cut once? At this time I decide that perhaps oval would be an excellent shape after all, and drill an additional hole into the side of the existing hole so that the fan intake will actually line up with this new port I am making. Despite that setback, I am back on track.

Er, oval is just as good as round, right? At least the fan has clear access to the outside world now.

 

Dry fitting paper over the axial fan, and below, taping it into place.

 

Again, since I want the fan to draw from only the port I have just cut, I have to cover over the other side of the axial fan. Easiest way to do this would be with a bit of clear packing tape, but that would leave sticky bits facing inside that would eventually gather dust and disable the fan. So the best solution is to cut a template out of paper and then tape that to the side of the fan that will now face into the device.

So far I have the hole for the fan in the chassis and the fan’s intake adjusted. I now want to install the fan on the aluminum base and mod its power control.

The fan has a 4-wire harness. Cut wires 2 and 4, and you have cut the wires the MOBO uses to tell the fan when to turn on and at what speed. If you only cut wire 2, the fan will run at 100% all the time, which will cool things quite well, but it will be too loud. The other wire, the one that controls the speed, you cut and put a resistor in line to slow the fan down. It seems different models of Time Capsules have different requirements, but I got a 33-ohm resistor, which seems to work fine for me. Strip the wires, solder the resistor on, and use either shrink wrap or, in my case, electrical tape to insulate any exposed metal. Then plug the harness back in and place the fan in its new orientation in the Time Capsule.
One last step before putting it all together. For style points, you don’t want the fan guts directly accessible via the new hole you cut, even if it is going to be the underside of the unit. I took a piece of screen from an old screen window, cut it to size, ran a bead of caulk around the inside of the opening, and gently pressed it in. I then ran a fatter bead of caulk around the fan chassis where I think it will come in contact with the aluminum base, and just a bit more on top of the screen edges.

silicone in place ready to be assembled

Then press the aluminum base on and screw it down. I left the unit off for 12 hours so the silicone could set up and dry. This is also a good time to buy some self-adhesive rubber feet and put them on the base. That way the unit will sit a little higher off the ground and give the fan intake easier access to airflow.

I did several thermal tests during different stages of my reconstruction. With the power supply back up and running and the unit right side up I got readings close to 120 degrees F on the top above the power supply. And more than 20 degrees lower on the top above the hard drive side.

temp reading off the power supply side of the unit

 

I did do a test with the fan running at 100% before I cut the speed control wire and I can confirm it is too loud. With the resistor in place the fan is much quieter. If the windows are open to normal summer bug and bird sounds it is inaudible. If the windows are shut, you can hear it faintly. So if you wish to have an even quieter fan you could explore different resistors.

And one last note. You know how I recommend you replace all the capacitors while you have the thing open? While mine was back up and running, it still had a problem where it would run for a week to ten days and then power off. Pulling and re-seating the power plug rebooted it. But 10 days later, same thing. I figured it was the two remaining capacitors. So I had to crack it open again, open up the power supply again, discharge the high-power capacitors, swap out the remaining parts I had, and re-assemble. It has worked like a champ since then.  Although you can now hear a faint whirr when the room is quiet.  I can live with that.

My Love/Hate Relationship with Apple and Straying Outside the Walled Garden

The Unrest

A few months ago I was in the market for a small tablet. I was feeling a little claustrophobic in the Apple world, as my laptop, my phone, my wireless router, etc. were all Apple products. I had heard some good things about the new Android operating system. The family has a wireless-only iPad 2 that mostly lives in the house. I was annoyed that Apple doesn’t put a GPS in the wireless-only version of the iPads; this meant that when we took the iPad out on a road trip and tethered it to my phone, the maps program didn’t work worth a damn. Really, Apple? Would that have been so hard? Other Android wi-fi-only tablets do that and come in cheaper. One that rose to the surface was the Google Nexus 7. Hell, if you are going to jump ship from Apple, Google seemed a good bet. Now, they don’t make the tablet, Asus does, but Google writes the code for the software on the Nexus 7. OK, looks good. Better resolution than the iPad, a narrower profile (almost giant-iPhone-like), and more open source. This was also a big draw. I have felt that tablets are artificially handicapped. Why can’t I use them as a phone? My over-40 eyes would love a giant iPhone. And I have no shame, I would absolutely hold a tablet to my head to make a phone call. Why not? It’s not like talking to no one and waving my arms like a crazy person looks any better, as people with Bluetooth earbuds do.

Fine. I did the research and settled on a Nexus 7 with both wi-fi and cell service. I ordered it, got it set up with AT&T, and settled in, buying all the apps that had sister apps over in the Apple world. I was able to download apps so I could make free phone calls to and from my Nexus using Google Voice. Very cool. And of course I had Google Maps, not the travesty that Apple Maps has made of itself. (I once had Apple Maps insist I drive into Boston Bay to get to the waste treatment facility, which, while on the coast, is attached to the mainland.) Great. Maybe I had the new “One Device”: it could be my phone, my GPS, my calendar, etc., and it was big enough to use and just small enough that I could, depending on what I was wearing, get it into a pocket. Now all I had to do was learn the operating system.

The Bumpy Ride

Android OS is… interesting. There would be a few cool new things, then some inexplicable dumbness. I was able to get apps that synced all the calendars on my iPhone and MacBook Pro, as well as my contacts. Not too hard. So far so good. The Nexus, being a Google product, tries really hard to shove Gmail down your throat. OK, fine. I set up a Gmail account. But around here I realize I have left the serenity of the “walled garden” that Apple provides. There is a Gmail-only email app on the Nexus that seems nice enough, but Gmail is not my primary email, so I am never going to use that app. There is also a generic email app, but it seems really limited in functionality. After doing some research, I determine that “Aquamail” is the best, most flexible, most powerful email app out there. I get it and set it up for my multiple email addresses. It has lots of “under the hood” settings about layout and the like, most of which are bad. I keep thinking, “This tablet is bigger than my iPhone; why is it harder to check my email on this thing than on that?” Everything is busy, hard to read, hard to keep track of. After much futzing I get it to where I am “OK” with it, but not really happy. Plus Aquamail is made by some Russian developer, and I can’t get it out of my mind that he could have put a back door in it and now some Russian hacker has access to all my emails. Also, now I have three different email apps, none of them great, all checking and downloading email. Battery life suffers. I mess around with various settings and improve it somewhat.

On the Road

I have a big trip planned where I will drive up the east coast, vacation with my family, then bus and fly back to work, then fly and bus back and rejoin the vacation already in progress. I figure it is going to be a big “sea trial” for this new Nexus 7. We drive up the east coast using the Nexus 7 as our GPS. Even plugged in, it cannot keep up with its own power demands; battery levels drop over the whole day. If we had driven another hour we would have lost navigation, as it couldn’t charge itself fast enough. Something has got to be wrong, I think. I jigger with the email fetch settings and some other settings. The next day it seems a little better, although we don’t drive as far. Not good. One of the allures of a tablet is longer battery life than my phone. I know GPS use taxes the thing, but IT WAS PLUGGED IN, into a 2-amp USB port, so it should have been fine.

During my vacation I decide the following things: although a bit busy, I like widgets. I like Google Chrome insofar as it integrates with the tablet’s GPS and gives you a welcome web page with info about things to do in your area, and it tracks travel plans as long as the tickets were booked via your Gmail account. Voice recognition is surprisingly good. Battery life remains poor. And surprisingly, I miss having a front-facing camera. The Nexus has a back-facing camera but ships with no apps to use it, which is odd. Of course Skype and other apps can use it, so it is not like it is useless.

Airplane Mode

I then bus and fly back to DC. On the bus I read “Treasure Island,” a free ebook installed on the Nexus to get me hooked on that feature. I do this because I couldn’t figure out how to download a movie via the Google Play store. It turns out you can, but the setup is counter-intuitive. When I get to the airport I have several hours to kill, so I decide to watch a movie, streaming “Zero Dark Thirty” over the 3G network, taking a huge hit on my monthly data, but whatever, I chalk it up to the learning curve. On the plane I go back to reading the ebook. The rest of the travel is uneventful. “Zero Dark Thirty” excepted, I finally realize that by not buying an Apple product, not only am I locked out of the iTunes library, I am locked into the Google library, which is pretty weak. Movie selection is not great.

Anyway, during my stay in DC I figure out how to download movies rather than stream them, so I download two for viewing on my travel back. I watch part of one on the plane. When we land I take my tablet out of airplane mode, and that is where the trouble begins. The screen starts blinking and flashing, which at first I assume is just all the apps coming online and trying to call out to the world. I deplane. At baggage claim the tablet is still stuck in some sort of subroutine, and unresponsive. I reboot the thing, thinking that will fix it. It takes an extremely long time to boot. In fact, it never finishes; it just gives me the Google Nexus logo, which on this tablet is suspiciously like a giant “X.” I restart the boot. Nothing. I start using my iPhone to look up how to troubleshoot a Nexus. I boot in safe mode. It still hangs. I do more research. My bus comes. I spend the entire two-hour bus ride killing my iPhone battery troubleshooting, chatting with Asus technicians, and so on, trying to get this thing running again. I finally pull the nuclear option and opt to wipe all my data. This does not fix it. Let me repeat: in just over two weeks, without downloading any exotic apps, I manage to brick my Nexus to the point where even a hard reset cannot save it. The only hope, apparently, is to connect it to a PC. A PC?! That is it. I am done. I get an RMA from both Asus and B&H, which is where I bought it. I get my money back. And start over.

IMG_3964

Nexus “X” of death

The Aftermath

It turns out the Nexus going belly up was a blessing in disguise. I ran out and bought an iPad mini, no longer upset about the price or feeling claustrophobic living within the iOS environment. It was like coming home to an old friend. As Steve Jobs said, “It just works.” No buggy email apps, good battery life, an excellent library in iTunes, a nice size. I have even come around to Apple Maps, as it seems to have improved somewhat. The one thing I lost is the ability to call in or out like a phone on the mini. Some version of this can be done, I think, by jailbreaking it, or maybe with wi-fi-only calling. I have found I don’t care so much, as long as everything else works. My one regret is not making one phone call on my Nexus 7 in a public place while it was still working, preferably surrounded by hipsters in a bar or coffee house, to see the ensuing confusion in their eyes as to whether it was lame or cool.


Color Charts, Calibration Targets and Mars

All cameras lie. At least a little bit. For that matter, so do our eyes. What we want in most cases is for a camera to perform with the same characteristics as our eye, although even that can be subjective.

Gretag-Macbeth_ColorChecker

Back when I worked as a Camera Assistant we would often shoot a “Macbeth Colorchart” at the head of each scene, and sometimes at the head of each roll of film. It acted as a known “control” for the post-production people, who could then tweak color and contrast so that the chart looked like it did to our eye. This was important because color negative film meant a positive needed to be made, and at that step changes could be introduced; the goal, of course, was not to introduce any unwanted ones. Essentially the film was being “re-exposed,” and the chart gave the lab something to go by on what the cinematographer wanted. Sometimes these charts were shot under “white” light (i.e., light color-balanced to the film stock), and only after the chart was photographed were gels applied to change the color of the light for the scene. The goal here was to communicate with the lab: “Just because I put blue gel on the lights doesn’t mean it is a mistake I need you to fix. Time your color to the light I shot the chart under, so that my desired color cast is achieved,” resulting, in this case, in a blue cast to the scene.

As the shift to video happened, Macbeth charts began to lose ground to charts like DSC Labs’ Chroma du Monde, which is more useful when used with a waveform monitor and vectorscope, video engineering devices not used in film but prevalent in video production. According to the DSC Labs website, these charts were originally designed by the “US Space Program,” which made me think of a color chart I had seen recently photographed in a fairly remote location: “Bradbury Landing,” to be exact. No, this isn’t a BBC sequel to “Downton Abbey”; it is the landing site of the Mars rover Curiosity, or more accurately the Mars Science Laboratory (MSL). It has 17 cameras onboard, some for hazard navigation, some for scientific research. The HazCams are black and white, so color reproduction is irrelevant. But some of the cameras shoot in color. We all know Mars is the “Red Planet,” but what if you want to correct out that color cast? Well, you shoot a color chart of known colors. NASA calls this a Calibration Target, as it does more than just color. Nevertheless it is a pretty simple device. It has six color samples: red, green, and blue (the three primaries), 40% gray, 60% gray, and a fluorescent pigment that glows red when hit with ultraviolet light. Pretty simple, especially when you look at the complex charts that DSC produces. I imagine the heavy engineering of the camera’s performance was done here on Earth, and this simple chart is just to analyze color cast on Mars.

MAHLI Calibration Target

MAHLI Calibration Target

The descending bar graphic is adapted from a US Air Force target for judging camera resolution. And below that is a 1909 VDB penny. What is a coin doing on the chart, you might ask? This chart is mainly for the MAHLI camera, which is for close-up work: essentially a geologist’s eyes. The penny is a nod to the common practice of a geologist placing a known object within the frame to show the scale of the object being examined. Rulers work, and are perhaps more scientific, but in choosing a penny NASA is showing a bit of whimsy, something not at all bad for a big governmental science and engineering agency to have. Perhaps something we should all keep in mind.

Why a 1909 VDB penny? The first year the “Lincoln head” penny was produced was 1909, and 2009 was originally the launch date for the rover; the 100th anniversary apparently made it a good choice. Ultimately the rover’s launch slipped to 2011, but by then the decision had been made. “VDB” refers to the initials on the bottom of the coin, those of its designer, Victor D. Brenner.

I find this especially interesting because as a kid I dabbled in numismatics, or coin collecting. I got started with some silver quarters my parents gave me, but I remember vividly looking through all the pennies I got over the course of however many months and finding three or four 1909 VDB pennies myself. At the time they were valued at about $2. Now they go for about $15 on eBay in average condition. There is also a 1909-S VDB penny, which was minted at the San Francisco Mint and has the “wheat” back. Those are quite rare and are worth at least a thousand dollars today. Who knows, you might have one in your pocket right now. And you thought pennies were worthless.


MastCam

MastCam with fixed 34mm f/8 lens. Notice the Swiss Army knife for scale, much like putting a penny next to the object being examined.

Now, it is not clear to me whether this Calibration Target is available to the other cameras on the rover, but I think so. The other main camera on the rover is the MastCam, which provides a human-height perspective from Mars and can even capture footage in stereo. The MastCam uses the same sensors as the MAHLI does.
The sensors are 1600×1200-pixel (roughly 2 megapixel) Bayer-pattern sensors. The MastCam is actually two cameras: a “wide angle” (15-degree field of view) 34mm f/8 lens with a minimum focus of 2.1 meters, and a “telephoto” (5.2-degree field of view) 100mm f/10 lens with the same minimum focus. Together they can shoot stereo, although the mismatch in focal lengths means this is a bonus feature rather than a primary function. Each camera can do 720p video at about 10fps. And for us camera nerds, they have ND filtration as well as IR-cut filters (for studying specific wavelengths more than for IR contamination, I suspect).
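
As a sanity check on those quoted fields of view, here is a quick sketch of the geometry. The 7.4-micron pixel pitch is my assumption rather than something stated above, so treat the numbers as approximate; the point is that focal length plus sensor size is all you need to recover the angle of view.

```python
import math

# Quick check of the quoted fields of view from the focal lengths.
# Assumption (mine, not from the article): ~7.4-micron pixels, so the
# 1200-pixel side of the sensor is about 8.9 mm across.
PIXEL_PITCH_MM = 0.0074
SENSOR_SIDE_MM = 1200 * PIXEL_PITCH_MM   # ~8.9 mm

def fov_degrees(focal_length_mm, sensor_mm=SENSOR_SIDE_MM):
    """Angular field of view of a simple rectilinear lens over one sensor side."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm)))

print(f"34 mm lens:  ~{fov_degrees(34):.1f} degrees")    # ~14.9, vs the quoted 15
print(f"100 mm lens: ~{fov_degrees(100):.1f} degrees")   # ~5.1, vs the quoted 5.2
```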

One thing that instantly occurred to me when I first heard news reports of Curiosity’s landing was: how the hell do they keep dust off the lenses? Dust has an affinity for front elements here on Earth, and I imagine it is only worse on Mars. I don’t know if Curiosity has the ability to blow dust off its lenses, but I did find out the lens caps have an ingenious design: they are transparent. This means the cameras can shoot through them if conditions are unfavorable. The optical quality suffers, but the lens is protected. When conditions are clear, the caps can come off for clearer pictures. And like any good photographer, the rover probably keeps her lenses capped when not in use.

What does this have to do with Calibration Targets and color charts? Well, it helps us get images like the two below. One is “un-white balanced”, and the other is “white balanced” and represents what the surface would look like under Earth lighting. By shooting the Calibration Target first, the engineers can “dial out” any color cast created by the Martian atmosphere. The uncorrected shot, if it had a calibration chart in the frame, would have colors that would not look correct. In the “white balanced” shot, all the colors on the target should look true and accurate, the same as they did back at NASA before it launched.
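
For what it’s worth, the basic “dial out the cast” step boils down to per-channel gains computed from a patch of known color. Below is only a minimal sketch of that idea with made-up pixel values, not NASA’s actual pipeline, which certainly involves far more careful radiometric calibration.

```python
import numpy as np

# Minimal sketch of white balancing off a known gray patch, in the spirit
# of the rover's calibration target. All pixel values here are hypothetical:
# a patch that should read neutral comes back with a reddish cast.
measured_gray = np.array([180.0, 130.0, 110.0])   # R, G, B as photographed
target_gray   = np.array([140.0, 140.0, 140.0])   # what the patch should read

gains = target_gray / measured_gray               # per-channel correction gains

def white_balance(pixel_rgb):
    """Apply the per-channel gains and clip to a valid 8-bit range."""
    return np.clip(np.asarray(pixel_rgb) * gains, 0, 255)

# Any other pixel in the scene gets the same correction:
martian_soil = np.array([200.0, 120.0, 90.0])
print(white_balance(martian_soil))   # red/blue ratio drops; closer to "Earth light"
```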

Uncorrected "RAW" panorama of Mars

Uncorrected “RAW” panorama of Mars

Corrected "White Balanced" version of the same shot

Corrected “White Balanced” version of the same shot, showing roughly what the scene would look like under Earth lighting

If these pictures are too small for your liking, here is the link to NASA’s page, which has links to some super-high-res versions. Now, I don’t know about you, but I find high-res panoramic photos of another planet pretty cool.

So what does this all mean? Well, for one, I plan to tape a 1909 VDB penny to my Chroma du Monde chart. Why? There are several reasons. I want to be reminded, whenever I pull my color chart out, that somewhere on a different planet millions of miles away there is a similar chart being used to aid in photography. The penny also reminds me that in the most ordinary, mundane things, like a penny, there can be surprises if you just look carefully enough. A good thing to keep in mind in life, as well as in photography. Who would have thought that the lowly penny would be the first currency to arrive at another planet? And lastly, no matter how big or important the job is, there is always room for some whimsy.

I feel that getting a copy of the only coin (of any currency) on another planet for $15 on eBay is a deal if ever I saw one. And in the meantime I am going to start checking my pockets more often. Who knows, there could be a 1909-S VDB in there with all that loose change.

Jeffrey Brown’s Eyeball

cu wide

The current, almost-beaten-to-death fad in cameras these days is chasing shallow depth of field, i.e. a shallow-focus look. Of course people are crazy about it because it looks great for portrait work, or when run by a professional crew that can maintain critical focus throughout the shot. But when budgets get tight, and someone whips out a Canon 5D instead of a true video or film camera and isn’t properly staffed, you end up with constant hunting for focus, lots of blurry shots, and lots of justification that “it’s a look.”

In controlled environments it is much less of an issue, or so I thought. I recently shot some last-minute promos for PBS NewsHour. They were to be the various correspondents against black limbo, delivering one line while standing. Budget was of course tight, but these were really just head shots, nothing too complicated. The director was interested in using primes, as they tend to be sharper and can give you shallower depth of field thanks to faster stops. Fine by me. Of course, since it is black limbo, any shallow-depth-of-field look you are after has to play out exclusively on the talent’s face, as there is literally nothing else to see in the frame. Mainly I was interested in the sharpness that primes could provide. They were also interested in 4K acquisition, so we shot on my RED Epic using two RED Pro Primes, a 50mm and a 100mm. Again, nothing too complicated. We set up the lighting and had the talent come in and do their bit to camera.

At 100mm and just under f/2.8, focus was razor-thin. Even with the talent standing still, keeping it in focus was very hard. I did not have a First Assistant, partly because of budget and partly because I had not pushed for one; everything was static, and I had not seen a need. I did have someone to deal with the data downloading, but no one skilled in pulling focus except me. I ended up staring at a 17″ monitor from as close as I could get, my visual world shrinking down to the key-side eyeball and trying to keep it sharp. One consistent thing I noticed is that even standing still, people will lean forward ever so slightly when delivering a line, as a sort of body-language emphasis. It is something you don’t normally notice, unless you have less than a quarter inch of usable focus. About three people in we had Jeffrey Brown as talent.
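
A quick back-of-the-envelope depth-of-field calculation shows why it felt that thin. The subject distance and circle-of-confusion values below are my assumptions, not measurements from the shoot; the stricter circle of confusion is meant to approximate pixel-level sharpness when you are scrutinizing a 4K frame.

```python
# Rough depth-of-field check for a 100 mm lens at f/2.8 on a Super 35-ish
# sensor. Assumed: subject about 1.5 m away for a tight head shot, and two
# circle-of-confusion values -- the traditional 0.025 mm, and a stricter
# ~0.006 mm closer to single-pixel sharpness on a 4K/5K frame.

def total_dof_mm(focal_mm, f_number, subject_mm, coc_mm):
    """Approximate total depth of field (near limit to far limit)."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return far - near

for coc in (0.025, 0.006):
    dof = total_dof_mm(focal_mm=100, f_number=2.8, subject_mm=1500, coc_mm=coc)
    print(f"CoC {coc} mm: about {dof:.0f} mm ({dof / 25.4:.2f} in) of usable focus")
```

With the traditional circle of confusion you get roughly an inch of usable focus; judge it at pixel level and it collapses to well under half an inch, which squares with how it felt on the monitor.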

A002_C007_0424ZZ.0002435

In addition to being a good journalist, he has very distinctive slate-blue eyes. Everyone always looks to the eyes for focus, but with his eyes it seemed especially critical: they really “snapped” when in focus, which of course meant that if they went soft, even for a moment, everybody would notice. And he was a “rocker,” leaning just a bit more than some of the other talent, making focus an all-consuming job. At one point the director had to point out that the framing of the shot was going to hell, as I had ceased monitoring it, devoting all my attention to keeping that damned eyeball in focus. I apologized for the poor framing, saying, “I was distracted by the eyeball,” which got a few laughs, especially since Jeffrey Brown thought it was because of his blue eyes, something he is a bit sensitive about, as he (rightly so) considers himself a journalist, not a “pretty face” for TV. We had to explain that the distraction was mainly about keeping those eyes in focus on his slightly moving head. The still below is a crop from the 4K image.

cu wide

Just look and you can see the focus fall off from the bridge of his nose to his sideburns. In fact, it appears I have the front of his eyeball in focus, but the edge, the eye being a sphere, was just that much farther away and was not. And as I was shooting 4K, four times the resolution of HD, there was nowhere to hide. You had it or you didn’t.

eye ecu

We got through the day, but boy, it was a wake-up call for how even the simplest job can bring unexpected challenges. No dolly shots, no intentional talent movement, yet focus was the biggest challenge of the day. We weren’t even shooting wide open.

Moral of the story: be careful what you ask for. There is such a thing as too shallow a depth of field. Always ask for an assistant. And 4K and beyond can be very, very unforgiving if you get it wrong. And maybe Victorian head clamps will come back into style.