How To Load and Operate a Wilart, Part 1: Backstory

Me using the Wilart on a job in 2018.

I have had a Wilart 35mm hand crank camera for a few years now, given to me by the former dean of the film school (thanks Glen!) where I learned to make my way in this business.  It even came with a manual of sorts.  But as was the practice in those days, it was more like a sales catalogue, telling you how wonderful the camera was without actually going into the details of operation.  I muddled through and managed to get it loaded with only a single picture of the threading to go on, and ran some film through it to prove it still worked and had no light leaks.  I would then occasionally practice, do some research on the camera, and trot it out once a semester for students of my own, to show them that a 90 year old camera can still produce decent images, which is more than anyone will say for 90 year old digital cameras. (Once they reach that age!)

Anyway, I thought I had learned the basics of its operation and loading, and had even made some improvements, like 3D printed adapters that allowed modern cores to be loaded on the older style spindles.  I thought I more or less had it down.  Then an actual job came up where they wanted a hand crank camera in the mix.  Great! I'd finally get to use it for real, as it were.  But like many production jobs it came up suddenly, I had to travel back into town for it, and we were essentially relying on whatever short ends I had in the fridge, as it was too short notice to get a few fresh loads from Kodak.

Well, let me tell you, loading such a camera in a well lit space with no pressure is a far cry from loading it in a live concert environment, where it is dark and loud, you have time sensitive material to shoot with no re-takes, and all you have are short ends and precious few mags to put those short ends in.

All in all it went well, at least when it absolutely had to; at non-critical moments I had some jams and other issues, but no show stopper problems.  Anyway, I thought, if I want this to happen again, I should formalize some notes to myself on how to load the damn thing for the next time, so I can move faster.  Then I thought perhaps I could also share them, because, you never know, they could be useful to someone else, or at least interesting.

But first, some Cinema History, and where the Wilart fits in it:

(note: this is by no means a comprehensive list of hand crank cameras and events during this era, just the ones I find most interesting)

1889: George Eastman invents flexible celluloid film, as opposed to glass plates, paving the way for the development of motion picture cameras.

Charles Kayser of the Thomas Edison laboratory with an early version of the Kinetograph. (Photo from National Park Service/Wikipedia)

1892: Edison’s Kinetograph. Around 1892 Thomas Edison invented the Kinetograph, a camera created to make content for his “peephole” viewing device called the Kinetoscope.  Both devices used “4 perf” film, or four perforations per image on both sides of the image area, which, with a few minor tweaks regarding sprocket hole shape, is essentially the standard gauge and sprocket placement we use today.  But Edison didn’t get everything right.  His camera was driven by DC power (another of Edison’s ventures) and as such was more stationary than later cameras.  He even built a studio for it called “The Black Maria,” a structure with blackout walls and window and roof bits that could open to let light in. The whole thing was on a turntable so the building could be rotated to follow the sun.  As such, anything that was to be filmed had to come to Edison.  And when done, the footage was put in a Kinetoscope, viewable by only one person at a time.


Illustration of Kinetoscope, circa 1894.   Note there is a viewport on top for only one person at a time.  (Originally published as an illustration to “Le Kinétoscope d’Edison” by Gaston Tissandier in La Nature: Revue des sciences et de leurs applications aux arts et à l’industrie, October 1894)

Edison’s studio, the “Black Maria.” (Note the curved track in the ground; that was the turntable used to turn the whole building to match the sun’s location.)


1895: Lumière Cinematographe. About the same time Edison was doing this, the Lumière brothers were working on their own camera, the “Cinematographe,” which they patented in 1895. Theirs also used 35mm film but with only one perf per image area (that is, one on each side), and the perfs were rounded.  There is a small possibility that modern film stock might run through it despite the different perforation shape, if a bit chattery and uncooperative, as the holes appear to be in approximately the same place relative to the width of the film.

Front view of the Lumière Cinematographe. Note the small 60′ feed mag on top. (photographed by Richard Edlund, ASC and collaborator Dave Inglish, ASC collection)

Side view of the Lumière Cinematographe. Note the brass top part used for holding already developed film for projecting. (photographed by Richard Edlund, ASC and collaborator Dave Inglish, ASC collection)

Back of the Cinematographe. (photographed by Richard Edlund, ASC and collaborator Dave Inglish, ASC collection)

Nevertheless, the Lumière brothers made several important contributions.  Their little Cinematographe (essentially a wooden box with a lens and a crank) could shoot about 50-60′ of film, and was untethered from the requirements of DC power, as it was hand cranked.  Additionally, after the film was developed, the camera could be loaded with the developed negative and unexposed stock together to make a contact positive print.   Develop that and you had a print you could show audiences.  And coincidentally, get a big light source and shine it into the back of the little camera and voilà! It was now a projector!  Suddenly, instead of only one person viewing at a time, as with Edison’s Kinetoscope, hang a sheet up in a venue and everyone in the room could view it.  (Discounts were given for those who had to view the image through the sheet as opposed to on it due to their less than ideal seats in the venue.)  Now rather than bringing the subjects to the camera, as Edison did, the camera came to the subjects.  Lumière cinematographers could come into town, film local events, and screen them that same evening at a local venue.

Despite its versatility, the Cinematographe was a pretty simple camera.  It had no viewfinder.  It was a wooden box with removable doors front and back, and a smaller wooden box that sat on top served as the feed magazine.  The crank was in the back. Take-up occurred inside the camera body.  Max load was about 60′.  The only way to frame something was to open up the back and, either with a ground glass placed in the image plane or by using the film itself, make out a dim, upside down and backwards image.   Focus using that, then close up the camera and crank away, hopefully without shaking the camera too much or being uneven in your speed.  Under these conditions, no panning was going to happen, as the operator would be just guessing at what he was pointed at.

 

1902: Pathé Professionnelle.  While this was going on, four French brothers (Charles, Émile, Théophile and Jacques) tried to get in on the action.  Charles Pathé had seen the Kinetoscope, and presumably the Lumières’ work, and got patent rights for Eastman Kodak stock in Europe. By December 1897, Société Pathé Frères was formed, and they got heavily into production, lab work, and distribution of film.  Initially they used cameras derived from Lumière patents.

Pathé Professionnelle (photographed by Richard Edlund, ASC and collaborator Dave Inglish, ASC collection)

Pathé obtained the rights to the Lumière patents and set about designing their own “studio” camera, expanding upon those patented designs.  By 1903 they had the “Pathé Studio” or “Pathé Professionnelle” camera.  (Some data indicate 1907-1908, but this seems late.)

Back of the Pathé (photographed by Richard Edlund, ASC and collaborator Dave Inglish, ASC collection)

By the 1910’s all major Hollywood studios were using Pathé Professional cameras.  They improved upon the Cinematographe design with lots of extra features: 400′ capacity magazines, an actual viewfinder (albeit a parallax one), a footage counter, a focus adjustment knob in back, an iris control knob, and an adjustable shutter.  Later models had a fade up & fade down capability using that adjustable shutter.  (Film lab work was not that sophisticated, and the more effects you could do in the camera the better.)  They even had a single frame capability.   In addition to the parallax viewer, there was a peephole that was light tight to the film gate, and you could look through that between takes to precisely frame up your subject, using the film stock as a ground glass.  It was dim, and upside down and backwards, but it would show you exactly what the camera was seeing, including focus.  (Unfortunately, if you try this with modern color stock it will not work, as the anti-halation backing makes the stock base too dense to see through.)

That said, the camera was not without its problems.  Despite all these improvements, most cameras only came with a 50mm lens, and the external focus knob was calibrated to that alone.  The body was made of leather covered wood, as were the magazines.  The non-conductive properties of wood combined with the fast moving celluloid nitrate film in dry environments could cause static discharges that would silently ruin takes, only to be discovered later when it was developed.  As this was a new industry, professionals formed informal groups to share information, troubleshoot and tell stories.  In California, that group was called the “Static Club,” presumably after their most vexing problem.   It is worth noting that in 1919 the “Static Club” (based in LA) joined with “The Cinema Camera Club” (In NY, formed by Edison) to form the American Society of Cinematographers (ASC) which is still very much active today.

One working solution to prevent static was to put a damp sponge inside the camera body to help alleviate the static buildup.  Perhaps due to the Pathé Professional’s heavy use in the 1910’s, or the fact that wet sponges were being put in the wooden cabinets that were the camera bodies, the cameras developed a reputation for always needing repair & additional light-proofing with electrical tape, presumably as the wood joinery started to come apart.  It didn’t help that the body was called a “crackerbox,” due to either its shape or its lack of durability.

If you are interested in more, check out the ASC post about Arthur C. Miller, ASC’s Pathé, the camera that photographed “The Perils of Pauline,” now donated to the ASC.

 

1908: Debrie Parvo. The beginning of the evolution away from the Pathé Professional as the gold standard for working cameras.  André Debrie, previously a manufacturer of film perforation machines in France, finished work on the Debrie Parvo camera.

An early Debrie Parvo made of wood. Note one of the round 400′ mags that almost looks like a film can.

It was his attempt to make a more portable, compact, versatile camera. The design was a compact wooden box with internal 400′ metal magazines.  The Parvo also had the crank mounted on the side, instead of the back.  It was an improvement over the Pathé Professional, in that it was more compact and had a better viewfinder system. You could look through the viewfinder in the back at the image from the lens to determine framing and critical focus.  But for framing while cranking, you had to use the side parallax finder, or have your eye pressed firmly against the very, very dim image in the eyepiece.  At least that showed the exact frame as it was being exposed, as you were looking through the film as it ran through the camera, while rolling.  Of course, any light leak from the eyepiece would ruin the film.  And now, modern film stocks with their remjet backing are too dense to view through via this method.

It was also not uncommon to have a selection of wide angle and telephoto lenses for the Parvo. As focus was done “through the lens,” there was no single calibrated scale on the side dialed in for only one lens, making it less cumbersome to change lenses.

A later Debrie Parvo, possibly an “L” model, made of metal. (Photo by Fletcher6, own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=22433099)

There were several Parvo models over the years, and by the 1920’s they had switched to metal for the body, as many other manufacturers had, with various other improvements as well.  One was an ingenious method of viewing through the taking lens by way of a sort of swing-away gate: the gate with the loaded film could be pivoted away and an identical “gate” holding a ground glass swung into its place.  This could be done without opening the camera or molesting the loaded film. It meant that, at least between takes, and even when using more modern opaque film stocks, you could look through the lens to check focus and framing.

Another addition was an optional DC motor.  This and other innovations kept Debrie making them well past the silent era as an excellent MOS camera.

Frame from “Man With A Movie Camera” (1929), directed by Dziga Vertov. A cameraman operates a Parvo from a precarious position, being himself filmed by another cameraman, presumably from a similarly precarious position, while both are underway. Don’t try this at home.

It was the first European camera that was noticeably better than the Pathé, and as such was adopted by such filmmakers as Sergei Eisenstein, Dziga Vertov and Leni Riefenstahl.

 

1909: A formal standard for 35mm motion picture film. Edison formalized the standard of 35mm motion picture film.  He formed a trust, The Motion Picture Patents Company, which agreed in 1909 to what would become the standard: 35 mm gauge, with Edison perforations and a 1.33:1 (4:3) aspect ratio.  The only difference from the Lumière standard was the perfs.  It is worth noting that before this, most people bought their film un-perforated and perforated it themselves to whatever standard their camera needed.

It’s worth noting that Edison, while being a prolific inventor and a businessman, was a bit of an asshole.  After forming the Motion Picture Patents Trust, he felt anyone using a camera with 35mm film owed him some cash.  He tried various ways to enforce this on the east coast, even resorting to thugs to disrupt independent filmmakers and smash their cameras.  At least one filmmaker in Philadelphia resorted to sending out “decoy” crews to distract the thugs while the real crew worked unmolested.  Rumor has it that this was one motivation for Hollywood becoming a center of film making, as Edison’s east coast goons were far far away.  But that’s another story.

In the adoption of standardization, Donald Bell and Albert Howell came out the winners in the film perforation business.  Bell had been a movie projectionist, Howell was a machinist, and together they initially got into the business of repairing and improving cinema equipment.  They designed and manufactured many of the perforators that manufacturers, including George Eastman, used to perforate film from 1909 on.

1912: The Bell and Howell Standard Cinematograph 2709 camera.  Just as the Debrie Parvo was the European improvement on the venerable Pathé, the Bell and Howell 2709 was the American answer for a better camera.

Bell and Howell 2709 camera. Note in the left image both a hand crank and electric motor are installed.  Also, it appears that the feed side of the mag is loaded with emulsion-out film, which is not how modern camera film stocks come.  But since the feed side is freewheeling you could load emulsion-in film stock, just mounted “9” vs “p” in the mag.  In both cases, though, the take-up is emulsion-out, as it is driven by a pulley.  (Detailed photos from Adam Wilt of a 2012 production that used a 2709 show Art Adams, acting as AC, loading it emulsion-in with no problem.)  Above photo from Chicagology.

Initially Bell and Howell made cameras the way most others were made: of wood.  But after one of their cameras suffered mildew and termite damage on an African safari, they decided to go with a cast aluminum body.  This added durability and reduced static discharge, although it added cost.  The new camera also had several other vast improvements: a 4 lens turret, a rack-over system for better framing and focusing with those four lenses, and registration pins to better hold the film still while it was exposed.  The hand crank was on the side.  They named it the “Standard Cinematograph Type 2709.”  It was vastly superior to the Pathé Professional, but it was also very expensive, costing over four times as much as a Pathé.  Initially only movie studios could afford to buy them.  It took a while for their popularity to take off, but by 1919 all major studios owned them.  Even Charlie Chaplin bought one, in 1918, for about $2,000, which is about $32,000 in today’s money.   To this day, when people make the “universal sign language” of movie-making by peering with one eye and making a cranking motion with their right hand by their head, or drawing a silhouette of a camera with a “mickey mouse ears” magazine on top, they are mimicking a 2709.

Charlie Chaplin with a Bell and Howell 2709 circa 1925

The 2709 had an advanced movement with registration pins as well as pull-down claws, which made for rock steady transport of the film.  But the camera was not perfect.  The Bell & Howell finder still showed images upside-down; as a consequence, by the 1920’s many replaced it with a Mitchell finder that righted the image.  The same went for the Bell & Howell matte box, as well as the tripods, according to Richard Edlund, ASC.  Even after silent films were no more, Bell and Howell 2709’s managed to survive as MOS or title sequence cameras well into the sound era.

If you want even more info on the 2709 operation, head over to Adam Wilt’s experience working with a 2709 in 2012.

1914-1918 “The Great War”/World War I.  Initially a European war, it eventually directly involved the United States in 1917.  This mattered in film-making because the Pathé and Debrie were French designs, and during the war getting additional supplies from overseas was difficult, incentivizing US manufacturers to make their own models stateside.

 

Akeley camera.  Photo from ReelChicago.com

Akeley #265 from the side. From samdodge.com

Akeley opened up. Inside is a rather conventional looking 200′ magazine. Photo from samdodge.com

1915 Akeley “Pancake” Motion Picture Camera: Carl Akeley was not a professional filmmaker.  He was actually a taxidermist by trade.  But that is kind of like saying Indiana Jones was a college professor.  When Akeley felt something could be better, more often than not he ended up revolutionizing whatever he tried to improve.  Before Akeley, taxidermy mainly consisted of stuffing a skin with sawdust and sewing it up, often by people who had never seen the animal alive.  This seemed foolhardy to Akeley, and in 1896 (as the Field Museum’s Chief Taxidermist) he took the first of his 5 safaris to Africa, to collect specimens and to see the animals alive and in the wild.  His work and approach revolutionized taxidermy.  In 1909, in order to better study lions, he brought a British Urban camera (Urban Bioscope/Charles Urban Trading Co.), and although the details are sparse, it was probably a wooden box affair and, like many cameras of that era, not very easy to use.  In any case, the native hunters cornered and killed the lion before Akeley could get his camera pointed, leveled, focused and framed properly.   He swore he could do better, and that he would design a “naturalist’s camera” that would fare better against fast moving action under difficult circumstances.

In 1911 he formed the Akeley Camera Company.  By 1915 he had patented the “Akeley Motion Picture Camera.” It was unlike any other camera of its time.  The tripod head, which on every other camera was a separate part, was integral to the design of the camera body: the round “pancake” design was both the camera and the head.  This meant that with a simple pan handle on the back you could drive the camera position on what was almost a nodal head.  The viewfinder was articulated and, while still a parallax finder, it had lenses matched to the taking lens, it showed the image right side up, and when you adjusted the focus on the viewfinder lens, gears adjusted the taking lens focus.  The viewfinder also could remain stationary while the camera was tilted, a huge improvement if you were following action. The lens pairs were very quickly interchangeable, and kits often included telephoto lenses, due to the nature of what the cameras were asked to film.  He appears to have been the first to invent the ball leveling head as well, which as any cameraman knows is essential to quickly leveling a camera on uneven terrain. (Next time you level your ball head, thank a taxidermist!)  The shutter on the camera was 230 degrees rather than the standard 180, which let more light in (good in challenging lighting conditions), and the shutter itself was an innovative spinning cloth arrangement that traveled the inside of the round drum of the camera body.   Like the Bell and Howell 2709, its body was all metal.
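To put a number on that shutter claim: per-frame exposure is the shutter angle over 360, divided by the frame rate. Here is a quick sketch of the arithmetic in Python (my own illustration, not anything from the Akeley literature):

```python
import math

def exposure_time(shutter_angle_deg, fps):
    """Per-frame exposure time = (shutter angle / 360) / frame rate."""
    return (shutter_angle_deg / 360.0) / fps

t_standard = exposure_time(180, 16)  # standard 180° shutter at 16 fps: ~1/32 s
t_akeley = exposure_time(230, 16)    # Akeley's 230° shutter at 16 fps: ~1/25 s
print(f"{math.log2(t_akeley / t_standard):.2f} stops more light")  # ~0.35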

An Akeley 200′ magazine, serial #203E to be exact. Akeley cameras are rare and expensive, but apparently their magazines are not; this one I got off Ebay for pretty cheap. That’s just a dummy bit of film. The central roller has sprockets on it, so the loop size is set in the bag.

The only conventional element in appearance was the magazine, which held 200′ and went inside the round drum of the body.   The loop was pre-set in the mag, so re-loading was fairly quick in the field, provided you had a spare loaded mag waiting.   It truly was a camera for quick moving action, as its designer intended. The camera was quickly adopted in Hollywood as a specialty camera for filming action sequences.  It would be called for specifically in shooting scripts (“Akeley shot”), and directors would say “Get me an Akeley man!” when they had an action sequence that needed filming.  It also was quite popular with documentarians.  Robert Flaherty used two Akeley cameras when filming “Nanook of the North.” Akeleys were also used on “Wings” as well as the chariot sequences of “Ben Hur,” to name a few examples.

Akeley with his camera. Photo: American Museum of Natural History, photo 260071

Not only did the Akeley camera make art, sometimes it was the subject of art itself.  The machining and build quality of the camera was such that Paul Strand, days after buying one, took stills of various parts of the camera and those photos are now considered art and are part of the Metropolitan Museum of Art collection.  An original print of one of the interior photos can go for $40,000.

The Field Museum, as of this writing, has an exhibit of the camera itself on display which runs until March 2019.

The Akeley camera was used during the Field Museum’s 1928–29 Crane Pacific expedition.  Note the ball level put to good use on uneven terrain, and the integrated head/camera design.  Photo from the Field Museum.

If you want detailed info on the operation of an Akeley Pancake camera, check out Sam Dodge’s detailed walk through an Akeley.

Carl Akeley: 1, Leopard: 0. Circa 1896

Carl Akeley’s life seems fantastical at times.  He was the “Father of Modern Taxidermy.” He invented the most innovative action camera of its time, and even invented spray-able concrete after seeing the facade of one of the museums he worked for fall into disrepair.  He killed a leopard with his bare hands (partially because he was a bad shot), survived getting trampled and left for dead by an elephant, hung out with Teddy Roosevelt in Africa, and his wife left him because of a monkey. He was a big game hunter, but also a conservationist: he is responsible for the biggest gorilla preserve in Africa.  He even wrote a book about some of his adventures: “In Brightest Africa.”  He died in 1926 in the Democratic Republic of Congo of a hemorrhagic fever, shortly after taking George Eastman on safari.

Modern photo of the Wilart. Photo: Nate Clapp, circa 2014

1919: The Wilart (Whew!) The Wilart Instrument Company in New Rochelle NY started making what was essentially a clone of the Pathé Professional, except the body was made of metal, like the Bell and Howell 2709 and pretty much every camera model after the 2709.  Pictures of the interior film transport/gate area are indistinguishable from the Pathé’s.

Presumably the metal body eliminated any static discharge, but around this time film stocks were adding an anti-static backing as an option, which also alleviated the problem.  That backing, however, was too dense to allow the peephole option of framing through the back of the film between takes, which many early cameras (like the Pathé and Wilart) relied on.

 

An early version of the Wilart. Note the parallax viewfinder integrated with the body, and the iris indicator on the front panel.

It is unclear to me if The Wilart Instrument Company licensed the Pathé design or just out-and-out stole it.  Curiously, they did not reference the similarities to the Pathé in advertising, which suggests they perhaps didn’t ask permission to copy it.

In any case, Wilart hoped to cash in on an affordable American made camera in the post-war boom.  Granted, its technology was from 1903-1907 (an evolution of the Pathé features), and as such was 15 years old, but it was an affordable, proven design, now in metal.  It ended up a camera used more on industrials and 2nd tier productions, as by this point the cutting-edge cameras in Hollywood were the Bell and Howell 2709 and the Akeley Pancake camera.  Nevertheless the Wilart Instrument Company seemed to achieve success, working on further designs and even planning a large film storage facility in Baltimore.  But by 1926 the “talkies” came and the need for additional hand cranked cameras fell through the floor.  The Wilart company seems not to have weathered this storm and disappeared without a trace.

Pathé insides. Notice a similarity? Photo: ASC collection

Threading on Wilart from Wilart Manual

Ok! That was the evolution of how we got to the Wilart.  Next let’s learn how to load it.

“How to Load and Operate a Wilart Part 2: Loading” coming shortly……

 

24 fps: Where Does It Come From?

Back in the day (turn of the last century) there was no such thing as camera batteries or sound men. Men were men and cameras were hand cranked. As they had evolved from still cameras they were sort of still camera Gatling guns, capturing still frames as fast as you cared to crank. Somewhere past 14fps something magical happened: persistence of vision started to fuse the images, so rather than a fast slide show it started to look like motion. So cameras were built to move one linear foot of film per two cranks (at 16 frames to the foot), which meant if you cranked at “coffee grinder” speed you hit 60 feet per minute, which comes out to 16fps, just north of that 14fps effect. Cranking faster improved the persistence of vision thing, but producers didn’t like you blowing through all that expensive film, and besides, there was really only one stock available and it was slow, about 24 ASA. Cranking faster meant less light per frame. Sometimes you cranked even less than 14fps to squeeze a bit more exposure out of it.
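If you want that conversion spelled out: 4-perf 35mm runs 16 frames to the foot, so feet per minute maps to frames per second as below. A quick Python sketch of my own, covering the speeds that come up in this post:

```python
FRAMES_PER_FOOT = 16  # 4-perf 35mm film

def fpm_to_fps(feet_per_minute):
    """Convert film transport speed (feet/minute) to frames per second."""
    return feet_per_minute * FRAMES_PER_FOOT / 60.0

for fpm in (60, 80, 85, 90, 105, 120):
    print(f"{fpm:>3} ft/min = {fpm_to_fps(fpm):5.2f} fps")
# 60 ft/min = 16.00 fps, 80 = 21.33, 85 = 22.67,
# 90 = 24.00, 105 = 28.00, 120 = 32.00
```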

This is where it gets a little weird. Projectionists, who were also often hand cranking their projectors, had a habit of cranking faster. Faster meant faster turnaround in seating, which meant more $, and even better persistence of vision without annoying flicker. Sure, action was sped up, but the whole thing was new; no one seemed to complain. In fact, by 1925 the Society of Motion Picture Engineers (now known as SMPTE) had codified it, recommending 60 feet per minute (16fps) for camera speed and 80 feet per minute (21.3fps) for projection. It seems weird now to pick a different speed for display than for capture, but to review: faster cameras cost more money, faster projectors made money, and after all, producers are paying for everything.

Proposed standard cranking and projection speed, circa 1927, from SMPE

Anyway, someone decided it would be a great idea to add sound. How hard could it be? In fact, several companies tried to be the first to bring sound to the movies, hoping to capture the market. Funny thing is, they all insisted on capturing at the same frame rate they displayed at. If you didn’t, the pitch would be all wrong and everybody would sound silly. And forget about music. Some picked 80 feet per minute (the already established speed for projection), some picked 85 feet per minute, and some picked 90 feet per minute. First to get a working system was Warner Brothers’ Vitaphone. It was used in 1927’s “The Jazz Singer,” which was the first feature length film with sync dialog and is considered the official start of the “talkies.”


Western Electric’s Bell Telephone Laboratories (and their Vitaphone system) as well as other systems, with their taking and projection speeds listed (SMPE, 1927)

 

The Vitaphone engineers had picked 90 feet per minute, or 24fps, as their capture and projection speed. If one of the others had been first, we easily could be shooting 21.33fps or 22.67fps as a standard today. So sometimes you get lucky.
Except the Vitaphone system was terrible. It sounded good, but that’s all that could be said about it. The sound was recorded on 16″ disk records separate from the film. They could only be played 20-30 times before they were no good, and they could break, so you had to send lots of duplicate disks with each roll of film to the projectionist. A disk only covered one reel, so at every reel change you had to cue up another record. And synchronizing the needle with the head of the roll was a pain in the ass. And if you broke the film for some reason and spliced it back, everything past that point was out of sync. During recording, the camera had to be motor powered from the mains, and the disks had to be made in a recording booth adjacent to the set. In fact it was such a bad system that it was abandoned 5 years after it was implemented. And it only lasted that long because all the theaters that wanted sound had bought into that technology and had these crazy phonograph contraptions connected to their projectors, and weren’t eager to throw them away just after having bought them. Movietone, which used technology that put the audio as an optical track on the film, had many advantages, but it was a little late out of the gate. Because Vitaphone was first, the engineers of Movietone decided to match the Vitaphone frame rate.

“Originally we recorded at a film speed of 85 feet per minute. After Affiliation with the Western Electric Company, this was changed to 90 feet per minute in order to use the controlled motors already worked out and used in the Vitaphone system.  There are a large number of both Vitaphone and Movietone installations scheduled and in operation, and sufficient apparatus is involved to make it impractical to change the present practice of sound reproducing.  In connection with the Society’s standard, I have been unable to find any New York theater which is running film at 85 feet a minute; the present normal speed is 105 feet and on Sundays often 120 feet per minute is used in order to get in an extra show”

Earl I. Sponable, Technical Director, Fox-Case Corporation, New York City (“Some Technical Aspects of the Movietone,” S.M.P.E. #31, September 1927, page 458)

Soon enough Movietone lost ground as well, as technology changed, but all subsequent sound systems stuck with the now established 24fps. So blame a sound man. Or thank him. Your choice.

Vitaphone

One of the first sound men checking a Vitaphone recording with a microscope while recording. Sort of a human playback head. (page 308 from Transactions of S.M.P.E. August 1927)  It turns out this man is George Groves.

 

Postscript: Now of course, we often mean 23.976 fps when we say 24 fps.  This one we can’t blame on sound.  23.976 fps as a camera frame rate can be blamed on the introduction of color to standard-def television broadcasts in the 1950’s, and the death of film as a capture medium, and by extension the death of telecine as a post process.

When TV started, it did not match the 24 fps established by film.  This is because engineers wanted to use the 60Hz cycle of our 110v 60Hz household power to drive the frame rate.  60Hz meant 60 fields, or 30 interlaced frames per second, and was pretty easy to implement.  Once color came along in the 1950’s they wanted a standard that would be backwards compatible with black and white TVs.  Engineers could no longer use the 60Hz rate of the household electricity to drive frame rates and keep the color and luminance signals playing nice, so they settled on a very close rate of 59.94Hz.  This resulted in a frame rate of 29.97 fps, down from the previous 30 fps, something the black and white receivers would still work with.

Telecine: in order to get film onto TV you had to do a step called telecine.  The film was played back and captured essentially by a video camera.  Getting 24 fps to fit into 30 fps was done via a clever math solution called 3:2 pulldown.   There are two fields to a standard-def frame, and thus 60 fields per second. 3:2 pulldown would use one film frame to make three fields (1.5 frames) of video, the second film frame to make two fields (1 frame) of video, the third frame three fields again, and so on.  Doing this, 24 fps fits quite nicely into 30 fps broadcast.  And anything shot at 24 fps but shown on the 29.97 fps system would look like it had been shot at 23.976 fps, even though the camera had been running at 24 fps, as anything that ran through the telecine went through a 0.1% slowdown to conform to the 29.97fps broadcast standard.  Somewhere in the transition to High Definition, 23.976 became codified as a standard, not only for broadcast but as a capture speed. As cameras more and more were digital and not film, they would use 23.976 as the actual camera frame rate, rather than running at 24fps and expecting the 0.1% slowdown to happen in the transfer from film to video, as had happened in telecine rooms.  No telecine? No slowdown, which meant it had to be implemented in the actual camera speed.
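Here is that cadence and the 1000/1001 slowdown as a sketch (my own illustration of the math, not any particular telecine vendor's implementation):

```python
# NTSC color rates are the nominal rates slowed by exactly 1000/1001.
FIELD_RATE = 60 * 1000 / 1001   # 59.94... fields per second
FRAME_RATE = FIELD_RATE / 2     # 29.97... frames per second
FILM_RATE = 24 * 1000 / 1001    # 23.976... fps, "24" after the slowdown

def pulldown_cadence(n_film_frames):
    """3:2 pulldown: film frames alternately span 3 video fields, then 2."""
    return [3 if i % 2 == 0 else 2 for i in range(n_film_frames)]

cadence = pulldown_cadence(4)   # [3, 2, 3, 2]
assert sum(cadence) == 10       # 4 film frames -> 10 fields = 5 video frames
# Scaled up: 24 film frames -> 60 fields -> 30 video frames each second.
```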

So, hate 23.976 fps? Blame a sound man, color TV, the death of film, and the whole accidental way we pick our standards.

 

For those interested in reading more, I highly recommend reading online records of the Journal of Society of Motion Picture Engineers, made available by the Media History Project. http://mediahistoryproject.org/technical/

 

 

Why I hate UAV copters

Drones. UAVs. Octocopters. Call them what you want. They are the new disruptive technology in a lot of applications, but I am specifically going to talk about them as they apply to my industry, as a camera platform for dramatic, narrative, or commercial work. You can see the allure: they let you get shots that otherwise would be difficult or in some cases impossible via traditional methods. And camera movement is the best way to add production value to your shoot.
And I hate them. I hate them like I hate Steadicam. What’s that you say? Hate Steadicam? What kind of Luddite or backwards filmmaker are you? Let me explain. I do not hate the Steadicam device per se, and I completely agree that Steadicam allows for shots that could not be obtained any other way. I might even be convinced to use one some day. Here is what I hate about Steadicam: people act like it is the solution to everything and will make everything awesome. A Steadicam is not awesome sauce you get to spread all over your shoot. It has weaknesses, just like any camera platform. Let’s review. A Steadicam cannot provide a stable horizon on a static shot, especially after it has been moving, which is why you use Steadicam in the first place. There are operators who can mitigate this, but it is inherently difficult on this system, yet directors insist on designing shots completely blind to the weaknesses of the platform, and Steadicam operators struggle to make the shot work.
Another misconception is that Steadicam systems are fast. Tracks don’t need to be laid, it has the freedom of handheld, you can just go. The fact is often quite the opposite: Steadicam can cause the shoot to slow way down. First of all, the whole system requires that the rig be balanced. This means a lens change, the addition of a filter, or adding a timecode box all require time out to balance the rig. If you are dealing with a shoot with only one camera body, going from tripod to Steadicam can be a very involved process, and it ties up the camera during that process.
Once you have the camera balanced and on the rig, another thing to keep in mind is that a Steadicam rig with a camera on it is quite heavy. Between the camera, the post, the counterweight, wireless transmitters, arm and vest, it can tax the best operator. This means the operator needs to park it on a stand or docking station when not actually executing the shot. That makes blocking and lighting the shot a bit more difficult, as those are best done while the operator is wearing the sled, which you want to keep to a minimum to keep him or her fresh.
Also, as the camera has potentially 360 degrees of movement, lighting can be a challenge. Nowhere is safe, and lights need to either rig directly to the ceiling if possible, be hidden somehow, or travel with the camera. Again, all this can be done, but none of it is in the category of “fast.”
So, let’s review: I hate Steadicam because people think it is secret sauce to make their shoot better but are completely ignorant of its weaknesses. There is one other thing I don’t like about Steadicam, and it occurs even when people understand its weaknesses: the urge to do the “trick shot,” which is an exercise in “look what I can do” rather than filmmaking that drives the story. Sometimes you can do a trick shot and move the story at the same time, and people like me can enjoy both aspects of it. But showing off that you know how to use a tool doesn’t mean you have made a great story.
Flying camera platforms may not be Steadicams, but they might as well be. They do and will give you shots that otherwise would have been at the very least difficult, or possibly impossible before. And those shots have the potential to be amazing. And just like with Steadicams, people will misunderstand and assume that as long as you use a drone the shot will therefore automatically be amazing. Misunderstanding the tool you are using will result in wasted time, frustrated crew, and mediocre filmmaking, just as it always has. But with drones there are two new aspects. One is something that is happening with all gear in the industry, something optimistically called the “democratization of filmmaking,” which in practical terms means that good working gear can be purchased for prices closer to a car than a house. There is good and bad in this, but one side effect is that there are a lot more players in the market. Generally this shakes out so that those who have skill, or the potential to learn skills adeptly, end up on top, as it has always been. But without money being the gatekeeper it once was, the entry level of the market is crowded, like the beginning of a marathon.
Drones especially seem to fit this category. A few years ago the technology just wasn’t there to make a working drone at any cost. Now the parts and information are out there such that a professional rig can be built from parts ordered online at a very reasonable cost. In fact, turnkey solutions exist for under $800. Back in the 90’s a Steadicam probably cost upwards of $40,000-60,000, and that didn’t include the camera, just the platform. So a lot more people are getting into drones than were ever into Steadicam. Drones are so new that there are no “old hands” at it. Everyone is at the start of the marathon, and it’s crowded.
The other new aspect of drones as a camera platform is the safety issue. This is what really makes me dislike them. Back in the day, a careless Steadicam operator could possibly hurt themselves, damage their rig and the camera, and possibly the nearest person, be that an assistant or actor, although this was quite rare. I know of no stories of this happening directly, although I always think of the emergency ripcord on the Steadicam vests of guys I would assist for, which when pulled would cause the vest to split open and fall away, allowing the operator to shed the rig in seconds in case of a catastrophic event, like falling into a large body of water with 80lbs of gear strapped to them. Again, I never heard of anyone having to exercise that option, but it was there.
Drones, on the other hand, are often 20-40 pounds of flying danger, often with eight very high speed sharp rotors driven by high energy, high capacity lightweight batteries, all under wireless control, and often built from scratch by the operator. Some of them have fail safes, where if wireless control is lost they will return to the original launch site and descend. That’s great, but only if those automated systems are solid. Again, many of these things are being built from scratch, with the code written, or at least tweaked, by the builder. If the drone loses flight stability, be it from a large gust of wind, operator error, or hardware or software malfunction, you have a potentially lethal falling object that can kill you and others: by just plain blunt trauma from 20lbs falling on your head, by cutting you open with the eight high velocity Ginsu knives it uses to fly, or by burning you when one of the high capacity batteries ruptures and spews a jet of flames and energy. Look on YouTube and you will find several UAV/drone failures, often triggered by a gust of wind, and possibly complicated by navigational hazards like nearby buildings the drone can hit on its way down, so that its structural integrity is compromised well before it hits you. Now imagine that the price of entry is so low that people with only a passing interest get into it. Before you know it the sky is dark with flying lawn mowers driven by mediocre do-it-yourselfers who think they have the secret sauce to awesome filmmaking.

This is an evolving topic, and the good news is that there has been some attempt to regulate them in a way I approve of. Up until recently there was a big question mark over whether all kinds of drones were illegal, and where the FAA stood on it. It was like the Wild West. Before the rules got codified, it was an “anything goes” approach, which seems very dangerous to me.
Making them illegal seemed untenable. They were so cheap and offered such allure to so many people that enforcement seemed almost impossible. Also, if they were illegal, there would be no regulatory control over them. Just this month the FAA has been authorizing individual companies to be certified for flight, exempting them from normally required regulations as long as they fit a certain category of flight, including flying only over a “sterile” environment, i.e. the controlled set. Licenses, permits and special rules are the way to go. And prosecution of those who refuse to play by the rules. Individual drone operators need to apply for “certification” in order to be legal. This is because the technology is cheap, readily available, and dangerous.
Drone camera platforms need to be safe, legal, and somewhat rare. I don’t hate drones so much as I hate the idea of people flying homemade unregulated rigs over my head because that will somehow make the shot “cool.” By making them sensibly regulated, they will (in most cases) be operated by sensible, trained operators, and only when they are the appropriate tool for the job.

P.S. don’t get me started on Movis or other gimbal handheld systems.

How to Quiet a Noisy Dragon

My last post about my quick test with the new Dragon sensor had a bit of a surprise, with the Dragon footage looking noisy, especially compared to the previous non-Dragon MX chip.  RED suggested that a fix was on the way, and soon.  They hinted at around a week.  That was June 21st.  My rule of thumb for RED target dates for delivering a product is: take the stated time period, double it, and add two months.  So in this case that would mean somewhere around the first week of September. Well, RED beat the odds, while still completely missing their target of a week, and on August 6, 2014, a mere 7 weeks later, they released the fix.   The fix is a different way to debayer the RAW footage, selectable in their new beta release of REDCINE. The feature is called “DEB” or “Dragon Enhanced Blacks,” although it could easily be called the “Anti Red Speckle Filter,” as it gets rid of the red noise. It is a checkbox you select right below the Gamma settings in REDCINE.

DEB checkbox just below the Gamma settings in REDCINE.  Here it is not selected, the current default.

The good news is it is retroactive and can be applied to footage you have already shot.  This is great for my purposes, as I don’t have to re-do the test.  Here are some quick screen grabs.

Same shot from the prior test, with DEB applied and not.

RED Epic MX on left; lower right is Dragon with DEB applied.

So it definitely improves things, though further testing is warranted. I hope to test it against a different camera, like an Arri Alexa or Amira, in the future.  RED is talking about making user swappable OPLFs, but there is no time frame on that yet.  Once they do put a date on it, don’t forget to double it and add two months!

Epic MX vs Epic Dragon.

I just recently got my RED Epic back after RED installed the new “Dragon” chip.  I borrowed an Epic that still had the MX chip and shot side by side tests to see how much better the Dragon chip actually was.  I found a few surprises.

First off, I updated both cameras to the current release build 5.1.51 and black shaded both of them after the cameras had reached operating temp (adaptive at 65°C).

The weather was a bit unsettled (as it always seems to be when I have time to do these tests), so I decided to put a 35mm RED Pro Prime on the Epic MX and a 50mm RPP on the Dragon, and move the cameras a bit to mimic the same frame size in the midground, so I could roll both at the same time and the exposure would be identical.  I set them for the same stop, which if I recall was something like f/8 1/3.  According to the false color overlay, the Dragon had more info in the highlights before clipping.  In fact a small patch of white sky which clipped on the MX was apparently not clipping on the Dragon. I shot myself in my garage workspace, which has diffused top light and a nice window for testing overexposure.  If I shoot in the afternoon there is a piece of an apartment building across the street that gets hit with sun, providing an excellent detail/overexposure test element.  People have complained in the past about RED’s rendition of skin tones.  I think this is wildly overblown.  I have never had an issue with it, at least under daylight color temps, which is how I shoot the majority of my stuff.  The Dragon is supposed to be much better.  I did not notice much of a difference, nor did I see any problems with the old one.  One note is that with the new chip come some new settings in REDCINE, the app that lets you “develop” the RAW footage from the camera.  Before Dragon there was “REDcolor 3” for color rendition and “REDgamma 3” for gamma.  With Dragon there now are “DragonColor” and “REDgamma 4.”  While I think you should use DragonColor for rendition off a Dragon chip, don’t be fooled into thinking REDgamma 4 is automatically better.  It tends to be a bit crunchier than REDgamma 3, which under normal circumstances helps make a punchier image, but when testing over/under exposure range it may fool you into thinking the Dragon chip has even less range than the regular Epic.  Of course, professional graders would probably just use REDlogfilm (which is flat and holds onto the most range) and make their own curve based on the scene, but in this case I wanted to do a relatively unbiased side by side test.

Screenshot from REDCINE.

Below are some interesting screen grabs:

COLOR RENDITION:

Side by side: DragonColor vs REDcolor3.

Dragon on the right, Epic-MX on left. Dragon set to DragonColor, Epic MX set to REDcolor3.  More detail in the Dragon, and slightly less red tint to the skin.

HIGHLIGHTS:

Now here is a detail of the highlights: Dragon on the right, Epic on the left, both set to REDgamma3. Slightly more highlight detail on the Dragon chip, as expected.

Don’t be fooled into using REDgamma4 just because it is newer. Here is the same footage, but with the Dragon chip rendered at REDgamma4.  It is very hard to tell which chip has the advantage in this scenario: the brick looks about the same, maybe a slight advantage to the Dragon, but the leaves look hotter on the Dragon.

Now here is the same thing in REDlogfilm. You will see the Epic MX chip shows magenta where the whites are clipping.  Not so on the Dragon, as you might expect.

Epic-MX vs Dragon in REDlogfilm

BIG SURPRISE: NOISE LEVEL:

This is Epic MX on left and Dragon on the right, both at 5600K, 800 ISO, 8:1 compression. Dragon at 6KHD and Epic-MX at 5KHD.

 

This was the big surprise. The Dragon Epic was sharper, as was to be expected, since its max resolution was higher than the MX’s (6KHD in this case vs 5KHD), but there was more noise and pattern in the Dragon footage.  This was alarming, as a year ago the Dragon chip had been advertised as being as quiet at 2000 ISO as the MX chip was at 800 ISO. What the hell? I paid over 8 grand to get MORE noise?

I went over to REDUSER.net and found someone else had done essentially the same test as I had and got the same results. If you are into flogging yourself, here is the link:

http://www.reduser.net/forum/showthread.php?117701-DRAGON-vs-EPIC-MX-Noise-NOT-GOOD

CONCLUSION:

Anyway, the long and short of it was this: it is the new OPLF, along with black shading, that is to blame.  When Dragons first started shipping to anybody (maybe December 2013?) they had what I will call version 1 of the OPLF (Optical Low Pass Filter), which is essentially a piece of custom glass/filter in front of the sensor proper to improve performance, reject IR and limit moire.  All cameras have some kind of OPLF.  After logging some hours with it in the hands of users, the following characteristics were found.  It did in fact seem pretty clean at 2000 ISO.  But there were weird magenta flares, which were very visible in low light situations.  A new OPLF was designed that got rid of the magenta flare, improved highlight performance by at least a stop (I have no idea how they did that, possibly in conjunction with a new black shading algorithm?) and had even better IR performance.  Downside? Now 2000 ISO is noisy.  And the whole thing blew up on REDUSER once everyone was getting new Dragons in numbers with the new OPLF.

The good news is that RED will, if you wish, put the old OPLF back in your camera.  And they claim to have a firmware build in the works (don’t they always) that will tweak the black shading implementation to address this problem, and that it will come out soon, possibly in a week.  But knowing RED, that could mean a month.  They claim that the new OPLF brings so much to the table that they think the future is in the new OPLF with software tweaks. There has even been talk of user replaceable OPLFs in the future. I can confirm that the IR contamination on this new arrangement is quite good.  I had a shoot with an ND 1.2 and a pola (6 stops of ND essentially) and a black dress Marine uniform in full sun, and the uniform stayed black.  Unfortunately, I had shot it at 2000 ISO because I had believed that 2000 was the new 800, and that shooting at a higher ISO protected your highlights more. This had been true of the RED One, but is no longer true of the RED Dragon.

Also, I rendered both the Dragon and the MX footage to 1080 ProRes files.  The noise difference was indiscernible, which was a pleasant surprise.  I did not use any special noise reducing options on either.  So for the moment I am content to wait for the new firmware and black shading, even if it makes all the testing above obsolete.  But as always: test, test, test.  Sometimes you find a surprise.

 

ProRes Camera Test: BMPCC, F3, F5, and RED Epic MX

Recently I decided to test some of the cameras I often use and/or have access to.  This included the Blackmagic Pocket Cinema Camera, a Sony F3, a Sony F5, and a RED Epic MX.  Obviously these cameras record natively in different codecs and resolutions, but I decided to even the playing field somewhat by using an external recorder (a Convergent Design Odyssey 7Q) so that everything was recorded in ProRes HQ, except for the Pocket Cinema Camera, which records ProRes HQ natively and also only outputs via HDMI (Highly Dodgy Media Interface), which I hate and prefer not to use. In all the tests I used the same lens, a RED 18-50mm short zoom, which was PL mount so it fit on all the cameras, which all had PL mounts.  Only for the Pocket Cinema Camera did that significantly change the field of view, as it has more of a Super 16mm sensor size compared to the 35mm sensor size of the others, which I compensated for by zooming out to approximate the same field of view.  The RED Epic of course can natively record up to 5K, but in this case I was just recording the 1080 output from a 4KHD recording, as 4KHD most closely matches the 35mm format size.  Also, I did not record LOG on the F3 or F5, or output REDlog on the Epic.  I did do both “Film” and “Video” modes on the Pocket Cinema Camera, but I did not run them through Resolve and color correct them as I might if I were actually using the camera.  I wanted to look at the footage without doing any alterations other than what I might do in camera.  For the F3 and F5, which have scene files that affect the look of the camera, I arbitrarily picked Abel Cine’s JR45Cine scene file for both, anticipating that those would be the scene files I would use were I to shoot with these cameras.  For the Epic, I usually shoot with saturation set to 1.2 rather than the default 1.0, and that is what I did here. White balance was preset 5600 for all cameras. It was a somewhat unsettled day weather-wise, so take any ability to hold detail outside the window with a grain of salt, as these shots were not all done simultaneously, since I was using one lens for all the cameras and had only two tripods in play.

Click on any below for full frame.

No surprises here.  The Epic and F5 look the best.  The F5 skin tones look a bit too red, but the whites look a little truer, I think. The Blackmagic Pocket Cinema Camera looks like it needs to be graded some whether you shoot Video or Film mode, which makes me think that if I am going to have to grade it anyway to make it look its best, I would shoot in “Film” mode in the future and take it through Resolve.

Below are enlargements of the focus target. Again, no surprises.  The Sony F5 and RED Epic are the best for resolving power and lack of moire.  I was slightly surprised at the F3 moire, and not surprised but slightly disappointed by the Blackmagic Pocket Cinema Camera’s sharpness performance. Again, this is off a screen grab from the ProRes HQ 1080 recording.

If I were to pursue this test further, I would explore scene files on the F3 and F5 to see if I could get truer skin tones out of them while maintaining their range.  On the RED I might play with white balance a little, maybe a little lower, to get rid of some of the tones that seem a bit too warm.  The next test I had planned, though, was to test my RED Epic after it had its sensor upgraded to the “Dragon” sensor.

Christmas cards and vintage stereo cameras

My 2013 Christmas card photo. Anaglyph glasses required

My current obsession with vintage stereo cameras really caught fire because of my 2013 Christmas card. I had been interested in stereo cameras, dabbling here and there over the years, and I also like to tinker and do things with my hands. I had done a staged, lit Christmas card in 2012, but it was a very stressful experience, what with nice clothes, herding cats, and my then-pregnant wife and kid. Everyone was grumpy for hours afterward. I vowed 2013 would be different: something like an impromptu snap of the family. But how could I still make it special? Due to my interest in old stereo cameras, I had two Russian "Sputnik" medium format stereo cameras, one working and one not, kept mainly for parts. But I hadn't used them in a few years. I also had some expired black and white medium format film in a drawer. Snow was forecast later that week, which is unusual in mid-December in the DC area. I hatched a plan: I would take a picture of us in stereo outside the house, either during the snowfall or shortly after.

I planned to make the Christmas card an anaglyph stereo print, which means the print would require viewing with "those funny glasses." There are several ways to view a stereo image, but they all boil down to one thing: sending the left image to the left eye and the right image to the right eye, while blocking each image from the non-corresponding eye. Anaglyph is the cheapest way to do it, as I could get paper glasses for 20¢ each and include them with the picture in the envelope. The anaglyph system uses color to control which image reaches which eye. The most common scheme today is red/cyan anaglyph: a red filter over your left eye and cyan (the opposite of red) over the right. Take the left image, destined for the left eye, and tint it so there is no cyan in it, which makes it appear red. The red filter does nothing to the red image, while the cyan filter on the right eye only passes cyan, so the left image appears as a black frame to the right eye. Do the opposite to the right image, and you have an anaglyph print! It will look like a poorly registered color print without glasses, but 3D with them.

Anaglyph is well suited to black and white images, since it uses color to encode the stereo information. You can do it with color images, but color reproduction may suffer a little. I mainly used black and white because a) it is better for anaglyph and b) it was what I had on hand. With film losing ground as a format, I suspect medium format stock is only available via mail order now, and this whole idea hit me inside of a standard weather forecast, so ordering fresh stock seemed unlikely to work out.
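
That channel recipe is simple enough to script. Below is a minimal sketch of the idea in Python with Pillow and NumPy. It is not what I ended up using for the card (that was Photoshop Elements, as you'll see), and the filenames and shift value are placeholders, but it shows how mechanical the red/cyan encoding really is once you have a scanned, aligned grayscale pair.

```python
import numpy as np
from PIL import Image

def make_anaglyph(left_path, right_path, shift_px=0):
    """Build a red/cyan anaglyph from a vertically aligned stereo pair.

    The left image feeds only the red channel; the right image feeds
    green and blue (i.e. cyan). shift_px slides the right image
    horizontally, which moves the whole scene relative to the frame,
    the "zero plane" of the 3D space.
    """
    left = np.asarray(Image.open(left_path).convert("L"))
    right = np.asarray(Image.open(right_path).convert("L"))

    # Horizontal offset sets where subjects sit in depth.
    right = np.roll(right, shift_px, axis=1)

    rgb = np.dstack([left, right, right])  # R from left, G+B from right
    return Image.fromarray(rgb, mode="RGB")

# Hypothetical filenames; nudge shift_px until the foreground pops.
make_anaglyph("scan_left.png", "scan_right.png", shift_px=12).save("card.png")
```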

I had never made an anaglyph before, so there was a reasonable possibility this would not work out. But the deadline and a goal became an incentive to teach myself how to do it. And if it didn't work out, I would always have one half of the stereo pair, which could serve as a "normal" picture for the Christmas card. I had the camera and film, but I would need more elements. I found a source for cheap glasses and ordered them. I investigated getting Photoshop Elements, which, shockingly, I didn't already own. I decided not to buy it until I got a good "negative report" from my shoot, i.e. a picture actually worth printing, rather than one ruined by poor content or a malfunction of the camera or its operator (me!).
The day came, the snow arrived on schedule, I loaded the film, got the camera on a tripod, and waited for the best moment. We went shopping for a Christmas tree up the street. We struck out on getting a tree, but we did get a wreath. When we got home, since everyone was already dressed for outside and it was still snowing, I got everyone to stand in the front yard, trying to compose a shot that was both well framed and made use of the 3D space. This meant my daughter, who is shorter than my wife, became the foreground element; my wife, the midground; and I, with my son on my shoulders, would be the background.

There were two wrinkles to this. Although I had 400 ASA Tri-X black and white film, it was pretty overcast due to the snow, and I ended up shooting at around f/5.6, which meant we couldn't hold focus over the whole depth of the scene. I picked a point a little past my daughter as the focus point and hoped for the best. The second problem to overcome was the twitchiness of the Sputnik camera I was using. The viewing system takes some getting used to, the film advance is about as analog as you can get, and worst of all, the shutter trigger and timer were very hit or miss. First you have to cock the shutter. Then pull the timer lever down. Then push the shutter release. Doesn't sound too bad. Except the shutter timer doesn't wait until you hit the shutter release to start running, a common problem in Sputnik cameras. So you pull the timer down and let go, and it immediately starts running. If you don't cock the shutter before you start the timer AND trip the shutter release after you have started the timer, it won't take a picture. If you hit the shutter release in the wrong order, you take a picture without benefit of the timer. The only good thing is you can always reset the timer after you have triggered the shutter, as the shutter won't fire until the timer runs out. So: cock shutter, start timer, trigger shutter, pull timer back to max, then run as fast as you can in wet snow with your 11-month-old on your shoulders to be in the rearmost part of the frame.

Fortunately, all that went wrong was early firing of the shutter. The roll had 12 exposures, which meant 6 stereo pairs. Two were ruined by early triggering of the shutter, two were screwed up by me misunderstanding the manual advance, one was OK, and one was good. A word on the advance: it is completely manual, to the point that the only way you know you have advanced properly is a little window on the back of the camera that lets you see the paper backing of the film, which shows a number when you are in the right position. If you overshoot, oh well, you just have to advance to the next number. Also, since you are shooting stereo, you have to advance by two numbers, at least when you don't overshoot your target. So if your first exposure is "1," you have to advance to "3" to be ready for another stereo pair.

This is what happens when the shutter timer fires prematurely.

I scanned the negatives, brought the shots into Photoshop Elements, aligned them, and applied the red/cyan anaglyph tints to the images. When aligning them you want the verticals to align perfectly, but the horizontal offset determines where in the 3D space the images lie. I put my daughter slightly forward of the frame, so she appeared to "pop" out slightly, and the rest of us fell deeper into the frame. It turned out quite well, overall. Based on this experience, and others, my tips for medium format stereo photography are as follows:

I love medium format: the negative is 4-5 times bigger than comparable 35mm (purpose-built 35mm stereo cameras use a non-standard 35mm frame that is a bit smaller than the standard one). This means any dirt on the negative when you scan it can be fixed relatively easily, and is minor compared to 35mm. Also, even with old Russian triplet lenses, the image has a lot of detail, and detail is important for good stereo photography. The downside to medium format is that for any given field of view you end up using a longer focal length lens. The Sputniks use a 75mm lens (OK, a pair of them!), which in 35mm format would be telephoto. Not so in medium format. The side effect is a shallower depth of field. Normally that is a desired thing for portrait photography, but in stereo photography you want as much in focus as possible. So ideally you want to be shooting more like f/11-f/22 if you can do it. But you also don't want a lot of noise, so you don't want too fast a film stock. So this generally means outdoor photography, or being willing to deal with some out-of-focus elements.
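
To put numbers on why those deep stops matter, the hyperfocal distance tells the story. A quick sketch; the 0.05mm circle of confusion is my assumption for 6x6 medium format, so treat the output as ballpark.

```python
def hyperfocal_m(focal_mm, f_stop, coc_mm=0.05):
    """Hyperfocal distance in meters: focus here and everything from
    half this distance to infinity is acceptably sharp.

    coc_mm is the circle of confusion; 0.05 mm is an assumed value
    for 6x6 medium format.
    """
    return (focal_mm ** 2 / (f_stop * coc_mm) + focal_mm) / 1000

for stop in (5.6, 11, 22):
    h = hyperfocal_m(75, stop)  # the Sputnik's 75mm lenses
    print(f"f/{stop}: focus at {h:.1f} m, sharp from {h / 2:.1f} m to infinity")
```

At f/5.6 nothing closer than about 10 meters can be truly sharp; at f/22 everything from about 2.6 meters out holds, which is the kind of range a family in the front yard actually occupies.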

Also, here are some tips for framing. These Russian lenses are not very good at the edges, so don't put anything too important there; generally put your most important element toward the center. This is doubly true for stereo photography. For reasons I won't get into too much here, anything you want to pop forward of the frame should not break the edges of the frame. There is a little flexibility with the bottom edge, but on the top and sides, if anything breaks the edge it is going to have to play deeper than what I will call the zero plane, which is where the frame sits in the 3D space. This is another reason to put anything in the foreground at or near the center.

Also, contrary to what you might think, you don't necessarily want to use your entire potential depth. Just because you are outside doesn't mean the viewer should see everything out to infinity. This is especially true if you want a fairly close foreground element. Too much range in depth can put background elements so far out of alignment that the eyes have trouble fusing them back into a 3D image; you get ghosting and loss of the 3D effect. In this picture the house and plants define the background at maybe 30-40 feet maximum, which is good, since my daughter is maybe 7 feet from the camera.
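
You can even put a rough number on that depth budget by comparing the on-film parallax of your nearest and farthest elements. Another sketch with assumed values: the 65mm lens separation is my estimate for a Sputnik-style camera, and the distances echo the shot described above.

```python
def parallax_spread_mm(baseline_mm, focal_mm, near_mm, far_mm):
    """On-film parallax difference between nearest and farthest subjects.

    A subject at distance d lands about baseline * focal / d apart in
    the two frames; the depth budget is the spread between the nearest
    and farthest disparities.
    """
    return baseline_mm * focal_mm / near_mm - baseline_mm * focal_mm / far_mm

# Daughter ~7 ft (2.1 m), house ~35 ft (10.7 m), assumed 65 mm baseline.
print(f"{parallax_spread_mm(65, 75, 2100, 10700):.2f} mm")  # ~1.87 mm
# Letting the background run to infinity instead (far disparity -> 0)
# pushes the spread to ~2.32 mm, noticeably harder for the eyes to fuse.
```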

All in all, it came out quite well. I got my daughter to pop out a bit, the edge violation of her legs is not bad enough to break the 3D effect, I staged my wife, me, and my son on different planes of depth, and I had the background defined by the house, limiting my total depth budget. I used as much of the frame as possible, although if I had to do it over again I would not have framed in the gate. It is more forward than my daughter and is on the edge, so there is a bit of edge violation and retinal rivalry there. Some of that can be fixed with a trick called a "floating window," but I didn't have the time or the inclination to learn how to create one in time for the card. If you have glasses, see for yourself whether the gate gives you problems; your brain doesn't know where to put it in the scene. Only my daughter is in sharp focus, which is technically not so desirable in 3D, but it didn't bother me as much as I thought it might. Also, I had no choice in the matter, as I was limited by my film stock and ambient light as to what f-stop to shoot at. All in all, not a bad "snap" for a family photo, and my first serious attempt at actually executing a staged stereo photo.


Using a Hole Saw on My Apple Airport Time Capsule

Base of my Airport Time Capsule and me freehand drilling a hole in it.

A few weeks ago my Apple Airport Time Capsule up and died. The little green light went out and it just stopped working. For anyone who doesn't know, an Airport Time Capsule is a combination wi-fi router and wireless backup device. So when it went dead, I lost wi-fi in the house. Anyway, I was tempted to buy a new one at $300 a pop, but then I did a little research. It turns out these things have a bad track record of going belly up, and the culprit generally is the power supply cooking itself to death, mainly because these devices don't have adequate cooling. The components get overheated, and over time you eventually have a failure. So I rolled the dice and spent $15 on a "repair kit" on eBay to fix it. We had a secondary wireless router that I was able to press into service in the meantime, so it wasn't a crisis at home.

A few days later the repair kit came in the mail, consisting of $5 in capacitors and instructions on how to effect the repair yourself. I could have researched the components myself, but this seemed a fair price for the intellectual property of how to do it. I had failed to realize when I bought the kit how deep into the guts I had to get to fix it. This was no plug-and-play repair; I had to open up the power supply and solder new capacitors to its motherboard. No matter. I was game.

Getting My Tools Ready

Sure enough, after following all the directions (do not try this without following someone's directions; at the very least let it sit unplugged for several days, or manually discharge the capacitors, so you don't kill yourself), I found the offending capacitors. There were four in the replacement kit, but since my soldering skills are lackluster, I opted to replace just the ones that were clearly bulging and failed, which in this case meant the pair at the bottom of the board. I now recommend that anyone doing this just replace all four capacitors. The other two are easy to replace, and even if they haven't failed yet, I guarantee they are out of spec. But more on that later.

Motherboard of Power Supply. My finger pointing to the pair of blown capacitors


After some cursing I managed to get the old ones removed and the new ones installed. I carefully put the whole thing back together, went upstairs, plugged it in along with all the Ethernet cables, and after a minute the light went from amber to green and, lo and behold, the system was back up and running.

Also, now that I had the system cracked open, it was a simple matter to pull the existing 1TB drive and replace it with a 2TB drive. Literally unplug two cables and peel off a heat sensor attached with adhesive. The drive itself is not secured in any way; it just has nowhere to go, and I guess Apple assumes these things aren't going to be moved around a lot while running. There was a Western Digital Black drive in there, and I went with a new WD drive as well. You have three choices: a WD "Green" drive, which uses less power; a "Blue" drive, which is for general computing; or a "Black" drive, which is more "enterprise" class and will stand up to heavy use. It makes sense Apple would put a Black version in there. The thing is on 24/7, as it is also your wireless router, and the drive is not user serviceable. Or at least they discourage it. My feeling is that as this thing is really only a backup and not a NAS, I don't care too much if the drive goes bad in two years or so; by then drives will be even cheaper, and I now know how to get in, so replacing it is a minor inconvenience. So I went with Blue as a replacement. A very reasonable choice would have been Green: since the device has a known problem with heat dissipation, a drive that uses less power also generates less heat. And any performance loss from a Green drive not spinning as fast is going to be invisible due to the bottleneck of wi-fi anyway. I didn't put in a Green because it cost $10 more and I am a cheapskate.
So now I have a new drive in the Time Capsule and a functioning power supply. Time to put it all back together. I plug everything back in and then screw the aluminum bottom back on. Now, I didn't explain how I opened the Time Capsule, so I will review the first steps to get in. You get in from the bottom. The big featureless rubbery bottom that keeps it from sliding on whatever it sits on is glued directly to an aluminum base. With a good hair dryer or heat gun you can peel back (slowly!) the whole rubbery foot, revealing a cheese plate aluminum base with a bunch of small screws recessed in it that hold it to the rest of the unit.

So, here I am putting this thing together, thinking that all I did was fix the symptoms, not the cause. What is to prevent this thing from overheating in a year or two and me going back in to re-solder new components? I really don't want to put this suffocating rubber mat back over this nice thin aluminum heat sink with a bunch of ventilation holes in it. And since heat rises, I decide to just put the aluminum base on, call it the top, and run the Time Capsule upside down so the aluminum can dissipate heat off the top of the unit. A bit ugly, but hey, it works.

Can you say "heat sink?" Upside down and no rubber on the base.

Apparently the rubber base is "thermal rubber" or something, and unlike regular rubber, which is an insulator, this rubber does conduct heat. Or at least that is Apple's claim. Even so, it can't be as good as aluminum with a bunch of holes in it.

Further Mods:
After feeling pretty smug, I do a little research on other people's solutions for cooling. Well, it turns out the Time Capsule is "double insulated," which is why the plug doesn't need a grounding third pin. But one of the rules of double insulation is that there can be no exposed metal the user might touch. That way, even if there is an electrical fault (if, say, I don't know, some guy decides to do a homemade repair on the power supply), nobody is going to get juiced touching the outside. At this point I have visions of one of my cats stepping on the damn thing and getting full current: at best, a dead cat; at worst, the house burnt down. So I decide it is time to revisit my cooling technique.
After doing some research on other blogs, I find that several people have done one of two things: remove the power supply entirely and substitute a third-party external one, which improves cooling by quite a lot, or modify the fan. I opted for the second choice.

If you are thinking of modding your Time Capsule, I strongly suggest you read these websites from guys who have done it before, and from whom I learned a lot in prepping mine. Both of them either sell kits or will do the repair for you.
http://www.fackrell.me.uk/
https://sites.google.com/site/lapastenague/time-capsule-power-supply-repair-kits

First, let's talk about the stock fan placement. It butts up directly against the hard drive. It appears to suck air up and then blow it out one side, directly onto one portion of the hard drive. I am no engineer, but wouldn't you want the fan to blow on the hottest component? Which in this case is the power supply, hands down. And unlike the hard drive, which is fairly sealed, the power supply is insulated on four sides with the two ends open, sort of like an open-ended burrito, so you could easily blow air through it. You just need to rotate the fan 90 degrees to get it to blow in the right direction. But I guess it doesn't really matter, because the fan doesn't even come on unless there is a near meltdown in the device. One possibility I didn't fully explore is moving the temp sensor that is on the drive to the power supply, but that is mainly because I decided to follow the advice of other hackers who disable the motherboard's control of the fan entirely and just make it spin at a low level all the time. Again, more on that in a little bit. But first, the fan mod.
So, the plan is to rotate the fan 90 degrees. It turns out the best way to do this is to remove it from the aluminum bottom cheese plate, flip it over, and THEN rotate it so the exhaust points at the power supply. This gets a little involved. First, let's talk about the stock ventilation on the Time Capsule.

New Fan Placement: Notice the exhaust now facing to the right (again, where the foam is) and that the fan has been inverted, with the text now facing us.

In standard Apple procedure, there appears to be no ventilation whatsoever. This is not the case. The thin groove along the upper part of the side hides the upper ventilation ports. And if you look very carefully along the edge where the rubber foot sits, there are some holes in the aluminum base that are open to the air. So there IS ventilation, just very minimal, and mostly passive.

The ventilation of the Airport. The holes along the side are hidden, but not plugged, by the rubber base (currently removed). The exhaust ports are hidden in the seam around the device, where the black screwdriver is pointing. Apple took great pains to hide any visible cooling elements.


The fan only comes on in emergencies, and it has no direct access to any of the vents. It just cycles air around, and I guess they hope convection moves hot air out the top and pulls cooler air in from the bottom. So, mostly passive. But since it is Apple, I would call it more "passive-aggressive" cooling.

Since the Time Capsule has failed once due to its heat load, it's time to put the equivalent of a hood scoop on this thing and get that air moving. Flip the fan over, point its exhaust at the power supply, cover one of the intakes of the fan with a paper "plug" to force it to draw from only one side, and cut a hole in the chassis of the Time Capsule so that the fan has access to the outside world. Plus the mod to make it spin all the time. All righty then.
The fan is held to the aluminum cheese plate by rubber insulator/suspension "feet." Some people have suggested removing them, flipping them, trimming 5mm off them, and crazy-gluing them back together to make everything fit properly, as the other side of the fan is a different thickness. Way too fiddly for my taste. I just flipped the fan, left the rubber feet on (they now contact a circuit board and keep the fan off it), and used silicone (glue, insulator, and vibration absorber all rolled into one) to attach it to the aluminum cheese plate. This is so much easier. All I have to do is cut a hole in the chassis. I mark the center of the three points where the rubber feet go through the cheese plate, find a suitable hole saw (I was determined not to buy one for this job, so the hole was going to be "best available" size), and start drilling.

I want a matching hole through the rubber base, but I don't want a bunch of metal shavings stuck to the remaining adhesive on it, so I put a layer of wax paper between the aluminum and the rubber. I make a nice neat round hole and file the edges smooth. The hole through the rubber is pretty smooth too, and cleans up nicely. And the wax paper did a great job of keeping the metal shavings off the glue.

Is this a bad idea?

I am feeling pretty proud of myself and my neat round hole. I then dry fit the pieces together with the fan in the new position. Did I mention I didn't measure twice, cut once? At this point I decide that perhaps oval would be an excellent shape after all, and drill an additional hole into the side of the existing one so that the fan intake will actually line up with this new port I am making. Despite that setback, I am back on track.

Oval hole

Er, oval is just as good as round, right? At least the fan has clear access to the outside world now.



Dry fitting paper over the axial fan, and below, taping it into place.


Again, since I want the fan to draw only from the port I have just cut, I have to cover the other side of the axial fan. The easiest way to do this would be with a bit of clear packing tape, but that would leave sticky bits facing inward that would eventually gather dust and disable the fan. So the best solution is to cut a template out of paper and tape that to the side of the fan that will now face into the device.

So far I have the hole for the fan in the chassis and the fan's intake adjusted. Now I want to mount the fan to the aluminum base and mod its power control.


The fan has a four-wire harness. Wires 2 and 4 are the ones that let the motherboard tell the fan when to turn on and at what speed. If you only cut wire 2, the fan will run at 100% all the time, which will cool things quite well, but it will be too loud. So you also cut the other control wire and put a resistor in line to slow the fan down. Different models of Time Capsule seem to have different requirements, but a 33 ohm resistor works fine for me. Strip the wires, solder the resistor in, and use shrink wrap (or, in my case, electrical tape) to insulate any exposed metal. Then plug the harness back in and place the fan in its new orientation in the Time Capsule.
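
For a rough feel for what that resistor does, Ohm's law gets you close enough. The numbers below are assumptions on my part (I never measured the fan's actual draw), so treat this as a sanity check rather than a spec.

```python
# Rough effect of a series resistor on a small brushless fan.
# Assumed values; measure your own fan before trusting these.
V_SUPPLY = 12.0   # volts, typical for a fan like this (assumed)
I_FAN = 0.1       # amps drawn near full speed (assumed)
R_SERIES = 33.0   # ohms, the resistor used in the mod

v_drop = I_FAN * R_SERIES   # voltage lost across the resistor
v_fan = V_SUPPLY - v_drop   # what the fan actually sees
print(f"resistor drops ~{v_drop:.1f} V, fan runs on ~{v_fan:.1f} V")
# ~3.3 V dropped, so the fan sees ~8.7 V and spins (and sounds) slower.
# In practice the current falls as the fan slows, so the real drop is smaller.
```
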
One last step before putting it all together. For style points, you don't want the fan guts directly accessible through the new hole you cut, even if it is going to be on the underside of the unit. I took a piece of screen from an old window screen, cut it to size, ran a bead of caulk around the inside of the opening, and gently pressed it in. I then ran a fatter bead of caulk around the fan chassis where I thought it would come in contact with the aluminum base, and just a bit more on top of the screen edges.


Silicone in place, ready to be assembled.

Then press the aluminum base on and screw it down. I left the unit off for 12 hours so the silicone could set up and dry. This is also a good time to buy some self-adhesive rubber feet and put them on the base. That way the unit will sit a little higher off the ground, giving the fan intake easier access to airflow.

I did several thermal tests during different stages of my reconstruction. With the power supply back up and running and the unit right side up, I got readings close to 120 degrees F on the top above the power supply, and more than 20 degrees lower above the hard drive side.


Temp reading off the power supply side of the unit.


I did do a test with the fan running at 100% before I cut the speed control wire, and I can confirm it is too loud. With the resistor in place the fan is much quieter. If the windows are open to normal summer bug and bird sounds, it is inaudible. If the windows are shut, you can hear it faintly. So if you wish for an even quieter fan, you could explore different resistors.

And one last note. You know how I recommended you replace all the capacitors while you have the thing open? While mine was back up and running, it still had a problem where it would run for a week to ten days and then power off. Cycling the power plug rebooted it, but ten days later, same thing. I figured it was the two remaining capacitors. So I had to crack it open again, open up the power supply again, discharge the high-power capacitors, swap in the remaining parts I had, and reassemble. It has worked like a champ since then, although you can now hear a faint whirr when the room is quiet. I can live with that.

My Love/Hate Relationship with Apple and Straying Outside the Walled Garden

The Unrest

A few months ago I was in the market for a small tablet. I was feeling a little claustrophobic in the Apple world, as my laptop, my phone, my wireless router, etc. were all Apple products. I had heard some good things about the new Android operating system. The family has a wi-fi-only iPad 2 that mostly lives in the house. I was annoyed that Apple doesn't put a GPS in the wi-fi-only iPads, which meant that when we took the iPad out on a road trip and tethered it to my phone, the maps program didn't work worth a damn. Really, Apple? Would that have been so hard? Android wi-fi-only tablets do that and come in cheaper. One that rose to the surface was the Google Nexus 7. Hell, if you are going to jump ship from Apple, Google seemed a good bet. Now, they don't make the tablet, Asus does, but Google writes the code for the software on the Nexus 7. OK, looks good. Better resolution than the iPad, a narrower profile (almost giant-iPhone-like), and more open source. That was also a big draw. I have felt that tablets are artificially handicapped. Why can't I use them as a phone? My over-40 eyes would love a giant iPhone. And I have no shame; I would absolutely hold a tablet to my head to make a phone call. Why not? It's not like talking to no one and waving my arms like a crazy person, as people with Bluetooth earbuds do, looks any better.

Fine. I did the research and settled on a Nexus 7 with both wi-fi and cell service. I ordered it, got it set up with AT&T, and settled in, buying all the apps that had sister apps over in the Apple world. I was able to download apps so I could make free phone calls to and from my Nexus using Google Voice. Very cool. And of course I had Google Maps, not the travesty that Apple Maps has made of itself. (I once had Apple Maps insist I drive into Boston Bay to get to the waste treatment facility, which, while on the coast, is attached to the mainland.) Great. Maybe I had the new "One Device": it could be my phone, my GPS, my calendar, etc., and it was big enough to use and just small enough that I could, depending on what I was wearing, get it into a pocket. Now all I had to do was learn the operating system.

The Bumpy Ride

Android OS is… interesting. There would be a few cool new things, then some inexplicable dumbness. I was able to get apps that synced all the calendars on my iPhone and MacBook Pro, as well as my contacts. Not too hard. So far so good. The Nexus being a Google product, it tries really hard to shove Gmail down your throat. OK, fine. I set up a Gmail account. But around here I realized I had left the serenity of the "walled garden" that Apple provides. There is a Gmail-only email app on the Nexus that seems nice enough, but Gmail is not my primary email, so I am never going to use that app. There is also a generic email app, but it seems really limited in functionality. After doing some research, I determined that "Aquamail" was the best, most flexible, most powerful email app out there. I got it and set it up for my multiple email addresses. It has lots of "under the hood" settings about layout, etc., most of which were bad. I kept thinking, "This tablet is bigger than my iPhone. Why is it harder to check my email on this thing than on that?" Everything was busy, hard to read, hard to keep track of. After much futzing I got it to where I was "OK" with it, but not really happy. Plus, Aquamail is made by some Russian developer, and I couldn't get it out of my mind that he could have put a back door in it, and now some Russian hacker has access to all my emails. Also, now I have three different email apps, none great, all checking and downloading email. Battery life suffers. I mess around with various settings and improve it somewhat.

On the Road

I have a big trip planned where I will drive up the east coast, vacation with my family, then bus and fly back for work, then fly and bus back and rejoin the vacation already in progress. I figure it is going to be a big "sea trial" for this new Nexus 7. We drive up the east coast using the Nexus 7 as our GPS. Even plugged in, it cannot keep up with its own power demands. Battery levels drop throughout the whole day. If we had driven another hour, we would have lost navigation, as it couldn't charge itself fast enough. Something has got to be wrong, I think. I jigger with email fetch settings, and some other settings. The next day it seems a little better, although we don't drive as far. Not good. One of the allures of a tablet is longer battery life than my phone. I know GPS use taxes the thing, but IT WAS PLUGGED IN, into a 2 amp USB port, so it should have been fine.

During my vacation I decide the following things: although a bit busy, I like widgets. I like Google Chrome insofar as it integrates with the tablet's GPS, gives you a welcome web page with info about things to do in your area, and tracks travel plans as long as the tickets were booked via your Gmail account. Voice recognition is surprisingly good. Battery life remains poor. And surprisingly, I miss having a front-facing camera. The Nexus has a back-facing camera but ships with no apps to use it, which is odd. Of course Skype and other apps can use it, so it is not like it is useless.

Airplane Mode

I then bus and fly back to DC. On the bus I read "Treasure Island," a free ebook installed on the Nexus to get me hooked on that feature. I do this because I couldn't figure out how to download a movie via the "Google Play" store. It turns out you can do it, but the setup is counterintuitive. When I get to the airport I have several hours to kill, so I decide to watch a movie, streaming "Zero Dark Thirty" via the 3G network, taking a huge hit on my monthly data, but whatever, I chalk it up to the learning curve. On the plane I go back to reading the ebook. The rest of the travel is uneventful. "Zero Dark Thirty" excepted, I finally realize that by not buying an Apple product, not only am I locked out of the iTunes library, I am locked into the Google library, which is pretty weak. Movie selection is not great.

Anyway, during my stay in DC I figure out how to download movies rather than stream them, so I download two for viewing on my travel back. I watch part of one on the plane. When we land I take my tablet out of "airplane mode," and that is where the trouble begins. The screen starts blinking and flashing, which at first I assume is all the apps coming online and trying to call out to the world. I deplane. At baggage claim the tablet is still stuck in some sort of subroutine, and unresponsive. I reboot the thing, thinking that will fix it. It takes an extremely long time to boot. In fact, it never finishes. It just gives me the Google Nexus logo, which on this tablet is suspiciously like a giant "X." I restart the boot. Nothing. I start using my iPhone to look up how to troubleshoot a Nexus. I boot in safe mode. It still hangs. I do more research. My bus comes. The entire two-hour bus ride I spend killing my iPhone battery trying to troubleshoot, chat with Asus technicians, etc., to get this thing running again. I finally pull the nuclear option and opt to wipe all my data. This does not fix it. Let me repeat: in just over two weeks, without downloading any exotic apps, I manage to brick my Nexus to the point where even a hard reset cannot save it. The only hope, apparently, is to connect it to a PC. A PC?! That is it. I am done. I get an RMA from both Asus and B&H, which is where I bought it. I get my money back. And start over.


Nexus “X” of death

The Aftermath

It turns out the Nexus going belly up was a blessing in disguise. I ran out and bought an iPad Mini, no longer upset about the price or feeling claustrophobic living within the iOS environment. It was like coming home to an old friend. As Steve Jobs said, "It just works." No buggy email apps, good battery life, an excellent library in iTunes, a nice size. I have even come around to Apple Maps, as it seems to have improved somewhat. The one thing I lost is the ability to make or take calls like a phone on the Mini. I think some version of this can be done by jailbreaking it, or maybe for calls within wi-fi range only. I have found I don't care so much, as long as everything else works. My one regret is not making one phone call on my Nexus 7 in a public place while it still worked, preferably surrounded by hipsters in a bar or coffee house, to see the ensuing confusion in their eyes as to whether it was lame or cool.


Color Charts, Calibration Targets and Mars

All cameras lie. At least a little bit. For that matter, so do our eyes. What we want in most cases is a camera that performs with the same characteristics as our eye, although even that can be subjective.

Back when I worked as a camera assistant, we would often shoot a "Macbeth color chart" at the head of each scene, and sometimes at the head of each roll of film. It acted as a known "control" for the post production people, letting them tweak color and contrast so that the chart looked as it did to our eye. This was important because color negative film meant a positive needed to be made, and at that step changes could be introduced; the goal, of course, was not to introduce any unwanted ones. Essentially the film was being "re-exposed," and the chart gave the lab something to go by on what the cinematographer wanted. Sometimes these charts were shot under "white" light (i.e. light color-balanced to the film stock), and only after the chart was photographed were gels applied to change the color of the light for the scene. The goal here was to communicate to the lab: "Just because I put blue gel on the lights doesn't mean it is a mistake I need you to fix. Time your color to the light I shot the chart under, so that my desired color cast is achieved," resulting, in this case, in a blue cast to the scene.

As the shift to video happened, Macbeth charts began to lose ground to charts like DSC Labs' Chroma Du Monde, which is more useful with a waveform monitor and vectorscope, video engineering devices not used in film but prevalent in video production. According to the DSC Labs website, these charts were originally designed for the "US Space Program," which made me think of a color chart I had seen recently, photographed in a fairly remote location: "Bradbury Landing," to be exact. No, this isn't a BBC sequel to "Downton Abbey"; it is the landing site of the Mars rover Curiosity, or more accurately the Mars Science Laboratory (MSL). It has 17 cameras onboard, some for hazard navigation, some for scientific research. The HazCams are black and white, so color reproduction is irrelevant. But some of the cameras shoot in color. We all know Mars is the "Red Planet," but what if you want to correct out that color cast? Well, you shoot a color chart of known colors. NASA calls this a Calibration Target, as it does more than just color. Nevertheless, it is a pretty simple device. It has six samples: red, green, and blue (the three primaries), 40% gray and 60% gray, plus a fluorescent pigment that glows red when hit with ultraviolet light. Pretty simple, especially when you look at the complex charts that DSC produces. I imagine the heavy engineering of the cameras' performance was done here on Earth, and this simple chart is just to analyze color cast on Mars. The descending bar graphic is adapted from the US Air Force target for judging camera resolution.

MAHLI Calibration Target

Below that is a 1909 VDB penny. What is a coin doing on the chart, you might ask? This chart is mainly for the MAHLI camera, which is for close-up work: essentially a geologist's eyes. The penny is a nod to the common practice of a geologist placing a known object in the frame to show the scale of the object being examined. Rulers work, and are perhaps more scientific, but in choosing a penny NASA is showing a bit of whimsy, something not at all bad for a big governmental science and engineering branch to have. Perhaps something we should all keep in mind.

Why a 1909 VDB penny? 1909 was the first year the "Lincoln head" penny was produced, and 2009 was originally the launch date for the rover; the 100th anniversary made it a good choice, apparently. Ultimately the rover's launch slipped to 2011, but by then the decision was made. "VDB" are the initials of the coin's designer, Victor D. Brenner, found at the bottom of the coin.

I find this especially interesting, as when I was a kid I dabbled in numismatics, or coin collecting. I got started with some silver quarters my parents gave me, but I remember vividly looking through all the pennies I got over the course of however many months and finding three or four 1909 VDB pennies myself. At the time they were valued at about $2. Now they go for about $15 on eBay in average condition. There is also a 1909-S VDB penny, which was minted at the San Francisco mint and has a "wheat" back. Those are quite rare and are worth at least a thousand dollars today. Who knows, you might have one in your pocket right now. And you thought pennies were worthless.

 

MastCam

MastCam with fixed 34mm f/8 lens. Notice the Swiss Army knife for perspective, much like putting a penny next to the object for scale.

Now, it is not clear to me whether this Calibration Target is available to the other cameras on the Rover, but I think so. The other main camera on the Rover is the MastCam, which provides a human-height perspective from Mars and can even capture footage in stereo. The MastCam uses the same sensors as the MAHLI does.
The sensors are 1600×1200 pixel (2 megapixel) Bayer-pattern sensors. The MastCam is actually two cameras: a "wide angle" (15 degree field of view) 34mm f/8 lens with a minimum focus of 2.1 meters, and a "telephoto" (5.2 degree field of view) 100mm f/10 lens with the same minimum focus. Together they can shoot stereo, although the mismatch in focal lengths means this is more a bonus feature than a primary function. Each camera can do 720p video at about 10fps. And for us camera nerds, there is ND filtration as well as IR cut filters (for the study of specific wavelengths more than for IR contamination, I suspect).
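
Those field-of-view figures fall right out of simple lens geometry. A quick sanity check; the 7.4 micron pixel pitch is my assumption for this class of sensor, not an official spec.

```python
import math

def fov_deg(size_mm, focal_mm):
    """Angular field of view across a sensor dimension of size_mm."""
    return math.degrees(2 * math.atan(size_mm / (2 * focal_mm)))

PIXEL_MM = 7.4 / 1000      # assumed pixel pitch
DIM_MM = 1200 * PIXEL_MM   # ~8.9 mm across the 1200-pixel dimension

print(f"34mm lens:  {fov_deg(DIM_MM, 34):.1f} degrees")   # ~14.9, vs the quoted 15
print(f"100mm lens: {fov_deg(DIM_MM, 100):.1f} degrees")  # ~5.1, vs the quoted 5.2
```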

One thing that instantly occurred to me when I first listened to news reports of Curiosity's landing was: how the hell do they keep dust off the lenses? Dust has an affinity for front elements here on Earth, and I imagine it is only worse on Mars. I don't know if Curiosity has the ability to blow dust off its lenses, but I did find out the lens caps have an ingenious design: they are transparent. This means the cameras can shoot through them if conditions are unfavorable. The optical quality suffers, but the lens is protected. When conditions are clear, they can remove the lens caps for clearer pictures. And like any good photographer, the Rover probably keeps her lenses capped when not in use.

What does this have to do with Calibration Targets and color charts? Well, it helps us get images like the two below. One is un-white-balanced, and the other is white balanced to represent what the surface would look like under Earth lighting. By shooting the Calibration Target first, the engineers can "dial out" any color cast created by the Martian atmosphere. In the uncorrected shot, if it had a calibration chart in frame, the chart's colors would not look correct; in the "white balanced" shot, all the colors on the target should look true and accurate, the same as they did back at NASA before launch.
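
Under the hood, that correction can be as simple as scaling each channel so a patch of known gray actually reads neutral. A minimal sketch of the idea in Python with NumPy; the image and patch coordinates are stand-ins, and a real pipeline would work on linear sensor data rather than gamma-encoded pixels.

```python
import numpy as np

def white_balance(img, patch):
    """Scale R, G, B so a known-gray patch comes out neutral.

    img: float array of shape (H, W, 3) with values in [0, 1].
    patch: (y0, y1, x0, x1) bounding a region that should be neutral
    gray, e.g. the calibration target's 40% gray chip.
    """
    y0, y1, x0, x1 = patch
    means = img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)  # per-channel average
    gains = means.mean() / means   # boost weak channels, trim strong ones
    return np.clip(img * gains, 0.0, 1.0)

# Hypothetical usage with a stand-in frame and gray chip location.
mars = np.random.rand(480, 640, 3)
balanced = white_balance(mars, (100, 120, 200, 220))
```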

Uncorrected "RAW" panorama of Mars

Corrected "White Balanced" version of the same shot

If these pictures are too small for your liking, here is the link to NASA's page, which has links to some super-high-res versions. Now, I don't know about you, but I find high-res panoramic photos of a foreign planet pretty cool.

So what does this all mean? Well, for one, I plan to tape a 1909 VDB penny to my Chroma Du Monde chart. Why? There are several reasons. I want to be reminded, whenever I pull my color chart out, that somewhere on a different planet millions of miles away there is a similar chart being used to aid in photography. The penny also reminds me that in the most ordinary, mundane things, like a penny, there can be surprises if you just look carefully enough. A good thing to keep in mind in life, as well as photography. Who would have thought that the lowly penny would be the first currency to arrive at another planet? And lastly, no matter how big or important the job, there is always room for some whimsy.

I feel that getting a copy of the only coin (of any currency) that is on another planet for $15 on eBay is a deal if ever I saw one. And in the meantime I am going to start checking my pockets more often. Who knows, there could be a 1909-S VDB in there with all that loose change.