24 fps: Where Does It Come From?

Back in the day (turn of the last century) there was no such thing as camera batteries or sound men. Men were men and cameras were hand cranked. As they had evolved from still cameras, they were sort of still-camera Gatling guns, capturing still frames as fast as you cared to crank. Somewhere past 14fps something magical happened: persistence of vision started to fuse the images, so rather than a fast slide show it started to look like motion. So cameras were built to move one linear foot of film per two cranks, which meant if you cranked at "coffee grinder" speed you hit 60 feet per minute, which (at 16 frames to the foot of 35mm film) comes out to 16fps, just north of that 14fps effect. Cranking faster improved the persistence-of-vision thing, but producers didn't like you blowing through all that expensive film, and besides, there was really only one stock available and it was slow, about 24asa. Cranking faster meant less light per frame. Sometimes you cranked even less than 14fps to squeeze a bit more exposure out of it.
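The feet-per-minute figures throughout this story convert to frames per second via that same 16-frames-per-foot fact about 35mm film. A quick sketch of the arithmetic (the function name is mine, just for illustration):

```python
# 35mm film carries 16 frames per linear foot.
FRAMES_PER_FOOT = 16

def feet_per_minute_to_fps(fpm):
    """Convert a 35mm transport speed in feet/minute to frames/second."""
    return fpm * FRAMES_PER_FOOT / 60  # 60 seconds per minute

print(feet_per_minute_to_fps(60))  # 16.0 fps -- "coffee grinder" cranking speed
print(feet_per_minute_to_fps(90))  # 24.0 fps -- the eventual sound speed
```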

This is where it gets a little weird. Projectionists, who were also often hand cranking their projectors, had a habit of cranking faster. Faster meant faster turnaround in seating, which meant more $, and even better persistence of vision without annoying flicker. Sure, the action was sped up, but the whole thing was new; no one seemed to complain. In fact, by 1925 the Society of Motion Picture Engineers (now known as SMPTE) had codified it, recommending 60 feet per minute (16fps) for camera speed and 80 feet per minute (21.3fps) for projection. It seems weird now to pick a different speed for display than for capture, but to review: faster cameras cost more money, faster projectors made money, and after all, producers were paying for everything.

Proposed standard cranking and projection speeds, circa 1927, from SMPE

Anyway, someone decided it would be a great idea to add sound. How hard could it be? In fact, several companies tried to be the first to bring sound to the movies, hoping to capture the market. The funny thing is, they all insisted on capturing at the same frame rate they displayed at. If you didn't, the pitch would be all wrong and everybody would sound silly. And forget about music. Some picked 80 feet per minute (the already established speed for projection), some picked 85 feet per minute, and some picked 90 feet per minute. The first to get a working system was Warner Brothers' Vitaphone. It was used in 1927's "The Jazz Singer," which was the first feature-length film with sync dialog and is considered the official start of the "talkies."


Western Electric's Bell Telephone Laboratories (and their Vitaphone system) as well as other systems' listed taking and projection speeds (SMPE, 1927)


The Vitaphone engineers had picked 90 feet per minute, or 24fps, as their capture and projection speed. If one of the others had been first, we could easily be shooting 21.33fps or 22.67fps as a standard today. So sometimes you get lucky.
Except the Vitaphone system was terrible. It sounded good, but that's all that could be said about it. The sound was recorded on 16″ disk records separate from the film. They could only be played 20-30 times before they were no good, and they could break, so you had to send lots of duplicate disks with each roll of film to the projectionist. A disk only covered one reel, so at every reel change you had to cue up another record. And synchronizing the needle with the head of the roll was a pain in the ass. And if you broke the film for some reason and spliced it back, everything past that point was out of sync. During recording, the camera had to be motor powered from the mains, and the disks had to be made in a recording booth adjacent to the set. In fact it was such a bad system that it was abandoned 5 years after it was implemented. And it only lasted that long because all the theaters that wanted sound had bought into that technology, had these crazy phonograph contraptions connected to their projectors, and weren't eager to throw them away just after having bought them.

Movietone, which used technology that put the audio as an optical track on the film, had many advantages, but it was a little late out of the gate. Because Vitaphone was first, the engineers of Movietone decided to match the Vitaphone frame rate.

“Originally we recorded at a film speed of 85 feet per minute. After Affiliation with the Western Electric Company, this was changed to 90 feet per minute in order to use the controlled motors already worked out and used in the Vitaphone system.  There are a large number of both Vitaphone and Movietone installations scheduled and in operation, and sufficient apparatus is involved to make it impractical to change the present practice of sound reproducing.  In connection with the Society’s standard, I have been unable to find any New York theater which is running film at 85 feet a minute; the present normal speed is 105 feet and on Sundays often 120 feet per minute is used in order to get in an extra show”

Earl I. Sponable, Technical Director, Fox-Case Corporation, New York City ("Some Technical Aspects of the Movietone," S.M.P.E. #31, September 1927, page 458)

Soon enough Movietone lost ground as well as technology changed, but all subsequent sound systems stuck with the now-established 24fps. So blame a sound man. Or thank him. Your choice.


One of the first sound men checking a Vitaphone recording with a microscope while recording. Sort of a human playback head. (page 308 from Transactions of S.M.P.E. August 1927)  It turns out this man is George Groves.


Postscript: Now, of course, we often mean 23.976 fps when we say 24 fps. This one we can't blame on sound. 23.976 fps as a camera frame rate can be blamed on the introduction of color to standard-def television broadcasts in the 1950s, the death of film as a capture medium, and by extension the death of telecine as a post process.

When TV started, it did not match the 24 fps established by film. This is because engineers wanted to use the 60Hz cycle of our 110v 60Hz household power to drive the frame rate. 60Hz meant 60 fields, or 30 frames per second, and was pretty easy to implement. Once color came along in the 1950s, they wanted a standard that would be backwards compatible with black-and-white TVs. Engineers could no longer use the 60Hz rate of the household electricity to drive frame rates and still keep the color and luminance signals playing nicely together, so they settled on a very close rate of 59.94Hz. This resulted in a frame rate of 29.97 fps instead of the previous 30 fps, something the black-and-white receivers would still work with.
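That "very close" rate wasn't arbitrary: the color standard scaled the old rates down by a factor of exactly 1000/1001. A quick check of the arithmetic:

```python
from fractions import Fraction

# The exact NTSC color slowdown ratio.
NTSC_FACTOR = Fraction(1000, 1001)

field_rate = 60 * NTSC_FACTOR  # fields per second
frame_rate = 30 * NTSC_FACTOR  # frames per second

print(float(field_rate))  # ~59.94 Hz
print(float(frame_rate))  # ~29.97 fps
```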

Telecine: in order to get film onto TV you had to do a step called telecine. The film was played back and captured, essentially, by a video camera. Getting 24 fps to fit into 30 fps was done via a clever math solution called 3:2 pulldown. There are two fields to a standard-def frame, and thus 60 fields per second. 3:2 pulldown would use one film frame to make three fields (1.5 frames) of video, then the second film frame made two fields (1 frame) of video, then the third frame made three fields again, and so on. Doing this, 24 fps fits quite nicely into 30 fps broadcast. And anything shot at 24 fps but shown in the 29.97 fps system would look like it had been shot at 23.976 fps, even though the camera had been running at 24 fps, because anything that ran through the telecine went through a 0.1% slowdown to conform to the 29.97fps broadcast standard.

Somewhere in the transition to high definition, 23.976 became codified as a standard, not only for broadcast but as a capture speed. As cameras became more and more digital rather than film, they would choose 23.976 as the actual camera frame rate rather than shooting 24fps and expecting the 0.1% slowdown to happen upon transfer from film to video, as it had in telecine rooms. No telecine? No slowdown, which meant it had to be implemented in the actual camera speed.
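The 3:2 cadence described above can be sketched in a few lines (the function name is mine, for illustration): four film frames become ten fields, i.e. five video frames, which is exactly the 24-to-30 ratio.

```python
def three_two_pulldown(film_frames):
    """Map film frames to interlaced fields using the 3:2 cadence."""
    fields = []
    for i, frame in enumerate(film_frames):
        # Alternate: 3 fields from one film frame, 2 from the next.
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    return fields

fields = three_two_pulldown(list("ABCD"))
print(fields)            # ['A','A','A','B','B','C','C','C','D','D'] -- 10 fields
print(len(fields) / 2)   # 5.0 video frames from 4 film frames: 24 * 5/4 = 30
print(24 * 1000 / 1001)  # 23.976... -- the 0.1% telecine slowdown
```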

So, hate 23.976 fps? Blame a sound man, color TV, the death of film, and the whole accidental way we pick our standards.


For those interested in reading more, I highly recommend reading online records of the Journal of Society of Motion Picture Engineers, made available by the Media History Project. http://mediahistoryproject.org/technical/



Christmas cards and vintage stereo cameras

My 2013 Christmas card photo. Anaglyph glasses required

My current obsession with vintage stereo cameras really caught fire because of my 2013 Christmas card. I had been interested in stereo cameras, dabbling here and there over the years. I also like to tinker and do things with my hands. I had done a staged, lit Christmas card in 2012, but it was a very stressful experience, what with nice clothes, and herding cats, and my then-pregnant wife and kid. Everyone was grumpy for hours afterward. I vowed 2013 would be different. Something like an impromptu snap of the family. But how could I still have something special about it? Due to my interest in old stereo cameras, I had two Russian "Sputnik" medium format stereo cameras. One working, and one not, mainly for parts. But I hadn't used them in a few years. I also had some expired black and white medium format film in a drawer. Snow was forecast later that week, which is unusual in mid-December in the DC area. I hatched a plan. I would take a picture in stereo of us outside the house, either during the snowfall or shortly after. I planned to make the Christmas card an anaglyph stereo print, which means the print would require viewing with "those funny glasses."

There are several ways to view a stereo image, but they all boil down to one thing: sending the left image to the left eye and the right image to the right eye, while also blocking the image from the non-corresponding eye. Anaglyph is the cheapest way to do it, as I could get cheap paper glasses for 20¢ each and include them with the picture in the envelope. The anaglyph system uses color to control the images to the eyes. The most common use today is red/cyan anaglyph. You have the red filter over your left eye and cyan (the opposite of red) over the right eye. Take the left image, destined for the left eye, and tint it so there is no cyan in it, which will make it appear red. The red filter will do nothing to the red image.
The cyan filter on the right eye, however, will only let cyan through, and as there is no cyan in the left image, it will appear as a black frame to the right eye. Do the opposite to the right image, and you have an anaglyph print! It will look like a poorly registered color print without glasses, but 3D with the glasses. Anaglyph is well suited for black and white images, as it is using color to encode the stereo information. You can do it with color images, but color reproduction may suffer a little. I mainly used black and white because a) it is better for anaglyph and b) it was what I had on hand, and what with film losing ground as a format, I suspect medium format is only available via mail order now, and this whole idea hit me inside of a standard weather forecast, so ordering fresh stock for this seemed unlikely to work out.
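The channel math behind a red/cyan anaglyph is simple enough to sketch. This hypothetical helper (not how Photoshop Elements does it internally, just the underlying idea) assumes each pixel is an (R, G, B) tuple:

```python
def make_anaglyph(left_pixel, right_pixel):
    """Combine one pixel from each eye into a red/cyan anaglyph pixel.

    The left image supplies only the red channel (so it carries no cyan),
    and the right image supplies only green + blue (i.e. cyan).
    """
    return (left_pixel[0], right_pixel[1], right_pixel[2])

# A mid-gray pixel from the left eye, a lighter gray from the right:
print(make_anaglyph((128, 128, 128), (200, 200, 200)))  # (128, 200, 200)
```

Viewed through the glasses, the red filter passes only the first channel (the left eye's image) and the cyan filter passes only the last two (the right eye's image), which is exactly the separation described above.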

I had never made an anaglyph before, so there was a reasonable possibility this would not work out. But the deadline and a goal became an incentive to teach myself how to do it. And if it didn't work out, I would always have one half of the stereo pair, which could serve as a "normal" picture for the Christmas card. I had the camera and film, but I would need more elements. I found a source for cheap glasses and ordered them. I investigated getting Photoshop Elements, which, shockingly, I didn't already own. I decided not to buy it until I got a good "negative report" from my shoot, i.e. a picture actually worth printing, not one ruined by poor content or a malfunction of the camera or its operator (me!).
The day came, the snow arrived on schedule, I loaded the film, got the camera on a tripod, and waited for the best moment. We went shopping for a Christmas tree up the street. We struck out on getting a tree, but we did get a wreath. When we got home, since everyone was already dressed for outside and it was still snowing, I got everyone to stand in the front yard, trying to compose a shot that was both well framed and used the 3D space. This meant my daughter, who is shorter than my wife, became the foreground element. My wife, the midground. And I, with my son on my shoulders, would be the background.

There were two wrinkles to this. Although I had 400asa Tri-X black and white film, it was pretty overcast due to the snow, and I ended up shooting at around f5.6, which meant we couldn't hold focus over the whole depth of the scene. I picked a point a little past my daughter as the focus point and hoped for the best. The second problem to overcome was the twitchiness of the Sputnik camera I was using. The viewing system takes some getting used to, the film advance system is about as analog as you can get, and worst of all, the shutter trigger and timer were very hit or miss. First you have to cock the shutter. Then pull the timer lever down. Then push the shutter release. Doesn't sound too bad. Except the shutter timer lever doesn't wait till you hit the shutter release to start running, a common problem in Sputnik cameras. So you pull the timer down, let go, and it immediately starts running. If you don't cock the shutter before you start it AND trip the shutter after you have started the timer, it won't take a picture. If you hit the shutter release in the wrong order, you take a picture without benefit of the timer. The only good thing is you can always reset the timer after you have triggered the shutter, as the shutter won't fire until the timer runs out.
So: cock shutter, start timer, trigger shutter, pull timer back to max, then run as fast as you can in wet snow with your 11-month-old on your shoulders to be in the rearmost part of the frame. Fortunately, all that went wrong was early firing of the shutter. The roll had 12 exposures, which meant 6 stereo pairs. Two were ruined by early triggering of the shutter, two were screwed up by my misunderstanding the manual advance, one was OK, and one was good. A word on the advance: it is completely manual, to the point that the only way you know you have advanced it properly is a little window on the back of the camera that lets you look at the paper backing of the film, which has a number on it when you are in the right position. If you overshoot, oh well, you just have to advance to the next number. Also, since you are shooting stereo, you have to advance it by two, at least when you don't overshoot your target number. So if your first exposure is "1," you have to advance to "3" to be ready for another stereo pair.

This is what happens when the shutter timer fires prematurely.

I scanned the negatives, brought the shots into Photoshop Elements, aligned them, and applied the red/cyan anaglyph tint to the images. When aligning them you want the verticals to align perfectly, but the horizontal offset determines where in the 3D space the images lie. I put my daughter slightly forward of the frame, so she appeared to "pop" out slightly, and the rest of us fell deeper in the frame. It turned out quite well, overall. Based on this experience, and others, my tips for medium format stereo photography are as follows:

I love medium format. The negative is 4-5 times bigger than comparable 35mm (purpose-built 35mm stereo cameras use a non-standard 35mm frame that is a bit smaller than the standard 35mm frame). This means any dirt on the negative when you scan it in can be fixed relatively easily and is minor compared to 35mm. Also, even with old Russian triplet lenses, the image has a lot of detail, and detail is important for good stereo photography. The downside to medium format is that for any given field of view you end up using a longer lens. The Sputniks use a 75mm lens (OK, a pair of them!), which in 35mm format would be telephoto. Not so in medium format. The side effect is a shallower depth of field. Normally, that is a desired thing for portrait photography, but in stereo photography you want as much in focus as possible. So ideally you want to be shooting more like f11-f22 if you can do it. But you also don't want a lot of noise, so you don't want too fast a film stock. So this generally means outdoor photography, or being willing to deal with some out-of-focus elements.

Also, here are some tips for framing. These Russian lenses are not really very good on the edges, so don't put anything too important on the edge; generally put your most important element more towards the center. This is also true for stereo photography. For reasons I won't get into too much here, anything you want to have pop forward of the frame should not break the edges of the frame. There is a little flexibility with the bottom edge, but on the top and sides, if anything breaks that edge, it is going to have to play deeper than what I will call the zero plane, which is where the frame sits in the 3D space. This is another reason to put anything in the foreground in the center, or near the center.

Also, contrary to what you might think, you don't necessarily want to try to use your entire potential depth. Just because you are outside doesn't mean you should see everything out to infinity. This is especially true if you want a fairly close foreground element. Too much range in depth can put background elements so far out of alignment that the eyes have trouble fusing them back into a 3D image. You get ghosting and loss of the 3D effect. In this picture the house and plants define the background at maybe 30-40 feet maximum, which is good, since my daughter is maybe 7 feet from the camera.

All in all, it came out quite well. I got my daughter to pop out a bit, and the edge violation of her legs is not bad enough to break the 3D effect. I staged my wife, me, and my son on different planes of depth, and had the background defined by the house, limiting my total depth budget. I used as much of the frame as possible, although if I had to do it over again I would not have framed in the gate. It is more forward than my daughter, and is on the edge, so there is a bit of edge violation and retinal rivalry there; your brain doesn't know where to put it in the scene. Some of that can be fixed with a trick called a "floating window," but I didn't have the time or the inclination to learn how to actually create one in time for the card. Try it for yourself if you have glasses and see if the gate gives you some problems. Only my daughter is in sharp focus, which is technically not so desirable in 3D, but it didn't bother me as much as I thought it might. Also, I had no choice in the matter, as I was limited by my film stock and ambient light as to what f-stop to shoot at. All in all, not a bad "snap" for a family photo, and my first serious attempt at actually executing a staged stereo photo.