The Good, Bad & Ugly

Special Report / 4K

4K has now moved out of the upper stratosphere and become a practical technology bringing with it everyday complications and new cinematic possibilities. Adrian Pennington surveys the good, the bad and the ugly of this digital format.


Why would a camera designed for scientific experimentation be paraded at a broadcast and film technology show like NAB 2013? Quite simply, to demonstrate what’s possible. Capable of HD imaging in sub-optimal light at 16,000fps, the optical core of Vision Research's astonishing camera is also built into the company's Flex 1,000fps 4K unit. Right now, it is, as Quantel marketing director Steve Owen declares, “a fantastic time to be a visionary in the industry and to think about how new capabilities can be employed to capture greater pixel density, luminance, frame-rates and colorimetry.”

Being visionary is one thing, but there are some practical commercial, technical and aesthetic considerations to deal with today as 4K looks inexorably set to become the new production standard in TV and feature production.

The Good

The chief benefit in a 4K pipeline, as outlined by Assimilate's VP of marketing, Steve Bannerman, lies in working with high-quality digital imagery “for a better-quality show today, while retaining the source files for future use.”

Whilst pictures will be future-proofed for 4K projection, UltraHD broadcasts and extensions of the Blu-ray spec (for which a task force has been established), there are immediate pluses in 4K production. Broadcasters such as CBS Sports employed For-A's FT-One 4K super slow-motion camera during Super Bowl XLVII to extract HD content for replay analysis. Such techniques deliver enhanced detail, but can also reduce operator costs.

“In live TV, directors are looking for more coverage, so originating at a higher resolution provides the ability to pan or zoom into a picture and extract more content,” explains Andy Shelley, operations director at UK production and post services company Onsight.

RED claims that with its 5K Mysterium-X sensor, productions can zoom in to about 800% in post, a handy tool for visual effects and chromakeying work. As an example, David Fincher's 2011 RED Epic 5K shoot for The Girl With The Dragon Tattoo was finished in 10-bit 4K, at LA facility Light Iron, with a deliverable saved for later mastering as a 16:9 4K Blu-ray.

“Something [David] wanted to do was ‘centre extraction’,” explains facility co-founder Michael Cioni. “It gives us 20% headroom – or 3 million spare pixels – to play with, so that motion tracking, digital set extension, stabilisation and compositing can all be done without ever having to blow up the picture. If you were just working with a 2K original frame, you would lose part of the picture and so some fidelity in expanding the remaining image to fill the frame.”
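Cioni's 20% figure is simple to sanity-check. A minimal sketch, assuming a 5120-pixel-wide 5K source and a 4096-pixel-wide 4K extraction (the precise dimensions of that pipeline are not stated here, so these widths are illustrative):

```python
# A sketch of the centre-extraction arithmetic Cioni describes. The frame
# widths are illustrative assumptions: a 5K source (5120 photosites wide)
# with a 4K DCI extraction (4096 pixels wide).

def linear_headroom(source_width: int, target_width: int) -> float:
    """Fraction of the source width left over for reframing after a centre extraction."""
    return 1 - target_width / source_width

# 1 - 4096/5120 = 0.2, i.e. the 20% of reframing room quoted above
print(round(linear_headroom(5120, 4096), 2))  # -> 0.2
```

The spare-pixel count then depends on the aspect ratio of the extraction, which is why reframing, tracking and stabilisation can all happen without blowing up the image.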

There is currently a marked distinction between acquiring at 4K/5K and then finishing at a resolution which is more likely to be 2K due to the cost of post production, especially in CGI-intensive projects. Yet the ability to super-sample image information from a higher resolution master is being credited as a boon to a filmmaker's ability to tell stories.

“The question is not why we need 4K, 8K or beyond, but how can I manipulate images in production and in post to achieve effects that have never been seen before?” asserts Ted Schilowitz, marketing chief for RED, which has plans beyond its 6K Dragon sensor for chips up to 28K (intended for super-large format production). “It's what stills photographers have been doing for years – creating otherworldly experiences.”

If RED sometimes over-hypes its marketing, in this case Schilowitz's words chime with those of many DPs, including Vince Pace, technical partner to James Cameron.

“We've shifted from an analogue type of filming to something like data mining where we can capture pixel density or HDR [High Dynamic Range] and many other parameters, process it and achieve an experience beyond reality,” claimed Pace at NAB 2013. “We will get to a stage where you will experience live events and the imagination of the filmmaker in ways we haven't even imagined.”

ICG president Steven Poster ASC agrees: “Working with oversampled images is good in the sense that you have more detail and more information than you can use in the final image.”

Peter Suschitzky who shot After Earth on the Sony F65 at 4K says he is interested in capturing at the best resolution possible and that it's an advantage to have more information at the beginning of the process. “You can always degrade the images afterwards, but you can't improve it upwards after the event,” he says.

To Suschitzky's eye the F65 records “perceptibly more detail and is superior in contrast ratio to 35mm film. It gives me all that I have been striving for on film and never imagined possible.

“I know that film projected on film can be beautiful, but today's reality is that if you shoot on film the film will be digitised and there is a loss of quality that occurs in this process,” he adds. “I've done direct comparisons between digitised film and original digital images and it's immediately obvious that film loses out by comparison in terms of detail and with shadow areas becoming clogged up.”

Haris Zambarloukos BSC, who shot Locke and parts of Jack Ryan at 5K, both with a 2K finish, says there are definite merits in capturing as much information as possible before extracting in post for blow-ups or reframes.

“Is a smaller 16mm-sized sensor rated 8K better than a larger 35mm sized sensor at 2K?,” he asks. “Personally I prefer the larger sensor because it captures more information per pixel count.”

Sufficient resolution is certainly a key parameter, according to ARRI, but it takes its guidance from cinematographers who view exposure latitude, highlight handling and natural skin tones as more important.

“We are working on technology that will offer a higher spatial resolution, but we are also pushing hard towards a higher temporal resolution, without sacrificing the dynamic range we already deliver,” says ARRI MD Franz Kraus. “We don't want to produce one camera that has a high contrast and another with high detail.”

“What annoys me is that at trade shows HD quality is dumbed-down to make 4K look so much better,” adds Kraus. “In fact, perception is as much to do with the quality of display. A 2K image displayed on an HD OLED looks incredible because the active light source shows far higher contrast ratio in the picture.”

Cioni predicts that 4K 16-bit finishing will become passé by 2016. “That is why it is important to gauge your boundaries – so you know how to get past them. If you don’t learn it, you can’t break the rules safely, and we need to know how to prepare future clients for this.”

The Bad

Cinematographers are wary though of the look associated with too much information. “We are now reaching the point with digital where latitude, colorimetry and resolution go beyond film, but the idea of suspension of disbelief remains fundamental,” says Poster.

At heart is the feeling that higher resolutions, especially when combined with higher-than-24fps frame-rates, deliver a super-realistic look at odds with our innate ability to become immersed in the motion picture experience.

“The fact that we can display this kind of information at higher refresh rates means that, all of a sudden, the brain doesn't have to interpolate between frames,” says Poster. “If you are given too much visual information it’s like watching live TV. We have to be careful that our quest for more and more information is not going to take us away from our ability to suspend disbelief.”

For all the myriad menu options available in the on-board computers at the heart of digital cinema cameras, cinematographers currently prefer to craft the image using analogue tools in-camera. It's the imperfections, unique to individual glass, which DPs are now really valuing.

Speaking on a recent AMPAS panel entitled Storytelling In The Digital Age, Tony Richmond BSC ASC admitted to concern that pictures might be getting “too sharp,” explaining that to address this, some cinematographers choose lenses such as the Cooke Panchros, which produce a warm look.

“Digital cameras are a great equaliser giving the option for more and more TV networks to shoot 35mm-style productions, but the technology can also be a homogenising force,” argues Cooke Optics chairman Les Zellan. “The resurgence in demand for older lenses manufactured 30, 40 or 60 years ago, as well as Anamorphic, comes from wanting to give digital a personality.”

Lens manufacturers are lining up new Anamorphic ranges to sate demand. First out of the blocks are three focal lengths (35mm, 50mm, 75mm) of the ARRI/Zeiss 2x Master Anamorphic range. A collaboration between Cooke and Thales Angénieux is said to blend the 'Cooke Look' with Angénieux's zoom technology, shipping in 2014, while Germany's Vantage Film has a partnership with Cameron Pace Group to develop an Anamorphic lens system for 3D production.

“Digital sensors give a much harsher, crisper, real-life look to the image than film,” explains Roberto Schaefer AIC ASC. “What the Anamorphic lens brings to the artist is the addition of discrete artefacts that most viewers don't really see, but feel. Now that so many manufacturers are making new Anamorphic lenses of varying types, the choices and the supply are greater. Some are crisper from edge-to-edge, while others embrace the quirkier 'defective' looks which seem to go well with digital sensors and software.”

Glass adds imperfections and artefacts associated with the random and organic quality of film grain, while filters, such as Tiffen's 4K-developed Pearlescent, can be used to soften the image further.

“When we get into close-ups of faces we take the hard edge off 4K and make it into a more romantic look allowing the audience to suspend their disbelief,” says Poster. “There are ways of enhancing the 4K image but they need to happen on-set because once in post we don't have the control.”

Carey Duffy, Tiffen's MPTV filter group consultant, holds a similar position: “There are tools in post that can change the latitude and how highlights are cropped, but it won't change the inherent resolution. There's a specific area in a cinema where 4K will be seen at optimum and at that point the resolution is so sharp it's a hyper-realism to the brain. That may be the goal of camera manufacturers, but it's not always the goal of the cinematographer, because cinematographic reality is different.”

The Ugly

In spite of the advantages of acquiring 4K, very few productions are actually being shot in the format currently. Increased storage and editing power requirements are, at the moment, causing producers to hesitate.

“Producers want 4K but nobody is willing to pay for it,” says Poster. “You are quadrupling the amount of information stored and transferred [over 2K] and that is more expensive on the technology infrastructure. Yet, producers think it's just 1s and 0s and wonder why they should have to pay more for that.”

According to Shelley, who has managed several 4K projects for 3D large screen deliverables at Onsight, “4K or 6K work soaks up storage, which is a cost. Certain elements are render-intensive, so if you need render farms you will pay for that. You need infrastructure to move files around with enough bandwidth and connectivity between processes, but we always design workflows that keep creative realtime.”

Haris Zambarloukos says he keeps requesting a 4K finish, “because you will have a finer product at the end, but it is usually declined due to cost. The cost is most noticeably a hindrance to VFX projects where 4K dramatically increases rendering times.”

Suschitzky agrees: “The cost of dealing with RAW files can be a problem because producers sometimes fear that it will be too expensive. Shooting RAW costs more than ProRes and 4K will cost even more than RAW, but if the budget can reach to it then the results will be better.”

The hidden costs of post production from 4K files are an issue that proponents of film would like to see exposed. Too often the upfront costs of film stock are compared unfavourably to the cheaper rental of digital film cameras/recorders, yet there can be many hiccups from lens to screen, which can bump up the price of digitally-originated projects.

“The issue with production budgets is that they rarely contain all the extra costs involved in digital capture and output in their entirety, not to mention the on-going storage and archive issues,” says Phil Méheux, BSC. “It’s fair to say that although it can vary from production to production, there is not a lot of difference (between film and digital) in budget by the time the film hits the screen.”

The presumed higher cost of 4K, however, is now being countered by facility houses, which argue that the expense is becoming manageable outside of VFX-intensive projects. For this reason Axel Ericson, founder of Digital Arts in New York, suggests the early adopters of 4K scene-to-screen features may also emerge from lower-tier markets.

“If you’re shooting mainly live action, and don't have a lot of VFX in your movie, then 4K can be much more cost-effective than you might think. The camera formats can now be de-bayered on-the-fly in post and you can work in realtime 4K with a full 4:4:4 pipeline,” says Ericson, who has built NYC's first 4K DI grading theatre. “The catch is that 4K is not yet a delivery requirement. Artistically, 4K can enhance storytelling, and producers have to balance how much extra value 4K is going to add to their movie today by giving it further life in years to come.”

As an indication of the size of data facing post houses, today's 2K DPX dailies at 300MB/s generate around 1TB/hour. At 4K 16-bit the volume is around 4TB/hour so a typical project, once all render passes and DCDM are accounted for, could easily become 40-50TB.
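Those volumes follow from straightforward arithmetic. A sketch, assuming uncompressed RGB frames at 24fps (the channel count, bit depth and frame rate here are illustrative assumptions; real volumes vary with codec and bit packing):

```python
# Back-of-envelope storage arithmetic for the dailies figures quoted above.
# Assumes uncompressed RGB frames; actual volumes depend on codec and packing.

def tb_per_hour_from_rate(mb_per_sec: float) -> float:
    """Hourly volume from a sustained data rate in MB/s."""
    return mb_per_sec * 3600 / 1e6

def tb_per_hour_from_frame(width: int, height: int, bytes_per_sample: int,
                           channels: int = 3, fps: int = 24) -> float:
    """Hourly volume for uncompressed frames of the given geometry."""
    frame_bytes = width * height * channels * bytes_per_sample
    return frame_bytes * fps * 3600 / 1e12

print(tb_per_hour_from_rate(300))             # 2K DPX at 300MB/s -> 1.08 TB/hour
print(tb_per_hour_from_frame(4096, 2160, 2))  # 4K 16-bit RGB -> ~4.6 TB/hour
```

The second figure lands in the same ballpark as the roughly 4TB/hour quoted above, with the difference down to exactly how samples are packed.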

And that's just for single camera shows. A recent 4K multi-camera shoot using Canon and Sony file-based media had FotoKem chief strategy officer, Mike Brodersen, telling the SMPTE Digital Cinema Summit, that the volume of data generated was “becoming difficult to manage”.

“On some days we are handling 6 or 7TB, so how we would deal with 8K, I just don't know,” he said, though FotoKem plans to find out in tests of 8K images derived from an F65 when Sony releases a software upgrade to unlock the sensor's full capability this summer. [The F65 contains two 4K sensors placed on top of each other, one at a 45˚ angle, to effectively create an 8K sensor].

“If you look at how data rates of network bandwidth and optical transmission are increasing, then to do an 8K show as easily as it is today to do a show in 2K or 4K, we are looking at 10-12 years from now,” estimated Jim Houston, chair of AMPAS' Image Interchange Framework committee at the summit. “An 8K show at 120fps might average 187TB per day of new data.”
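Houston's estimate can be checked against uncompressed arithmetic. A sketch assuming 16-bit RGB at 7680 x 4320 (an assumption for illustration; the committee's exact parameters are not given):

```python
# Rough check of the 8K/120fps data rate, assuming uncompressed 16-bit RGB.
# The 7680 x 4320 geometry and three-channel assumption are illustrative.

def gb_per_second(width: int, height: int, bytes_per_sample: int,
                  channels: int, fps: int) -> float:
    """Sustained data rate in GB/s for uncompressed frames."""
    return width * height * channels * bytes_per_sample * fps / 1e9

rate = gb_per_second(7680, 4320, 2, 3, 120)
print(round(rate, 1))                           # ~23.9 GB/s while rolling
# At that rate, 187TB/day corresponds to roughly 2.2 hours of recorded material
print(round(187e12 / (rate * 1e9) / 3600, 1))
```

In other words, the quoted daily average implies only a couple of hours of actual recording, which is why the projection of 10-12 years for routine 8K workflows is not outlandish.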

GPUs and solid-state storage continue to tumble in price and evolve in power, but as Barry Bassett, MD of rental house VMI, observes, “the reality of transferring huge amounts of data is not a problem which is going to disappear quickly”, as data-hungry cameras generate ever larger amounts of media.

Data aside, there continue to be disputes about what constitutes 4K. For Onsight's Shelley the only benchmark is provided by Digital Cinema Initiatives (DCI), where the 4K DCP is higher at 4096 x 2160 compared to TV's 3840 x 2160. “We know what we want to end up with, but there are any number of ways of getting to that deliverable,” he says.

Even here the International Telecommunication Union’s Ultra HD spec (which also accommodates 8K) has left open the standard frame-rate, which could be anywhere from 50-150fps, while the DCI may be encouraged to adopt variable frame-rates into future DCPs.

“A variable frame-rate might make sense to smooth out jitters in fast horizontal pans, but in other sequences of the same movie the cinematographer might want a 24p look,” says Oliver Pasch, sales director, Sony Digital Cinema.

Notoriously, not all 4K cameras are the same either. The devil is in the detail of the sensor pattern, post de-bayering. “[Once extrapolated] this means that 4K images from a RED Epic, while better than 4:4:4 HD, are not as good as the numbers suggest,” says Bassett. “This is important because while the ARRI Alexa is not a 4K camera it does have a 3.5K sensor. Its de-bayered RAW output can be recorded at 2880 x 2160 – almost 3K.”
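Bassett's point, that photosite count overstates delivered detail on a single-chip Bayer sensor, is often approximated with a rule-of-thumb de-bayer factor. A sketch, where the 0.75 factor is a commonly quoted estimate rather than a measured specification:

```python
# Rule-of-thumb sketch: a single-chip Bayer sensor resolves only a fraction
# of its photosite count as real luma detail after demosaicing. The 0.75
# factor is a commonly quoted estimate, not a measured specification.

def effective_width(photosite_width: int, debayer_factor: float = 0.75) -> int:
    """Approximate delivered horizontal resolution after de-bayering."""
    return round(photosite_width * debayer_factor)

print(effective_width(4096))  # a '4K' Bayer chip -> ~3K of delivered detail
print(effective_width(5120))  # a 5K chip -> roughly full 4K-width detail
```

On this rough basis, a camera needs to oversample well beyond 4K photosites to deliver true 4K detail, which is exactly why the "what counts as 4K" argument persists.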

Bassett cites Roger Deakins BSC ASC’s work on Skyfall using an ARRIRAW 2K workflow, with the output recorded to Codex. “Versions were prepped for 4K cinema with the file up-converted, and it looked fantastic,” he says.

Arguably, the only cameras currently available which produce a true 4K-resolution output are the Canon C500, Sony F65 and RED 5K Epics, all of which provide enough overhead for post work.


A more practical problem is that the visual benefits of 4K are debatable on existing home displays and cinema projection. According to consultant Michael Karagosian, viewing at normal cinema distances means you can't perceive the extra sharpness unless you're in the first few rows of the cinema – positions which cinema-goers least appreciate.

“There's a common misconception that 4K only makes sense on a big screen,” counters Pasch. “It all depends on the shape of the theatre and viewing angle. All that matters is how near or far you are to the screen. In most smaller or mid-sized auditoria the viewing angle is wider than for big screens and you can see the difference between 4K and 2K.”

Also important in discerning the 2K/4K difference is contrast ratio. “All 4K is not the same and depends on the whole quality of the image. The lower the contrast of the projected image the more your eye will struggle to see the difference,” says Pasch.

He admits, though, that 4K for a standard commercial cinema is already the maximum of what the human eye can see. “It makes no sense to be at 8K in terms of spatial resolution unless you screen on IMAX.”

The CMOS chips inside Japanese broadcaster NHK's 8K Super-HiVision broadcast cameras are considered state-of-the-art, but the man revered as the inventor of CMOS is hard at work researching a concept he claims could be the ultimate electronic equivalent of film grain.

“The Quanta Image Sensor is a revolutionary change in the way we collect images in a camera,” Eric Fossum of the Thayer School of Engineering at Dartmouth told SMPTE. “The goal is to count every photon that strikes the sensor, to provide resolution of 1 billion or more photo-elements (or jots) per sensor, and to read out jot bit-planes hundreds or thousands of times per second resulting in 10-100 Terabits/sec of data.”

One advantage lies in post-capture where grain size can be traded for signal-to-noise ratio, like changing the film speed after capture. “Needless to say it is very difficult to do this and you won't be seeing a camera anytime soon,” he said.

As home screens ramp up to 60 and 80 inches, and beyond, replete with autostereo 3D, 120Hz and UltraHD, what does cinema need to do to up its game? Some see the future in making greater artistic use of the sheer amount of information captured through lenses which themselves are rated up to 16K. As it readies its new digital camera, Panavision seems to be betting that larger 70mm-style sensors will grasp the imagination of filmmakers.

Others though, believe the future of cinema will be assured if it remains true to its ability to deliver a uniquely social storytelling experience.

“We are making the same choices with digital as with film, except that instead of choosing an emulsion we are selecting a camera as an artistic choice,” says Poster. “Then we choose a lens system, design the lighting and filtration and then manage the post process. We need the same skills as before to craft the image in service of the story.”