Digital cinematography is the process of capturing (recording) a motion picture using digital image sensors rather than film stock. As digital technology has improved in recent years, this practice has become dominant. Since the 2000s, most movies across the world have been captured and distributed digitally.[1]
Many vendors have brought products to market, including traditional film camera vendors like Arri and Panavision, as well as new vendors like Red, Blackmagic, Silicon Imaging, Vision Research and companies which have traditionally focused on consumer and broadcast video equipment, like Sony, GoPro, and Panasonic.[2]
As of 2023, professional 4K digital cameras roughly matched 35mm film in resolution and dynamic range. Some moviemakers still prefer to use film picture formats to achieve the desired results.[3]
Rainbow (1996) was the world's first film to utilize extensive digital post-production techniques.[11] It was shot entirely on Sony's first Solid State Electronic Cinematography cameras and featured over 35 minutes of digital image processing and visual effects; all post-production, sound effects, editing and scoring were completed digitally. The digital high-definition image was transferred to a 35mm negative via an electron beam recorder for theatrical release.
The first feature shot and post-produced entirely on digital video was Windhorse, shot in Tibet and Nepal in 1996 on the Sony DVW-700WS Digital Betacam and the prosumer Sony DCR-VX1000. The offline editing (Avid) and the online post and color work (Roland House / da Vinci) were also all digital. The film, transferred to 35mm negative for theatrical release, won Best U.S. Feature at the Santa Barbara Film Festival in 1998.
In 1997, with the introduction of HDCAM recorders and 1920 × 1080 pixel digital professional video cameras based on CCD technology, the idea, now re-branded as "digital cinematography," began to gain traction in the market.[citation needed] Shot and released in 1998, The Last Broadcast is believed by some to be the first feature-length video shot and edited entirely on consumer-level digital equipment.[12]
In May 1999, George Lucas challenged the supremacy of film as a movie-making medium for the first time by including footage filmed with high-definition digital cameras in Star Wars: Episode I – The Phantom Menace. The digital footage blended seamlessly with the footage shot on film, and later that year Lucas announced he would shoot the sequels entirely on high-definition digital video. Also in 1999, digital projectors were installed in four theaters for the showing of The Phantom Menace.
In May 2000, Vidocq, directed by Pitof, began principal photography, shot entirely on a Sony HDW-F900 camera; the film was released in September of the following year. According to Guinness World Records, Vidocq is the first full-length feature filmed in high-resolution digital video.[13]
In June 2000, Star Wars: Episode II – Attack of the Clones began principal photography, shot entirely on a Sony HDW-F900 camera as Lucas had previously stated. The film was released in May 2002. In May 2001, Once Upon a Time in Mexico was also shot in 24 frame-per-second high-definition digital video, a format partially developed by George Lucas, using a Sony HDW-F900 camera,[14] following Robert Rodriguez's introduction to the camera at Lucas' Skywalker Ranch facility while editing the sound for Spy Kids. A lesser-known film, Russian Ark (2002), was shot with the same camera and was the first tapeless digital movie, recorded to hard disk instead of tape.[15][16]
In 2009, Slumdog Millionaire became the first film shot mainly on digital to be awarded the Academy Award for Best Cinematography.[17] Avatar (2009), the highest-grossing film in the history of cinema, was not only shot on digital cameras but also earned most of its box-office revenue from digital rather than film projection.
Major movies[n 1] shot on digital video overtook those shot on film in 2013. Since 2016, over 90% of major films have been shot on digital video.[18] As of 2017, 92% of films were shot digitally.[19] Only 24 major films released in 2018 were shot on 35mm.[20] Since the 2000s, most movies across the world have been captured and distributed digitally.[21][22][23]
Today, cameras from companies such as Sony, Panasonic, JVC and Canon offer a variety of choices for shooting high-definition video. At the high end of the market, cameras aimed specifically at the digital cinema market have emerged. These cameras from Sony, Vision Research, Arri, Blackmagic Design, Panavision, Grass Valley and Red offer resolution and dynamic range exceeding that of traditional video cameras, which are designed for the more limited needs of broadcast television.
Technology
Digital cinematography captures motion pictures digitally in a process analogous to digital photography. While there is a clear technical distinction that separates the images captured in digital cinematography from video, the term "digital cinematography" is usually applied only in cases where digital acquisition is substituted for film acquisition, such as when shooting a feature film. The term is seldom applied when digital acquisition is substituted for video acquisition, as with live broadcast television programs.
Flagship smartphones like the Apple iPhone have been used to shoot movies such as Unsane (shot on the iPhone 7 Plus) and Tangerine (shot on three iPhone 5S phones), and in January 2018, Unsane's director, Oscar winner Steven Soderbergh, expressed an interest in filming other productions solely with iPhones going forward.[24]
Cameras designed specifically for the digital cinematography market often use a single sensor (much like digital photo cameras), with dimensions similar to a 16 or 35 mm film frame or even (as with the Vision 65) a 65 mm film frame. An image can be projected onto a single large sensor exactly the same way it can be projected onto a film frame, so cameras with this design can be made with PL, PV and similar mounts, in order to use the wide range of existing high-end cinematography lenses. Their large sensors also let these cameras achieve the same shallow depth of field as 35 or 65 mm motion picture film cameras, which many cinematographers consider an essential visual tool.[25]
Unlike other video formats, which are specified in terms of vertical resolution (for example, 1080p, which is 1920×1080 pixels), digital cinema formats are usually specified in terms of horizontal resolution. As a shorthand, these resolutions are often given in "nK" notation, where n is the multiplier of 1024 such that the horizontal resolution of a corresponding full-aperture, digitized film frame is exactly 1024n pixels. Here the "K" has a customary meaning corresponding to the binary prefix "kibi" (Ki).
For instance, a 2K image is 2048 pixels wide, and a 4K image is 4096 pixels wide. Vertical resolutions vary with the aspect ratio, though: a 2K image with an HDTV (16:9) aspect ratio is 2048×1152 pixels, a 2K image with an SDTV or Academy (4:3) ratio is 2048×1536 pixels, one with a Panavision (2.39:1) ratio is 2048×856 pixels, and so on. Because the "nK" notation does not correspond to specific horizontal resolutions per format, a 2K image lacking, for example, the space typically reserved for the 35mm film soundtrack is only 1828 pixels wide, with vertical resolutions rescaling accordingly. This has led to a plethora of motion-picture-related video resolutions, which is confusing and often redundant given the relatively few available projection standards.
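The arithmetic behind these figures is simple. The short Python sketch below derives the pixel dimensions quoted above from an "nK" width and a target aspect ratio; the round-down-to-an-even-line-count convention is an assumption made for illustration, since real formats fix their exact values.

```python
def frame_size(n: float, aspect_ratio: float) -> tuple[int, int]:
    """Width and height of an 'nK' frame at the given aspect ratio.

    Width is n multiples of 1024 pixels; height is width / aspect_ratio,
    rounded down to an even number of lines (an assumed convention --
    actual formats define their exact dimensions).
    """
    width = int(n * 1024)
    height = int(width / aspect_ratio) // 2 * 2
    return width, height

# Figures quoted above:
print(frame_size(2, 16 / 9))       # (2048, 1152) -- 2K at the HDTV ratio
print(frame_size(2, 4 / 3))        # (2048, 1536) -- 2K at the Academy/SDTV ratio
print(frame_size(2, 2.39))         # (2048, 856)  -- 2K at the Panavision ratio
print(frame_size(4, 2048 / 1080))  # (4096, 2160) -- the full DCI 4K container
```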
All formats designed for digital cinematography are progressive scan, and capture usually occurs at the same 24 frame-per-second rate established as the standard for 35mm film. Some films, such as The Hobbit: An Unexpected Journey, were shot at a high frame rate of 48 fps, although in some theaters they were also released in a 24 fps version, which many fans of traditional film prefer.
The DCI standard for cinema usually relies on a 1.89:1 aspect ratio, thus defining the maximum container size for 4K as 4096×2160 pixels and for 2K as 2048×1080 pixels. When distributed in the form of a Digital Cinema Package (DCP), content is letterboxed or pillarboxed as appropriate to fit within one of these container formats.
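To illustrate that fitting step, the sketch below computes how a picture of an arbitrary aspect ratio would be scaled and padded to sit inside the 2K or 4K DCI container. It is a simplified model of letterboxing and pillarboxing, not the actual DCP mastering logic.

```python
def fit_into_container(aspect_ratio: float, container=(2048, 1080)) -> tuple[int, int]:
    """Scale a picture of the given aspect ratio to fit inside a DCI container,
    leaving black bars (letterbox or pillarbox) over the unused area.
    Returns the active picture size (width, height) in pixels."""
    cw, ch = container
    if aspect_ratio >= cw / ch:                  # wider than ~1.89:1 -> letterbox
        return cw, int(cw / aspect_ratio) // 2 * 2
    return int(ch * aspect_ratio) // 2 * 2, ch   # narrower -> pillarbox

print(fit_into_container(2.39))  # (2048, 856)  -- "scope" picture, letterboxed
print(fit_into_container(1.85))  # (1998, 1080) -- "flat" picture, pillarboxed
```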
In the early years of digital cinematography, 2K was the most common format for digitally acquired major motion pictures; however, as new camera systems gained acceptance, 4K became more prominent. The Arri Alexa captured a 2.8K image. During 2009, at least two major Hollywood films, Knowing and District 9, were shot in 4K on the Red One camera, followed by The Social Network in 2010. As of 2017, 4K cameras are commonplace, with most high-end films being shot at 4K resolution.
Data storage
Broadly, two workflow paradigms are used for data acquisition and storage in digital cinematography.
Tape-based workflows
In a videotape-based workflow, video is recorded to tape on set. This video is then ingested into a computer running non-linear editing software, using a deck. Upon ingestion, the digital video stream from tape is converted to computer files. These files can be edited directly or converted to an intermediate format for editing. The video is then output in its final format, possibly to a film recorder for theatrical exhibition, or back to video tape for broadcast use. Original video tapes are kept as an archival medium. The files generated by the non-linear editing application contain the information necessary to retrieve footage from the proper tapes, should the footage stored on the computer's hard disk be lost. As file-based workflows have become more convenient, tape-based workflows have become marginal in recent years.
File-based workflows
Digital cinematography has mostly shifted towards "tapeless" or "file-based" workflows. This trend has accelerated with the increased capacity and reduced cost of non-linear storage solutions such as hard disk drives, optical discs, and solid-state memory. In tapeless workflows, digital video is recorded as digital files onto random-access media such as optical discs, hard disk drives or flash memory-based digital "magazines". These files can be easily copied to another storage device, typically to a large RAID (array of computer disks) connected to an editing system. Once data has been copied from the on-set media to the storage array, the media are erased and returned to the set for more shooting.
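In practice, the copy from camera media to the editing storage is verified rather than trusted blindly before a card is erased. The sketch below shows one minimal, hypothetical way to do this with a checksum comparison; the file paths and hash choice are illustrative assumptions, and real on-set offload tools work differently.

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large camera files need not fit in RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verified_copy(src: Path, dst_dir: Path) -> Path:
    """Copy a clip from camera media to the storage array and verify it
    byte-for-byte before the source card is considered safe to erase."""
    dst = dst_dir / src.name
    shutil.copy2(src, dst)
    if sha256_of(src) != sha256_of(dst):
        raise IOError(f"Checksum mismatch for {src.name}; do not erase the card")
    return dst

# Hypothetical usage: offload every clip from one camera "magazine".
# for clip in Path("/Volumes/CARD_A001").glob("*.mov"):
#     verified_copy(clip, Path("/raid/project/day01"))
```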
Such RAID arrays, both "managed" (for example, SANs and NASes) and "unmanaged" (for example, JBODs on a single computer workstation), are necessary because of the throughput required for real-time (320 MB/s for 2K at 24 fps) or near-real-time playback in post-production, which exceeds the throughput available from even a fast single hard disk drive. Such requirements are often termed "on-line" storage. Post-production work that does not require real-time playback performance (typically lettering, subtitling, versioning and similar visual effects) can be migrated to slightly slower RAID stores.
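The 320 MB/s figure can be reproduced with simple arithmetic if one assumes uncompressed 2K RGB frames stored at 16 bits per channel; the bit depth and packing are assumptions made for illustration, and other encodings give somewhat different numbers.

```python
width, height = 2048, 1080   # the 2K DCI container
channels = 3                 # R, G and B, with no chroma subsampling (4:4:4)
bytes_per_sample = 2         # assuming 16 bits stored per channel
fps = 24

bytes_per_frame = width * height * channels * bytes_per_sample
throughput = bytes_per_frame * fps            # bytes per second

print(round(bytes_per_frame / 1e6, 1))  # ~13.3 MB per frame
print(round(throughput / 1e6, 1))       # ~318.5 MB/s, roughly the 320 MB/s quoted above
```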
Short-term archiving, when it is done at all, is accomplished by moving the digital files into "slower" RAID arrays (still of either managed or unmanaged type, but with lower performance), where playback capability is poor to non-existent (unless via proxy images), but minimal editing and metadata harvesting are still feasible. Such intermediate requirements fall into the "mid-line" storage category.
Long-term archiving is accomplished by backing up the digital files from the RAID, using standard practices and equipment for data backup from the IT industry, often to data tapes (like LTOs).
Most digital cinematography systems further reduce data rate by subsampling color information. Because the human visual system is much more sensitive to luminance than to color, lower resolution color information can be overlaid with higher resolution luma (brightness) information, to create an image that looks very similar to one in which both color and luma information are sampled at full resolution. This scheme may cause pixelation or color bleeding under some circumstances. High quality digital cinematography systems are capable of recording full resolution color data (4:4:4) or raw sensor data.
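A small calculation shows why subsampling the color information helps. Assuming 8-bit samples (an assumption for illustration), the sketch below compares the per-frame data for full-resolution 4:4:4 against the common 4:2:2 and 4:2:0 subsampling schemes.

```python
def bytes_per_frame(width: int, height: int, scheme: str = "4:4:4",
                    bytes_per_sample: int = 1) -> int:
    """Approximate frame size for Y'CbCr video with the given chroma subsampling.

    Average samples per pixel: 4:4:4 -> 3, 4:2:2 -> 2 (chroma halved
    horizontally), 4:2:0 -> 1.5 (chroma halved in both directions).
    """
    samples_per_pixel = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}[scheme]
    return int(width * height * samples_per_pixel * bytes_per_sample)

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    mb = bytes_per_frame(1920, 1080, scheme) / 1e6
    print(f"{scheme}: {mb:.1f} MB per 1080p frame")
# 4:4:4: 6.2 MB, 4:2:2: 4.1 MB, 4:2:0: 3.1 MB -- luma detail is untouched;
# only the color-difference planes shrink.
```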
Intra-frame vs. Inter-frame compression
Most compression systems used for acquisition in the digital cinematography world compress footage one frame at a time, as if a video stream were a series of still images. This is called intra-frame compression. Inter-frame compression systems can further compress data by examining and eliminating redundancy between frames. This leads to higher compression ratios, but displaying a single frame usually requires the playback system to decompress a number of frames from before and after it. In normal playback this is not a problem, as each successive frame is played in order, so the preceding frames have already been decompressed. In editing, however, it is common to jump around to specific frames and to play footage backwards or at different speeds. Because of the need to decompress extra frames in these situations, inter-frame compression can cause performance problems for editing systems. Inter-frame compression is also disadvantageous because the loss of a single frame (say, due to a flaw writing data to a tape) will typically ruin all the frames until the next keyframe occurs. In the case of the HDV format, for instance, this may result in as many as 6 frames being lost with 720p recording, or 15 with 1080i.[30] An inter-frame compressed video stream consists of groups of pictures (GOPs), each of which has only one full frame and a handful of other frames that refer to it. If the full frame, called the I-frame, is lost due to a transmission or media error, none of the dependent P-frames or B-frames can be displayed. In this case, the whole GOP is lost.
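The dependency chain described above can be made concrete with a toy model of a GOP. The sketch below is purely illustrative (real codecs also use B-frames and far more complex reference structures): each P-frame depends on the frame immediately before it, so losing the I-frame makes the whole group undecodable.

```python
def decodable_frames(gop, lost):
    """Toy GOP model: an 'I' frame decodes on its own, each 'P' frame needs the
    frame right before it. Returns indices of frames still displayable when
    the frames listed in `lost` are missing."""
    ok, prev_ok = [], False
    for i, kind in enumerate(gop):
        if i in lost:
            prev_ok = False
        elif kind == "I":
            ok.append(i)
            prev_ok = True
        elif kind == "P" and prev_ok:
            ok.append(i)
        else:
            prev_ok = False
    return ok

gop = ["I", "P", "P", "P", "P", "P"]       # a six-frame group of pictures
print(decodable_frames(gop, lost={3}))     # [0, 1, 2] -- frames after the loss are gone
print(decodable_frames(gop, lost={0}))     # []        -- losing the I-frame loses the GOP
```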
For theaters with digital projectors, digital films may be distributed digitally, either shipped to theaters on hard drives or sent via the Internet or satellite networks. Digital Cinema Initiatives, LLC, a joint venture of Disney, Fox, MGM, Paramount, Sony Pictures Entertainment, Universal and Warner Bros. Studios, has established standards for digital cinema projection. In July 2005, they released the first version of the Digital Cinema System Specification,[33] which encompasses 2K and 4K theatrical projection. They also offer compliance testing for exhibitors and equipment suppliers.
Theater owners initially balked at installing digital projection systems because of the high cost and concern over increased technical complexity. However, new funding models, in which distributors pay a "digital print" fee to theater owners, have helped to alleviate these concerns. Digital projection also offers increased flexibility in showing trailers and pre-show advertisements and lets theater owners more easily move films between screens or change how many screens a film is playing on; the higher quality of digital projection also provides a better experience to help attract consumers who can now access high-definition content at home. These factors have made digital projection an increasingly attractive prospect for theater owners, and the pace of adoption has been rapidly increasing.
Since some theaters currently do not have digital projection systems, even if a movie is shot and post-produced digitally, it must be transferred to film if a large theatrical release is planned. Typically, a film recorder will be used to print digital image data to film, to create a 35 mm internegative. After that the duplication process is identical to that of a traditional negative from a film camera.
Resolution
Unlike a digital sensor, a film frame does not have a regular grid of discrete pixels.
Determining resolution in digital acquisition seems straightforward, but it is significantly complicated by the way digital camera sensors work in the real world. This is particularly true in the case of high-end digital cinematography cameras that use a single large Bayer pattern CMOS sensor. A Bayer pattern sensor does not sample full RGB data at every point; instead, each pixel is biased toward red, green or blue, and a full color image is assembled from this checkerboard of color by processing the image through a demosaicing algorithm. Generally with a Bayer pattern sensor, actual resolution will fall somewhere between the "native" value and half this figure, with different demosaicing algorithms producing different results. Additionally, most digital cameras (both Bayer and three-chip designs) employ optical low-pass filters to avoid aliasing; suboptimal antialiasing filtering can further reduce system resolution.
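For illustration only, here is a minimal bilinear demosaicing sketch (using NumPy and SciPy). It shows the principle of reconstructing full RGB from an RGGB Bayer mosaic; production cameras and grading software use far more sophisticated, usually proprietary, algorithms.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw: np.ndarray) -> np.ndarray:
    """Bilinear demosaic of an RGGB Bayer mosaic (2-D array) into an RGB image.

    Each output channel keeps the samples the sensor actually measured and
    fills the gaps with the average of the measured neighbors in a 3x3 window.
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3))
    masks[0::2, 0::2, 0] = 1   # red photosites
    masks[0::2, 1::2, 1] = 1   # green photosites (two per 2x2 block)
    masks[1::2, 0::2, 1] = 1
    masks[1::2, 1::2, 2] = 1   # blue photosites
    kernel = np.ones((3, 3))
    for c in range(3):
        measured = raw * masks[..., c]
        total = convolve(measured, kernel, mode="mirror")
        count = convolve(masks[..., c], kernel, mode="mirror")
        rgb[..., c] = total / np.maximum(count, 1)
    return rgb
```

Because roughly two thirds of each output channel is interpolated rather than measured, the effective resolution of the reconstructed image falls below the sensor's native photosite count, as described above.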
Grain and noise
Film has a characteristic grain structure, and different film stocks have different grain. Digitally acquired footage lacks this grain structure; instead, it exhibits electronic noise.
Digital intermediate workflow and archiving
The use of a digital intermediate workflow, in which movies are color graded digitally instead of via traditional photochemical finishing techniques, has become common.
In order to utilize digital intermediate workflow with film, the camera negative must first be processed and then scanned to a digital format. Some filmmakers have years of experience achieving their artistic vision using the techniques available in a traditional photochemical workflow, and prefer that finishing/editing process.
Digitally shot movies can be printed, transferred or archived on film. Large scale digital productions are often archived on film, as it provides a safer medium for storage, benefiting insurance and storage costs.[36] As long as the negative does not completely degrade, it will always be possible to recover the images from it in the future, regardless of changes in technology, since all that will be involved is simple photographic reproduction.
In contrast, even if digital data is stored on a medium that will preserve its integrity, highly specialized digital equipment will always be required to reproduce it. Changes in technology may thus render the format unreadable or expensive to recover over time. For this reason, film studios distributing digitally-originated films often make film-based separation masters of them for archival purposes.[36]
Reliability
Film proponents have argued that early digital cameras lacked the reliability of film, particularly when filming sequences at high speed or in chaotic environments, owing to digital cameras' susceptibility to technical glitches. Cinematographer Wally Pfister noted that for his shoot on the film Inception, "Out of six times that we shot on the digital format, we only had one useable piece and it did not end up in the film. Out of the six times we shot with the Photo-Sonics camera and 35mm running through it, every single shot was in the movie."[37] Michael Bay stated that when filming Transformers: Dark of the Moon, 35mm cameras had to be used for slow-motion shots and for sequences where the digital cameras were subject to strobing or electrical damage from dust.[38] Since 2015, digital has almost totally replaced film for high-speed sequences of up to 1,000 frames per second.
Criticism and concerns
Some film directors, such as Christopher Nolan,[39] Paul Thomas Anderson[40] and Quentin Tarantino, have publicly criticized digital cinema and advocated the use of film and film prints. Tarantino has suggested he may retire because he will no longer be able to have his films projected in 35mm in most American cinemas. Tarantino considers digital cinema to be simply "television in public."[41] Christopher Nolan has speculated that the film industry's adoption of digital formats has been driven purely by economic factors as opposed to digital being a superior medium to film: "I think, truthfully, it boils down to the economic interest of manufacturers and [a production] industry that makes more money through change rather than through maintaining the status quo."[39]
Another concern with digital image capture is how to archive all the digital material. Archiving digital material is turning out to be extremely costly, and it creates issues in terms of long-term preservation. In a 2007 study, the Academy of Motion Picture Arts and Sciences found that the cost of storing 4K digital masters is "enormously higher – 1100% higher – than the cost of storing film masters." Furthermore, digital archiving faces challenges due to the insufficient longevity of today's digital storage: no current medium, whether magnetic hard drive or digital tape, can reliably store a film for a hundred years, something that properly stored and handled film can do.[42] Although this also used to be the case with optical discs, in 2012 Millenniata, Inc., a digital storage company based in Utah, released M-DISC, an optical storage solution designed to last up to 1,000 years, offering one possible path to viable long-term digital storage.[43][44]
Ohanian, Thomas; Phillips, Natalie (2013-04-03). Digital Filmmaking: The Changing Art and Craft of Making Motion Pictures. CRC Press. ISBN 978-1136053542.