The ultimate guide to Chroma Key and Green Screen

Ever wondered what a news studio looks like in reality – and wanted to play around with the green screen? Try it yourself! Anyone can set up a green screen and use chroma keying for home-made special effects. Learn the basics.

Virtual news studios are places where dreams (as well as nightmares) can become on-screen reality: from dinosaurs suddenly walking through the studio to illustrations of rising water levels and other impressive weather effects, TV channels have been using chroma key and green screens more and more.

Progress in computer animation and live processing of green-screen setups has made it possible to project any kind of setting into these news studios. Filmmakers rely on the method for most kinds of special effects, too – such as the invisibility cloak in the Harry Potter films. But there is more good news: anyone can use chroma keying at home nowadays. Let us walk you through the basics.

The essentials: What is chroma key and how does it work?

Chroma key is a method for replacing a predefined color, the so-called key color, in filmed material – inserting (digital) content such as graphs, maps and animations, or combining it with material from another shot.

The most common key colors are green and blue. Why these two? Because they contrast most strongly with the color of human skin, so the people in front of the camera are not keyed out along with the background.

The purpose: Why use chroma key?

Chroma key lets you combine two different shots in the same frame: every pixel matching the defined green or blue key color is replaced with content from other footage or with digital content (see the sketch after the list below). This means: your imagination is the limit!

Possible settings are:

  • Virtual studios for your own “news broadcast”;
  • Putting yourself in the most unlikely locations such as the jungle, underwater or the Oval Office;
  • In a more sophisticated way, you can also use the green-screen environment to insert digital animations.
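To make the pixel-replacement idea concrete, here is a minimal sketch in Python (using the OpenCV and NumPy libraries; the filenames and HSV thresholds are assumptions you would tune per shot). Real keyers are far more sophisticated – soft mattes, edge blending, spill suppression – but the core operation really is a per-pixel color test:

```python
import cv2
import numpy as np

def simple_chroma_key(foreground, background,
                      lower=(35, 80, 80), upper=(85, 255, 255)):
    """Replace green-ish pixels in `foreground` with pixels from `background`.

    `lower`/`upper` bound the key color in HSV space (OpenCV hue runs 0-179,
    so ~35-85 covers green); these are rough guesses for a typical green screen.
    """
    hsv = cv2.cvtColor(foreground, cv2.COLOR_BGR2HSV)
    # Mask of every pixel that matches the key color...
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    # ...keep the background there and the foreground everywhere else.
    mask3 = cv2.merge([mask, mask, mask]) // 255   # 0/1 per channel
    return foreground * (1 - mask3) + background * mask3

fg = cv2.imread("greenscreen_shot.png")            # hypothetical files
bg = cv2.imread("jungle.png")
bg = cv2.resize(bg, (fg.shape[1], fg.shape[0]))    # match frame sizes
cv2.imwrite("composite.png", simple_chroma_key(fg, bg))
```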

Blue or green: Which color should I use for chroma key?

Choose your key color according to the colors in your shot: if there is blue in it, use green, and vice versa. Plants will partly disappear against a green screen, as will a person wearing blue trousers against a blue screen. Also be aware of each color's side effects: green is about twice as reflective as blue, so it tends to spill onto and contaminate your shot more than blue does.

There are rare settings where both colors are needed at the same time – Spider-Man, with his partly blue costume, fighting the Green Goblin is one example you might (or might not) encounter. In such cases, you will be forced to shoot the protagonists separately.

Why is lighting so important for chroma key?

The greatest challenge when using chroma key is the illumination of your set. The reason: your key color must appear as homogeneous and soft as possible across the entire setting. Take extra care to avoid:

  • Shadows: The slightest deviation from the key color will result in errors – and shadows cause exactly that. Solution: use extra lights between the objects and people you are filming and the key-color background to avoid shadows.
  • Wrinkles: Make sure your background is ironed, flat and uniform. Creases produce shading variations in the key color that will later disturb the keying process.
  • Dust and dirt: Studios are like pharmaceutical laboratories – they are super clean. Don't let a bit of dust endanger your green-screen project!

Pro tips: Distance, confidence monitor, camera focus, compression and one-click keys

No more than 3 meters! That is the ideal distance between camera and key-color background. The bigger the distance, the more complex your setup becomes in terms of lighting, focus and color.

Provide a confidence monitor for your protagonists. A green-screen set is harder to work in for the people in front of the camera: they interact with objects they cannot see. It may be a bit costly, but if technically possible, a screen showing the live composite gives the people in front of your camera something to orient themselves by.

Switch to manual focus. If your chroma key set is well lit, many cameras' autofocus systems will hunt on the uniform color. Use your camera skills and focus manually.

Avoid heavily compressed recording at all costs. Compression shifts and averages colors, which can make your shots useless for keying. RAW is the best option.

All set for the shot? Then start with a still of your key-color background: with that clean plate, you will be able to make one-click keys in post-production, as the sketch below illustrates.
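Why does the clean-plate still enable a one-click key? Because the software can sample the exact key color from the empty frame instead of guessing it. A rough sketch of the idea in Python/NumPy (hypothetical filenames; the tolerance is an assumption you would tune, and commercial keyers add soft edges and spill removal on top):

```python
import cv2
import numpy as np

# Sample the key color from a still of the empty green-screen background.
plate = cv2.imread("clean_plate.png").astype(np.float32)
key_color = plate.reshape(-1, 3).mean(axis=0)       # average BGR of the plate

frame = cv2.imread("take_01.png").astype(np.float32)
background = cv2.imread("new_backdrop.png").astype(np.float32)  # same size

# Key out every pixel within `tolerance` of the sampled color.
tolerance = 60.0                                    # tune per shot
distance = np.linalg.norm(frame - key_color, axis=2)
matte = (distance > tolerance).astype(np.float32)[..., None]

composite = frame * matte + background * (1 - matte)
cv2.imwrite("keyed.png", composite.astype(np.uint8))
```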

Bring the power of green screen footage & chroma key to your next project

You can cleverly bring the power of green screen footage to your next project. Whether you're combining several takes into one composite, adding a colorful backdrop to a headshot, or creating special effects from different assets, powerful and responsive tools help bring your vision to life.

Remember: chroma key goes hand-in-hand with green screen! Get a quick insight with our video example illustrating how to make a chroma key video with VEGAS POST, and learn how to key green-screen footage like the pros in this step-by-step guide.

 


https://vegas-magazine.com/chroma-key-green-screen/

After Effects & Performance. Part 8: Multiprocessing (kinda sorta)

When is multiprocessing not multiprocessing?  When it's rendering multiple frames simultaneously. In part 8 we're going to have a quick look at this feature, which was first introduced with After Effects CS 3 in 2007 and was included through to CC 2014.

In Part 6 we looked at how the year 2006 was a very significant year for desktop CPUs.  Let’s start by looking at what After Effects was doing at the same time.

By 2006 After Effects was 13 years old; version 6.5 had been released almost two years earlier.  In January 2006 Adobe released After Effects version 7, which brought certainly the most visible change in After Effects' history: the new “unified interface” that we've been using ever since. From a personal perspective, I initially hated the new user interface and I postponed upgrading for as long as I could. But I was wrong, so wrong, and once I learned how to set up workspaces with panels and viewers I could never go back.

January 2006 was the same month that Intel launched the new Core Duo, superseding the Pentium 4. With the launch of the Core Duo, Intel ushered in the era of multi-core processors – as detailed in part 6.  The future of performance would be based on more cores, not faster ones.  The problem facing software developers was how to effectively utilize additional processor cores.  It’s a problem with no simple solution.  For a huge company like Adobe, dealing with an established code base that was 13 years old, as well as a large user base for whom stability and backwards compatibility were critically important, this was even more difficult.

Transitioning to multi-core processors wasn’t a complete surprise, and the industry had plenty of time to think about it.  But there weren’t any easy answers to the problems multi-core processors raised, and one of the biggest problems was trying to explain why it was difficult.

In November 2006, MIT Technology Review published an article on “the trouble with multi-core computers”, and not a lot has changed in the 14 years since.  The opening paragraph clearly sums up the situation then and now:

“Although multiple processors are theoretically faster than a single core, writing software that takes advantage of many processors – a task called parallel programming – is extremely difficult.”

– MIT Technology Review 2006

The issue was no clearer a year and a half later.  In June 2008, the International Solid-State Circuits Conference gathered senior chip designers from Intel, AMD, IBM, Sun and others to discuss the topic at an evening panel.  The opinions were diverse and showed a range of thoughts on multi-core processors, which are interesting to look back on more than ten years later.  Some of the choice quotes are:

“by 2017 embedded processors could sport 4,096 cores, server CPUs might have 512 cores and desktop chips could use 128 cores. The question is not whether this will happen but whether we are ready.”

– Anant Agarwal, founder and chief executive of startup Tilera

 

“microprocessor cores will get increasingly simple, but software needs to evolve more quickly than in the past to catch up.”

– Shekhar Borkar, director of Intel Corp.'s Microprocessor Technology Lab

 

Ditzel told the panel that he had helped design Sun's first 64-bit CPU, then waited nearly ten years before commercial 64-bit operating systems became available.

“With multi-core it’s like we are throwing this Hail Mary pass down the field and now we have to run down there as fast as we can to see if we can catch it.”

– Dave Ditzel, former CPU architect at Sun and founder of Transmeta

Skipping back to the earlier 2006 MIT article, written after the Core Duo had been launched but before any quad-core processors had been released, it concludes with the following summary:

For the most part, operating systems such as Windows and Mac OS X are able to effectively split up applications on a dual-core system… but when it comes to 4, 8, or 16 cores, the applications themselves need to be modified to garner more performance.

– MIT Technology Review 2006

While the problem of efficiently utilising multi-core processors was industry-wide, the few people who were experts tended to work in the rarefied world of high performance supercomputers. Now, everyday programmers working on Windows and Mac applications needed to adapt, but such a generational change wasn’t going to happen overnight.

In with the new

Every time a new version of After Effects is released, users look forward to seeing what’s new and – hopefully – improved.  There are always new features being requested, as well as bugs that are discovered and that need to be fixed.  At the same time, the Windows and Mac operating systems are constantly being updated too, which can require additional work by Adobe to keep everything running smoothly. At any time, Adobe has a plan for how its resources will be allocated across new features and bug fixes, operating system compatibility as well as compatibility with other Adobe apps.

But as we looked at in part 6, 2006 was an unusually significant year in the history of desktop computers. Steve Jobs surprised everyone by announcing that Apple were switching to Intel CPUs, and Intel launched the Core Duo – signaling the rise of multi-core processors on the desktop. Additionally – and this isn't something that I've mentioned in this series before – Intel's new CPU lines were moving to 64 bit, while all of the existing Adobe apps were still 32 bit.

Beyond the usual scope of new features and plans that Adobe had for After Effects, these industry-wide developments placed additional pressure on Adobe’s resources. It was only a matter of time before After Effects users would expect that the Mac version would be native Intel code, and both Windows and Mac users would be asking for full 64 bit support (more about this in an upcoming article). And while the first new Intel CPU had two cores, it was clearly only a matter of time until CPUs came out with 4 cores and then more.  Again, After Effects users would be expecting future versions to harness the power of new multi-core machines.

These three developments alone – Intel support on Macs, 64 bit support and multi-processor support – were formidable enough, before you consider the usual range of new features and bug fixes that customers expect with expensive upgrades.  When it came to the technical side of things, there wasn’t any ambiguity around Intel support for Macs, or 64 bit support across both platforms.  That’s not to say that those changes would be easy or quick, just that there was a clearly defined approach to implementing them.

But supporting multiple processors was different.  There was no general consensus on the best way to approach software design for multiple processors, which is referred to as “multi-threading”.  There wasn't even a consensus on the terminology to use, although it was generally accepted that “multi-processor” referred to multiple discrete CPU chips, while “multi-core” referred to a single chip with multiple CPUs on it. Thus the Intel Core 2 Duo was a multi-core CPU, while Apple's “Quad core” PowerMac was both multi-processor and multi-core, as it contained two CPUs that each contained two CPU cores. Confused?  Well it only got worse, because many of Intel's competitors saw a lot of potential in having different types of CPU cores on the same chip – but what do you call that? Other sectors of the industry were also developing alternative solutions to meet their own unique demands – for example, ARM used multiple CPU cores to reduce power consumption instead of increasing CPU speed, a successful approach they named “big.LITTLE”.

Separately from all of these hardware developments, another significant factor was the computer’s operating system – both Mac and Windows – which needed to incorporate support for multiple processors.  Apple and Microsoft approached multi-processing support in different ways and with different developer tools, increasing the complexity of the task for developers of cross-platform software.

So it’s understandable that when Intel launched the Core Duo in 2006, and the future was clearly multi-core CPUs, Adobe were facing a dilemma about the best way to initially respond.

Their immediate solution could be considered something of a hack, and in some ways it reflected the approach that Intel had taken with CPUs.  Intel made their newer processors more powerful by adding more cores to the one CPU, not by making the cores faster. Adobe was able to take a comparable approach with After Effects: rendering performance could be improved by running more copies of After Effects on the same computer, not by making any one copy faster.  This is the same basic approach that GridIron Software had adopted with their “Nucleo” plugin, which had been released in 2005.

After Effects version 7 was the last version to ship with a conventional version number – the next release would adopt the new “Creative Suite” prefix. Eighteen months after v7 launched, Adobe shipped After Effects CS 3 in July 2007.

2007: Adobe After Effects CS 3

Adobe introduced many new features with CS 3, including their first nod towards multi-processor computers.  After Effects CS 3 was definitely not multi-threaded – the software itself had not been re-written for multiple CPUs.  However, a new feature was accessible in the user preferences: “render multiple frames simultaneously”, under the heading “Multiprocessing”.

Over the next eight major releases, through to CC 2014, the behaviour would be tweaked and refined along with the user settings, but the basic concept remained the same.

“Render Multiple Frames Simultaneously” first appeared in CS 3, and was accessed in the preferences. The user settings went through a number of revisions, the first being a slider, added in CS 4, to help the user balance RAM between background processes and RAM previews.

Clicking the checkbox to turn it on would prompt After Effects to load up additional versions of itself (without the user interface) in the background. There was no visible indication that anything else had changed, but if you looked at your system’s task manager / activity monitor you’d see a bunch of extra processes either called “aeselflink” (Mac) or “AfterFX.exe” (Windows).  Each one of these represented a link from the main version of After Effects running with the user interface, to an invisible copy running in the background.

While the exact controls in the preferences changed over the years, in general there was a setting for how much memory was allocated to the background processes, and whether to leave a CPU core free for other applications.  Initially, as AE was a 32 bit application, the maximum amount of RAM available for each process was 3 gigabytes, but the later 64 bit versions upped the maximum to 6 GB.
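The back-of-the-envelope arithmetic behind those preferences is worth making explicit. Here is a sketch (my numbers and reserve allowances, not Adobe's actual logic) of how quickly the headroom disappeared:

```python
def background_instances(total_ram_gb, cores, ram_per_instance_gb=3,
                         reserve_gb=1.5, reserve_cores=1):
    """Rough estimate of how many background render processes fit.

    ram_per_instance_gb: 3 GB was the 32-bit per-process ceiling;
    reserve_gb covers the OS plus the foreground copy of AE;
    reserve_cores leaves a core free for other applications.
    """
    by_ram = int((total_ram_gb - reserve_gb) // ram_per_instance_gb)
    by_cores = cores - reserve_cores
    return max(0, min(by_ram, by_cores))

print(background_instances(total_ram_gb=2, cores=2))   # typical 2007 PC -> 0
print(background_instances(total_ram_gb=8, cores=4))   # high-end 2007 -> 2
```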

In Theory

When it came time to render something in the render queue, each background process would load its own copy of the project.  As the name suggests, once each process had loaded the project they all began rendering individual frames simultaneously. Ideally, each background copy of After Effects would run on its own CPU core, fully utilising the potential power of a multi-core CPU.  Once rendered, each frame was sent to the main After Effects application, which assembled them in the right order and presented them to the user as a single render.
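Conceptually, this is just a pool of worker processes being handed frame numbers and returning finished frames. A toy sketch of the pattern using Python's multiprocessing module (an illustration of the concept, not AE's actual plumbing; `render_frame` stands in for rendering one frame of a loaded project):

```python
from multiprocessing import Pool

def render_frame(frame_number):
    """Stand-in for rendering one frame; each worker process holds its
    own copy of the 'project' and works independently of the others."""
    pixels = sum(range(200_000))              # pretend this is expensive
    return frame_number, pixels

if __name__ == "__main__":
    frames = range(240)                       # a 10-second comp at 24 fps
    with Pool(processes=3) as pool:           # three 'background copies'
        # Workers finish frames out of order; imap_unordered hands results
        # back as they complete, so we re-assemble them by frame number.
        results = sorted(pool.imap_unordered(render_frame, frames))
    print(f"rendered {len(results)} frames, back in order")
```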

If you turned on “RMFS” and checked your activity monitor / task manager, you'd see a bunch of additional After Effects processes. On a Mac they were called “aeselflink” and on Windows they were simply “AfterFX.exe”.

Render settings and output modules worked the same as they always had. Even though the rendering was split among multiple background processes, the frames were automatically stitched together into a single video that could be saved as a QuickTime file, or any other selected format, with embedded audio if required.

The 3rd party plugin “Nucleo” had been offering the same technical approach for a few years, and the newer “Pro” version included many additional functions.  Most notably, it would speculatively render compositions, layers and previews in the background so they were already done – or mostly done – by the time you wanted them. Nucleo Pro was a commercial 3rd party product that had many devoted fans, but it was discontinued in 2011.

In Practice

It sounds simple in theory. Compared to the complexity of re-writing the entire application from the ground up for multiple processors, simply running extra copies of After Effects in the background was a clever alternative. It utilised existing resources – a render-only version of After Effects – and the concept was easy to understand.  It also placed the technical workload onto the operating system.  Your computer is always running several different apps at the same time, even if you’re not aware of it, and it’s the operating system that divides up the workload between the available hardware.  By running multiple instances of After Effects on the same machine, the operating system was doing the bulk of the work in distributing the workload across multiple processors.

But in practice… well, that's another story.  And it's also a personal one, so I should probably make this clear right now: I never liked the “render multiple frames simultaneously” feature.  It didn't work for me, and over the many years that it was part of AE it caused me more trouble than it was worth. I wanted it to work, and I spent a lot of time and effort experimenting with it over many years.  But overall, I never found a point where the benefits outweighed the pitfalls.  That's just my opinion and my experience – many users loved it and found it beneficial. So why the varied opinions?

In Part 1 of this series I listed a diverse range of After Effects users.  Each one used different parts of AE, in different ways, and the projects they worked on had different technical requirements. In a similar way, the benefits of rendering multiple frames simultaneously varied according to the type of work being done.  I can happily admit that rendering multiple frames simultaneously simply didn’t suit the projects I was working on, although it was evidently very helpful to other After Effects users.

So let’s look at some of the problems – the most prominent of which was simply reliability.

Firstly, we need to remind ourselves of the sorts of computers people were using in 2007, the year that Adobe released CS 3. A quick Google search reveals that the average amount of RAM in a 2007 machine was 2 gigabytes. If we look at the machines that Apple released that year, the 2007 iMac came with 1 GB and supported a maximum of 4 GB, the 2007 MacBook Pro also had a maximum of 4 GB, and the 2007 MacPro supported a maximum of 16 GB of RAM – and we can assume that maxing out the RAM was expensive (NB. In later years the 2007 MacPro could be fitted with more than 16GB RAM, but it needed a firmware update that came out in 2012). Looking back on some of my older tutorials, I can see that in 2006 I was working on an old G4 PowerMac with 1 GB RAM.

So in 2007, putting 8 gigs of RAM or more in a workstation would have been possible but very expensive, while the average desktop machine couldn't hold that much RAM even if you had the cash.

But as we looked at in Part 3, After Effects can easily use up loads of RAM – it doesn't take many layers of HD video to fill up 4 gigabytes, let alone 2.  For After Effects to run reliably it needs more than 1 gigabyte of RAM, especially if you're working at resolutions larger than SD.  By 2007, HD video – 1920 x 1080 – was well and truly established as the new standard, and as we saw in Part 3 a single HD frame requires about six times more RAM than an SD NTSC frame. If our computer only has 2 GB of RAM to begin with, then we simply don't have enough to run any extra instances of AE in the background reliably.  That's before we consider the operating system and any other applications that might also be running. If After Effects runs out of RAM mid-render, the render can fail outright or slow the machine to a crawl.
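That “about six times” figure is simple arithmetic on uncompressed frame sizes – here assuming 4 bytes per pixel (8-bit RGBA); 16-bit and 32-bit float projects multiply it again:

```python
def frame_bytes(width, height, bytes_per_pixel=4):
    # 4 bytes/pixel = four 8-bit channels (RGBA)
    return width * height * bytes_per_pixel

sd = frame_bytes(720, 480)       # NTSC SD:  ~1.4 MB per frame
hd = frame_bytes(1920, 1080)     # full HD:  ~8.3 MB per frame
print(hd / sd)                   # -> 6.0: the "about six times" above
```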

So the average machine in 2007 simply didn’t have enough RAM for more than one copy of After Effects to run reliably.

The next issue was CPU cores. When you turned on “render multiple frames simultaneously”, then only the invisible background processes did the rendering.  The main After Effects app in the foreground – the one with the user interface – no longer took part in the rendering process.  This meant that if you only had one copy of After Effects rendering in the background you wouldn’t see any performance benefit – After Effects would still only be rendering on one CPU even though two copies were running overall.  In order to see faster rendering than usual, you’d need to run at least two instances of After Effects in the background – for a total of three instances overall.  But to do this reliably would require a machine that had at least 3 CPU cores and 6 gigabytes of RAM – realistically translating to a quad-core CPU with 8 gigabytes of RAM (because you couldn’t get a CPU with 3 cores, only 2 or 4). While that might not seem very demanding now, in 2007 only brand new high-end workstations would have those sorts of specs.

So right out of the box in 2007, the “render multiple frames simultaneously” feature would only run reliably and provide faster rendering if you were lucky enough to have a new, high end workstation with a quad-core processor and at least 8 gigabytes of RAM.  It’s not that those machines didn’t exist in 2007, just that they weren’t common.  It meant that loads of average After Effects users were greeted with a new feature that simply wouldn’t work reliably – or at all – on their existing machines.

Fringe benefits

There were other, more niche cases where rendering multiple frames simultaneously could prove slower than rendering normally.  Some effects use several adjacent frames together, such as the Echo effect and CC Wide Time.  When rendering normally, previously rendered frames are stored in memory, so they don't have to be rendered again. But when rendering multiple frames simultaneously, each background process renders individual frames out of sequence, so previously rendered frames aren't available in RAM. Depending on the settings for Echo, Wide Time or similar plugins, rendering a single frame might require several adjacent frames to be rendered too – by each separate background process – so the overall rendering time is slower. Later versions of AE improved this with the global performance cache, but it was a niche problem with earlier versions of AE.
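You can sketch why time-based effects suffered. In one sequential process, a cache of already-rendered frames means each source frame gets rendered once; split the same frames across isolated processes and each one must re-render the neighbours it needs. A toy model in Python (counting renders rather than performing any; the depth and worker count are arbitrary):

```python
ECHO_DEPTH = 4   # an echo-style effect also reads the 4 previous frames

def renders_needed(frames_assigned):
    """Count source-frame renders for one process that caches its own work."""
    cache = set()
    count = 0
    for f in frames_assigned:
        for needed in range(max(0, f - ECHO_DEPTH), f + 1):
            if needed not in cache:
                count += 1
                cache.add(needed)
    return count

frames = list(range(100))
print(renders_needed(frames))                    # one process: 100 renders
workers = [frames[i::4] for i in range(4)]       # 4 processes, interleaved
print(sum(renders_needed(w) for w in workers))   # 394: nearly 4x the work
```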

Some other 3rd party plugins, especially particle systems, used their own cache systems which again could be slower overall when individual frames are rendered separately, out of sequence.

Maximum tweakage

The preferences included settings that allowed the user to adjust memory allocation, although the exact controls were adjusted and refined over time.  With CS 3, After Effects would automatically run as many instances of itself as it could, but this proved too aggressive, so a basic slider control was added in CS 4. In CS 5 the user could manually specify the amount of RAM for each background process.  The minimum option was half a gigabyte (512 MB), but in reality this simply wasn't enough for After Effects to start up reliably and render anything useful. Later versions switched to a drop-down menu that upped the minimum to 750 MB, and then to 1 GB.

Between After Effects CS 3 and After Effects CC 2014, the controls for Render Multiple Frames Simultaneously changed slightly. By CC 2014 users could allocate 6 GB of RAM per background CPU, and choose whether the feature was used for RAM previews as well as items in the render queue.

It doesn't take much Google searching to find that many users had problems with this new feature in CS 3, and by the time After Effects CS 5 was released about three years later, a number of Adobe articles and blogs had appeared to help users get the feature working reliably.  Adobe's Todd Kopriva published many in-depth articles on multiprocessing, all of which are still online.  As the “Render Multiple Frames Simultaneously” feature is no longer part of After Effects, it's not worth analysing every release and tracking the changes, but it is worthwhile looking through Todd's articles to see what the main issues were.  One of the articles published with the release of CS 5 mentions that the minimum amount of RAM per CPU had been increased to 750 MB, because 512 MB had proved to be too low.

In many cases, performance is improved by using fewer than the maximum number of processors for Render Multiple Frames Simultaneously multiprocessing, even when you have enough RAM for all of the processors. For an 8-core computer system, the optimum number of processors may be 4 for some compositions, 6 for others, et cetera.

Keep in mind that using the Render Multiple Frames Simultaneously multiprocessing feature does not speed up the rendering of all compositions. The rendering of some compositions is memory-intensive, such as when you are working with very large background plates that are several thousands of pixels tall and wide. The rendering of some compositions is bandwidth-intensive (I/O-intensive), such as when you are working with many source files, especially if they are not served by a fast, local, dedicated disk drive. The Render Multiple Frames Simultaneously multiprocessing feature works best at improving performance when the resource that is most exercised by the composition is CPU processing power, such as when applying a processor-intensive effect like a glow or blur.

But perhaps the most telling quote is this one:

It seems that a lot of people have machines with eight processor cores and far too little RAM to feed all of those cores. It is far better to leave some of those processors idle than to try to make them run and then have them shut down because they don’t each have enough RAM to render a frame.

The basic, underlying problem with the “render multiple frames simultaneously” feature was that in order for it to work reliably, and provide a noticeable benefit to the user, it required a high-end machine with large amounts of RAM. Many users simply didn’t have the CPU cores or the RAM to make the feature work effectively, at least until many years after its initial launch in 2007.

The user experience

Having looked at the underlying concept of how the “render multiple frames simultaneously” feature worked, and the hardware required to get it to work, let’s now consider the feature from the user’s point of view.  As I mentioned above, I never really liked RMFS and a lot of that had to do with the experience of using it. Admittedly, Adobe refined and improved RMFS with every release after CS 3, and so some of the early frustrations I had weren’t as much of an issue in later years.

The first problem that users encountered was the time it took for the background processes to load up the project before rendering even started.  In previous versions of AE, once you hit the render button, rendering started immediately.  If you were saving out a still frame then rendering could be finished as soon as you pressed the render button. But now there was a noticeable delay between hitting the “render” button and rendering actually starting.  Behind the scenes, each individual After Effects process was loading up the project.  If the project was large and contained a lot of imported files this could take a long time.  If there were many instances of AE running in the background it would take longer still.  For small and simple compositions inside of a large project, the time taken for each background process to start up and load the project could be longer than it would have taken to render the project normally.

In CS 3, the first version released with the feature, the additional instances of After Effects would only start up in the background when you hit the “render” button for the first time.  Just like opening up After Effects itself, this takes time – the more CPU cores you have set to run copies of AE, the longer the delay.  From the user’s perspective, all of the while that this is happening After Effects is effectively frozen.

Out of interest, I timed how long my current machine takes to open After Effects 2020 and load the project I'm currently working on.  This is obviously anecdotal and not at all representative of all After Effects users, but as an example the total time was roughly 45 seconds. Let's assume this is how long it takes each background process to start up when you hit the “render” button in CS 3.  It might not sound like a very long time, but it definitely feels like a long time, especially when After Effects has effectively hung and gives no indication of what it's doing.  Larger projects on slower computers – think 2007, not 2020 – could easily take much longer. So the first experience of a user trying out “render multiple frames simultaneously” was that as soon as they hit the “render” button, After Effects froze for a long time before anything started to happen.

For short renders the time taken to start the background processes negated any performance increase, so the overall rendering process wasn't any faster – measuring not just the render itself, but the time from the moment you hit the “render” button.  Also – and this was so irritating I count it as a bug – in CS 3 the same delay would occur even if you were only rendering out a still frame. Thankfully this was addressed in CS 4: After Effects figured out that if you were only rendering a single frame, it didn't need to load up multiple background processes even if you'd turned the RMFS preference on.

In later versions Adobe addressed this frustrating delay by having the background processes start as soon as the preference for RMFS was turned on. The delay was still there as each instance started up, but it wasn’t tied to hitting the “render” button.  By the time the user had loaded their project and pressed “render”, the background processes were already up and running. This small change made a big difference to the user’s perception of performance.

In later years Adobe extended the RMFS functionality to include RAM previews. Again, this could prove hit and miss depending on the type of work the individual AE user was doing. After years of tapping “0” on the numeric keypad and seeing the RAM preview begin rendering immediately, AE users were suddenly faced with the same excruciating delay as all the background processes loaded the project.  Again, Adobe included a preference to toggle this behaviour on and off, but managing multiprocessing became something of a juggling act, requiring the user to evaluate whether or not RMFS would be beneficial and then dive into the preferences to turn it on or off accordingly.

The next problem was one of stability. When After Effects was rendering multiple frames simultaneously, sometimes a background process would crash (hang is probably the more technically accurate term). This would cause the main After Effects application to hang too, waiting for a frame to be rendered in the background that was never going to be completed.  The more processes running in the background, the more likely it was that a single one would fail and stall the entire render. Exactly how and why the background processes would fail is still something of a mystery to me, but it was almost certainly related to RAM and memory fragmentation.  Stability vs performance became something of a juggling act: the more memory you allocated to each instance of After Effects, the more reliable it would be, but the fewer background instances you could run – and if you couldn't run at least 2 or 3 background instances then it wasn't really worth it.  In Part 3 we looked at bottlenecks when rendering, and how the CPU is often not the slowest component in a system. In 2007 computers still had slow spinning hard drives and gigabit Ethernet, USB seemed pretty fast and FireWire 800 was positively screaming.  But if you tried to run as many instances of AE as you could, they would all be fighting over the same limited hardware resources – again increasing the chance of failure.
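The failure mode is easy to picture: the foreground application blocks, waiting on results from its workers, so a single hung worker stalls the entire render. A contrived sketch (Python again, purely to illustrate the stall; AE's actual inter-process communication was its own beast):

```python
from multiprocessing import Pool, TimeoutError
import time

def render_frame(n):
    if n == 7:                 # simulate one background process hanging
        time.sleep(3600)
    return n

if __name__ == "__main__":
    with Pool(processes=3) as pool:
        jobs = [pool.apply_async(render_frame, (n,)) for n in range(20)]
        for n, job in enumerate(jobs):
            try:
                # Without a timeout, .get() would block forever on frame 7,
                # which is effectively what happened to the main AE app.
                job.get(timeout=5)
            except TimeoutError:
                print(f"frame {n} never came back; render stalled")
                break
```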

You could try to run 6 copies of After Effects in the background, each with 750 MB of RAM, but in the words of Sideshow Raheem: “I wouldn’t”.

In Part 7 of this series I introduced the render-only version of After Effects, and looked at the idea of background rendering and render farms.  Technically, what After Effects was doing with the “render multiple frames simultaneously” feature was an automatic version of background rendering, and all of the issues relating to RAM and system bandwidth that I mentioned in the previous article apply here as well.

Just because I didn’t like Render Multiple Frames Simultaneously, and it didn’t suit the types of projects I was working on (which involved very large canvas sizes, and slow network bottlenecks) doesn’t mean it wasn’t useful to other After Effects users.  Once you found the right balance of memory and CPU cores then it could make a noticeable difference in rendering times.  Projects that didn’t rely on large external files, for example animations using shape layers, text layers, and still images instead of videos, could render much faster with RMFS turned on. The smaller the After Effects project file, the more seamless the user experience.

One step backwards to move forward

Many of the problems I personally had with Render Multiple Frames Simultaneously relate to details that have been examined in previous articles: RAM, bandwidth and bottlenecks. But mostly RAM – After Effects loves RAM, and large image sizes and high bit depth projects consume huge amounts of memory.

When After Effects is rendering, it spends a lot of time just shuffling all of this memory around.  Rendering time can be affected more by slow storage, slow networks and basic system bandwidth than by the speed of the CPU itself.  Even if you have a multi-core CPU, you might not have the RAM to reliably run multiple background copies of After Effects – and if you do, they're all sharing the same network connection and hard drive bandwidth, so faster rendering isn't guaranteed.

Ultimately, whether or not “Render Multiple Frames Simultaneously” worked for you came down to a juggling act between hardware, software settings, and the type of After Effects work you were doing.

Adobe’s first implementation of multiprocessor support was understandable for the time, and in some ways was a clever and admirable hack.  But it was never “true” multiprocessing support, in the sense that the main After Effects application was not altered or updated to actually run faster across multi-core CPUs.  Running more copies of After Effects worked well for some users, but not others. And while rendering could be faster, there were still many other parts of After Effects which were just as slow as they had always been.  True multiprocessor support would speed up the entire application, not just rendering.

The next step was to actually make the huge leap and begin re-writing After Effects as a true multi-threaded application, one that would utilise the power of multi-core CPUs more effectively and efficiently than just background rendering.  Adobe knew this, and After Effects CC 2014 was the last version to include the “Render Multiple Frames Simultaneously” feature. In order to make the big leap forward and begin to re-write After Effects to be truly multi-threaded, the RMFS feature had to be deprecated to clear the way for more efficient sharing of CPU resources. While some users were disappointed that the feature was gone in CC 2015, I took it as a sign that Adobe were working on newer, better support for multi-core CPUs.

 

This has been Part 8 in a series on the history of After Effects and performance.  Have you missed the others?  They’re really good.  You can catch up on them here:

Part 1, Part 2, Part 3, Part 4, Part 5, Part 6 & Part 7.

I've been writing After Effects articles for the ProVideo Coalition for over 10 years, and some of them are really long too. If you liked this one then check out the others.

https://www.provideocoalition.com/after-effects-performance-part-8-dipping-the-toe-into-multiprocessing/

Documentary Serendipity – Sérgio Mendes: In The Key of Joy

“Sergio Mendes: In the Key of Joy,” a documentary that I shot in 2017 and 2018, just had its premiere at the Santa Barbara International Film Festival.

It finally happened. Director John Scheinfeld's feature documentary about the life of Brazilian composer and musical legend Sérgio Mendes, “Sérgio Mendes: In The Key of Joy,” on which I served as director of photography through 2017 and into 2018, is finished and had its world premiere at the Santa Barbara International Film Festival a couple of weeks ago. It's been quite a while since I've seen my work projected in a theater with an audience, and if you haven't had the privilege of experiencing that, it's something special to feel the audience's reaction as they watch the story you helped to tell.

Part of our wonderful crew in Brazil, our camera assistant, grip and gaffer.

During production, there were dozens of shoots here in Los Angeles, at Sérgio's home and at multiple recording studios. In 2017, we journeyed to Brazil with Sérgio and his wife Gracinha to shoot Sérgio at his birthplace, at the first club he ever played professionally and at various locations all around his home town of Niterói, across the bay from Rio. We also shot studio sessions with various musicians in Rio and a song session with soccer legend Pelé in São Paulo. On that trip, I flew with director John Scheinfeld and producer Dave Harding to Rio, where I had my first exposure to working with Brazilian crews. My crew there was so professional, helpful and a lot of fun to work with. Visually, Brazil is a wonderful tapestry of tropical beauty mixed with the European and Portuguese influence; I've rarely shot in a more beautiful, spectacular location.

A still from our jazz studio session we shot in a small studio in the hills above Rio.

While there, we shot Sérgio and various scenic b-roll all around Rio and Niterói, but the highlight of the shoot for me was the day we spent in a small recording studio in the hills of Rio, almost directly in the shadow of the iconic Christ the Redeemer statue. For the session, Sérgio assembled a small jazz ensemble of bass, drums and a three-piece horn section. With Sérgio at the keyboard, the group played original jazz tunes that Sérgio had written as a young composer when he was just coming onto the music scene in Brazil. These tunes were written well before Sérgio found worldwide acclaim for his global hit “Mas Que Nada” with Brasil ’66. No vocals, no lyrics – just pure, unadulterated jazz. It was a challenging shoot, trying to light the small studio to resemble a jazz club, but as a fan the musical experience was incredible; I practically had to pinch myself that, as I shot, I was hearing music nobody had heard played for more than 60 years. It was a magical experience for a jazz fan.

Waiting for the sun to set over Ipanema Beach in Rio.

As the days of the shoot wore on, we traveled with Sérgio all around Rio, shooting b-roll of the beautiful resort and beach areas. One of the most spectacular shoots was documenting the sunset over Dois Irmãos (Two Brothers), the two iconic mountains that tower over Ipanema Beach. We filmed the scene from a rock outcropping on the east side of the beach. In Rio, watching the sunset is an activity all its own, and we were joined by hundreds of people on the rocks as we filmed the sun descending behind the mountains. Everyone on the beach cheered and applauded as the sun disappeared behind the two peaks, the scene punctuated by vendors selling fruit and drinks to the assembled crowd. It was quite a unique experience. Where else have you filmed where the sunset is its own star, with an adoring audience?

Filming Sérgio, in windy conditions, reminiscing about riding the ferry from Niterói to Rio as a young pianist and composer heading to play the jazz clubs in the alleys and back streets of Rio.

Another epic shoot in Brazil, both logistically and emotionally, was filming Sérgio aboard the ferry from Rio to his hometown of Niterói across the bay. Sérgio was born during World War II, in 1941, so riding the same ferry that he rode as a teen and then a young man, traveling from his home in Niterói to Rio to play his first gigs as a professional musician and composer, was quite emotional. Compounding that was filming on the top deck of a 200-foot-long ferry crossing the bay. The winds were very high, which made sound a challenge and threatened to blow away the flags and reflectors I was using to light Sérgio. My Brazilian grip and gaffer were on it, but everyone in the crew pitched in to keep our grip gear from going airborne into the water as we crossed the bay.

Filming an interview with Brazilian composer and musician Carlinhos Brown in Rio.

Our director, John, and our producer, Dave Harding, decided that we would shoot all of our interviews against a green screen. As a cinematographer, shooting interviews on a green screen isn't always the most creative endeavor, because you rarely get to choose and light the background plates. That function, at least in documentaries, is often decided later, in post, long after you've shot the interviews. Green screen is often a “cart before the horse” situation: you don't know what the backgrounds will be, so how do you decide how to light the talent in the green-screen shot? What will the lighting on the backgrounds look like? Which direction will the light come from in a given shot? What about color and texture?

Filming an interview with Sérgio in Los Angeles. Note his cool, iconic hat.

Since we were shooting in multiple countries and locations, I understood the decision to shoot green screen from a producer's point of view; it made sense. We shot some interviews in various recording studios, several more sessions in Sérgio's home, and others in many different locations in Brazil. But I was a bit wary that the interviews I had shot probably wouldn't match very well with the backgrounds. More on this later.

Herb Alpert explains his musical relationship with Sérgio from the early days to the present.

We were able to shoot interviews with an amazing array of talent: Quincy Jones, Harrison Ford, Herb Alpert, Lani Hall, John Legend, will.i.am, Common, Brazilian musician Carlinhos Brown, Sérgio's wife Gracinha and all of Sérgio's family, plus numerous Brazilian TV executives, journalists and musicians all make appearances in the film. The list went on and on. We developed a loose signature look for the interviews, even though they were all shot on green screen. Our producer, Dave Harding, was a former gaffer, so it was nice to have a producer who understood lighting and could confer with me to help push the look toward John's vision for the film.

Meeting and working with Brazilian soccer legend Pelé was the best. He's probably the most famous and adored athlete in history, and he is friends with Sérgio and his wife Gracinha.

I used a soft key source, usually a Litepanels Gemini 2×1 shone through a 4x or 6x silk. Then, rather than placing another soft or hard source opposite the talent as fill, as one might normally do, I'd place a smaller instrument like an Aputure Lightstorm LS-1S LED panel through a 42-inch diffusion disc on the same side as the key source, but lower and at less intensity. This gave me some extra power to light talent – whose skin tones ranged from fairly pale Caucasian to fairly dark – at a relatively even light level. I would sometimes use a solid or Duvetyn on a C-stand on the fill side of the talent's face to knock down the wrap from the two soft sources, allowing us to get some shadow and “mood” on the talent, but not too much, as the tone of the film was to be upbeat, lighthearted and joyous.

We shot an interview with Sérgio in the Niterói Contemporary Art Museum, a spectacular building located on the bay overlooking Rio.

For the women and some of the male interview subjects, I'd finish off the look with a small hair/rim light. Even though hair lights are a bit out of style lately, it gave the women a nice, flattering glow in their hair; I kept the level to a minimum, trying not to make it too apparent. Most importantly, the rim light nicely separated Sérgio's iconic hats from the background. It was kind of wonderful; Sérgio seemed to wear a different hat in almost every interview in the film. I wanted to make sure that the hats were clearly highlighted as part of Sérgio's look, and I think we succeeded.

This is the end result of what our sets and design elements looked like in the finished film when composited with the green screen interviews.

Later, in 2018, I received a call that we were actually going to have sets built and shot as the background plates for the green screen interviews. We also shot a large selection of individual elements that would be integrated into various motion graphics in the film. It's rare, as a documentary DP, to have the opportunity to also shoot the background plates for your own green screen interviews, so that was an interesting experience for me and our Los Angeles crew. Our gaffer, Mark Napier, even came up with a really effective way to add movement to the shadows on the backgrounds, using a rotating Mason jar as well as a woven bamboo basket. The patterns and lettering in the glass broke up the light and, along with the basket weave, lent a nice, organic quality to the movement – like sun shining through moving tree branches, without being as literal as a “Branch-o-loris,” shining a light through an actual tree branch on a C-stand. The backgrounds came out nicely and are used to great effect throughout the film.

Director John Scheinfeld and Sérgio Mendes field questions from the audience at the Santa Barbara International Film Festival premiere of “Sergio Mendes: In the Key of Joy.”

Director John Scheinfeld's cut of the film comes in at about 100 minutes and played its opening night at the Santa Barbara Film Festival to a sold-out house. The audience gave the film, John and Sérgio (both of whom were in attendance) a standing ovation. John and Sérgio gave a Q&A after the screening, which was followed by a set from Sérgio and his touring band. I was lucky enough to be in the audience that night and received a nice shout-out from John about the photography of the film and the challenges we encountered shooting it.

Rolling Stone recently published an article on the film and its trailer.

At the screening, I reflected on how lucky I was to use my cinematography to help tell the story of a musical legend. In a way, Sérgio Mendes has musical accomplishments commensurate with artists like the Beatles; fittingly, Sérgio and his band opened their set at the screening with a Brazilian-flavored cover of the Beatles' “Fool on the Hill,” a big hit for Sérgio in 1968. His music built a huge global audience for bossa nova, and he's still touring all over the world, collaborating and composing with younger musicians and producers like will.i.am (producer of The Black Eyed Peas, with whom he re-recorded an updated version of his breakthrough hit “Mas Que Nada” in 2006), John Legend and Common, as well as numerous other musicians and collaborators who appear on his latest album, which is being released as I write this.

The film may have a life on the film festival circuit. This is the poster for the film, hopefully coming to a streaming service soon.

As John related during the Q&A session at the screening, the film is his positive, feel-good antidote to the darker political and social mood so prevalent in 2020. As Sérgio's story unfolds in the film, we learn that he faced some incredibly dark times as a young man, including political persecution during the military coup in Brazil, which was the main reason he came to New York in the 1960s. But he never let the darkness he experienced color his optimistic outlook on life; he found joy and happiness through the serendipity of life, which is a timeless and relevant message for us all.


https://www.hdvideopro.com/blog/documentary-serendipity-sergio-mendes-in-the-key-of-joy/