Friday, February 05, 2016


If you have been living and routinely interacting with other human beings over the last month, you’ve probably heard one or two words involving this year’s Academy Awards and the heated controversy over the startling lack of both films and people of color among the nominees. Personally, I think that the real focus of concern ought to be less on the back end-- awards handed out for films which were financed and/or studio-approved, scheduled for production and filmed perhaps as much as two or three years ago-- and more on addressing the lack of cultural and intellectual and experiential diversity among those who have the power to make the decisions as to what films get made in the first place. This is no sure-fire way to ensure that there will be a richer and more consistent representation of diverse creative voices when it comes time for Hollywood to pat itself on the back for its artistic achievements (which, of course, have nothing to do with its achievements in the arena of lucre wrangling), but it seems like the logical place to begin the overhaul.

But in the interest of avoiding yet another circuitous argument, let me direct the spotlight toward one category around which Oscar has inspired almost no arguments this year: Best Achievement in Visual Effects. This category (and the one honoring the year’s finest work in cinematography) represents a bounty of riches about whose worthiness there seems to be little dissent, but also little consensus as to what most deserves the honor and, more to the Oscar Pool participants’ interest, what movie will actually win.

The rampaging woolly mammoth in the room is most certainly the team that managed to make Star Wars: The Force Awakens both retro-evocative and up-to-the-minute shiny and chrome, a fine line most agree was successfully trod by the film as a whole. SW:TFA’s presence here is not unexpected, and it almost feels, in reading some of the nonstop industry analysis since the nominations were announced, that had the movie been left off this particular list of five, there would have been geek riots to shame the relatively peaceful outrage over the absence of potential Best Actor nominee Michael B. Jordan or of Straight Outta Compton on the Best Picture final ballot. Indeed, there’s still a lot of musing that Star Wars: The Force Awakens, like J.P. Morgan/Chase, should have been just too big to fail and that its sheer commercial force (I’m sorry) would have been enough to propel it into Best Picture contention.

Alas, this new-age Star Wars sequel/reboot/reimagining will have to be satisfied with a possible win in this category as a representative of the “Yes, They Can Still Make ‘Em With Computers the Way They Used To With Computers” mindset. The movie is likely to have a tougher time in the other technical categories, where movies like Mad Max: Fury Road and The Revenant will presumably be more difficult to beat. But its assurance of a win in the Visual Effects competition is not, as I see it, rock solid. The Martian, somewhat less ostentatiously representative of an equally old-fashioned, less-is-more-believable aesthetic, is a strong contender, but of the five nominees it may suffer from being the least showy—ironically, the fact that it’s generally easier to “believe” the world The Martian creates, the fact that you can more naturally forget that the world you’re watching in this movie has been largely concocted on sets and inside computers, may work to its disadvantage when it comes time to hand out the golden candy. That goes double for the seamlessly integrated marvels of Ex Machina: we’re too busy being captivated by Alicia Vikander’s performance to continually be distracted by the fact that she’s sporting a see-through torso.

The computer-generated marvels unleashed along George Miller’s furious road are probably the most eye-popping of the five, not least for having been so well integrated into the physical world of stunts, propping up the illusion of a relentless choreography of action that feels (even if at times it isn’t) as if it could be happening entirely in the real world, in front of cameras. That’s the illusion that is also marvelously sustained, for the most part, in The Revenant, which uses its own kind of choreography of light, movement and sound to coerce the audience into accepting its grueling version of what a particular part of life on the American frontier in the early 1800s might have been like. But ironically the scene in The Revenant that is among its most celebrated—the grizzly bear attack—is the one that simultaneously makes my jaw drop and temporarily jolts me out of the web of reality the movie spins.

Too much is made of what Hollywood likes to term the sophistication of modern audiences. We hear all kinds of stuff about how audiences are tougher to fool these days, how they’re so much more knowing about the tricks of the trade. (Maybe not so much about how to spot fraudulent storytelling sleight-of-hand.) What that usually means is that viewers have traded in their willingness to suspend disbelief, to give themselves over to the medium, for a sort of inside-baseball knowledge that might yield great dividends for trivia contests at the local watering hole but fewer when it comes to expanding appreciation for everything the movies can do.

And even when modern audiences think that the biggest reward is having a vague sort of knowledge about how one of their favorite tricks was pulled off, they can still pull a real howler out of their hat. While watching the end credits of The Revenant, a guy sitting next to me turned to his girlfriend and, with a truly awesome mixture of puzzled sincerity and hipster cool, said to her, “You know, I don’t think that bear was real.” It was poignant, in a Weird Hollywood sort of way, to listen to this guy paw his way around trying to reconcile what he knew couldn’t be true with what he wanted to believe. But that’s the thing about that grizzly bear not-rape scene: it violates the movie’s meticulously constructed illusion of “reality,” one which notably works like gangbusters for the duration of the running time otherwise, not because it doesn’t look and feel alarmingly like what being mauled by a wild two-ton beast would feel like, but because we can’t (or at least I couldn’t) divorce ourselves from the knowledge that Leo—not Hugh Glass, but Leo—was in no real danger, or distance ourselves from occasionally musing about how the hell they did that.

All this talk brings me back to January 2013, which is when I put some thoughts together about modern special effects and some of the expected, and perhaps unexpected, side effects of prolonged exposure to them. Three years ago I was thinking about movies like John Carter and Trollhunter and even Life of Pi as being the natural and happy fulfillment of the promise inherent in such disparate special effects ventures as King Kong vs. Godzilla and Zelig. And even with their occasional lapses I can see this year’s five Oscar nominees as continuing in that reputable and encouraging trend in modern visual effects, even surrounded as they are by hundreds of annual examples in the art of how not to do it so very well. In that light, I thought you might find the piece that resulted from all this consideration worth a look. It’s called “Seeing and Believing,” and it was originally published at Sergio Leone and the Infield Fly Rule a little over three years ago, on January 21, 2013. You are, of course, under no obligation to continue reading. But if you don’t, I have been given permission to send along Alejandro G. Inarritu’s CGI bear, and perhaps Mr. Inarritu himself, to help spur your level of interest. Don’t make me bring out the claws.


“You’ll believe a man can fly.”

That was the promise made to potential ticket-buyers during the fall and winter of 1978, when Richard Donner’s take on the Superman legend was being readied to soar across Christmastime movie screens all over America. We certainly never believed (nor were we asked to believe) that George Reeves, the Superman known to viewers of the popular TV series which ran from 1952 until 1958, could really fly. And even our faith in all things wise and wonderful wasn’t enough to convince us that Julie Andrews had really taken to the air with her satchel and umbrella, to say nothing of the aeronautic abilities of Sally Field.

But after Star Wars (1977) had unleashed a barrage of stylish visual effects, themselves doubling up on the groundbreaking realism of Douglas Trumbull’s effects work for 2001: A Space Odyssey, expectations had been, shall we say, heightened. Star Wars was, of course, fanciful in its own way, and certainly in a way that 2001 was not—no Jawas and Wookiees for Stanley. Among the lightsabers and countless other visual marvels offered on George Lucas’s menu, which illustrated a vision of interstellar life whose influences seemed to indicate that its world existed concurrently in the past and in the future, there was that heretofore-unseen height of technical sophistication which seemed to plead for a greater allowance of suspended disbelief on the part of the audience, all the while feeding them a dazzling level of space-age trickery in order to ensure such leaps of faith might be easier to make. Superman promised that we would believe a man (even one from the planet Krypton) could actually fly, but after having already witnessed Luke Skywalker zooming over the plains of Tatooine in his land speeder, seemingly suspended on nothing but streams of air, in many ways we already did.

Of course no one really thought that Christopher Reeve was up there zooming through the atmosphere without the help of movie magic, and if we ever even incrementally believed that a man could fly—and a woman could convincingly hitch a ride on his cape tails—it had a whole lot more to do with the chemistry between Reeve and Margot Kidder, and the swooning romanticism that propelled their nighttime cruise over Metropolis, than it did the degree to which we were convinced by the wire work and blue-screen techniques employed to get the scene on film. Superman’s effects were, taken as a whole and truth be told, far more rudimentary and far less convincing than the ones that bowed in Lucas’s film a year earlier. But the insistence of the movie’s advertising that belief in what we would see was something we could expect, well, if it wasn’t exactly new, then it at least carried a whole lot more weight (supplied by a major movie studio’s marketing power) than the usual hucksters’ come-ons of exploitation movies past.

The Star Wars and Superman franchises spent the early part of the 1980s refining their approaches vis-à-vis their technical prowess, and perhaps cannibalizing themselves to a certain degree as well, all while the market was becoming saturated by low-budget competitors soaked in the fantasy syntax that the two series inspired. Incessant imitation of their visual bravado introduced the inescapable sense of a Xerox copy being reconstituted endlessly, each new generation a little dimmer than the last. And rather than from the loins of George Lucas or Steven Spielberg, the apparent inspiration for the next step in the movie audience’s increasing desire to believe would come from a most unlikely place.

A long time ago, in a time far, far away, before the letters C, G and I, spoken in that order, meant much of anything to moviegoers, Woody Allen released a little film called Zelig, the ostensibly slight story of Leonard Zelig, the Human Chameleon, a nobody so lacking in character of his own that in the presence of stronger personalities he begins to co-opt their physical and psychological attributes. Thematically, Zelig, a movie predicated on the notion that, no, everybody isn’t necessarily somebody, was a perfect fit within the totality of Allen’s neurotically fueled canon. And the movie stood out, from Allen’s own oeuvre as well as from every other comedy in release at the time, because it was crafted in a mock-documentary style—Zelig was released in 1983, a year before This is Spinal Tap, and many years before the mock-documentary became an overcooked genre of its own. But even beyond the conceit of its structure, technically it was a step outside the box for American cinema, and certainly outside Allen’s own comfort zone as a director known more for his visually natural, restrained palette. (Despite his constant nods toward Fellini, the excesses and general phantasmagorical reach of Stardust Memories were the exception, not the rule.)

The movie got mostly positive reviews—Allen had not yet fallen out of favor with either critics or his soon-to-be-dwindling audience—though coming between Manhattan (1979) and Hannah and Her Sisters (1986) Zelig was considered a trifle, absent the writer-director's usual thicket of seriocomically intertwining relationships and emotional baggage. Pauline Kael seemed to admire the movie—she called it “an ingenious stunt” that had been “thought out in terms of the film image, turning the American history we know from newsreels into slapstick by inserting the little lost sheep Zelig in the corner of the frame.” But even Kael had to admit, at the end of her review, that she left the screening hungry for a real movie, presumably one with more meat on its bones than the one Allen offered.

Seen today the movie looks like a standout in Allen’s career, one of the most original and innovative movies of the decade of the 1980s, regardless of its comparatively insular qualities and supposed lack of thematic scope. But Zelig’s real impact, as it turned out, was in the suggestion of what could be done with that familiar newsreel imagery. Cinematographer Gordon Willis ingeniously mocked-up and graded down the gorgeous monochromatic imagery he perfected for Manhattan and Stardust Memories so that footage of the “little lost sheep Zelig” could be seamlessly integrated into the same picture with historical figures like Babe Ruth, Calvin Coolidge and even Adolf Hitler. The meager audiences that turned out in theaters marveled at Willis’ achievement—Zelig and Willis were even mentioned on the front cover of the highbrow film journal People magazine upon the film’s release, a level of mainstream coverage which would require a sex scandal some 10 years later in order for Allen to duplicate.

But some openly worried that the use of techniques like those Willis employed for Zelig was far too effectively blurring the lines between realism and the accepted standard of the special effects of the day—we were suddenly a long way, Dorothy, from fantasy universes which no one could ever mistake for simple photographic representation of actual life. The fretting continued just over a decade later when, in an apparent advance over what Willis had achieved, director Robert Zemeckis, cinematographer Don Burgess and a vast array of technicians had their fictional character, a simpleton by the name of Forrest Gump, not only sitting in the same frame with well-known figures of history, as Leonard Zelig did, but physically interacting with them.

This “simple” technical adjustment made for bigger press than Zelig could have ever dreamed of, but it was also a sticking point for observers who pointed out that if photographic verisimilitude was the goal, then Zemeckis and Burgess had fallen far short of it—and maybe that was a good thing. At one point Gump, played by Tom Hanks, appears on a mocked-up 1963 TV broadcast during which he appears to shake hands with then-President John F. Kennedy, and nowhere else in the movie did the integration of the rock-still, digitally-based Hanks with the relatively unstable decades-old filmed imagery look less convincing. In fact, the effects in Forrest Gump were often far more ambitious than they were photorealistic—that attempt to visualize physical contact between the living and the dead turned out to be a challenge these technical wizards weren’t entirely able to bring off. And in 2010 the results weren’t any more convincing-- the conjuring of 1982-vintage Jeff Bridges to interact with his naturally Lebowski-fied 21st-century version in Tron: Legacy was creepy and problematic for many of the same reasons.

Only a very special breed of Gump-grade ignoramus would have actually believed that Tom Hanks and President Kennedy ever shared physical space together in order that they could be filmed for a Robert Zemeckis movie. However, the specter of being able to render the impossible, or alter the accuracy of realistic representation in graphically acceptable visual grammar, was one that Forrest Gump raised nonetheless. Suddenly we were in an era in which computers were being employed to try to convince audiences, if only momentarily, that the impossible could happen, that it was happening.

But the purveyors of CGI have always been a little too convinced of their ability to accurately represent reality. Some 15 or so years ago the Los Angeles Times sponsored a series of ads, designed to run before the feature in Southern California cineplexes, which cast light on some of the below-the-line occupations involved in the making of movies. Among the groups of artists highlighted were computer effects teams, and the one chosen for the Times promotion attempted, over the course of the 90-second spot, to demonstrate how their effects house would go about creating the digitally generated and animated image of a small basset hound. It was a fascinating glimpse into a process which has undoubtedly become even more refined and sophisticated over the passage of time. But there was no avoiding a certain amount of embarrassment and incredulity when the animators, at the end of the spot, trotted out their digitally generated creature, placed it next to a shot of the real thing, and then began to gush over how difficult it was to tell DigiDog—which bore all the telltale signs of digital rendering, from a lack of realistic texture to imprecise body movement—from the real thing.

Companies specializing in TV advertising have seized upon, refined, inflated and saturated the market with the possibilities of computer-generated imagery since long before George Lucas initiated the all-digital assault of Star Wars Episode II: Attack of the Clones (2002). The main result has been a double-edged sword, to be sure: advertisers have broken ground on what they can show us in ever-increasingly expensive 30- and 60-second spots that may be entertaining, but those spots also reinforce our reluctance to believe a single thing we see (or hear) in them. (The Nissan spot seen below goes to great lengths to convince us of a certain reality that our heads insist couldn't possibly play out in real-world physics.)

Do you believe what you just saw?

Digital manipulation of what we see on TV and in movies is so completely pervasive in 2013 that we question even the simplest images. We roll our eyes at a commercial of a mud-caked pickup truck being cleansed by a sudden drenching from above because in the aftermath it looks too damn clean; every speck of grime is washed away in a single thunderous wash, as if the CGI artists-- or the corporation paying for the ad-- couldn’t resist making every inch of that truck sparkle. (A stunt like this in a commercial from 30 years ago would have had to be staged in physical reality, and it would have been a lot messier.)

But there’s another unexpected bit of fallout from incessant exposure to the state-of-the-art effects which seem so insistent as to suggest that only permanent alteration of our perceived actual reality will satisfy the ambitions behind them. It’s become clearer to me as I watch movies in 2013, both modern ones and ones that were made in the less technologically promiscuous eras of the ’50s, ’60s, ’70s and even ’80s, that I crave the lost art of sparking the imagination of the viewer. This is not the same thing as creating a parade of visual effects and techniques intended to fool me into thinking what I’m seeing is actually happening. In fact, quite the opposite is true. As a viewer in 2013, the more plain the artifice, the more likely it is that I will respond—that I will want to respond—to the intended enhancements being made to the film’s basic storytelling devices. These days I find myself far more attracted to and definitely more receptive to those movies whose imagery is less polished, whose effects are clunkier, whose visual schemes are more obviously set-bound or otherwise inescapably artificial, whose artisans had to rely on traditional matte paintings and physical effects that made it necessary for audiences to dive deeper into their stories, to make more concerted efforts to lose themselves in these conjured worlds.

These sorts of movies, either spectacles or smaller-scale stories dependent on a certain level of effects magic, couldn’t rely on technology to bail the filmmakers out by distracting audiences from their deficiencies. They come from the era of the 1950s, when spectacle was a way of combating the relative technical constraints of the tube, up through the 1960s, when the old wineskins of morality and constraints on the depiction of “adult” behavior were the primary concerns, into the 1970s, when those wineskins were finally being traded in for more experimental ways of transporting their films’ content. Of course being given the opportunity to flex one’s imaginative muscles while watching a movie like Jason and the Argonauts (1963), in which the creatures are stunningly depicted within the very stylized parameters of Ray Harryhausen’s stop-motion animation mastery, or even a Japanese horror thriller like The Living Skeleton (1968), in which some of the effects aren’t necessarily any more “convincing” than the average episode of Dark Shadows, points up that there is room for that imaginative interplay even while watching a movie which has been, at almost all levels, expertly crafted.

But even the experience of a narratively muddled movie can be enhanced by its own artificiality. In J. Lee Thompson’s The White Buffalo (1977), Charles Bronson stars as Wild Bill Hickok, who moves through the beautiful and dangerous landscapes of the West in pursuit of the mythical title creature, which haunts him in a series of dreams. If it were made (or remade) today, The White Buffalo would undoubtedly have major emphasis and huge expense (at least relative to the original film’s budget) lavished upon scrupulous efforts to render each of its bitter winter landscapes in the most realistic and oppressively beautiful fashion possible. But would those efforts be able to duplicate the best element of the flawed movie that actually exists—the eerie stillness and dreamlike incertitude brought on by the very artificiality of some of those outdoor sets meant to represent the unyielding and brutal wilderness, or the hulking fearsomeness of the title creature itself, which was created in consultation with the late, great Carlo Rambaldi? Probably not. The fact is that much of what we “see” in The White Buffalo, what’s most effective about it, transcends the shortsightedness of its direction and resonates directly with the fact that its protagonist is himself operating in a sort of fugue state, all of which is underlined and reinforced by the strange beauty of the mythologically oriented, obviously constructed environments Hickok often finds himself moving through.

Conversely, one of the movies made recently which has most resonated with me in its attempt to forge a perhaps intangible connection to the mythological through a shared experience of artistic representation did so with a whole bag of technical trickery at its disposal. The 2010 Norwegian thriller Trollhunter fused the modern (here represented by the found-footage documentary) with the fairy-tale fantastical (a countryside populated by a variety of giant trolls who live either deep in the woods or inside mountains). And the reason why Trollhunter succeeds so well at bridging these two points of view is that, rather than running from the supposed limitations of low-budget animation and effects, it embraces the computer’s ability to enhance its signature monsters, to make them look both believable in a realistic setting and fantastical—that is, stylized in the manner of a woodcut illustration—simultaneously.

Trollhunter’s approach to its effects is not one of making you “believe” that trolls exist so much as to coax you into accepting that there is a place on the planet—the gorgeous, rainy climes of the Norwegian mountain countryside—where the fears and superstitions of a forgotten age have been spliced into a world where the presumption is that found-footage video doesn’t lie, even as we try to construct ever more elaborate techniques for fooling ourselves into accepting an alternate version of reality. One of the film’s major set pieces, an encounter on a lonely bridge at night between one of the titular creatures and the master trollhunter, decked out in a protective suit you suspect early on might not be up to the job, displays this sort of eerie fusion of fairy-tale and modern horror in spades.

And to its credit, Trollhunter never attempts to do all the work for you. There’s plenty of room, especially during the film’s many driving sequences, when we’re allowed to survey the landscape and imagine what might be lurking beneath a rocky hill or just out of our peripheral vision on the edge of a rainy forest. In this fundamental way the movie retains a connection with a whole era of films in which technical prowess could not be relied upon as a safety net, or a scapegoat, for lazy storytelling. 

But one of the stranger developments in this anything-goes age of computer representation in movies and on television is that, despite all claims to the contrary, seeing is no longer believing. The utter proliferation of CGI, itself hardly the purest representation of evil on any stage, has made us cynical about imagery, which on one hand is obviously a good thing, given how easy it is now to tinker with that imagery to whatever end. However, in terms of storytelling the overuse of computer-generated imagery bears out the complaints naysayers have always leveled against the movies, comic books, TV and other “lower” forms of pop art—it invites the form to do all the heavy lifting for us. CGI tends to work best when it is relatively invisible, when it doesn’t call so much attention to itself, when it enhances the possible, or the believable, without becoming a jarring endgame in itself.

It can also work in the way 3D works best, that is, in service to pulp material that doesn’t demand hyper-realism anyway but instead merely a stylized take on fantastical material—John Carter’s effects, for example, convey the sensation one gets from looking at a series of great old-school matte paintings, as if the Edgar Rice Burroughs book covers had seeped into the imaginations of its technicians and artists. There’s no distressed attempt to render this fantasy in anything like what we might consider Martian, or Barsoomian, “realism,” and even though the movie is packed to the gills with eye-popping digital artifice, somehow the movie resists the kind of Lucasfilm overkill that scuttled the Star Wars prequels (and the attempts to fudge the original three films into some acceptable level of effects “sophistication”).

As a friend suggested in a blog post last year, the rendering of CGI imagery has started to take on a certain sameness, in the degree of “realistic” detail and also in effects artists’ apparently irresistible urge to make reality more special by giving, say, the urban landscapes of Paris a twinkling vibrancy, an overabundance of detail that they have ceased to trust would come naturally through a 35mm camera lens. John Carter was unfairly kicked like the family mule by most of the press when it came out last year, but somehow it manages to avoid that sameness that afflicts many other heavily computer-generated productions.

In many circles all this worry over the way visual effects have come to dominate what we see, how we see it, even the ways movies are made, could be (and undoubtedly has been) construed as an old man’s argument. And the positions of people like author and film historian Neal Gabler, who fretted in a very generalized way in the Los Angeles Times recently about a generation that allegedly finds classic films boring and antiquated, don’t strike too many blows toward dispelling that perception. Obviously for many young people who consider anything made before 1990 as belonging in the realm of "old movies," the presence of black and white film is a sort of perceived poison, and it does seem that the majority of young people—college age and younger—aren’t being encouraged to investigate movies that don’t look or sound or feel like the ones they’re used to. But even if, as Gabler says, “part of this cinematic ageism is the natural cycle of culture,” I find the experience within my own admittedly microscopic sphere of influence encouraging. I’m watching my own kids begin to prove out the notion that guided exposure at a young age to the pleasures of movies and literature outside a child’s comfort zone is the best way to ensure their continued receptivity and interest as they grow up among a social group who may not be similarly inclined. And certainly among my own circle of friends, I always get the feeling there’s something going on beyond simple nostalgia or the desire to define one’s experience along generational borders.

A while back my friend Larry Aydlette, editor at the Palm Beach Post, dropped a comment on Facebook that I find typical of this sort of perception, and reassuring to boot. He was talking about revisiting Paint Your Wagon, a movie that was never high on anyone’s list of cinematic achievements in 1969, the year it was released. And quite apart from whether or not it was a good movie (as I recall, Larry liked it quite a lot), he made the observation that among the many things to be enjoyed about seeing it again was the presence of “real, densely populated sets with real people, not CGI stick figures. It made the film a richer experience.” When I read this, I immediately thought about that Coliseum filled with cheering little ones and zeroes staring down at Russell Crowe and Joaquin Phoenix in Gladiator. I wondered if just a little bit of weight not unlike the “real, densely populated sets” my friend observed in that ill-fated Lee Marvin-Clint Eastwood musical might not have kept Ridley Scott’s Oscar-approved spectacle from seeming as if it were about to float away on the first stiff breeze, as so many movies built around attempts to generate a degree of CGI realism come to seem a mere 10 years or so down the line. For me, it comes down to not feeling as if I have to be constantly on my guard, questioning the representative veracity (or lack thereof) of every damn shot.

It’s not just about what feels or looks real, though. Plenty of movies that I love, even on the basest level, have precious little to do with being in any way “realistic” or displaying any interest in making me believe what I’m seeing could in any way actually be happening. One of the signature shots of the studio era, when it comes to spectacles based at sea particularly, is the studio tank shot, in which a battleship or a fishing boat or some other vessel is seen floating on a relatively smooth ocean surface bereft of the kind of white-capping wave activity or vessel movement that is the hallmark of documentary footage. These shots are usually accompanied by blatantly false background skies or constructed in a way that emphasizes how the water has a peculiarly antiseptic look, a clue that the only life hidden beneath its surface is the buildup of algae clinging to the tank walls. They’re also some of the easiest shots to recognize and, therefore, the easiest to disparage and feel superior to, for those who have a tendency toward the indiscriminate MST3K-ification of our movie past.

Conversely, it’s entirely possible to appreciate a boat afloat on a soundstage studio tank solely on the basis of its very artificiality. Richard Harland Smith, film historian and writer for Turner Classic Movies’ Movie Morlocks site, admitted, in a recent Facebook conversation, to a special affinity for this familiar practical effect, which has probably never once really fooled anyone into thinking it was anything but obvious trickery:

Scenes set at sea but filmed in a water tank on a patently obvious soundstage have always filled me with a wonderful sense of dread and wonder. I suppose Toho is to blame, as any shot at sea in one of their movies ultimately ends with some behemoth rising up out of the drink to snap a fishing trawler in half. But Hammer's The Lost Continent (1968) contributed to this latent fear as well, bless it. I miss the days of practical fakery in the movies. Stormy seas have never been the same… With the old Toho water tank gambit, you were always waiting for some guy in a monster suit to jump up, ooga-booga style, to scare you. It's really a primitive, childlike effect... that works a charm. Every. Time. It's too bad an establishing shot of a water tank sea these days will draw ‘knowing’ laughs from movie audiences, even ones who should be more charitable to old school special effects.

And speaking of Toho, recently I had a chance to revisit one of my old favorites, the none-too-revered King Kong vs. Godzilla (1962). It was delightful to remember how fascinated I once was by this movie—the first Toho production I actually saw on the big screen—and how it retained a certain fascination even through viewings on afternoon TV when I was considerably older and more aware of how movie effects were achieved. While watching this classic battle of titans again this year, I thought a lot about the gulf between what captivated us as children and how we supposedly become more sophisticated from years of following the development of special effects, and how we often reject some of these early spectacles as too silly or somehow less worthy of our attention because the tricks are easier to see through. Despite their reputation for obvious or “cheesy” effects (Oh, how I’ve come to hate that word), it’s easy to see why the Japanese monster movies, often orchestrated by physical effects master Eiji Tsuburaya, and many of their descendants, have held such sway over kids, even spilling over into appreciation by manga and anime enthusiasts.

It’s because these orgies of destruction, these epic battles staged over the skylines of cities just waiting to be decimated, are almost literally the incarnation of a child’s most elaborate dream of toy sets come to life. There’s a sequence about halfway through King Kong vs. Godzilla in which the military digs a big hole in the ground to use as a sort of Burmese tiger pit to ensnare one or both of the monsters, and I couldn’t help but be struck by all the shots of construction equipment digging around in the dirt, dump trucks moving loads of earth around, and by how the scene was exactly the sort of scenario boys play at all the time in their backyards, perhaps even staging battles between their favorite monsters in the same way. Seeing this scene played out on the big screen as a kid was thrilling, and despite our apparent hunger as a culture for ever-escalating levels of “realism” in our movies, those scenes still worked on me in the same way. I feel sure that the special effects wizards who recreated Los Angeles and the Santa Monica Pier, that rolling Ferris wheel and the lurking Japanese submarine captained in that giant studio tank by Toshiro Mifune in Steven Spielberg’s 1941 were after some of the same feeling of awe, of imagining that the biggest things in the world were really only our play toys. And look what we did with ‘em, Ma!

The pursuit of those ever-escalating heights of “realism” is what bothers me most about Peter Jackson’s The Hobbit: An Unexpected Journey. The movie’s highly touted 48-frames-per-second projection speed (known as HFR, or high frame rate) was reportedly introduced so that viewers might experience an immediacy of imagery unlike any ever seen in a movie before. One might reasonably ask what use a movie so obviously steeped in make-believe as another epic journey through Tolkien’s Middle Earth might have for a heightened sense of “realism.” But logic aside, the unfortunate side effect of this revolutionary technique seems to be that it renders much of the movie’s imagery far “fakier”-looking (to co-opt a phrase in heavy usage when I was a kid) than anything in the three films of Jackson’s previous trilogy. A side effect, perhaps, of trying to bring what is essentially an animated action film into the realm of perceived reality without first shading the imagery more toward realism to begin with? It’s a question for someone far more technically accomplished and fluent than I am, that’s for certain.

The most accurate and damning description I can think of, one which was echoed over and over again by critics and observers regarding The Hobbit and its HFR experiment, is that it’s like watching a big, loud action epic on a badly calibrated HDTV set in the middle of a Costco or some other big-box store. Experiencing The Hobbit for myself, I not only came to realize that there is a point at which image clarity as an end in itself becomes something less than desirable, but I also began to worry about what this blind pursuit of technological innovation for its own sake means for the way movies might be made in the future. If this visual debacle really is, as one of our most technologically minded directors seems to believe, the wave of what’s to come, a new standard for digitally created imagery, then what hope is there for retaining anything of magic, of what is consistent, even when digitally recreated, with the texture and quality of film?

As far as waves of the future go, I’m much more heartened by what Ang Lee and his battery of artists and technicians have achieved in adapting Yann Martel’s seemingly inadaptable Life of Pi into a satisfying, transcendent, breathtaking movie, one that uses all the digital tools of the trade to conjure life from a story that, technologically speaking, probably couldn’t have been told five or six years ago. It’s full of genuine awe and terror and supreme flights of cinematic imagination, with no capitulation to the pull of standard-issue Disney-style anthropomorphizing-- the tiger that hitches a ride with the title character after a horrifying shipwreck remains a mysterious and potentially deadly companion whose persistent threat compels Pi to find ways to survive. (That shipwreck, by the way, is far scarier and more frightfully beautiful than anything James Cameron has yet committed to film.) And the movie marks perhaps the best use of 3D I've ever seen in a narrative film-- it makes Hugo look like a cold piece of clockwork.

Ang Lee takes his time in getting the story cooking, in both visual and emotional terms. But even the movie's more placid first third turns out to be a kind of blessing, through the contrast it provides between the routine predicaments of daily life and the soaring spectacle of a survival fable which seems to glow with a heightened, magical reality that makes perfect emotional sense, especially as you tumble back through it when the lights go up. It's a thrilling movie, a near-great one from a director for whom consistency of vision has always seemed elusive. Life of Pi feels like a wondrous summation of Lee's strengths as a filmmaker and a storyteller-- it has the feeling of a story uniquely interpreted by someone whose destiny it was to tell it through the magic of the movies.

And it also feels like a great summation of the possibilities yet to be tapped within the realm of realistic and hyper-realistic effects on screen, as well as the great justification for the full-body plunge into CGI that has characterized American and, increasingly, world cinema over the past 15 years or so. Experiencing the spectacular beauty and fear and adrenaline and grace manifested by this movie, watching that tiger interact with the unfortunate Pi, knowing that it couldn’t possibly be real, yet unable to deny what my eyes, and my beating heart, were telling me, and then reading of how it was done, and how little screen time was actually taken up by a living, breathing creature, has only increased my appreciation for the movie’s achievement. And yes, when I see articles designed to highlight the movie's technical achievement, I can’t help but flash on how far we’ve come in terms of the realistic representation of animals in movies, not only since that inadequate little basset hound in the Times commercial of 15 or so years ago, but even just in the last two or three years, when the digital stakes seem to have been ramped up ever higher.

No, I didn’t ever really believe that Superman could fly. I didn’t believe that digital basset hound for a second either, especially when it was sitting right next to the real thing. But I believed in Richard Parker, Life of Pi’s triumphantly, magisterially frightening Bengal tiger costar, and while watching Ang Lee’s movie I instantly believed once again in the power of visual effects to do something other than just strive for spectacle, for a hyper-realized representation of life, to make us believe in the impossible. For those two hours the measure of what the movie really achieved could be found in how it encourages us to remember why it’s important to be able to imagine in the first place.

In a very real way, movies like Life of Pi, and King Kong vs. Godzilla and Jason and the Argonauts and Trollhunter and even Zelig ask us, yes, to “believe” in what we see, but they never allow us to forget to believe in ourselves too, as an integral part of the storytelling process. These movies, minor and major feats of imagination, don’t simply insist on paving imaginations over and supplanting the spark inside with yet another recycled series of images. Instead they assist in augmenting our own imaginations with grace notes of wonder and brash appeals to our inner believer, encouraging us to make connections and leaps of faith as we marvel and laugh and gasp at what they show us. Even as they take flight they leave us with a bit of the work left to do for ourselves, that their visions might stay buoyant, soaring through the air.


For further reading on special effects, I direct you to my pieces on Kinji Fukasaku’s The Green Slime, one published here at Trailers from Hell and the other, a more visually oriented post, at SLIFR. The original version of “Seeing and Believing” can be found here.


Monday, January 18, 2016


The SLIFR Movie Treehouse is a place where I like to gather a few of my movie-writing pals and exchange long e-mails on the way the movies shaped up for us in the year just left behind. It’s been a few years since I’ve undertaken this project, but the time felt right again, so I invited the very talented critical voices of Brian Doan, Odie Henderson, Marya Murphy and Phil Dyess-Nugent to take part over the last week, and to my great happiness they all agreed. 

Of course, you can find all these pieces in their entirety by scrolling down this blog page, but I was so excited by what they all wrote for this project that I wanted to create a sort of digest with excerpts and quick links to the individual articles.

So then, what follows here are samples from the 16 posts we submitted over the week of January 11-17, the tip of the iceberg of a week-long conversation that itself barely scratched the surface of all the possible conversations about the movies of 2015. We could have gone on for a couple more weeks, but we'd all have either died of exhaustion or given up our day jobs to take up permanent residence in the Treehouse.

I've rarely enjoyed being so thoroughly outclassed. These folks are all terrific, witty, passionate and honest writers. I hope you enjoy this sampling of what we were up to last week. 


“Having spent several year-end seasons isolated in my own opinions, I felt like being sociable again. So I sent out invitations to four of my favorite film commenters to breathe life within the rickety walls of the Treehouse once again, and to my great delight they all accepted. Two of them already have official outlets for their film writing. The other two are writers whose work I first became familiar with through the august efforts of the Muriel Awards, then followed chiefly via their witty and wise commentary on Facebook. And all four I consider friends even though, as is increasingly often the case in the world of blogging and of Zuckerberg-influenced social interaction, none of us have actually met in person.  What we’re going to undertake here over the course of the next week (and maybe, depending on stamina levels, slightly beyond) is basically an exchange of correspondence in which we will engage each other on the subject of the year in movies—our favorites, our least favorites, disturbing and encouraging trends, hopes for the future, and just about anything else that created a blip on our radar as we spent time gazing at the silver screen, or our laptops and tablets and phones,  absorbing the range of cinematic offerings available to us. Rather than trying to convince anyone, least of all ourselves, that the Treehouse represents some sort of carving out of a Supreme Court of film scholars, I’m envisioning a series of exchanges that will emphasize the way our personalities and tastes interact and feed off one another—those looking for the unleashing of nasty barbs and blood in the water will probably be disappointed. I’m really looking forward to seeing what we come up with, and if you spent a serious amount of time with the movies of 2015—would you be reading this if you hadn’t?—I hope you’ll enjoy your time eavesdropping in the Treehouse as much as we will hanging out up there."

“I live in Oberlin, a small Ohio college town that's about an hour or so (weather and traffic depending) from art-houses in Cleveland and 30 minutes or so from multiplexes in nearby towns. We have one theater, with two screens, which alternate out films about every two weeks or so (give or take—The Force Awakens is in its third week, while Sisters had the good sense to slink out of town after seven days). While it's not quite Jeff Bridges and Timothy Bottoms hanging out at the Royal in The Last Picture Show, this isolation, combined with the limited viewing time created by day jobs, does mean that I'm often behind on things my big-city friends are chatting about on Twitter.
When the Ebert site asked its contributors for their Top Ten lists, I prefaced mine by addressing this supposed quandary; I guess I felt like it was something that needed to be addressed (a social/cinephile anxiety which certainly says something about me, and maybe about current trends/pressures of talking about films in a tiered movie economy whose discourses are shaped by geography as much as anything). Here's what I said, shared to give y'all (and those who read it at the blog) a sense of where I'm coming from:
There was a long period when I was bothered by the difficulties that my geographic location presented to my staying in touch with current films; I think I even felt weirdly ‘guilty’ about it, as if being out of the loop meant being away from my ‘real’ movie-going self. But now, I think of it as an odd advantage: it gives me a lot to look forward to, freedom from whatever suffocating cliquishness might exist in bigger cities, and a perspective whose skewed nature (relative to everyone else’s) means that whatever else my viewing habits are, they are mine to take responsibility for and enjoy. As Roland Barthes said, ‘My body is different than yours.’ Or, in the words of Malcolm, the lead character of Dope (one of my favorite films of the year): ‘I don’t fit in. I used to think that was a curse, but I’m slowly starting to see, that maybe, it is a blessing.’"
“I know that watching movies on watches and phones is de rigueur, but I find the entire concept to be de trop. (You work those French lessons, boy!) I can’t watch squat on my Android, but I am guilty of laptop viewing. In my defense, I have a TV-sized monitor on this laptop when I’m home. Part of the critic’s life is dealing with the wonderful movie delivery system known as an online screener. These screeners run about as quickly as a stoned turtle, and have watermarks in the least convenient places (once the watermark covered the subtitles, forcing me to work those French lessons, boy!). Once, it took me 13 hours to watch a movie I had to review. Unfortunately, that movie was The Stanford Prison Experiment, #7 on my 2015 ten worst list. So I try to get out to a theater to see most movies. It’s my preferred method of watching a movie, even if I have to do it in a critic’s screening room that feels like a mausoleum.
For now, though, let us join our fellow cinephiles for a moment of silence to mourn the death of cinema.
Just kidding! Cinema has been dead and resurrected so much that even Jesus is rolling His eyes. I’ve never understood this woe-is-me phenomenon put on every other year by whiny-ass think-piece writers. Cinema, like rock ‘n roll, will never die. If TV didn’t kill it, it will last forever.”
“2015 was a terrific year for small movies that burrowed deep into private obsessions and cultish passions, illuminating out-of-the-way pockets of experience and people who have waited a long time for the chance to be treated as equal partners in the pop culture mainstream. The emergence of movies like Sean S. Baker's Tangerine is an inspiring story, but having lived through the 1990s, I don't want to oversell the whole ‘creative spirits from the fringes of society are coming for your multiplex!’ aspect of it. I'm old enough to remember when the ‘plucky outsider with minimal resources crashing the studio gates’ story of the year was El Mariachi, and at the end of the day, it's possible that the big lesson of Robert Rodriguez's career is that there are people you maybe shouldn't encourage. Of course, the big difference now is that talented outsiders may not have any interest in even finding their way to the studio gate unless they have a belly full of beer and are itching to unzip. Baker's movie was midwifed by the Duplass brothers, bless them, who have gone from making movies that often look like overextended TV skits to creating an HBO TV series that plays like the world's longest, whitest indie movie. Tangerine blows the pants off most of the Bros.'s oeuvre, but it's still kind of weird watching a movie that was shot on a phone that is, in turn, fated to be watched by people looking for something to stare at on their phones.”
“When I say I don’t really watch movies anymore, I should clarify that, yes, okay, I watch a hell of a lot more movies than most middle-aged mother-types. But the person I see myself as, the person who spent all day at the multiplex sneaking from theater to theater, who spent most of her young adult birthdays alone in movie theaters, who only went ‘shopping’ if there was a movie theater in the mall, who pre-ordered the first Roku box and dove headlong into the joy of Netflix streaming… I dunno. I dunno where she went.
Okay, I kind of know. My movie-watching and movie-making got tangled up in my head with a marriage that ended. A series of crappy things in the early twenty-teens estranged me from the filmmaking and film-writing communities. It all became fraught. And being a full-time working nearly full-time parenting human is kind of exhausting. But I have time. Man, I have plenty of time. And every freaking time I go inside an actual movie theater, I slip down in my seat, Coca-Cola fizzing next to me, the house lights go down and the screen flutters and I’m ecstatic. And I think, ‘Why am I not always in a movie theater?’

I can’t watch movies at home anymore. I mean, I can, but it’s an effort. I have to stop myself from jumping up to start a load of laundry or check my damned phone or let the neighbor’s cat in or let the neighbor’s cat out or play Simpsons Tapped Out on the iPad or maybe do a jigsaw puzzle or chat with five people on Facebook or take a bath which turns into a nap. I can watch TV shows. Episode after episode after episode. 22-44 minutes at a shot. Endless loop. But movies, man. That’s a commitment.
Shoot, I just realized what it is. It’s the same thing that happens if I drive more than 30 minutes. I start thinking. And feeling. And that’s just not acceptable. In the movie theater, where it’s a wholly absorbing experience, I can fall into it completely. At home… it’s dangerous. Feeling and thinking are the enemies of sanity for me these days.”

“I took my daughter to see Hitchcock/Truffaut, which was followed by a midnight screening of Psycho, which she had never seen. (She’s a 15-year-old movie fan with a strong attraction to the classics of the horror genre.) I asked her before going in how much she knew about the 1960 movie landmark and preemptively mourned the fact that, because we were seeing the doc first, Psycho’s element of surprise would likely be ruined for her. She said she knew about the shower scene, but that was about it. And I was relieved when Kent Jones’ documentary, other than using a shot of Arbogast reeling backward down the stairs in its opening moments, carried discussion of Psycho itself only as far as that shower scene. (Nonetheless, she viewed most of that section of the doc with her ears and eyes partially covered.) After the movie let out, as we were heading back out to buy our tickets for Psycho, I asked for her thoughts and she said, ‘Well, I tried to cover my eyes and ears, but I saw some of it (the shower scene) anyway. Oh, well. So I know how the movie ends, but there’s probably a lot of scary stuff leading up to that, right?....’ Ka-ching! Surprise factor preserved! She spent the last half of the movie in a glorious state of nervous wreck-titude and had no idea what was coming when Vera Miles opens the door to the fruit cellar. One of the great movie moments of the year for me, and for her too, judging by our 2:00 AM discussion on the ride home.”

“Regarding Coogler’s subversive pivoting of the (Rocky) franchise: It’s important how and why he did it. Hollywood simply does not accept the notion that white male audiences will accept and embrace a hero who is a minority or female. And considering all the screaming and clothes-rending that went on vis-à-vis the colorization and/or feminine invasion of the Star Wars and superhero franchises, I can’t say I blame Hollywood for thinking this way. Coogler knows the series runs through the Rocky character, so he needs to give him a storyline for the Rocky fans. But it’s a rope-a-dope on the order of the similar switcheroo in Sirk’s version of Imitation of Life. This soapy, yet effective Rocky storyline is a front for the real story of Adonis Creed’s ascension to the status of new torchbearer/hero. Coogler knows that to have the audience acceptance required to make his remix work, he’ll have to use Rocky the way other movies use a Sidekick Negro character. Except unlike those characters, Rocky has a backstory and an arc. And Stallone does a great job.
More importantly, for all his love of Rocky’s character, Coogler gets to engage in some childhood wish-fulfillment with his new hero, which I immediately picked up on and embraced. He even cops to it in the film visually, with those scenes of young brown faces looking at Adonis with admiration and hope. I’d never seen that in a movie in 43 years of attending the cinema. I wish I had seen this movie when I was a kid.”
“I remember when Pauline Kael's first collected edition of her ‘movie notes’ from The New Yorker came out in 1983, it included a little note advising the reader that if you watch a great movie on a TV set--even if it's on LaserDisc!--you are, and I'm quoting from memory, committing an aesthetic crime of which you are the victim. That passage was conspicuously missing from later editions of the book, proving that one of the first things that people usually bring up when they're explaining why Pauline Kael was the Antichrist isn't true: she DID once change her mind about something! Or at least backslid on a point of principle when she realized there was no other way she was going to get her grandson to sit still through Seven Samurai.
I'm very grateful for the technological innovations that have resulted in an explosion in viewing choices and given small-time filmmakers a shot at broadening their audience without having to drive from one college campus to the next with a film print in the trunk of their car. (It's certainly been a boon for documentary filmmakers, and in the past several months streamers have had the chance to sample excellent new docs about Amy Winehouse and Nina Simone and Janis Joplin, the Black Panther party, and Marlon Brando, Crystal Moselle's mind-boggling The Wolfpack, Alex Gibney's triple bill of Scientology, Sinatra, and Steve Jobs, and Albert Maysles's wonderful, high-spirited memento mori, Iris.) But I also miss the feeling I used to have that, however flawed and homogenized movie culture was, those of us who cared to do the spadework could still get our arms around it, get a clear sense of what it encompassed and who was being clearly left out and where to find the pockets of activity that might speak most directly to a moviegoer's individual needs outside the blockbuster mainstream. It might even have been easier in those days for a critic to discover a deserving movie that the mainstream was turning its back on, like Carl Franklin's One False Move, and beat the drums for it.”
“Ah, Jennifer Lawrence... I know she is a virtually untouchable figure these days-- Deadspin's Drew Magary made a joke a year or two ago that even breathing a hint that she might not be All That would cause a person to be sent to a remote detention facility by the internet Powers That Be--but at the risk of excommunication, I'm afraid I have to dissent from Phil's call for an ‘amen’ on her status as ‘the single solitary person doing the most to keep the art form alive while carrying an entire industry on her back.’ I'm pretty sure Jennifer Lawrence actually exists (to paraphrase what the late Ralph Nader once said of the Reagans' racist appeal to white Democrats) to make Hollywood feel better about its ageism, and to let the rest of us be amused by their unacknowledged ironies. Her fellow stars can cheer her on while wearing their various ribbons at the Globes or the Oscars, without feeling awkward about how much of the 'dramatic' career they're celebrating (I don't know whether I was more moved by her ACTING! in the terrifying ‘exploding microwave’ scene in American Hustle or the ‘I can TOO chew scenery better than De Niro! And in the same scene!’ moments in Silver Linings Playbook) is based on repeatedly giving 40-something roles to a twenty-something with box office pull. That this is occurring while Lawrence is currently burnishing her ‘I've got a great bullshit detector’ persona in interviews and solipsistic Lenny pieces proves that finding a career path through maudlin manipulation didn’t die with Stallone’s stardom.”
“Yes, we pay movie stars exorbitant amounts, but we kinda think it’s a scam. We’ll fall in love with actors, but we don’t take them seriously. After all, an actor without a script can be a terrifying thing. There are very few actors (mostly just the comedians who write) who can go off script and sound vaguely intelligent. Movie stars are lucky. They’re pretty. They’re at best savants of some sort. Our careful and closed intellectual brains can’t quite process what it is that makes them magic. So let me tell you what it is. Hell, I don’t know. Wait, yes, I do. It’s magic. It’s the soul. It’s the exquisite pain of existence poured out in little ice cube trays for your convenient consumption. You hear about a temperamental asshole actor lashing out with crazy petulance and fury at a grip who interrupted a scene, and you feel all superior because you would never lose your temper like that. Consider that the temperamental asshole actor’s job is to tear down every single fucking wall and coping skill and barrier she or he has in order to flay him or herself in front of an audience, the camera, the world, so that you can feel a little something as you watch in the theater, on your television, or on your pocket computing wonder. And when you question the bat-shit choices, addictions, love affairs, acts of public indecency of your cinematic heroes, remember that they’re feeling every second of every day the most vicious, tender, tragic, terrifying, exquisite feelings you’ve ever felt. Every second. All of the feels. Again, I don’t know where I’m going with this, except perhaps to say it’s amazing to me that they’re not all constantly numbed out of their noggins with every possible drug, drink and sexual escapade they can muster.”
“As much as I (unexpectedly) enjoyed his performance, Sylvester Stallone’s supporting actor nomination, incredibly Creed’s sole recognition, has to register as bittersweet. No Ryan Coogler? No Michael B. Jordan? No Creed for Best Picture, something I thought up until about 5:40 a.m. was as sure a thing as could be anticipated? But nothing. And on Carl Weathers’ 68th birthday too. I get that Creed’s reception by the Academy, the nameless, faceless, lacking-in-actual-corporeality ‘they,’ has been up to this point and continues to be largely a pretext for the coronation of Stallone’s Balboa character, one that began in earnest in 1976 and got derailed by the actor’s cosmos-dwarfing ego, ghastly tendency to mold and relentlessly ride the political zeitgeist of Reagan-era jingoism, and subterranean storytelling skills over the last 38 years. But the lack of recognition for the movie, in the screenplay category at the very least, speaks more tellingly to an overall tendency of the Academy to muffle considerations of race, even when it’s an integral part of the story of a film they seemed primed to celebrate, 12 Years a Slave being the exception to the rule that most easily pacifies those who choose not to acknowledge the bigger, more troubling picture. (Of course, the year’s best, prickliest, most challenging movie on race and American society, Chi-raq, never stood a chance of getting within a 10-mile perimeter surrounding the Kodak Theater in Hollywood.) In this light, Straight Outta Compton’s lone nomination, for its Caucasian screenwriters, by the way, unfortunately ends up looking more like tokenism than tribute. A peek at the way the votes actually tallied might dispel at least some of the unease. But in a year that featured Michael B. Jordan, Tessa Thompson, Oscar Isaac, Teyonah Parris, Jennifer Hudson, Benicio Del Toro and Chiwetel Ejiofor in high-profile roles, to name only a few, the fact that the closest any actor of color came to a nomination was Jordan and Thompson hanging out somewhere in the vicinity of Stallone’s admittedly effective turn, or Samuel L. Jackson sweating it out in the same big, wide, claustrophobic room as Jennifer Jason Leigh while she snarled the N-word, is a bit harder to swallow than it might have been otherwise.”
"(In Straight Outta Compton) (t)he early stuff about Dr. Dre and the formation of the group, the material with Ice Cube and his solo career, the insights into the business side of their lives-- I thought that stuff was fascinating. The recording scenes were great, and so were the recreations of live performances. But the tendency in any biopic to say "...and nothing was ever the same again," especially when it works to isolate the group as a singularity (rather than part of a broader moment of hip-hop's flourishing, increased politicization, and growing stylistic diversity) felt wrong to me. I know it's about one group, in one period (just as Love & Mercy, which has its own fraught relationship to the biopic, also is), but the larger claims it wants to make might have worked better for me if we got more acknowledgement of musical context than one scene of Ice Cube recording with the Bomb Squad.
It's actually a movie that got smaller in my imagination the more time passed, because as I turned it over in my head, I couldn't buy into the movie's paradoxical braggadocio about NWA's political stances and its repeated insistence that ‘no one else is doing this’ (which, having grown up with Public Enemy, the Native Tongues collective, and Boogie Down Productions, all of whom were also doing varied and genuinely radical work contemporaneous/near-contemporaneous with the 1988 Straight Outta Compton album, clearly ain't so; it works as a character aside, but the film also wants it as its motto, even placing Chuck D's famous line about rap as a black CNN in the group's collective mouth). I also wasn't sure what to do with its ambivalent take on the group's relationship to violence, which the movie never seems to decide whether it's celebrating or condemning (although, in fairness, The Chronic itself--which we see the genesis of in the film's second half--also wrestles with this kind of ambivalence)."
“I didn’t see any Brando-level performances this year, but I do think that Jennifer Lawrence, more supernaturally alive than just about anybody onscreen, has shown an amazing gift for playing clear-eyed, passionate heroes who won’t back down, whether they’re bent on toppling a dictatorship or selling a mop, and this year she’ll turn the same age Brando was when he made his first movie. There were, as they say on the infomercials, some amazing discoveries: Daisy Ridley in you-know-what; Britt Robertson in Brad Bird’s overly maligned epic bomb Tomorrowland; Kitana Kiki Rodriguez and Mya Taylor in Tangerine; Shameik Moore and the other young performers in Rick Famuyiwa’s Dope; Alicia Vikander in Ex Machina and The Man from U.N.C.L.E.; Rebecca Ferguson in Mission: Impossible - Rogue Nation; and Nadia Hilker in Justin Benson and Aaron Moorhead’s genre-bending horror romance Spring, which also boasts a wonderful performance by Lou Taylor Pucci as an American in Italy whose emotional state shades gradually from PTSD to lovestruck. (It’s sort of like Before Sunrise with tentacles.) In Brooklyn, Saoirse Ronan grows up onscreen, transforming herself from a painfully shy fish out of water into a confident woman of the world; she made me feel as if I were finally seeing the performance I read about whenever Jessica Chastain plays a young earth mother. Dakota Johnson gives an irresistible star performance in Fifty Shades of Grey, without a shred of help from her material, director, or co-star. (The fact that she’s right there with them on the list of Razzies nominees is a terrible indictment of what indulging in kneejerk, unreflective mockery of officially certified bad movies does to the brain.)”
“(Meryl) Streep said that today’s film criticism was sorely lacking in female voices, which is true. In my younger days, I read so many female critics: Pauline Kael, Judith Crist, Kathleen Carroll, Susan Wloszczyna, Sheila Benson, etc., in addition to Roger and Archer Winsten and even Sexy Rexy. I hadn’t even realized that the universe had changed and become so male. So I agree with Meryl.
But in addition to this, I add that we need more diverse voices of color in film criticism as well. We’re both pretty fucked, unfortunately: The consensus is that women will only give good reviews to Nancy Meyers movies and can’t sit through The Revenant (my mother watches Lucio Fulci movies, so fuck whoever thinks this about women). And I can’t tell you how often people are surprised that I know about things besides Black movies. I was talking about Fellini at a party and someone said, “GASP! You know about Fellini?” I responded, “You do know Fellini was Black, right? Billy Wilder too! He was passing!”
When that’s not happening, people mistake me for the only Black critic they’ve heard of, Armond White. Now, I’ve met Armond, and he’s probably far more insulted that I’m being mistaken for him than the other way around.
My point here is that diversity matters in the arts and in criticism, and not the fake-ass diversity bullshit I keep getting fed that’s been manufactured by the same PR firm that created “post-racial America.” True diversity, because the current "diverse" situation is predominantly White, predominantly male and totally bullshit. I like exploring the viewpoints of those who aren’t like me, because I may learn something I didn’t know. Why shouldn’t readers or viewers also experience the viewpoints of LGBT people, or women, or people of color?”
“So. The Oscars. Oh, man, you guys. I love the Oscars in all their absurdity. Of course I want my favorite actors and films to be nominated and to win. But what other awards in the world do we feel we have so much of a stake in? The movies are ours. We own them. Actors and directors belong to us. They are our royalty and our property. Who won Best Advancement in Cancer Research in 2006? I dunno. Probably some old white guy. But The New World was freakin’ robbed. Crash? Seriously?
I hope the Oscars will eventually go the way of most professional sports. It’ll be only black people who win anything or are nominated. Because, dude, come on. Who wants to see pasty white people emoting? But, nah, as long as we are a mostly white establishment we’ll want to see people like us and we’ll appreciate the performances of people like us and stories like ours. Actual talent be damned. The world is rigged. It is getting better. Maybe. It’s just sooooo goddamned slow. And when the black, brown, gay, female talents come along, well… there’s only room for one at a time. Look, guys, we gave that Hurt Locker chick some awards, like, just a couple of years ago. We often nominate one or two black people. What more do you want from us? Now you’re just trying to take away spots from the nice white men who deserve them. Maybe we can split up the awards. We have actor and actress. How about director and directress? Director and Directress of Color? LGBT special consideration awards. The Special Oscars. Honestly, the entertainment industry is better for non-white people than most industries on this silly planet. I work for a law firm with 65 partners. About 55 of them are, yeah, you know-- straight, white men. I want this conversation to be louder and broader. We are a racist, sexist, homophobic, classist, able-ist species. Let’s try to be better.”
“Where once there was jittery disregard for visual coherence in Inarritu’s films, there is now an attempt to wed the visual element to the way the people in his films experience the world, which is, I think, a far cry from writing off Inarritu’s intent as an effort to reduce the American frontier of the 1820s recreated in his new movie to a first-person shooter game environment. Objections like the ones Brian raises-- ‘a bullying assault on its audience, a macho dare to dislike it and therefore be out of the loop’-- seem rooted in resistance to a perceived tactic that wouldn’t be out of line with Inarritu’s directorial past, but one which I’m not convinced applies fairly here. (This is not Gaspar Noe’s The Revenant, after all, and thank God for that.) There is a certain level of verisimilitude that comes with the territory of telling a story like this, one which has been cherry-picked by Hollywood before, by the way, most notably in the 1971 Richard Harris epic Man in the Wilderness. But I never sensed, as Brian clearly did, that I was being put through the wringer by an expert sadist for the dirty thrill of it—for that privilege see instead The Hateful Eight, if you must—or that the movie’s ‘grunting antics’ were ‘art house porn for people who wouldn’t be caught dead at a Peckinpah or Don Siegel retrospective.’”

Thanks again to Brian, Marya, Odie and Phil for spending time with me in the SLIFR Treehouse talking about movies. Hope they’re ready to climb back up that rickety ladder next year, because I already am.