From the Department of Be Careful What You Wish For

A couple of years ago, swarms of people with some truly confused understandings about law, economics, politics, and basic human nature decided it was time to go for a camp-out.  In downtown New York City.  Yes, we refer Gentle Reader to recollections of those days of THC-laden fumes, bull-horns, vandalism, sexual assault, attempted terrorist bombings, bodily functions and sweat, and sordid ordinary greed that called itself the Occupy Wall Street movement.  In the weeks and months after their initial attempted colonization of the city’s financial district, they spawned numerous copy-cat “occupations” in other cities around the world.

For those still interested (both of you), they’re still around, and even have a website and everything.  It’s here.  To get a true flavor of what passes for thinking over there, Gentle Reader can click on the “Action” tab on the banner and then go around the pinwheel chart on the page.  I looked for a “blow up bridges” link in the “tactics” portion of the wheel, but didn’t find one.

Let’s ignore the movement depositing its money into Amalgamated Bank, which as of fall, 2011 was controlled by an SEIU affiliate and was circling the toilet bowl operating under an FDIC consent order, largely as the result of having invested $800 million in Countrywide Home Loans mortgages.  You’ll remember Countrywide, won’t you, Gentle Reader?  Countrywide was by a wide margin the leading private originator of subprime home loans — loans to people who had little likelihood of being able to repay them.  Loans that are now characterized as exploitative and conclusive evidence of the “1%” plundering The Working Man.  And shit.  Seems the SEIU was just jim-dandy getting in on a slice of that plunder, and the Occupyistas were happy to send them their business.  By the way, Amalgamated was rescued by the sale of roughly 40% of its shares to Ron Burkle (billionaire and Big Time Democrat) and Wilbur Ross (another billionaire, although he backed Romney in 2012).  Amalgamated became the Democrat National Committee’s sole lender in 2012.  And so on and so forth.  In short, business is business, even for outfits whose stated mission is “world revolution.”

At the risk of understatement, the Occupy loonies having served their purpose of re-electing America’s first explicitly anti-American president, they’re about as relevant today as the Wobblies.  So why am I devoting bandwidth to them?

Because today is June 17, after all.

On June 17, 1953, in the Workers’ and Peasants’ Paradise, more formally known as the Deutsche Demokratische Republik — the German Democratic Republic: East Germany in round numbers — and more informally among West Germans of a certain generation as the Sowjetische Besatzungszone or the SBZ — the “Soviet Occupation Zone” — the rest of the world got to see how movements like the Occupy Wall Street outfit get treated post-revolution.  The preceding day in East Berlin construction workers had finally had enough of the privations, oppressions, and exactions of Sovietization.  As happens with dreary predictability, the government had announced forthcoming increases in “work norms” with no corresponding increase in pay.  Work more, same income.  So on June 16, they went on strike.  The next day they were joined by other groups of workers.  For 1953 in still-devastated Central Europe, news of the goings-on spread amazingly rapidly throughout most of East Germany.

On the morning of June 17, the workers began to march towards downtown East Berlin.  The government pretty quickly decided to use force to deal with the protests and, the times being what they were, they turned to their Soviet occupiers for help.  Roughly 20,000 troops and 8,000 police, complete with tanks and so forth, turned out, and the fun began.  The total number of killed and wounded is somewhat vague, as are all numbers of victims of communist oppressions.  When you add in the subsequent executions it appears to have been north of 500.

From the 1950s until actual German reunification, June 17 was the Tag der deutschen Einheit — Day of German Unity.  Beginning in 1990 the newly-reunified country moved it to October 3 (the formal Reunification Day, instead of November 9, the day the Wall fell . . . too many unfortunate associations with that day (e.g., Kristallnacht)).  A principal consequence of the June 17 Uprising and its brutal suppression was to heighten the exodus of every East German who had the gumption, prompting the 1961 construction of the Berlin Wall.

I’ll make a humble suggestion, for the benefit of those three or four dozen remaining true believer Occupiers.  I think they need their very own holiday.  I think they need a holiday that will serve as their inspiration to World Revolution, and provide them a glimpse of their Paradise on Earth.

We’ll have it on June 17 (now that day’s free of prior claims), and we can call it Fools and Tools Day.

A Tacky Quibble

I’ve now been sent more than one link to this photo essay from the International Business Times, showing a series of then-and-now photographs from the Normandy landing sites and their immediate vicinity.  The photography is pretty well done, and of course the contrast between what was going on that day 70 years ago today and what goes on there now is moving on any of several different levels.

So it’s tacky of me to quibble with it.  But I am — or try to be — a stickler for saying things correctly, and not over-blowing statements.  I do experience something of a jolt when I see a statement like this:  “On June 6, 1944, Allied soldiers descended on the beaches of Normandy for D-Day, an operation that turned the tide of the Second World War against the Nazis, marking the beginning of the end of the conflict.”  What I have an issue with is the expression “turned the tide of the Second World War.”

Taking nothing away from the men who stormed ashore there, and the men who died trying, and the men who suffered and died to make the whole thing possible although they never got close enough even to see a smudge of France along the horizon, but whatever else it did do, the Allied invasion of France in June, 1944 did NOT “turn the tide of the Second World War.”  Germany had lost the war.  She had lost it no later than the surrender at Stalingrad.  Her last major offensive in the East was a distant memory by summer, 1944.  Even had the U.S. and the British (among whom I include the Canadians, the Aussies, and the Kiwis) not succeeded in opening a second front in the West, the Soviet Union would have gone on to defeat Germany.  Would have taken longer, and a lot more Germans and Soviets would have died.  But the swastika would have come down all the same.

What D-Day very much turned the tide on was the post-war world.  Had Stalin defeated Hitler with his own troops the only ones with boots on the ground, does anyone really think he would have handed over any portion of Germany to the Western allies’ control?  Would he even have allowed France or Italy to enjoy any independence?

The Yalta Conference, at which the boundaries of post-war Europe were carved up, and the Eastern Europeans cynically (or cravenly, take your pick) handed over to Stalin was held in February, 1945, by which time the Red Army was in Prussia proper, the Ardennes offensive in December, 1944 had collapsed, and even Hitler had to have known it was game, set, and match.  Even with the Western allies standing on the Rhein, practically, Stalin still plucked the goose pretty thoroughly.  What would have happened at a Yalta Conference with Eisenhower sacked, Montgomery running about the place whining that it was all Patton’s fault, and Stalin having his communist operatives in the French underground scoping out locations for the new NKVD execution cellars?  Would there even have been a Yalta Conference?  Why should there have been, under those circumstances?  Stalin sure as hell didn’t care how many Soviet soldiers he got killed on his way west (after the war he sent most of the ones who’d seen the west to the Gulag camps).  What incentive, from his perspective, would he have had not to tell Roosevelt and Churchill, “You know what?  I’ll just take whatever my troops can conquer.  You promised me a second front and I don’t see a second front.  All bets are off.”  Even if the U.S. and Britain had pulled out of the European war Stalin still would have marched into what was left of Berlin.

D-Day made sure that such a scenario never had the chance to occur.  The men whose corpses washed gently to and fro in the surf made only a hypothetical material contribution to defeating Hitler.  But their lives were not given in vain.  Oh no:  They made sure that there would in fact be a free Europe when the war was over.  The hundreds of millions of Europeans who have lived in freedom since that day owe those freedoms to those men and their comrades who came after them.

And for that, we will forever remember and honor them.

From the Department of Always Use the Right Tools

31 May 1916.  The German High Seas Fleet is coming out, hoping to draw out some of the British Grand Fleet and pounce on it before the rest of it can come up.  Unknown to the Germans, the British, using the Room 40 decrypts they enjoyed thanks to the Russians’ having had the pluck and sense to strip SMS Magdeburg of her code books in 1914 and then the generosity to turn them over to the Royal Navy, knew precisely where they intended to go and what they intended to do.  So Jellicoe got the entire Grand Fleet underway to meet them.

The stage was thus set for the Armageddon-style naval battle that every commander since Nelson had sought.  Here a brief historical overview might be of assistance.  Prior to Nelson, battle fleets had for 140-plus years fought in line-ahead formation.  To some extent it made sense because wooden warships mount their guns in broadside only, and those guns have only limited ability to be trained out of the strictly perpendicular to the keel.  The ends of a wooden ship — especially the stern — are much thinner than its sides, and much shorter.  So even though you might mount a few bow-chasers or stern-chasers so you wouldn’t be completely impotent, those guns represented only a tiny fraction of the ship’s total weight of firepower.  To put some numbers on it, of a first-rate’s 98 or more heavy guns, ranging from 18-pounders to 32-pounders, maybe a total of eight or ten would be mounted outside of the broadside batteries.  In line ahead, where each ship follows the one ahead of it as closely as it safely can, you concentrate and mutually reinforce the fleet’s individual broadsides, and you also prevent the opposing fleet’s ships from piercing the line and firing its concentrated broadside against the largely unprotected bow or (even worse) the stern, almost completely unprotected by heavy timbers.

Gentle Reader will rapidly perceive the logical development of this tactic, though, along the principle of sauce for the gander.  Both fleets adopt the line ahead.  Now, a fleet in line ahead is not going to be able effectively to pierce the opponent’s line along its length.  That would produce a geometry looking like a cross-member tire iron.  While the fleet piercing the line will be able to concentrate its fire on the ends of the ships between which it sails, the other fleet will be able to concentrate its fire on the ends of the ships nearest its line where pierced.  Stymie, in other words.  The dynamic is entirely different if you can, not pierce, but cross the opponent’s line ahead of it.  It’s called “crossing the T,” and it’s the holy grail of battle line tactics.  Of course, now you’ve crossed the T, what do you do?  You’re still line ahead, only on the other side of your enemy, and now he “has the weather gauge of you,” meaning he is upwind of you, a crucially important tactical advantage in the era of square-rigged battle fleets.

The upshot of all of the above considerations was that from its formal adoption in the 1660s all the way up to the late 1700s, naval battles, at least when on the high seas where there wasn’t a lee shore you could run your enemy onto (as at Quiberon Bay, in 1759), tended to be indecisive.  Oh sure, occasionally a ship would have the misfortune to be so disabled as not to be able to stay in line.  She’d drift out on her own and be surrounded and captured.  Beyond individual misfortune such as that, however, naval battles just didn’t decide a whole lot.  That’s not to say sea power as such was indecisive, because it very much was decisive.  It’s just that the effectiveness of main battle fleets against each other was limited.

Still the line ahead made sense, by and large, and it made enough sense that the Royal Navy formally incorporated it into Fighting Instructions, its mandatory combat manual.  Woe betide the captain who broke line.  Woe betide the admiral who failed to maintain his battle line.

Until Nelson.

Nelson realized two things, one strategic and the other tactical.  His strategic insight was that the line ahead was never going to produce a strategic-level result precisely because it could not be expected to produce a battle where one fleet was largely destroyed by the other.  His tactical insight was that the virtues of the line of battle were strongest when the two fleets were of equal quality in seamanship and gunnery.  But if one fleet was significantly the other’s superior, then it might well be able to sail with sufficient precision to pierce the enemy’s line in multiple places simultaneously, and the difference in gunnery would significantly reduce the damage it suffered while doing so.  This would then place the better fleet’s ships close alongside their counterparts, where their superior gunnery stood the best chance of achieving decisive results.  Nelson further realized that, after a dozen or so years of purge, guillotine, and neglect, the Royal Navy had attained that level of mastery over the French.

The validity of Nelson’s insights was proved dramatically at Trafalgar in 1805.  Nelson divided his fleet into two squadrons, abandoned the line ahead to hoist the signal “general chase,” and then drove his fleet like two mailed fists into the straggling, disordered, bumbling combined French and Spanish fleets.  There developed a general melee in which Nelson’s parting instruction to his captains that, “No captain can do very wrong who places his ship alongside that of a Frenchman,” bore fruit.  Two-thirds of the combined fleet was sunk or captured that afternoon, and British hegemony at sea assured for another century-plus.  Nelson died that day, but not before receiving the news of his victory.

From Trafalgar onward, every naval officer in every country dreamed of another Trafalgar.  Mahan dreamed of it, and wrote it into his book.  Fisher dreamed of it, and built Dreadnought and her descendants to make it happen.  Tirpitz dreamed of it but was realist enough to understand it wasn’t likely against the British.  Jellicoe dreamed of it; Scheer dreamed of it; Beatty dreamed of it.

And in May, 1916 it seemed as though it was to happen.  Britain and Germany together floated dozens and dozens of massive Castles of Steel (to borrow Robert K. Massie’s book title), each capable of hurling up to a dozen massive armor-piercing shells, weighing anywhere from 900 to 1,800 pounds each, miles and miles, to fall onto the enemy’s decks, bulkheads, and hulls.  Their populaces had internalized the image of knights clad in armor, smiting each other hip and thigh in noble combat.

Except it wasn’t quite so.  Winston Churchill, First Lord of the Admiralty from 1910 until 1915, attempted to correct his fellow Members’ understanding.  Two modern dreadnoughts in battle, he said, were not correctly thought of as two plated and mailed knights hacking at each other with swords, but rather as two eggs striking at each other with hammers.  And by 1916 the hammers weren’t even the most dangerous threat.  Mines and torpedoes, the former more than the latter, could explode beneath the giants’ armor belts, below the waterline, and in a matter of minutes destroy the work of years.  As in fact happened to the brand-new dreadnought HMS Audacious, sunk by a mine in October, 1914, fourteen months after she was commissioned.  Admiral Jellicoe, the Grand Fleet’s commander, fully realized the peril.  Shortly after the war started he declared his belief that chasing a retreating German fleet back towards Germany was a mistake, as it was every bit as likely to be a ruse, to draw the British over minefields.  He announced an intention to avoid falling into that trap (and his stated intention received the Admiralty Lords’ blessing, it should not be overlooked).

And so the fleets sailed towards each other, through the haze and fog banks of a North Sea early summer.  The Germans had no idea of what was headed their way until the fleets’ respective scouts went to investigate a fishing trawler each sighted.  Each fleet’s closest squadron just happened to be its battlecruisers, and here is where we get to this post’s title.

Battlecruisers were, like Dreadnought herself, an invention of Admiral Jackie Fisher.  They had a dreadnought’s heavy guns, but they were to be fast, like jungle cats (in fact they were referred to, both in the press and in the fleet, in those terms).  Now folks, the laws of physics apply with even greater brutality on the ocean than they do on land.  You’ve got three things and you can’t have them all at once: guns, armor, and speed.  If you want more of one you’re going to have to skimp on the others.  That’s just the way it works.  So Fisher chose to skimp on the armor leg of that triangle.

Fisher’s original vision had been a ship fast enough to catch any major ship it could out-gun, and by like token to run away from anything that could match or out-gun it.  Logical enough, and indeed that is precisely how things worked out in the fall of 1914.  German Admiral Maximilian von Spee’s East Asia Squadron had jumped a couple of elderly British cruisers off the coast of Chile and sent them to the bottom with the loss of all hands.  So the Royal Navy dispatched two of its original battlecruisers, Invincible and Inflexible, to deal with Spee.  On December 8, 1914, they caught him making a run at the Falklands and in an afternoon’s shooting destroyed nearly his entire squadron (taking not only Spee but his two young sons down).

But o! what a difference a word can make!  Fisher permitted them to be called “battlecruisers,” and further permitted them to be regarded in the fleet as components of the battle fleet.  They were to be the fleets’ “scouts.”  But the fleet had scouts, you see.  It had shoals of destroyers and full squadrons of actual cruisers.  Ships that could out-run even a battlecruiser.  Since a scout’s whole mission is to get close enough to the enemy to figure out what’s going on (recall, Gentle Reader, that radar was still 20 years or more in the future), you don’t want a scout you can’t afford to lose.  Hard cheese on the expendable scouts, but there it is.  In short, the very worst place for a battlecruiser is in a fleet battle formation, where its speed is negated (it can’t maneuver faster than the slowest unit in the fleet), and where it will necessarily be exposed to heavy and concentrated shelling from the opposing fleet.  And that’s precisely where the British put theirs.

Without going into too great detail, Jellicoe managed to cross Scheer’s T not once, but twice that day.  With lousy visibility, poor communications (flag hoists were nearly useless and the day’s primitive radio sets tended to be knocked out by the concussion of the ship’s own guns firing), and the press of fleets maneuvering at well over 20 knots each, Jellicoe managed one of the greatest sustained feats of seamanship in all naval history.  At a cost, a ferocious cost.  Fully three of Britain’s deadly cats went down, each one the victim of a German shell finding its magazine.

In the below picture, somewhere at the bottom of that enormous cloud of smoke and flame, is what used to be HMS Queen Mary.

[Image: the destruction of HMS Queen Mary]

And here’s HMS Indefatigable going down:

[Image: HMS Indefatigable sinking]

And this is what is about to become the former HMS Invincible (the Royal Navy’s original battlecruiser):

[Image: HMS Invincible blowing up at Jutland, 1916]

And at the end of the day, after Scheer for the second time had ordered Gefechtskehrtwendung (“battle about-turn”) away from the Grand Fleet, to make a run for home, Jellicoe, true to his previously stated and endorsed intention, did not follow.  The public and to some extent the brass never forgave him.  He was booted upstairs and Admiral Beatty, who commanded the battlecruisers that day, was given command of the fleet.

The High Seas Fleet never came out again in force until it did so to surrender.  For all of Tirpitz’s brilliance as a political operative and administrator, he never successfully addressed the strategic conundrum facing the Imperial German Navy:  It was bottled up in the North Sea and unless it destroyed the British fleet — which no one thought it could do — there it was going to stay.  Meanwhile the British fleet enjoyed the freedom of the world’s seas, as did its enormous merchant fleet.  Until the advent of the submarine.  Mahan’s fleet-in-being theory could not exist, in short, without reference to the hard facts of geography.

What did the British learn from Jutland?  Not enough to avoid building HMS Hood as a battlecruiser, and not enough to pull her out of service once built to bulk up her armor.  And not enough to avoid using the wrong tool for the job once again, sending her to her doom against Bismarck in 1941.

My Own Modest Proposal

Over at The Atlantic, via Instapundit, we have a call for judicial fixed terms and, more importantly, a single such term.  Specifically the author advocates a single 18-year term for appointees to the U.S. Supreme Court.  Occasion for the cogitations is the 60th anniversary of the Brown v. Board of Education (sometimes referred to as Brown I) decision, which ruled that at least as to public schools, separate was inherently unequal and thus could never satisfy the Fourteenth Amendment’s requirements.  Our author praises the unanimous decision, specifically for the unified front it gave the judiciary in the face of the inevitable ructions which were sure to follow it.  I’d not heard this part, that the court took two entire years to craft a decision that all nine justices could agree on.  The author describes a forum he attended at Yale at which a group of lawyers who had been clerks to those justices talked about the deliberative process and so forth.  All very cozy, and I’m sure it was full of mutual congratulation, as such things drearily are.

What’s not mentioned is the extent to which the process that produced the Brown decision departed from all recognized standards of judicial ethics.  Years ago in law school I first ran across mention of this; not anticipating the internet (perhaps because I didn’t work for Algore at the time?) I didn’t note the citation to it.  But what happened was this:  Brown I was argued twice.  Thurgood Marshall argued for the plaintiffs and John W. Davis (of Davis Polk) for the defendant school board.  What I read way back in law school was that at that first argument Davis kicked Marshall’s ass all over the courtroom.  Davis was the pre-eminent Supreme Court litigator of his day; my understanding is that to this day he holds the record for most cases won in that court.  Marshall was just no match for him.  It was Frankfurter who wanted to have the case re-argued, a decision usually presented as being a stalling tactic for the court while it tried to cobble together a unanimous opinion.  But it actually seems that there was another, more sinister purpose:  The order for re-argument “invited” the federal government to submit an amicus brief.  Frankfurter did not disclose to his colleagues that he had been and proposed to remain in close contact with a former clerk at the Solicitor General’s office, discussing and in fact feeding him in painstaking detail what arguments to use.

Here is one mention of the incident (first page only, the balance apparently being behind a paywall).  And here is another.  And here is another, over at SSRN.  Since the source of all these is the same — the former clerk himself, in an article published in 1987 in the Harvard Law Review — there doesn’t seem to be much doubt that it happened.  To put it mildly, “[t]his sort of ex parte communication is considered a violation of legal ethics.”  This apparently did not distress either justice or clerk:

“‘I have no easy, snappy response to that view.  In Brown I didn’t consider myself a lawyer for a litigant.  I considered it a cause that transcended ordinary notions about propriety in a litigation.  This was not a litigation in the usual sense.  The constitutional issue went to the heart of what kind of country we are, what kind of Constitution and Supreme Court we have: whether, almost a century after the fourteenth amendment was adopted, the Court could find the wisdom and courage to hold that the amendment meant what it said, that black people could no longer be singled out and treated differently because of their color, that in everything it did, government had to be color-blind.’ He said that he would not defend his discussions with Frankfurter in technical terms.  ‘I just did what I thought was right,’ he said.”

Well.  How about that?  He just followed his “revolutionary consciousness,” to use the expression favored by his philosophical forerunners, the Cheka revolutionary tribunals who scourged the land in 1918-21.

Why are the above reminiscences by a lawyer who should have been disbarred, about a justice who ought to have been impeached, important now?  Because our author over at The Atlantic specifically praises the court that rendered Brown for being politicians.  “The Warren Court that decided Brown had five members who had been elected to office—three former U.S. senators, one of whom had also been mayor of Cleveland; one state legislator; and one governor. They were mature, they understood the law, but also understood politics and the impact of their decisions on society. As a consequence, they did not always vote in predictable fashion.”  He contrasts that with today’s court:  “Now, zero members of the Supreme Court have served in elective office, and only Stephen Breyer has significant experience serving on a staff in Congress. Eight of the nine justices previously were on U.S. courts of appeal. Few have had real-world experience outside of the legal and judicial realm.”

Our author does not stop at just praising specifically politicized jurisprudence when he agrees with the outcome.  He excoriates what he calls politicized jurisprudence when he disagrees with it.  The lengths to which he goes are truly remarkable.  Let’s let him speak for himself:

“Roberts is political in the most Machiavellian sense; he understood the zeitgeist enough to repeatedly assure the Senate during his confirmation hearings that he would strive to issue narrow opinions that respected stare decisis and achieved 9-0 or 8-1 consensus, even as he lay the groundwork during his tenure for the opposite. His surprising ruling on the Affordable Care Act was clearly done with an eye toward softening the criticism that was sure to come with the series of 5-4 decisions on campaign finance and voting rights that lay ahead.”

Get that?  Way back in 2012 Roberts was just a-scheming away, smoothly allaying fears that his politicized judgments would be obnoxious for the lefties, all the while plotting to give free rein to his politicized jurisprudence to run the opposite (wrong) way, because he just knew that all them decisions was going down on 5-4 splits.  To borrow a line from Peanuts, good grief.  Notice, by the way, that he’s also implicitly accusing his dear lefties on the court of the same sin; how else could Roberts have just known that there would be four dissenters in each of those cases?

The solution is to limit tenure on the high court bench to a single 18-year term.  Stagger the terms, so that you won’t get some future president able to stack the entire court during his term(s).
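The staggering arithmetic is worth making explicit.  A sketch (my own back-of-the-envelope figuring, not taken from the article, and it assumes perfectly even staggering with no deaths or resignations):

```python
# With nine seats and single 18-year terms staggered evenly, one seat
# turns over every 18 / 9 = 2 years.  So even a two-term president can
# fill at most four seats, and a one-term president only two.
# (Names and the even-staggering assumption are mine, for illustration.)

SEATS = 9
TERM_YEARS = 18
vacancy_interval = TERM_YEARS // SEATS  # a seat opens every 2 years

def appointments(presidency_years):
    """Seats a president fills under even staggering (ignoring deaths
    and resignations, which would need their own replacement rule)."""
    return presidency_years // vacancy_interval

print(vacancy_interval)   # 2
print(appointments(8))    # 4: two-term president
print(appointments(4))    # 2: one-term president
```

The point of the even cadence is precisely that no single presidency, however long, can remake the whole bench.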

Being the good lefty, our author overlooks the most powerful argument in favor of limiting the time anyone gets to park himself on that bench, even though he states it himself.  To see what I’m talking about, let’s do just a teensy-weensy little editing:  “Few have had real-world experience outside of the legal and judicial realm.”  And there you have the central indictment of the judiciary, certainly at least the federal bench at its senior levels.  Huge numbers of these people are life-time government hacks (no other way to describe them).  They’ve not had to make payroll from their own pocket.  They’ve not had to choose whether to let someone go, cut everyone’s pay a bit, or not make their own house payment.  They’ve not lain awake nights praying that they can get a case settled before their child needs braces, or that the leaking head gasket on that old car will hold out just a few weeks more, so they can replace the office computer server.  In short, they have only the most theoretical notion that any mommocked-up decision of theirs will have any material consequences.  They’re philosopher-kings.

So here’s my own modest proposal.  Every judicial officer (that would include the non-Article III magistrate judges, bankruptcy judges, and administrative law judges) would have an allocation of 25 total years’ government or “non-profit” employment of any kind at any level.  Each day he spends at the public or taxpayer-subsidized teat reduces by one day the length of time he is eligible to be a judicial officer.  If he’s appointed at age 30, then at age 55 he’s off the bench, for good.  If he’s appointed to the bench at age 30, hangs around seven years, and then goes and gets a real job, at age 57 he’s got 18 years of eligibility left.  And in the intervening twenty years he’s got to see how badly things get screwed up for genuine people when philosopher-kings make a pig’s breakfast of their ruminations.  If he goes to work for some cushy “non-profit” “advocating” for “justice” or whatever the hell those outfits do for 15 years, then he gets 10 years.  It ensures turnover and it ensures, to the extent possible at all, that we will have seasoned, mature jurists and not palace eunuchs confusing their whims with constitutional mandate as is presently the case.
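The allocation accounting above can be reduced to a one-line function.  A toy sketch (the function name is mine; the numbers simply restate the worked examples in the paragraph above):

```python
# The proposal: a single 25-year budget of public or taxpayer-subsidized
# employment, and every year spent drawing on it reduces the time one
# remains eligible to sit as a judicial officer.

TOTAL_ALLOCATION = 25  # total years of public/"non-profit" employment allowed

def years_remaining(public_years_used):
    """Years of judicial eligibility left after public_years_used years
    at the public or taxpayer-subsidized teat."""
    return max(TOTAL_ALLOCATION - public_years_used, 0)

# Appointed at 30 and never leaves: off the bench for good at 30 + 25 = 55.
print(30 + years_remaining(0))   # 55

# Appointed at 30, serves 7 years, then 20 years in a real job:
# back at 57 with 25 - 7 = 18 years of eligibility left.
print(years_remaining(7))        # 18

# 15 years at a cushy "non-profit" first: only 10 years remain.
print(years_remaining(15))       # 10
```

Note that the clock runs on any public or subsidized employment, not just time on the bench, which is what forces the intervening stretch in the real world.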

[Update (24 May 14):  I suppose I ought to add that segregation needed to go.  I’m not sure I agree with the proposition that separate is inherently unequal (too many counter-factuals can be heaved onto the counter for inspection for that proposition to stand, starting with the Dunbar High School that Thurgood Marshall attended).  No less respected a scholar than Herbert Wechsler famously invited the odium of all the Right Thinkers by declaring that he had racked his mind and could not come up with any logically defensible basis for the Brown ruling.  On the other hand there sure as hell is no honest argument that “separate,” as practiced by those who did so, had for its sole purpose and pretty uniform outcome “unequal.”

I think there were a very great many ways to explode the system of segregation across all of state and local law that didn’t involve doing what the Supreme Court did in Brown (which, as Ilya Somin points out, didn’t actually come out and say it overruled Plessy).

I ought to confess that I’ve never read a book-length treatment of the history of the litigation campaign that produced Brown.  My understanding, however, is that the civil rights litigants had spent years pecking piecemeal at the component systems of segregation and came to the realization that they’d spend eons doing so if they carried on that way.  So they changed strategy and went for the root-and-branch approach.  The way they went about that required the court to adopt the argument — factually incorrect and legally unsupported — that separate was inherently unequal.

Rather than do what it did, how much less violence to law and logic would it have been for the court simply to change how it read the word “person” in the Fourteenth Amendment and its implementing legislation?  I don’t do civil rights litigation (too much illogic to it), but my understanding is that as to “official” government action, the courts have gone to great lengths to avoid characterizing states, their political subdivisions, agencies, and instrumentalities as “persons acting under color of state law.”  That’s always puzzled me because I cannot for the life of me figure out how that can possibly be correct.  If you say that “person” cannot include a juridical person, then how the hell do you extend the operation of the Civil Rights Act of 1964 to prohibit action by corporations?  So we agree that juridical persons can be “persons” for purposes of these laws.  Why should some juridical persons be “persons” but not others?  Where is the defensible point of distinction?  The court could simply have said something along the lines of, “OK, we rule that states, their political subdivisions, agencies, and instrumentalities are ‘persons’ for all purposes of the Fourteenth Amendment.  We further rule that a person’s failure to ensure that all officials, agencies, political subdivisions, and others answerable to that person extend the protections of the Fourteenth Amendment to all individuals is a ‘denial’ of equal protection or due process, as applicable, to the same extent as if that person had acted in his, her, or its own right.”

Now observe what strategic avenues that simple change in reading opens up.  For starts, you’re down to 50 lawsuits, tops, against 50 states.  In those suits you can further use all the wrinkles and fillips of states’ laws and practices to demonstrate not so much that any particular component of a state’s actions violates the constitution, but to show the comprehensive pattern of in-fact behavior that the states were pursuing for the purpose and with the effect of denying equal protection and due process rights.  You don’t have to show that each last penny-ante elementary school doesn’t have X textbooks per pupil instead of Y.  All you have to show is that this is a prevailing pattern and the effects of the pattern where it exists.  You show the conditions in segregated jails and prisons.  You show the funding patterns and student outcomes of segregated colleges.  And so forth.  This then allows the court to find that, irrespective of what may or may not be the theoretical possibilities of segregation, the actual behaviors demonstrated, taken separately and in the aggregate, constitute a clear, intentional, and effective attempt to violate the constitution.

Going that route doesn’t require you to rule contrary to observable fact.  It doesn’t require you to grind your way piecemeal, in separate litigation, through the entire apparatus of state and local government.  It recognizes the fact that a law nominally neutral on its face can easily be so administered as to violate the constitution’s mandates and prohibitions (and by the way, that does not mean that it blesses bullshit arguments like “disparate impact” analysis).  And it recognizes the conspiratorial element in the entire Jim Crow project.]

Remind me how it Worked out Last Time

That a highly controversial, polarizing Middle Eastern head of state came to Germany and all the protesters turned out.  Prime Minister Erdogan is coming to speak in Cologne — Köln to the natives.  According to the FAZ, the protesters are already assembling from all over Europe.

It was Berlin, June, 1967, and the Shah of Iran was coming to town.  Granted, he was only going to the opera — Mozart’s Die Zauberflöte — but hey! he was an American ally and puppet.  Berlin, which has somewhat prided itself on civil disobedience ever since the latter days of the Kaiser’s reign, turned out in force.  Actually, when one says “Berlin,” one must bear in mind that back in those days the population of Berlin contained an enormous element of disaffected youth from all over the rest of Germany.  Because of its four-power occupied status (I’m going from memory of what I heard from my German friends 25+ years ago), if you were a male resident in Berlin you weren’t subject to the draft.  And apparently even student residence was sufficient to get you out.  Which means that Berlin university students skewed even more strongly left than university students typically do.

The demonstrations turned ugly, and fast.  I’ve never found a book-length treatment of that night, although I’m sure such exist.  Knowing what I do about how that place worked and to some extent still works, I’m quite confident there was a great deal of provocation among the demonstrators, in that they would have been liberally sprinkled with plants, mostly from the communist East, whose sole mission was to see to it that the demonstrators got well out of control.

On the other side you had the police.  Something to understand about Germany at this time is that large numbers of their senior leadership in all public agencies had . . . ummmm . . . not exactly pristine consciences, when it came to what they were doing for . . . oh, say . . . the years 1933 to 1945.  Oh sure, they’d got their “de-nazification” certification, but to an alarming extent those were simply fraudulent.  How that process worked, at least in the Foreign Office, is laid out pretty thoroughly in Das Amt und die Vergangenheit, the government-commissioned study of the office before, during, and after the Nazi era.  Let’s just say that there was a lively industry among former willing participants, fellow-travelers, and opportunists, where each would vouch for the other’s anti-Nazi bona fides.  And a lot — a lot — of people whose fingerprints were all over files detailing close cooperation with the SS, the SD, and the Gestapo in occupied and allied countries, in identifying Jews and Jewish assets, as well as leaning on host country officialdom to get in the boat and row on implementing the Endlösung, got their “Persilschein” (referring to a popular European laundry detergent, Persil, famed for its whitening powers).  I have no reason, no reason at all, to suppose that the police would have been any different, especially since the police had been even more tightly integrated into the apparatus of horror.  Let’s just say that it’s a safe working assumption that the police on the street that night were anything but disappointed that the commies wanted to mix it up and maybe crack some skulls.  For some of their senior officials it might well have awakened fond memories of the Kapp Putsch or the glory days when the Sturmabteilung went about breaking up communist rallies and smashing Jewish shop windows.

As Lincoln observed in his Second Inaugural, “And the war came.”

On the streets the night of June 2 was a student named Benno Ohnesorg (ironically, his last name translates to “without worry”).  He was married, expecting his first child, and this was his very first political demonstration (or so we’re told; it doesn’t really matter).  Also on the streets that night was a plain-clothes police officer, Karl-Heinz Kurras.  In the courtyard of a building he shot Ohnesorg, who died before they could get him treated at a hospital.  At the time Kurras was cleared (of course he was; all his fellow officers swore up and down on it, didn’t they?).

Except that Kurras wasn’t just any old beat cop.  He was also an agent of the Stasi, the principal East German surveillance and terror ministry.  He was also a long-time member of the SED, the official East German political party.  That didn’t come out until years later.  Also not coming out until years later was that the June 2, 1967, demonstrations weren’t Kurras’s first rodeo.  Turns out he’d been spying for the Soviets during the 1961 Checkpoint Charlie stand-off (English language link, this time).

The BBC calls it “the shot that changed Germany.”  And boy did it ever.  Among other young Germans radicalized by the events was a certain Gudrun Ensslin, who became one of the leaders of the Rote Armee Fraktion, the RAF, or as perhaps more widely-known in the Anglosphere, the Baader-Meinhof Gang (somewhat inaccurately; Ulrike Meinhof had long been marginalized, by among others Ensslin, well before the German Autumn of 1977).  October, 1977 saw the suicides of the senior leadership in prison, but by then the organization had morphed into a second-generation, even more violent, operation.  And they kept it up for years afterward, with bombings, assassinations, kidnappings, and so forth, only formally dissolving in April, 1998.

By way of postscript:  By 2012 new investigations (Kurras is still alive) cast serious doubt on the story told by Kurras and his colleagues (English-language link).  That story was that the officer was attacked by knife-wielding demonstrators and to defend himself he shot back.  Apparently that story can’t be squared with what is now known of the remaining physical, photographic, and documentary evidence.

Post-communist review of Stasi files does not reveal, it seems, that Kurras was acting on positive orders.  And after the shooting the Stasi broke off contact with him (well of course they would; their asset had to be considered a watched man, by the left if not by the authorities).  On the other hand, the Stasi recruited its agents very carefully, watched them like a hawk (counterintelligence), and generally spent a great deal of effort to ensure that they did things, and only those things, consistent with command from above.  And Kurras had joined the Stasi in 1955, so by June, 1967 he’d been on the payroll for some twelve years.  Even apart from his 1961 services to the Soviets he was no rookie.

The promised demonstrations against Erdogan are supposed to be peaceful.  I suppose we’ll just have to wait and see.

Am I the Only one Seeing a Pattern Here?

Via a link at Althouse, I stumbled this morning across a 2006 interview transcript from an NPR broadcast.

The interview subject was the fellow who was publishing a biography of Upton Sinclair.  Most Americans (at one time) knew him as the author of The Jungle, his 1906 exposé novel of the Chicago meat-packing industry.  Whatever its purely literary merits (and they seem to have been patchy enough), it was enormously effective in getting America stirred up about what was on its plate.  Literally.  Sinclair was disappointed because as a socialist (he was in fact hired to write the book as a socialist tome, not a public-health pot-boiler) the parts of the book he was least interested in got the most public attention.  We’ve all heard how the book was instrumental in prompting introduction of a federally-mandated inspection regime, which generations of high school teachers have solemnly informed us was fought tooth-and-nail by “the industry.”  Except it wasn’t, at least not by the large operators.  Inspection regimes are fixed costs.  Large operators can spread those fixed costs over larger production, so the price-per-final-product is less.  Small operators have to recapture that cost over a smaller number of products with a correspondingly larger price increase.  The desired result, from the big boys’ perspective, is that their competition will be priced out of the market and new market entrants faced with a large barrier to successful entry.  And so it proved to be.  Whether meat inspection is a good thing or bad can be debated.  But what is interesting in retrospect is the extent to which Sinclair may have gilded the lily on the hygienic conditions in the industry.
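The fixed-cost point is just long division.  A toy calculation (the dollar figures and volumes here are mine, invented purely for illustration, not drawn from any packinghouse ledger) shows how lopsided the burden gets:

```python
# Illustrative only: a flat regulatory cost is spread over output,
# so it hits producers of different sizes very differently.
inspection_cost = 100_000      # hypothetical annual fixed cost, dollars
big_output = 1_000_000         # units per year, large packer
small_output = 20_000          # units per year, small packer

big_per_unit = inspection_cost / big_output      # cost folded into each unit
small_per_unit = inspection_cost / small_output

print(f"large operator: ${big_per_unit:.2f}/unit")   # $0.10/unit
print(f"small operator: ${small_per_unit:.2f}/unit") # $5.00/unit
```

Same regulation, a fifty-fold difference in per-unit burden; hence the big boys’ quiet enthusiasm for it.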

So Sinclair, the socialist, had a track record of service to Larger Truths.  In 1927 two Italian immigrants, Sacco and Vanzetti, were executed for murder.  Ever since we’ve been told by all our well-meaning teachers how they were just two innocents, framed up because they were (i) immigrants; (ii) Italian; and, (iii) avowed anarchists.  So obviously the fix was in, wasn’t it?  That’s the premise that Sinclair took with him when he went to write his novel, Boston, about the case.  Sinclair’s later biographer thinks he was pretty fair in presenting the case and the evidence.  Notice how Mr. Biographer words his statement:  “I think he was fair in his representation of the evidence and the case.”  The evidence and the case are not the whole story.  Do remember, please, that trials, especially criminal trials, are highly artificial proceedings.  That’s intentionally so; giving effective meaning to the presumption of innocence requires it.  Anyone who expects “the truth” necessarily to come out in a trial is a gull who deserves to blow $4,800 on penis-enlargement surgery which goes wrong.

And as it turns out, Sacco and Vanzetti were guilty as hell, and their lawyer presented a fraudulent defense.  Let’s hear it from Sinclair himself, as related in a letter from 1929:  “Alone in a hotel room with Fred [the defense attorney], I begged him to tell me the full truth. He then told me that the men were guilty and he told me in every detail how he had framed a set of alibis for them.”  Did he disclose that?  Well no, no he didn’t.  At least he had the common decency to admit, privately, where his duty lay:  “I face the most difficult ethical problem of my life.”  And how did he resolve that “most difficult ethical problem”?  Well, being the good lefty, he went out and served that good ol’ Larger Truth.

From his biographer:  “I think he felt that the climate of opinion and the representation of their foreignness, they were Italian, and their political beliefs, which were anarchism, had almost condemned them out of hand before they had a chance at a fair trial. . . .  Even if the men were guilty, he felt that the larger context of the world in which they were living rendered their guilt perhaps less important than it might have been otherwise.”  Ummm.  Fair trial?  No, they did not have a fair trial.  They got to put on a fraudulent defense.  Their lawyer lied to the jury.  A “fair trial” does not mean “the defense wins.”  And somehow their guilt was “less important” because they were anarchists?  No, it was even more important precisely because they were anarchists.  Recall in the 1920s there was tremendous debate going on about the fundamental nature of all these (later revealed as monstrous) political movements which had welled to the surface of post-Great War Western society and were tearing European societies to pieces.  The lefties here assured us that all of us troglodyte Americans were just too hard on those folks.  They only wanted Justice for the Common Man; they were for Peace (sort of like our left-extremists nowadays keep proclaiming the Religion of Peace, and refusing to call outfits like Boko Haram what they are: bloodthirsty terrorists, even when pressed to do so by their own colleagues in government).  In the Sacco and Vanzetti case America got to see what these thugs were really all about.  So it was critically important that the correct verdict be reached precisely because it ripped the mask off.  And as it turns out, notwithstanding they were lied to, the jury got it right.

Then come the Rosenbergs, Julius and Ethel.  Rivers of tears were shed for those poor innocents, done to death by a bunch of red-baiters.  Except that Julius definitely was, and Ethel may well have been, guilty as sin.

Fast forward to the Chambers and Hiss ruckus.  For decades the left extremists swore up and down that Alger Hiss was simon-pure and no more than the victim of a witch hunt.  Except he wasn’t.  He was guilty as sin.

And then we come to Tailgunner Joe, a distasteful person by any measure, and a drunk, and a mountebank.  Remember that Eisenhower was General Marshall’s creation: it was Marshall who promoted Ike directly from Lt. Colonel to Brigadier General; it was Marshall who tapped him to command TORCH; it was Marshall who handed him OVERLORD, even though he dearly wanted it for himself (Marshall had never commanded troops in battle, and he knew this would be his last chance), and even knowing that the commander of the invasion could easily have the presidency, if he wanted it.  When Eisenhower stood on a podium and listened, in silence, as McCarthy slandered Marshall as a traitor, Truman so lost respect for Eisenhower that he would never thereafter speak his name in public.  In Plain Speaking he refers to him only as “that fellow who followed me.”

McCarthy famously brandished his list of however many hundred people who were communist infiltrators.  No one ever saw any such list, of course, and it’s undeniable that the 1950s Red Scare tarnished many people, ruined their careers even.  On the other hand, since publication of the VENONA files (Wikipedia has a list of American names appearing in the decrypts; some of them are breath-taking, and that’s not even a complete list: more are known), it’s likewise undeniable that McCarthy was dead-on right about the degree to which senior government positions had been infiltrated by the Soviet Union.  Alger Hiss was just one of many.  Hollywood still moans about its black-listed performers, producers, and others.  On the other hand the Soviet Union in fact did make a concerted effort to subvert American popular culture.  Their most effective and lasting capture, still loyally defending his blood-soaked master decades after the facts were known, was Pete Seeger (on whom more here, from his former extremely close friend, Ron Radosh).

I could go on.  I could trot out the new left-extremist notion of “fake but true” (which fits under the rubric of “dialectics”).  I could observe that the closest that Hollywood’s got to the Katyn massacres is a tangential reference in “Enigma.”  But why go on?

The common thread in all of those is that to the left, facts just do not matter.  What must be served is the Higher Truth, or what today’s left-extremists call the “narrative.”  It’s what was at the heart of Journolist, the news-manipulation cabal run from The Washington Post and whose mission was to elect Dear Leader.  I cannot and so do not deny that there are those on the Right for whom inconvenient facts get deep-sixed.  I am unaware, though, of any Rightist philosophy in which the air-brushing of history is not merely indulged in but is an approved method, practiced on an organized basis.  Where is the right-wing Saul Alinsky, after all?

I suppose I really ought to give up on one of my favorite expressions:  You can’t make this stuff up.  You most certainly can, and we’ve got an entire chunk of the American political spectrum that regularly does.  Because that’s what its doctrine tells it to do.  Gentle Reader might study on that.

Department of Everything Old is New Again

Yesterday in Vienna the results of a survey study were published.  Those polled were Austrians over age 15.  They were asked their opinions about a number of things, including You Know What.

First, the good news.  Eighty-five percent agreed with the statement “democracy is the best form of government.”  Remember that number: 85%.  Thirty percent agreed with the proposition that the national socialist era (in Austria, at least) brought “only bad” things; another 31% agreed with the position that it brought “mostly bad” things.  Those two groups strongly correlated with whether the particular respondent had a “Matura” (the equivalent of the German Abitur, which is a level of academic challenge and achievement most Americans aren’t exposed to until their junior year in college, if then), and with whether the respondent had an overall optimistic view of his economic future.  The further good news is that the combined 61% who saw either primarily or exclusively bad things in the 1938-45 years represents an increase from 51% in 2005.  So in nine years we’ve seen a 19.6% increase in the proportion of People Who Get It.
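For the arithmetically curious: that 19.6% is a relative increase in the share, not a change in percentage points.  A quick sanity check (in Python, just to confirm the figures hang together with the survey numbers as reported):

```python
# Combined "only bad" + "mostly bad" share: 51% in 2005, 61% in 2014.
share_2005 = 51
share_2014 = 30 + 31   # "only bad" + "mostly bad" responses

point_change = share_2014 - share_2005             # 10 percentage points
relative_change = point_change / share_2005 * 100  # relative growth of the share

print(f"{point_change} points, {relative_change:.1f}% relative increase")
# -> 10 points, 19.6% relative increase
```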

But, lest one get too congratulatory, 36% of the respondents agreed that the Nazi era brought “both good and bad” with it (the write-up doesn’t make clear whether the survey included questions to tease out the responsive question, “For whom?”).  I mean, I can partly understand at least the ethnic Germans figuring that, since the Anschluß ousted a government that was scarcely democratic or representative, and in fact was first cousin to the authoritarian state to the north, all they did was trade one thug for another.  On the other hand, it’s not as though Austria was poised for war in March, 1938, or that its military had been given instructions similar to those received (with blanched face and sweaty palms) by the German high command in November, 1937.  And it’s not as though pre-Hitlerian Austria was already rounding up and persecuting its Jews.

What’s alarming is that 3% of the respondents agreed that the national socialist era brought “primarily good” to Austria.  I guess all you can do is observe that there’s one in every crowd, and in fact, it seems, at the rate of 3 per 100.

More disturbingly, 56% agreed that it is time to “end the discussion of the Second World War and the Holocaust.”  Yeah, because talking too much about a monstrous crime in which your society played a leading role makes it so much less likely that someone else will go goose-stepping down your path.  American chattel slavery ended 150 years ago next spring.  Scholars are still parsing through the surviving records and evidence and still finding new facets to explore, new insights to gain, new lessons with resonance for human relationships in the 21st Century.  The twelve years of national socialism left incomparably greater documentary residue, and the Last Pertinent Question on the war and its implications for humanity isn’t likely to be asked or answered in my lifetime.  But hey! Austria’s Got Talent! or whatever crap they watch over there.

You can to some degree write off that 56%.  Half the human population is of below-average intelligence (that’s not invidious; it’s statistics).  It’s not reasonable to expect that lower half of the curve to have the imagination to suspect the vast scope of the unexplored that remains out there in any field of contemplation as complex as what went down from 1933-45, and in fact the years preceding it and following.  While it sounds callous, you can write them off because there’s no reason to suppose they’ve been listening to the discussion in the first place.

The genuinely alarming data point from this survey is the number — 29% — who agreed that what Austria needs is “a strong Leader who does not need to worry about parliaments and elections.”  Oh dear.

For starts, don’t think that 29% figure is small enough to ignore.  The Nazis themselves in Germany only topped out at 43.9% in their last election (05 March 1933), and that was after they’d taken power, after the Reichstag fire, after arresting most of the socialist and communist party leadership, and after loosing the Sturmabteilung in its tens of thousands on the streets.

Secondly it gives an idea of how high a proportion of the population (i) seeks its salvation in government action, and (ii) views that action as itself a normative positive value.  As Jonah Goldberg points out in Liberal Fascism, one thing the fascistic parties of Europe (and their leftist sympathizers in America) all shared in common is an express faith in the value of action, forceful action, action that stands for no delays for deliberation.  “Bold, continuous experimentation” (FDR), anyone?

This 29% number suggests that a large proportion of one’s fellows has not contemplated how much easier it is to do harm than good, how much easier it is to un-do good than harm, and finally, how susceptible to the laws of unintended consequences governmental action is.  When Calvin Coolidge’s father was elected to the Vermont legislature, his son, by then a Massachusetts state senator (I’ve slept since I read this, and I don’t think he’d been elected governor yet), wrote him a note.  It was much, much more important, Calvin wrote his father, to thwart bad legislation than it was to pass good.  Calvin Got It.  Wanting a “strong leader” who can “cut through the red tape” and “get things done” without all that pesky give-and-take, all that empty vaporing debate, is strong evidence that one is dealing with someone who simply has not attended to the world around him very carefully.  [Ironically it was Coolidge and Dawes, grinding through the federal budgets line by line, who actually in the literal sense eliminated use of the red tape that had been used to bind government documents.  That anecdote is in Amity Shlaes’s recent biography of Coolidge.]

Finally, 29% thinking what one needs is a strong leader who need not bother with legislatures and elections, while 85% think democracy is the best form of government, suggests that a sizable proportion of the Austrian population is politically schizophrenic.  Guys:  You cannot square those two positions into any relationship other than diametric opposition.  Holding those two thoughts simultaneously and consistently is not possible.

You have to wonder whether the survey designers shoved in questions which, together or in a single question, restated the guts of the Ermächtigungsgesetz (translation here) and then asked the agree/disagree position.  I wonder how many, relative to 29%, would have agreed with the proposition that what Austria needs is legislation that grants the country’s Leader the power to do those certain specific things which the Reichstag granted Hitler in 1933.

Woodrow Wilson’s Long Shadow

So I recently finished reading Wilson, A. Scott Berg’s new biography of Woodrow Wilson – actually, Thomas Woodrow Wilson. It was a Christmas gift, along with Margaret MacMillan’s The War that Ended Peace: The Road to 1914 and Scott Anderson’s Lawrence in Arabia (in the middle of reading which last I now am).

This book was only my third extensive exposure to the life and thought of a man who comes as near to American beatification by serious thinkers as any politician since Lincoln. FDR’s reputation rests more on what he actually accomplished than on his character traits. Kennedy is mostly a media creation. Wilson is, in common with Lincoln and Jefferson, revered for what are represented to be his thoughts. What those thoughts might be is commonly – and vaguely – understood to be very high-flown notions of the unity of all men, the need for collective security, and of course at the center of it all his Fourteen Points.

The careful reader will notice that all of those hazily understood concepts have one thing in common: the Great War. Specifically, they’re all outgrowths of Wilson’s contemplation of Europe’s four-year suicide bid. Lincoln, by the time he got to the White House, had spent years engaging with slavery, abolitionism, and the political tensions those forces generated. His 1858 debates with Stephen Douglas remain among the classics of Western political discourse, and that senate campaign was far from his first debate. With one portentous exception, Jefferson’s most creative political thinking was several years in the past by the time he got there. Until August, 1914, however, Wilson had never had occasion to devote much energy at all to international affairs and certainly none to the implications of an entire culture immolating itself. He came to the office with the expressed intention of spending his efforts on purely domestic issues. 

A further point of distinction is not insignificant, I suggest. Jefferson and Lincoln both had the experience of years of head-to-head engagement with ideological foes and allies who saw themselves, and with whom Jefferson and Lincoln engaged, as peers. Their thinking benefitted from the crucible effect of, in Jefferson’s case, his exposure to an historically unique constellation of statesmen, and in Lincoln’s his experiences riding the circuit from county to county, arguing and debating with peers, juries, and the public. I could not tell from Berg’s biography that Wilson ever really engaged with the welter of thought around him. The life Berg describes is one spent lecturing (remember this was an era in which the public lecture was very popular entertainment among most levels of society). As a classroom teacher he grew accustomed to being acknowledged as a if not the fount of wisdom by his pupils. Except for one brief period in Georgia in which Wilson practiced law – if having a single probate case as one’s entire professional experience counts as “practiced” – he never really held a job outside academic circles and elective office. Combined with his ecclesiastical family background his formation as the layer-down-of-rules seems to have left an indelible mark on how his mind worked. 

And it doesn’t appear to have been just that. Even as a child Wilson was big on setting rules for others to follow. One example is given of a group of young boys, his peers, who got together for I no longer recall what, and Tommy (as he was known until his graduation from college) makes it among his first orders of business to promulgate a written constitution for the club. “Promulgate” seems to be precisely the correct verb, too; in all of Wilson there’s not a whiff of his even seeking input, let alone consensus from anyone. Contrast Lincoln circulating his first inaugural address in draft to several of his prospective cabinet members, or Jefferson working as part of a committee to draft the Declaration. If Americans had coats of arms, Ipse dixit would be on Wilson’s. 

Wilson was greatly enamored of both his wives. After his marriage to Ellen, to the extent he needed human interaction, he seems to have derived nearly all he required from her, and then within a few months after her death he was consumed with ardor for Edith. While that’s enviable in some respect, it is also not necessarily a desirable character trait in a political leader. Certainly his interactions with adults seem to break down into two overall groupings: (i) those who fawned on him as the Sage of New Jersey, and (ii) those to whom he laid down the rules. Even the two men with whom he was closest, Edward M. “Colonel” House and John Grier Hibben, a fellow junior faculty member at Princeton, do not seem to have broken the pattern, although perhaps of all men Hibben came closest. 

In short, I get the profound impression that Wilson’s life experiences did not sufficiently expose him to the friction and concussion of dealing with men he regarded or had to regard as his equals. 

Largely if not entirely without actual adult friends, Wilson seems never to have had the daily experience of emotional closeness to another person whose ideas were not die-stamped by his own or just parroted back in hopes of a good grade. In all of Berg’s book I don’t recall a single instance of a peer acknowledged by Wilson as such telling him he was talking through his hat and kindly leave off gibbering. Predictably, he does not seem to have accepted the notion that reasonable men could disagree with him in good faith and were entitled to pursue their own notions of what was necessary or proper in any particular circumstance. When he was appointed president of Princeton he was treated as walking on water and parting it for those who couldn’t. That’s always a dangerous brew to serve to anyone and especially to someone whose resistance to it seems to have been roughly similar to the resistance to alcohol on display by the Indians in Betty MacDonald’s The Egg and I. When the inevitable disagreements occurred, there was never any question of collectively making the decision and everyone living with the outcome cheerfully.

The most dramatic instance of this (apart from the fight over the Versailles treaty) involved, as does so much else in academic settings, a tempest in a teapot. Wilson believed the off-campus clubs were fostering a spirit of elitism at Princeton. He believed they were exclusionary of the less-affluent students, the less-socially-gifted ones. So he set out to undermine them by denying them a recruiting pool. Wilson’s notion was to build large, self-contained student living facilities, what we today know as quadrangles. There the underclassmen would be obliged to live with each other, their company not self-selected but chosen for them by whatever mechanism the university chose to adopt from time to time. Hardly surprisingly this idea did not meet universal approval, and some intense politicking went on. Eventually Wilson couldn’t carry the issue. Hibben, by that time senior faculty, sided with Wilson’s opponents. Wilson never addressed another personal word to him for the rest of his life, and even in the White House tackily avoided meeting the man whom he had once described in almost amatory terms. 

How Wilson treated Hibben over what was, after all, a relatively trivial issue, and a decision that could always have been re-visited (there was no reason the trustees couldn’t decide at some later point to go ahead and build quadrangles and implement Wilson’s vision for them in whole or in part), demonstrates what I’m going to say was a deep character flaw in Wilson. His enmity was not at all feigned; when he had occasion to allude to Hibben in later years (nearly always elliptically, it seems) he never backed off from the accusation of betrayal. 

Compare and contrast Wilson’s treatment of Hibben with the relationship that grew between Jefferson and Adams. True, it took a number of years after both men were out of office, but once rekindled their friendship produced hundreds of letters and thousands of words over the course of many years. They wrote each other about nearly everything and although they still disagreed on many things, by the time they died, on the same day and within hours of each other, each died with the other’s name on his lips. And Jefferson had run the original dirty, slanderous campaign which destroyed Adams’s political career. Not that Adams wasn’t as prickly as they come, but he’d been a farmer, a courtroom lawyer, and a diplomat for decades before he came to office. Each and all of those provided him with the experience of contradiction, frustration, and engagement with fundamentally opposed and well-defended principles.

Wilson thought House got above himself at the Paris Conference in 1919. He wasn’t entirely unjustified, either. Wilson had to return to the U.S. for several weeks to attend to matters for which the president was indispensable. While Wilson was gone, House had very consciously made deals that he must have known Wilson would never have countenanced if present. Recall that House had no official position at all; the White House porter was more a government official than he. House was only in Paris as Wilson’s alter ego; the actual secretary of state, Robert Lansing, was side-lined, treated as a cipher, a nullity. So while House must bear the blame for having exceeded his phantom remit, it was Wilson who put him in the position of being able to do so in the first place. Had Wilson not been so adamant on denying any scope to the feller who was, you know, the lawful official to discharge that function, Lloyd George and Clemenceau would no more have listened to Edward House than they would have to Wilson’s barber. Whatever the who-shot-Johns of the matter, the fact remains that after their return to America Wilson never addressed another word to House. 

After Wilson left the White House, his long-time aide, Joseph P. Tumulty, was scrambling to find some hand-hold. The country had swung wildly Republican in the 1920 elections. The Congress was solid Republican and Harding’s White House was dedicated to undoing as many of Wilson’s policies as it could. Tumulty had been with Wilson since his nomination for the New Jersey governorship ten-plus years before. He’d been loyal, self-sacrificing, incredibly hard-working, discreet – in short, everything you could possibly want in a confidential secretary. And now he was out of work and out of favor in the only town he knew how to navigate. Wilson wouldn’t lift a finger to help him, and when Tumulty finally went too far, publicly attributing to Wilson statements that Wilson had pointedly refused to make at Tumulty’s request, Wilson cut him out of his life. Yes, what Tumulty did was wrong, but how much of a strain on common decency is it to see to it that the people who have sacrificed their existences and fortunes to advance your own are taken care of, once you no longer have need of their services? Fairness, however, requires that I mention two other prominent personages who are known to have sinned in this regard, viz. Churchill and Wm. J. Clinton. Churchill never obliged people to commit crimes and take the fall for him, as did the latter, but once Winston was done with you, you were pretty much done with (for a better look at this disappointing aspect of him, see Troublesome Young Men, Lynne Olson’s book about the small number of men who clustered about Churchill during his Wilderness years . . . which of course makes his treatment of them all the more unworthy). 

There are very few words that adequately describe someone who treats people like Wilson treated them. “Vicious” is one that will fit the bill. 

A good deal of Wilson is naturally devoted to the war years and their aftermath, and a central part of that period was Wilson’s growing dedication to the notion of what we now call “collective security.” At the Paris Conference he more or less insisted that adoption of the League of Nations and its incorporation into the final treaty itself (as opposed to making it a side bargain) be the first order of business. He carried that point; the League was adopted pretty much as he’d demanded it be. From that point on, however, things didn’t go his way. 

Over the decades Wilson’s taken a good drubbing as some starry-eyed naïf, a little boy in short pants who blunders into a lion’s den with the idea that if they’ll all just take turns licking on his lollipop everyone will do just fine. There is a bit of truth in that. Wilson represented – very simplified – the notion of peace without victory. The problem was, the situation on the ground, both on the former battlefields and behind closed doors in the chancelleries, just would not admit of that resolution. Every one of the principal Allied powers was a parliamentary democracy, and two of them – Italy and France – were notoriously unstable democracies at that. Even Britain was still operating with its makeshift wartime coalition (how cohesive could a government be that had both Lloyd George and Lord Curzon in it, after all?). Bluntly, they had to answer to their voters, and those voters had just watched most of an entire generation of young men be slaughtered, maimed, gassed, and shell-shocked into twitching bags of nerves. Two of the Allies, France and Belgium, had endured physical destruction on a scale never before seen in human history. Wilson doesn’t seem to have accepted that those populations were just not going to be satisfied with not having won the war. 

On the other hand, and in Wilson’s defense, the objectives of Lloyd George and Clemenceau were no less unrealistic. In a fight, you haven’t won until the other guy acknowledges you’ve won; until then the fight is still on, even though you might not actually be trading punches. The way the Great War came to a close, with an armistice instead of a surrender, with the German army marching home in formation and under arms, and with the social power structures – for which read: the pervasive dominance of the military – still intact, whatever the outcome was, Germany was not in a position of being compelled to acknowledge defeat. And it didn’t. We all know how the poisonous “stab-in-the-back” conspiracy theory came to be seized on in later years, first by the army and then by the Nazis. On a more immediate level, though, Lloyd George and Clemenceau were trying to impose the kind of peace that you would achieve after an unconditional surrender. In a supreme irony, “peace without victory” is not just what Wilson was advocating – it was exactly what Britain and France got. Which is to say that they got neither victory nor peace. 

On a final note of irony, given the personality of the man – remember how he treated Hibben, the closest he ever had to a friend other than his wives – how would Wilson have fared if the U.S. had ratified the Versailles Treaty and joined the League? He couldn’t bear contradiction or defiance. Wilson couldn’t take it when the Princeton trustees wouldn’t let him build residential quadrangles, fer cryin’ out loud. How would he have reacted to the post-war chaos in Eastern Europe? Would he have quit communicating with his fellow heads of state? Would he have recalled the American representative to the League? Would he have taken the U.S. right back out the first time the steeped-in-gore-up-to-the-shoulders politicians of Europe heard one of his sermons and either laughed in his face or gave him the Bronx cheer? How would he have dealt with the Imperial Japanese delegates, men representing a society both incomparably more ancient than Wilson’s own and at the same time aggressively expansionist? 

I understand that in writing a one-volume biography of someone who lived a life such as Wilson’s there’s a tremendous amount that you’re simply not going to get in. So I don’t say this by way of faulting Berg (what I do fault him for are his not-terribly-subtle digs at one end of the modern partisan spectrum, such as by pointing out that Wilson played more golf while in office than any president before or since, or his reference to the second Iraq war), but one thing I looked forward to reading more about was Wilson’s ideas about government, the relationship between the citizen and the state, and the nature and proper purposes of political power. 

Because you see, there are some dissenting voices, even here in America. Not everyone agrees that Wilson was Solomon reincarnate, a veritable saint of equal parts brilliance and compassion. A few years ago I read Jonah Goldberg’s Liberal Fascism, his 2007 tome on the intellectual roots and modern manifestations of the ideas which gave us most famously Mussolini and Hitler. The book’s dated, however, by including a great deal of material on the intellectual antecedents and pronouncements of one Hillary Rodham Clinton, who back then was “inevitably” going to be the 2008 Democrat nominee. Don’t get me wrong: Goldberg’s done his work on Clinton’s intellectual and moral background, and what he lays out is pretty sobering stuff. But unless she’s nominated and elected in 2016 those portions of the book will not age very well. 

For me the by-far most interesting part of Goldberg’s book is Chapter 3, “Woodrow Wilson and the Birth of Liberal Fascism.” You see, before Wilson was appointed president of Princeton, he was a prolific writer on political subjects; in fact, he’s got a good claim to be godfather of “political science” as a specifically academic subject. Among his most famous works is an 800-page doorstop entitled The State. As a graduate student at Johns Hopkins he produced Congressional Government. Other significant works include Constitutional Government in the United States. Wilson also wrote numerous essays, and his speeches were, in the fashion of the times, compiled into book form. Among the former Goldberg mentions “Leaders of Men,” an 1890 effort, and among the latter The New Freedom, consisting of his 1912 campaign speeches. I’d wanted to see some significant time spent by Berg on those writings, because Goldberg actually quotes from them and from Wilson’s speeches. What he quotes is, to put it mildly, unsettling. 

“No doubt a lot of nonsense has been talked about the inalienable rights of the individual, and a great deal that was mere vague sentiment and pleasing speculation has been put forward as a fundamental principle.” Compare and contrast: Independence, Declaration of. 

The constitutional structures of what we know as checks and balances among the three branches among which coercive power is divided had “proven mischievous just to the extent to which they have succeeded in establishing themselves as realities.” 

“[L]iving political constitutions must be Darwinian in structure and in practice. Society is a living organism and must obey the laws of Life . . . it must develop. . . . [A]ll that progressives ask or demand is permission – in an era when ‘development,’ ‘evolution,’ is the scientific word – to interpret the Constitution according to the Darwinian principle.” Substitute the German völkisch for Darwinian and you’ve got the “national” part of “national socialism” in a nutshell. 

The “true leader” uses the masses “like tools,” Goldberg quotes. Further, from the same source (“Leaders of Men”): “Only a very gross substance of concrete conception can make any impression on the minds of the masses. They must get their ideas very absolutely put, and are much readier to receive a half truth which they can promptly understand than a whole truth which has too many sides to be seen all at once. The competent leader of men cares little for the internal niceties of other people’s characters; he cares much – everything – for the external uses to which they may be put.” Oh dear; that sounds distressingly like a first cousin to Hitler’s große Lüge – the “big lie.” It also stands in a straight line with “fake-but-true,” the mantra of the modern American legacy media. 

From a speech given in New York during the 1912 campaign, we have, “You know that it was Jefferson who said that the best government is that which does as little governing as possible . . . . But that time is passed. America is not now and cannot in the future be a place for unrestricted individual enterprise.” Tell that to Steve Jobs. Hell, tell that to Oprah Winfrey, for that matter. 

Except for that last one I cannot recall seeing any of those quotations mentioned in Wilson. Goldberg’s endnotes suggest a wide range of further reading on the subject. Not having the time to parse through all of them myself (or maybe not; many are still available on Amazon.com), I was hoping that Berg would do the heavy lifting for me. He didn’t. Again, as the author he’s got to leave something out or he’d never finish the book. On the other hand he does spend a great deal of space on Wilson’s moralistic approach to his political thought. And of course Wilson made his name first as precisely a political theorist. The last line of Berg’s book refers to “the lengthening shadow of Woodrow Wilson” over Washington, DC. It’s exactly because I think Berg’s got that observation just right that I find his omissions in respect of Wilson’s expressions of theory to be especially unfortunate. 

Maybe it’s time for me to mich auseinandersetzen (that wonderful German reflexive verb for which I can’t think of an English equivalent; literally it means “to set oneself apart,” and it means to engage with a subject or person fully, by completely unpacking all the components and examining them in the closest detail) with Wilson’s actual writings. Notwithstanding our present Dear Leader’s self-description, no one has come to high office a blank slate. Each person’s road there formed how he or she thought about the world, how it works, how it ought to work, and what measures are necessary or permissible to make it conform to one’s own vision. Wilson was no different. 

My very last semester in college I took one of the most interesting courses I’ve ever taken at any level. History 366 it was, “20th Century American Wars as a Personal and Social Experience.” Two observations by the professor I still remember. The first was that, until the Great War, most Americans’ only exposure to the federal government was in the form of their local post office. The second was that a huge number of the men who made the New Deal cut their teeth in the World War I mobilization effort. Jonah Goldberg makes the argument – which if perhaps a tad overdone isn’t so by much – that World War I was America’s first taste of totalitarian government. 

By “totalitarian” Goldberg means a frame of thought and action which does not view any aspect of human existence as not being appropriately the subject of political (and therefore coercive) control. The war years were years in which Americans were encouraged and recruited to spy on each other. The post office was given carte blanche to monitor and censor Americans’ communications with each other. Loyalty oaths were imposed. Industrial relations were controlled, as were entire swathes of the economy (the railroads were outright seized for the duration). You can make a valid argument that most of those measures were in fact necessary in order to take an economy from peacetime to war mobilization in a matter of months.

The point is that men such as Wilson – “Progressives,” they called themselves – viewed the mobilization effort not as a temporary disruption of an otherwise largely unguided constellation of private arrangements, but as a template for human existence. Remember Wilson’s comments from 1912, well before the war, about how America cannot any longer be a place of unrestricted (highly important word selection there, by the way) individual enterprise. A good deal of the wormwood of the 1920s for the American left was watching the policies of the Wilson years get unwound, first by Harding out of corruption, and then by Coolidge out of principle. You can’t draw the contrast between Wilson and Coolidge more clearly than with Silent Cal’s speech on the Declaration’s sesquicentennial (which, I suggest, should be mandatory reading in every American high school).

Compare Wilson’s statements on the need for a völkisch Darwinian interpretation of the Constitution and the “nonsense” of inalienable rights with Coolidge’s observations on the same questions, in the context of the Declaration:

“About the Declaration there is a finality that is exceedingly restful. It is often asserted that the world has made a great deal of progress since 1776, that we have had new thoughts and new experiences which have given us a great advance over the people of that day, and that we may therefore very well discard their conclusions for something more modern. But that reasoning can not be applied to this great charter. If all men are created equal, that is final. If they are endowed with inalienable rights, that is final. If governments derive their just powers from the consent of the governed, that is final. No advance, no progress can be made beyond these propositions. If anyone wishes to deny their truth or their soundness, the only direction in which he can proceed historically is not forward, but backward toward the time when there was no equality, no rights of the individual, no rule of the people. Those who wish to proceed in that direction can not lay claim to progress. They are reactionary. Their ideas are not more modern, but more ancient, than those of the Revolutionary fathers.”

This has turned out to be a bit more than just a book review. I’ve spent more time on others’ treatments of Berg’s subject than is properly done as a general rule. It wasn’t done to suggest that Berg’s written a poor book, but rather to observe that I wish his publisher had let him write a longer one. Berg does a very good job showing us the gauges and needles on the dashboard and how the windows silently slide up and down and where the heater vents are, but I wish he’d popped the hood a bit wider open for us, and shone a stronger drop-light into the engine compartment. I still highly recommend the book, to be read together with Paris 1919: Six Months That Changed the World (speaking of Margaret MacMillan), and Chapter 3 of Liberal Fascism.

 

Of God and Memory

Forewarning: This is something of a stream-of-consciousness post, and thus an experiment. Whether it was a successful experiment I will leave to whoever stumbles across this.

The other day I was chatting with someone of my acquaintance who happens to be an Episcopal priest, and much enjoys theology. By “theology” I mean the formalized thinking – principally Christian, of course – about the nature of godhead, our relationship with God both as individuals and as members of the different overlapping and intersecting societal spheres we inhabit, and so forth. Given that Jesus was an observant Jew, of course, Jewish theology both at the time and as since developed provides not only a backdrop but also an important substantive cross-check and interpretive tool for pondering the Mysteries as revealed through the teachings of Jesus and the Church Fathers. 

[Aside: I must here confess that I am a bit leery of taking “theology” at least in its modern manifestation too seriously. It is on the one hand undeniably true that we live in a radically different world than the one in which Jesus moved and taught, and that we modern humans have utterly different relationships with many of the circumstances of our existence than did the people who flocked to hear Jesus (or Paul or any of the others) teach. Since “circumstances of our existence” includes each other – in fact you might make a very compelling argument that we are each other’s principal circumstances of existence and our relationships with each other define our existence, at least on a moral plane – that suggests also that the bonds between and among the people who heard Jesus teach and in the context of which they understood Him are of mixed utility in discerning the answer to The Great Big Question: How do I live my life in the world I confront now? Of course the response to that is that Jesus, being of one Substance with the Father, would have known all that before a word left His mouth, and would have taught the people accordingly. 

More importantly and on the other hand, and this is where I cannot avoid the niggling suspicion that way too many “theologians” are getting off the path, Jesus did not come to preach to the post-doctoral students. The people He taught were to a man, nearly, illiterate. They were dirt poor, hungry, and eaten alive with vermin, parasites, and pathogens. Huge numbers of them would have lived in what we today would describe as filth, their own and their animals’. As Mark Twain noted during his travels in the Holy Land, Jesus chose the most immediate and effective message he could have, among that people: He healed their sick. They also practiced slavery (and in jubilee years freed their slaves and forgave their debtors). Given the standards of medical care in antiquity and the reasonably foreseeable rate of death in childbirth, their domestic habits would likely not pass modern muster either. I’ll guarantee that, of the groom, bride’s parents, and others at the wedding feast where Jesus first manifested His divinity by performing His first miracle, nowadays the groom would be on his way to prison for at least statutory rape (and maybe rape of a child, if she was at the youngest end of the marriageable age spectrum) and the parents would be headed the same way for conspiracy and contributing. And does anyone want to bet how many people at that feast were 21 years of age or older and still managed to drink the wine casks dry? 

All of that is just to make a very simple point: Jesus was teaching to simple people whose understanding and ability to take His teaching and apply it in their own daily lives were extremely limited. Even though people back then spent what we today would consider a phenomenal amount of time and energy actively pondering and discussing theology, you can’t get away from the fact that these were not Learned People. All these modern esoteric doctrines of this-that-and-the-other, the mountains of what can only with charity be described as academic gibberish the principal aim of which seems to be the “proof” that practicing Christianity must necessarily dictate support for the farthest-left wing of the farthest-left parties, including political support for the most murderous and humanity-destroying philosophies ever devised by the mind of corrupted man and unqualified support for the legally-unfettered right to kill one’s unborn baby, and the rest of it really smack of presumption. Likewise even more outrageous is the suggestion that unless you can navigate the tomes of modern “theology” you can’t claim to understand Christianity in its essentials or details and therefore you should please shut up and do as you are told by the Deep Thinkers Who Understand Things Better Than You. Remind me again of how this differs from the insistence on keeping the Gospels available exclusively in a language not even spoken by an illiterate peasant mass. Jesus may have instructed His apostles to go forth and make disciples of all the world’s peoples, but that was only because He wasn’t going to keep mooning about the place. While Jesus walked among men, He taught directly to the lowest and meanest of the world’s poor. I cannot accept that He would have chosen to preach to them a message that they were unable to comprehend sufficiently to, as He invited, “Come and follow me.”] 

Having now unburdened myself of that little screed, I proceed on to my post. 

My interlocutor was discussing a funeral sermon to be delivered this Sunday. The subject of remembrance came up. As it was told to me, in Jewish understanding so long as anyone remains alive who can “call your name” (as we say out in the country), you are still a part of a living community of believers. That much seems reasonable and I’ll have to take it on faith, being personally unfamiliar with the nuances and so forth. Also brought up was the mythology of Isis and Osiris. They were brother and sister and also husband and wife (talk about ancient domestic arrangements, but then King Tut really was the product of just such an incestuous union). Osiris managed to offend the wrong sort of god, who slaughtered him and scattered his pieces up and down a long valley. Isis went looking for the pieces, weeping and lamenting; her tears formed the Nile. She found them all, it seems, except his . . . ahem . . . manhood, which seems to have come to grief in a marsh or something of that nature and been eaten by an animal. She put the pieces back together and, being herself a goddess, managed to bring him back to life long enough to impregnate her (how that happened without . . . oh well, I suppose when you’re both gods you can arrange such things). In any event, the story was told in the context of reading the word “remembering” as “re-membering,” the re-assembly of fragments. 

While I’m not sure that’s sound etymology (Mr. Webster does not back the proposition), thinking about “remembering” in that manner does seem to make a bit of sense. For starters, our experiences of each other are necessarily fragmentary, even of those closest to us. Our recollective powers are likewise patchy and subject to the ravages of space and time. When one dies to us, all we have left are these piece-work glimpses, some fading, some remaining acutely vivid. In our re-membering the departed one, we re-assemble that person into a living presence, in the sense of a presence capable of offering joy, sorrow, hurt, hope, laughter, comfort, and all the other essentially human interactions. True enough: these interactions are no longer with a live human organism, but then the sensations which remain are no less real. When we communally “re-member” a person we not only multiply the points of recollection but also restore, somewhat, the multi-dimensional character that person displayed while alive. What makes a diamond sparkle is not its surface but its depth. No one is the same person to everyone. Each of us, even if experiencing the same character attribute of a person, experiences that person as expressed in that attribute differently. 

Contemplation of these little snippets leads me to contemplate a subject that presents itself to me from time to time. The simple fact is that almost no one I know, at least not in my close circle of acquaintance — those in whose most immediate presence I spend my life — is interested in certain of the things which absolutely fascinate me. I know enough to accept that circumstance not as an indictment of anyone – why ought anyone find interesting what I do, after all? – but rather as a fundamental set of relationships with the world I move in. Well, perhaps a better way of stating that would be a lack of a set of relationships. Other people have their own interests, worries, hopes, and dreams, and it is unreasonable to expect them to respond the same way to the things which intrigue me. So over the years I’ve learned to enjoy what I enjoy and accept that I will likely never share the joy of it with anyone, or at least not face-to-face. Which is a pity, but the world is full of much greater pities. 

One of the sets of things which fascinates me is history in general, and the specifically human experiences that collectively make up “history.” My head seems unfortunately suited to the retention of masses of trivial detail, and it is packed solid (pun intended) with exactly that sort of detail. The names, dates, occurrences, and parallels to the world I know crowd around me. I can lose myself for long periods contemplating what the world looked like to the monks who first staffed up Cluny. I find it intriguing to ponder the sweep of a particular family, from the Habichtsburg above a tiny Swiss village in the 12th Century to the burial of Archduke Otto in Vienna in July, 2011. My home county is criss-crossed with the remains of old country roads. You can see them traced across open fields, a double line of trees about ten or twelve feet apart (trees don’t naturally grow like that, you know). I see them and instantly I’m transported back to 1910 or sometime, wondering what it must have felt like to be driving a horse-drawn farm wagon down one of those roads, lurching from hole to rock and back. What it must have sounded like, smelled like. Around here you can till up the ground for a garden and depending on where on the hill you’re working be pretty certain of digging up numerous fragments of arrow heads, spear heads, chippers, scrapers, and similar traces of long-ago camps. What were they talking about around that campfire as this chip was struck from the edge of this arrow head? Had the hunt been good that day? Could they, perhaps in some religious trance or other hallucinatory interlude, have had the slightest inkling of Us, centuries later, stumbling across their hunting camp? A number of years ago I was in a museum in Freiburg, the Augustinermuseum. Among their exhibits are ecclesiastical carvings and so forth from around that area. One of them was an altar crucifix that had been carved sometime in the 1100s. 
I am unqualified to speak of the artistic merits of it, but what gripped me was the thought of all the thousands of people from that village and the surrounding farms who would have sat in front of that figure over the course of centuries. Through the Black Death; through the Reformation and the Peasants’ War; through the Thirty Years War and Napoleon’s invasions; through all manner of other wars, tumults, robber barons, famines, and festivals. Who were they? What were their worlds like, for them? 

I also and especially enjoy reading books written about then-current events. The author of course doesn’t know how the story ends; he cannot fully know which aspects of what he’s looking at may be Truly Significant. One such book that immediately comes to mind is Strong Man Rules, which went to press no later than June 29, 1934. The author was a professor at Hunter College, and the book is about this new political regime that’s just coming into focus, in Germany. It’s about who’s in, who’s out, who owes whom what favors, and so forth. Among the Rising Men (other than the Chancellor, of course) is mentioned Ernst Röhm, who is, the author opines, certain to be heard from further. Which is how I know the absolute latest date on which that book was turned over to the printers. But the whole point is that the author has no idea how the story ends. The crematoria, the thousands of starving prisoners, the corpse-filled trenches all across Eastern Europe, the embers of tens of thousands of houses, and the stink of the bodies buried beneath . . . those things would not, could not have occurred to him. Kristallnacht? What’s that supposed to be? 

And so on. 

I won’t say that such things and people are somehow “real” to me. In most cases I don’t – can’t – even know their names, or even when they might have existed, and my efforts to re-awaken them by the feeble powers of my imagination are . . . well, feeble. I do know that they did exist, however, and in thinking about them and their world – the things they saw, heard, smelled, knew, and the things they couldn’t have known but I now do, just by having come along a matter of several decades or centuries later – I get the sensation of having them become a part of me, of how I greet the world. And by that feeble process of re-awakening them and their world it is as if, in some nebulous way, I am living not only today, the January of 2014, but all prior days, and all at once. Part of me thinks I can get on that road, now overgrown between its rows of bordering trees, and Go Where They Went. I can look at that crucifix and hear the sermons. I can re-create the sensation of not knowing, as the Duke of Wellington described it, what is on the other side of that hill. 

For me, it’s as though I get to live in all worlds up to now, and each day just adds to the pile. I don’t have to turn loose of anything that ever was; I can still hear the buzz of insects outside that village church door on an August afternoon. Wasn’t it Faulkner who said that around here, the past really isn’t even past? The old boy might have been on to something.

 

Chastised with Scorpions

Some weeks ago I ran across what were the beginnings of a book review, by Ta-Nehisi Coates in The Atlantic. I say “beginnings” because, as he says, he only made it (listening in MP3 format) partway through one of the early chapters before he had to stop. As strong a stomach for portrayals of evil as he claims to have, he confesses himself revolted beyond endurance.

The book is Bloodlands: Europe Between Hitler and Stalin, Timothy Snyder’s 2010 history of a particular part of Europe during a very special period in its history. The “bloodlands” Snyder describes consist of the western rim of the Soviet Union (with reference to its pre-1945 borders), Poland, the Ukraine, and the Baltic republics of Estonia, Latvia, and Lithuania. This part of the world, largely cut off from the consciousness of the rest of the West first by war and societal collapse, then by revolution and civil war, then by war again, and finally by the Iron Curtain, got to experience its very own special kind of hell from the late 1920s to the late 1940s. During twelve of those roughly twenty years both Hitler and Stalin were in power, and both turned their blood-soaked attentions to it.

Until comparatively recently few in Western Europe and fewer in the United States knew more than the bare outlines of what happened in the bloodlands, and even since the Soviet collapse the act of memory remains burdened by the purpose of memory, by which is largely meant the political purpose of memory. In terms of Getting the Story Out there were just too many people who had every reason to un-make the history. The only Western Europeans with any sort of broad personal exposure to what happened there – the Germans: military, quasi-military, and civilian (even women civilians) – were understandably reluctant to call attention to what they did and saw in the bloodlands. The communists were likewise perpetrators on a grand scale, and thus for decades the white-washing of communist crimes by the Western intellectual elite confined their understanding of Stalin’s crimes to his purge of the Party in 1937-38 (Solzhenitsyn deals extensively with the myopia of the True Believers, as he calls them; for them the other millions of victims of the Great Terror just didn’t – and to this day don’t – pop up on the screen).  It’s not just the perps who have re-purposed the era, either. Even among the populations from whom the victims came, the martyr cult has been forced into a nationalist understanding of what happened and why.

Just what did happen? The book opens with scenes from the destruction of the kulaks and the collectivization of Soviet agriculture. First came the “destruction of the kulaks as a class.”  And who was a kulak?  Anyone we say.  If you have two cows: you’re a kulak.  If your family has carefully tended its field for decades so that you produce more than the vodka-soaked farmer down the lane: you’re a kulak.  If you loaned a neighbor a few rubles to put a crop in this year: you’re a kulak.  A good proxy expression for “kulak” is “successful peasant.”  That’s important, because in the Russian village no less than anywhere else, it’s the successful to whom people look for leadership.  Those “kulaks” were not only in themselves objectionable from a class standpoint, they were also points around which resistance to Stalin’s further plans might coalesce. 

The “kulaks” thus had to be destroyed.  There were so many “kulaks” that it wasn’t even possible to shoot them all.  So what Stalin did was swoop down and pack the able-bodied men off to camps, then come back and sweep up the now-defenseless women and children to become “special settlers.”  Understand that these “special settlements” consisted of shoving the dispossessed farmers out of a train somewhere in Siberia, in a strange climate, with neither cattle nor seed corn nor farming tools, and telling them to (in an old white trash expression) “root hog or die.”  Hundreds of thousands did exactly that: die of cold, of starvation, of desperation.

Collectivization seems to have been an orthodox communist ideological policy of Stalin’s. He’d allowed Lenin’s New Economic Policy to run as far as he was going to, and dammit, now we were going to embrace communism. That collectivization directly breached the Bolsheviks’ promise to the peasantry of land reform was immaterial. As Snyder points out in several places, the practice of “dialectics,” which in plain English means “the truth is what I say it is at this moment, without prejudice to my ability to declare its opposite ten minutes from now,” is a key to understanding the minds of the Soviet (and leftist in general) leadership.  So the remaining peasants were run off their land, by raw physical coercion or regulatory suppression (such as by denying permission to purchase seed), and forced into an agricultural factory (sorry, lefties, but “agri-business” is not an invention of Monsanto or Archer Daniels Midland).

While collectivization of agriculture was a Soviet-wide policy, there was more at play in the Ukraine than just a turn away from the Right Deviationists and a lunge towards Socialism in One Country.  The Ukraine was not only the Soviet Union’s breadbasket but also the home of Russia’s traditional belligerent cousins. If collectivization was going to succeed, and if the “national question” was to be solved, then the Ukraine had to be subdued. Conveniently this also meshed with the economic needs of the Soviets, as grain and lumber were about all they had (at that time; the phenomenal mineral riches of Siberia had yet to be tapped extensively, although the Kolyma was beginning its flowering into a byword for brutality and hopelessness) that anyone was interested in buying. And so the grain expropriations came. And came. And came. The “law of seven-eighths” (so nicknamed, as Solzhenitsyn reminds us, because of its promulgation on August 7) which criminalized possession of as little as an ear of corn, a moldy potato, or a handful of oats, sent tens of thousands to the Gulag. But more simply died. Stalin shut the peasants in the countryside, closing off the cities and denying the internal passports or the right to buy train tickets which would have been necessary for the peasants to go where there was food. Starving peasants who somehow managed to sneak their way to the cities were, if they survived long enough, shoved back onto trains and shipped right back to the howling wilderness of the countryside. 

By the simple expedients of taking all the food that was grown there and preventing the inhabitants from leaving, Stalin managed very intentionally to starve to death somewhere between 3 and 7 million Ukrainians in about two years — the numbers are all over the map; Snyder gives (working from memory here, so forgive me) something like 3.4 million, while others, e.g. Robert Conquest, give much higher numbers. The terror famine is now known as the Holodomor, and the Russian refusal to acknowledge it remains a sore spot to this day.  Walter Duranty, the NYT’s man in Russia, white-washed it for Western audiences and was rewarded with a Pulitzer, which the NYT has yet to disown.

De-kulakization, collectivization, and the Holodomor were just the start, however. By the late 1930s the Great Terror was in full swing. This is the Stalinist interlude that communists and their Western fellow travelers understand principally as the period during which the Party ate its own. And in truth the Party elites did manage to get thinned out. But even then the thinning was . . . mighty selective. Before it started, for example, the NKVD’s senior leadership was about a third Jewish. By the end it was less than 4% Jewish. And its other non-Russians had mysteriously gone away (for example, the Latvians had been a principal recruiting pool for the early Cheka). Don’t feel too badly for them, though; many of these chappies had zealously played their parts in the grain requisitions from the Ukrainian peasants.

It wasn’t just the Jews who attracted Stalin’s attentions during the purges. He was famously paranoid, and among his most ingrained fears was that of the national minorities. Here his personal demons intersected with communist doctrine.  The proletariat has no nation, no homeland.  Therefore in the dictatorship of the proletariat there can be no nationalities.  Those pesky Central Asian nomadic peoples are just going to have to give up their herds and settle down where the Great Helmsman chooses to put them.  [It’s impossible not to see some parallels between Soviet policy and the American reservation system for its aboriginal tribes.  Of course, in America the individual tribesmen were not compelled to remain with the tribe and settle.  While the tribes as tribes were confined to their reservations, on those reservations they were not forbidden to follow their ancestral ways (disregarding that the buffalo that was the foundation of those ways was nearly exterminated), nor were the tribes as social units extinguished.  Nor was the reservations’ produce expropriated and the people left to make shift.  So there are important substantive differences as well; however, honesty says we must still recognize the similarities.] The Soviet Union was home not only to the Ukrainians but also millions of ethnic Poles, Finns, Latvians, Estonians, Lithuanians, Kazakhs, Crimean Tatars, Koreans, and multiple others. Most of them lived in areas of the Soviet Union that were uncomfortably close to their “homelands,” at least by Stalin’s reckoning. And so he began to address the situation. Some he simply deported as a group, as with the Crimean Tatars, who within the space of a couple of days were shoved into trains and banished to the interior. But how do you pack up an ethnic minority the size of Soviet Poles? You can’t.
What you can do is shoot as many of them as you figure out a reason to, especially if they’re the kind of people who might be looked up to or take a leadership role among their fellows. Snyder points out that the ethnic minorities of the bloodlands were many times more likely than either ethnic Russians or Soviet citizens overall to die in an executioner’s cellar.

For several years by this point Hitler had been in power and devoting a great deal of thought to what he wanted to accomplish, and where. The over-arching scheme was set forth in the Generalplan Ost, the general plan of transforming the broad Eastern European lands into a land of German agricultural colonists. That those lands were already the homes of several million non-Germans didn’t matter. Some would have to be killed outright, some moved out of the way, some dragooned and worked to death, but many more would simply be starved. For nearly seven years, though, Hitler couldn’t lay hands on his victims.  And then came the war.

I’ve commented elsewhere on the collusion between Hitler and Stalin in carving up Poland and the Baltic republics.  The dialectics (q.v.) of the situation compelled the Western left to swallow its anger and grief, at least for nearly two years.  During that time the Angel of Death came to visit Poland and the Baltics . . . and settled down, hung up some prints and re-arranged the furniture. 

Where to start?  To the east of the Molotov-Ribbentrop Line, the Soviets set out to decapitate Polish society.  If you were educated, or wealthy, or a priest, or influential, or owned a business, or were an officer, teacher, policeman, professor, scientist, lawyer, prosperous farmer, engineer . . . etc., you were herded up and either deported or more commonly simply shot.  If you survived the shootings you might well yet end up crammed into a frozen cattle car to be deported to the wasteland of the Central Asian steppes.  The Katyn Forest massacre of the 14,000-odd Polish officers is just the best-known small part of a much larger story, one which the Western allies diligently suppressed even when presented with incontrovertible proof of it.  Janusz Bardach’s wonderful Man is Wolf to Man, his memoir of Gulag survival, starts with his experiencing the Soviet occupation of his native Poland.  In the Baltic Republics, Stalin was doing much the same thing: shoot everyone around whom society might coalesce, deport as many of the others as you can herd into the cattle cars, and call it a day.

Meanwhile, over on the other side of the line, Hitler was following a very similar course, although at the outset he wasn’t nearly as organized about it as Stalin.  Uncle Joe had many years and millions more corpses’ experience under his belt, you see.

Then came Barbarossa, the so-close-but-yet-so-far failure to knock the Soviet Union out of the war.  The Germans were taking hundreds of thousands of prisoners at a time.  They had no intention of feeding them at the expense of their own troops, who were told to live off the land.  And so they just herded the Soviet prisoners of war (other than the political officers, who were shot out of hand) behind barbed wire and left them to die of hunger in the open weather.  Millions died this way just in the first months of the war.

Right behind the front came the Einsatzgruppen and Einsatzkommandos, roving groups of murderers who’d march entire villages into the woods and machine gun them over open pits.  The Jews of course were prime targets, and it was contemporaneously with Barbarossa that the truly massive-scale killing of Jews really got going.  One thing that I had not previously understood is that the mechanism of killing was different depending on which side of the Molotov-Ribbentrop Line one was.  East of the line the majority of killing was done by gunfire, either retail, with one shot per victim, or wholesale with machine guns hosing down lines of people.  West of the line was the more, errrmmm, technical side of things, with gas vans and gas chambers of various designs.

Everyone has heard of the Wannsee Conference, the meeting in early 1942 at which the liquidation of the Jews as such was resolved upon as the, well, final “solution” to the “Jewish problem in Europe,” as the Nazis phrased it to themselves.  And when Westerners hear the expression “Final Solution” they think of Auschwitz.

Snyder pays meticulous attention, however, not just to raw numbers killed but to which groups were killed where, how, and in what order.  Auschwitz started as a slave-labor facility.  Granted, no one paid much attention to whether the slaves died of hunger or over-work, and so it was a tremendously lethal place from the start.  But it wasn’t until fairly late in the process that the famous gas chambers were built at Auschwitz-Birkenau.  Even then the arriving train-loads went through “selection,” with the able-bodied sent to be worked/starved to death and the balance herded into the chambers.  So it never lost its industrial character.  But (and I confess I hadn’t known this until reading the book) it was principally Jews from outside the bloodlands who were sent to Auschwitz, well over half the total.  And Roma and Sinti.  And non-Jews from the occupied territories.  All in much smaller numbers, of course; over 90% of Auschwitz’s victims were Jews, and it accounted for about one-sixth of the total Holocaust victims.  And Auschwitz’s killing didn’t peak until 1944, by which time the Germans had been nearly completely run out of the Soviet Union and much of the rest of what they’d conquered.  As Snyder points out, by 1944, something like three-quarters of the Jews who would eventually die in the Holocaust had already been killed.

The Jews of the bloodlands were exterminated much closer to home.  As mentioned, east of the Molotov-Ribbentrop Line, the predominating method was shooting, which seems to have occurred in very near proximity to the places of residence.  West of the line the Nazis built special purpose facilities with gas chambers fueled (if that’s the right expression) by carbon monoxide, usually generated by captured Soviet tank engines.  Another thing about the special facilities was that they weren’t “camps” in any meaningful sense of the word, because all they did was kill, unlike Auschwitz.  There were some bunk houses for the prisoners staffing them, but that involved a tiny number of people.  “Operation Reinhard” was the name bestowed on the operations of these places, and their names remain largely unfamiliar in Western society:  Treblinka, Sobibor, and Belzec (Chelmno and Majdanek, though not formally part of Operation Reinhard, served much the same killing function).  They were set up, their target populations were exterminated, and then the Germans did their level best to destroy every trace of them.

A further point of distinction:  As Snyder points out, roughly 100,000 people survived Auschwitz.  Of the Jews who saw the inside of an Operation Reinhard facility, fewer than 100 are known to have survived.

But all those are technical points, so to speak.  One thing which Snyder properly does is remind the reader that it’s somehow dehumanizing to speak of so-and-so-many “millions of victims.”  What we must remember is that to say that there were roughly 5.7-6 million Jewish victims of the Germans is to say that there were 5.7 million times one victims.  For each of the dead was not just a component number but rather a distinct point of humanity.  The woman who suckled her infant as she waited to be shot at Babi Yar was not a statistic; she was a mother, a daughter, a wife, a friend.  She had once had hopes and dreams.  She knew the ecstasy of creation and the pain of childbirth.  And they gunned her down, together with her child.

Snyder has good chapters on both the Warsaw Ghetto uprising and the physical destruction of the city itself.  And also some good material on how the post-war Polish communists did their level best to erase the specifically Jewish experience of the war from both.  In the service of nationalism.

And since the dying in the bloodlands didn’t stop with the guns, Snyder covers the ethnic cleansings that went on for another two years.  While not expressly exterminatory in intent, several hundred thousand people died in the course of creating ethnically homogenous national states.

Why the viciousness?  Some of it can be attributed to nothing more complicated than that the bloodlands were caught between two monsters.  There is a reason, after all, that more police officers die responding to domestic disputes than in any other risk situation.  Hitler and Stalin both wanted those areas and they wanted them for very specific purposes, neither of which was compatible with the survival of the societies who happened to live there.  But Snyder also points out two processes that played out, in slightly different patterns and at different times, on both sides.  Neither Stalin’s mass murder nor Hitler’s began as what it ended up being.

Stalin began by decapitating the rural population in preparation for collectivization.  He followed up with suppression of a long-time problem population.  But over time his blood-lust transferred itself to the national minorities as such, as (in his mind, at least) bearers of threats to Russia’s (and Russians’) political dominance in the Soviet Union and its border neighbors.  His paranoia required an object to focus on, and it found those objects in the Ukrainians, the Poles, the Baltic peoples, the Kazakhs, the Tatars, the Volga Germans, and so forth.  Their destruction, whether physical or by wrenching separation from their homelands, became an end in itself (because of course not a damned one of them actually posed any threat whatsoever to the Soviet Union or Stalin’s rule).

Hitler’s extermination of the Jews in similar fashion transformed itself from something which was ancillary to the conquest of the Soviet Union and Lebensraum into a war aim as such.  Originally it was just part of de-populating the areas to be colonized once the war was won.  The Slavs were likewise to die, but they were to be starved/worked to death.  By December 1941, according to Snyder, it was apparent to the German high command that the war wasn’t going to be won.  Let’s see:  We went to war to conquer Lebensraum, and that isn’t going to happen.  We cannot say that we have failed in our war objectives, though; too many telegrams to mom back home about how her little Heinz had the honor to die for the Führer.  Thus:  The war is now about the smashing of Jewish domination of Europe.  This of course dovetailed nicely with the fact that all the other options for “solving” the “Jewish problem in Europe” had been explored, had played out, and were no longer physically possible.

I will say that the least satisfying part of Snyder’s book (once you’ve struggled through all the descriptions of the killing; I defy a parent to read of the killing of mothers and children without wanting to vomit) is the final chapter on comparison and comprehension.  I’m not sure that comparison of Stalin and Hitler is terribly useful.  It’s not like we’re running some cadaver sweepstakes here, and in any event both Stalin and Hitler put together pale in comparison with the 45-60 million dead Chinese that Mao racked up in the Great Leap Forward . . . in four short years.  Comprehension and memory likewise both come up as not-quite-dead-ends.  There are multiple, partially-overlapping groups who died in their millions.  Each has a legitimate claim on the special aspect of what happened to them.  But for God’s sake!  They’re DEAD.  They’re all dead.  Each one of those victims is no more and no less dead than any other.  Each was no more and no less human.  Each one’s death is a gaping, suppurating wound of justice that heaven alone can remedy.

And can you even claim to understand What Happened?  Sure, you can punch through the archives, you can assemble pictures, documents, film footage, and so forth.  You can compile data.  You can, in some places at least, go see where it happened.  But we today have no more ability to stand at the edge of the ravine at Babi Yar than we do to fly a kite in Jupiter’s Great Red Spot.  And it is in those moments when the bullets were slamming into that mother and her infant; when the little Ukrainian boy who imagined that he saw food and kept proclaiming, “Now we will live!” until one day he didn’t; when Tania in besieged Leningrad noted the deaths by starvation of her entire family until, “Only Tania is left,” and then she wasn’t either; when the Polish officer, writing in his last moments of life about his wedding ring, heard the click of the pistol as it was cocked behind his ear:  In those moments It Happened, and we are forever shut off from them.

If we cannot know, cannot understand, then we can at least defy forgetfulness.  Snyder’s book tells a story that all of us have a duty to hear.  So we can Not-Forget.  And we can mourn.  We can examine our own souls and hearts, and forever ask ourselves whether we harbor within us the death of humanity that starved, shot, gassed, beat, and burned the bloodlands for nearly twenty long years.  In this respect I would suggest that Ta-Nehisi Coates has it exactly backwards when he closes his piece with the observation that it’s chaos “out there” and always has been.  That “out there” springs from “in here,” and the only place that any of us has mastery of is our own individual “in here.”