FAIR USE NOTICE

A BEAR MARKET ECONOMICS BLOG


This site may contain copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in an effort to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, among others. We believe this constitutes a ‘fair use’ of any such copyrighted material as provided for in section 107 of the US Copyright Law.

In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit to those who have expressed a prior interest in receiving the included information for research and educational purposes. For more information go to: http://www.law.cornell.edu/uscode/17/107.shtml

If you wish to use copyrighted material from this site for purposes of your own that go beyond ‘fair use’, you must obtain permission from the copyright owner.


All Blogs licensed under Creative Commons Attribution 3.0

This work is licensed under a Creative Commons Attribution 3.0 Unported License.

Saturday, April 27, 2013

Cold War secrets: We tried to weaponize the weather and anything else


SALON




Cold War secrets: Melting polar ice cap with nukes, changing the sea level, even LSD weapons were all on the table





The years between the first hydrogen bomb tests and the Limited Test Ban Treaty in 1963 saw more than just increased anxiety about the effects of nuclear testing on weather. They also saw increased interest in large-scale, purposeful environmental modification. Most climate modification enthusiasts spoke of increasing global temperatures, in the hopes that this would increase the quantity of cultivated land and make for fairer weather. Some suggested blackening deserts or snowy areas, to increase absorption of radiation.
Covering large areas with carbon dust, so the theory went, would raise temperatures. Alternatively, if several hydrogen bombs were exploded underwater, they might evaporate seawater and create an ice cloud that would block the escape of radiation. Meteorologist Harry Wexler had little patience for those who wanted to add weather and climate modification to the set of tools in man’s possession. But by 1958 even he acknowledged that serious proposals for massive changes, using nuclear weapons as tools, were inevitable. Like most professional meteorologists, in the past he had dismissed the idea that hydrogen bombs had affected the weather. But with the prospect of determined experiments designed to bring about such changes, he warned of “the unhappy situation of the cure being worse than the ailment.”

Whatever one might have thought about the wisdom of tinkering with the weather in peacetime, the manipulation of nature on a vast scale for military purposes seemed a perfectly legitimate application of scientific knowledge. To those planning a total war against the Soviet Union, every avenue begged for exploration. Consider how the scientific advisors of America’s key allies in NATO saw the alliance fighting in the future. Numerous ideas for creating catastrophic events through natural processes were presented, especially using hydrogen bombs as triggers. In these discussions, held as early as 1960, top scientists debated the fundamental environmental question — can humans have a long-term effect on the global environment?

The desire for novel military technology seemed especially urgent by the early 1960s. Although officially part of the International Geophysical Year, the Soviet Union’s launch of Sputnik in October 1957 had clear military ramifications. Not only did it begin the space race, but it also took the arms race to a new stage that included communications satellites and intercontinental ballistic missiles. The launch of Sputnik made the world seem smaller and made the most far-fetched visions of the future seem possible. The gee-whiz, Buck Rogers feel of the immediate postwar years returned. But this wave of technological enthusiasm was darker, because instead of coming on the tide of a war victory, it came amid a foreboding new competition. For years the Americans had been preparing for the missile age, gathering data on the atmosphere and on the earth’s gravity over the poles. The Soviets clearly had kept pace. Sputnik served as a justification for a vast array of projects to use scientific knowledge to tamper with nature on a large scale.

Reinforcing the sense of urgency, President Eisenhower’s special committee on weather modification submitted its final report in January 1958, just months after Sputnik’s launch. The committee’s chairman, retired Navy Captain Howard T. Orville, said at a press conference that he suspected that the Soviets already had begun a large, secret program on weather control. Despite routine dismissals of the idea throughout the decade by meteorologists, the high-level committee ranked weather control ahead of hydrogen bombs and satellites in military significance. Orville urged the government to support research on controlling large-scale weather systems, not just rainmaking. He further suggested that finding ways to manipulate the heat balance between the sun and earth might be the key to weather and climate control. The earth already had been heated up by man’s efforts, by introducing carbon dioxide into the atmosphere through the burning of fossil fuels. This carbon dioxide helped to trap the heat and create, as the New York Times put it, a “greenhouse effect.” It might be possible to harness this greenhouse effect. “If such steps are feasible,” journalist John Finney reported, “then New York City might be put under a few hundred feet of ice or a few hundred feet of water depending on whether the temperature was raised or lowered.”

Rumors spread quickly about scientists in the United States and Soviet Union experimenting with unprecedented tools for controlling nature. Were the Soviets planning to dam the Bering Strait? Were the Americans able to steer storms? Naysayers pointed out that meteorologists could not even predict naturally occurring weather, so how could anyone control it? One author opined in the New York Times, “For would it not be foolish for anyone to talk of controlling an intricate piece of apparatus until he knew precisely how it worked?” After the report of Eisenhower’s special committee was made public, scientists in allied countries received strange, sheepish letters from their defense establishments, asking if the latest rumors about American research could be true. For example, a British Air Ministry scientific advisor, E. V. Truefitt, presented his countryman, oceanographer George Deacon, with “one or two questions which have come up in odd conversations.” He called them “wild cat” ideas that he did not really take seriously, yet they appeared to be in discussion in the United States. Despite his instinct that they could not possibly be real, he felt obligated to run them by a competent man of science.

One of the ideas was to melt the polar ice cap by exploding nuclear weapons on it, thus raising the global sea level. The Soviets might be considering it, so the rumor went, to drown cities in the United States and Western Europe. Another idea was to change ocean currents or temperatures to interfere with an enemy’s climate and food production. Truefitt had no idea how to assess an ocean-initiated climate change, but he had made a rough calculation to determine what was needed to melt the polar ice cap. He believed that it would take about a million tons of fissile material to melt enough to raise sea level by 30 feet. “This is a large amount of fissile material whichever way you look at it,” he wrote to Deacon, “and consequently my guess is that it is not the kind of project that even the Americans would embark on under the influence of Sputniks.”
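Truefitt’s arithmetic can be roughly reconstructed (this is a back-of-envelope sketch of my own, not his actual calculation, using round modern values and ignoring the heat needed to bring the ice up to its melting point):

$$
\begin{aligned}
m_{\text{ice}} &\approx \rho_{\text{water}}\, A_{\text{ocean}}\, \Delta h \approx (10^{3}\ \text{kg/m}^3)(3.6\times10^{14}\ \text{m}^2)(9\ \text{m}) \approx 3\times10^{18}\ \text{kg},\\
E_{\text{melt}} &\approx m_{\text{ice}}\, L_f \approx (3\times10^{18}\ \text{kg})(3.3\times10^{5}\ \text{J/kg}) \approx 10^{24}\ \text{J},\\
m_{\text{fissile}} &\approx \frac{E_{\text{melt}}}{8\times10^{13}\ \text{J/kg}} \approx 10^{10}\ \text{kg} \approx 10^{7}\ \text{tons}.
\end{aligned}
$$

That comes out within an order of magnitude of Truefitt’s million tons, and either figure supports his conclusion: whichever constants one chooses, the quantity of fissile material required was absurd.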

Desperate to find the “weapon of the future”

The truth was that the immediate post-Sputnik years had a peculiar air, both of desperation and of opportunity. Doors were wide open to a range of technological possibilities. Nearly anything that was technically feasible made it to the highest levels of discussion. For starters, that meant revisiting the questions surrounding biological, chemical, and radiological weapons. But it also sparked discussion of the ambitious, the horrendous, and the quirky. Like wildcatters exploring for oil, American scientists grasped desperately around them, striving to find the next weapon of the future.

There were several post-Sputnik efforts to push the limits of the “possible,” to explore exotic ideas that might prove decisive 5, 10, or 20 years into the future. Some actions to direct this scientific work were high profile and public. President Eisenhower created a science advisory committee to guide the course of American technology and ensure that the Americans did not fall behind the Soviet Union. This President’s Science Advisory Committee (PSAC) also existed to rein in some of the wilder ideas, to avoid wasteful spending. Other brain trusts, often dominated by physicists with expertise in nuclear affairs, sprang up behind closed doors to advise military establishments. One of these was “JASON,” an elite group of scientists who got together during the summer months to assess major scientific and technological problems of military significance. Paid by government contract through several different bodies throughout its existence, “the Jasons,” as they called themselves, were drawn from the cream of civilian academic science. Despite their outsider status, the Jasons gained the respect and trust of officials in the Defense Department and the armed services, and their advice often revolutionized military thinking during the nuclear era.

Sputnik did not just spark new scientific projects, however. It also revolutionized military strategy, making it grimmer than ever. The American and Soviet air forces realized they were going to have to rethink the basic notion of national vulnerability. No longer could the Air Force’s Strategic Air Command count on scrambling bombers and flying them over the North Pole. Most of the war’s damage would have been done before bombers left the Western Hemisphere.

More than that, as a secret National Academy of Sciences group advised the Air Force in 1958, the range of possible wars soon would expand exponentially. Conflicts were going to become both more total and more limited at the same time. On the one hand, the United States was losing its ability to incapacitate enemy forces. In practice that meant that the most attractive targets all over the world would be centers of population — cities — rather than armies or airfields. Making an effective attack against enemy military forces seemed a dwindling prospect in an era when missiles could be put into hardened silos, mobile rocket units, or submarines patrolling the oceans. Cities, by contrast, would be ripe for plucking. “Weapon yields, delivery accuracies, and force level requirements for city destruction are modest,” these scientists concluded, while attacking heavily fortified bunkers would require large and accurate payloads. That meant that finding ways of maximizing civilian death would assume an even greater importance than it already had.

On the other hand, nuclear parity would make full-blown conflict less likely, meaning that all of the armed services would have to reorient themselves back to conventional warfare. As RAND Corporation game theorists had long feared, the atomic bomb was a wasting asset—and the window of opportunity to “win” decisively in a war against the Soviet Union had passed. By the late 1950s, the new orthodoxy in strategic thinking accepted that the Soviet Union was committed to avoiding a nuclear holocaust and that it intended to encourage “brushfire” wars instead. Small wars like those in Malaya and Korea would become more common. As the 1960s dawned, military strategists wondered about the fate of Vietnam, which the French had failed to hold. By treaty the country had split between North and South. Should the communist North Vietnamese invade, would the Americans consider using nuclear weapons?

Some may have argued that nuclear bombs were America’s answer to the population imbalance with, say, China or Southeast Asia. But new studies at RAND had dismissed this possibility, showing that nuclear weapons would be ineffective against guerrilla forces in Southeast Asia and would visit enormous collateral damage upon friendly population centers. So the military would need to let go of President Eisenhower’s preferred strategy of massive retaliation as America’s basic posture.

The Air Force would have to stop relying on aircraft designed purely to deliver nuclear weapons. Instead, it would need to find ways of fighting men, tanks, rockets, and airplanes—all without nuclear weapons. A decade earlier the Navy had bitterly opposed the Air Force’s claims that the era of aircraft carriers and battleships had ended. Now it seemed that the Navy had been right. The new conventional wisdom, which President Kennedy (a former Navy man) soon would establish as the doctrine of “flexible response,” was that the nations with the greatest range, flexibility, and cleverness in weapons systems would stand strongest. This meant conducting research on weapons at various levels of destruction up to and including nuclear bombs and being creative about their uses.

It also meant combining modes of warfare across scientific disciplines. Geophysical and biological knowledge might be united, for example, in developing dispersal mechanisms for pathogens. In trying to achieve large area coverage, one might fall back on cloud-seeding techniques—with the important difference that the “seeds” would not be silver iodide to cause rain but pathogens to spread disease far and wide. For example, certain phenomena in air masses, such as “Polar Outbreaks” (thrusts of cold air from the poles toward the equator), seemed to have great potential for such seeding, especially given the Soviet Union’s meteorological vulnerability from the north.

Military research embraces science fiction

The post-Sputnik national pall of gloom encouraged American scientists to explore unorthodox weapons, and they left no stone unturned. The U.S. military forged ahead with research on weapons using radiation, particle beams, nuclear energy, and kinetic energy. The Army Chemical Corps even investigated the use of lysergic acid diethylamide (LSD) and cannabis as non-lethal, incapacitating agents. The National Academy of Sciences noted this approvingly in 1958 and suggested that the Air Force begin administering LSD to airmen as soon as possible, to judge whether to add it to the arsenal of chemical weapons.

With so many wide-ranging ideas being vetted, NATO allies worried that the Americans were moving in too many directions at once. It was fine to support science in the United States and to speak grandly about possibly controlling forces of nature—but which ideas could be incorporated into actual NATO war plans? In 1960 NATO members agreed to convene a special group of scientists and military leaders to assess the long-term prospects of war. They wanted to know what would really be feasible by the 1970s, and what was just science fiction.

This kind of science forecasting was not just a matter of intelligent people guessing the future. By 1960 it had a distinguished history of shaping policy, particularly in some parts of the American military establishment. The Air Force, for example, understood in the 1950s that much of its strength relied on continuous research and development (R&D). Toward the end of World War II, General Henry “Hap” Arnold, commander of the then-Army Air Force, famously said that “for twenty years the Air Force was built around pilots, pilots, and more pilots. . . . The next twenty years is going to be built around scientists.” Throughout the Cold War, such brain trusts—in think tanks like RAND, secret groups like JASON, and many others—exercised a remarkable influence on policies.

When NATO tried, in 1960, to estimate the next 10 to 15 years of weapons, it enlisted the leadership of Theodore von Kármán, the grand old man of science forecasting. By then he was 79 years old. Born in Hungary, von Kármán had been one of the world’s foremost experts in aerodynamics. He even had helped the Austrian military design aircraft during the First World War. In 1929 he came to the United States to head up an aeronautical laboratory at Caltech, helping to kick-start the aviation industry in southern California. Acting as scientific advisor to United States air forces during World War II, von Kármán had initiated a long-term study of air power that amassed some of the best brains in physics and aeronautics. The resultant report, Where We Stand, became a road map for postwar air power research. In subsequent years, von Kármán repeated this process with other studies, and in fact he chaired the 1958 secret committee advising the Air Force, under the auspices of the National Academy of Sciences. In 1960 he embarked on a study that would be the capstone of his long career: NATO’s attempt to grasp the future face of battle over the entire earth.

Battle: earth

Known simply as the Von Kármán Committee, the new group included the chief scientific advisor of each national defense organization in the United States, Britain, Canada, France, and West Germany. With several working groups of scientists under them, they ran the gamut of new weapons in an era of “total war.” They covered the typical range of military subjects: aircraft, weaponry, and ships. But they also delved deeply into the implications of the global physical environment, particularly in light of the extraordinary size of thermonuclear weapons, the global reach of ballistic missiles, and the extent of global monitoring begun during the International Geophysical Year.

The buzzword of the IGY had been “synoptic.” Taken literally, it meant observing more than one place at the same time—viewing together. The IGY’s concept was to take a huge number of observations, spread out over a variety of geophysical disciplines and geographic areas, all within an 18-month period. Doing so would provide a portrait of the earth that was more true and comprehensive than anything ever attempted.

The Von Kármán Committee adopted the word “synoptic” too, but applied it to weapons. Weapons of a “synoptic scale” meant control and domination of whole physical systems. In military shorthand, the word synoptic called to mind vastness, encompassing large portions of the earth—or perhaps all of it. The IGY had brought this idea into military planners’ field of vision. But while the IGY was concerned with synoptic-scale measurement, NATO was concerned with synoptic-scale manipulation.

Once they began to meet, the members of the Von Kármán Committee realized that they all agreed on at least one thing: the global observations initiated in the IGY would have to continue indefinitely. The geophysical factors of modern war involved knowledge of an operational environment—in other words, how would the sea, land, or air affect troops and ships? NATO forces needed to be able to operate in any kind of environment. If it was on planet Earth, NATO should be prepared to fight there and win.

In fact the U.S. armed services already were developing environment-specific training centers to give American forces mastery of three classes of extreme conditions: polar, desert, and jungle. Given that the northern polar region was “the only large uncommitted area lying between the world’s strongest antagonists,” polar operations weighed heavily on defense planners’ minds. Already polar and Arctic training centers existed at locations in Greenland, Canada, and in the state of Alaska. The United States also operated a desert warfare center in Yuma, Arizona. Still needed were centers approximating Mediterranean conditions and tropical ones.

To take advantage of the apparent shrinkage of the earth due to ballistic missiles, NATO advisors also pointed out the need to revolutionize the field of geodesy—earth measurement. Mapmakers relied on data taken from a variety of oceanic or terrestrial expeditions, sometimes decades or more old. No one had seen the earth from space, much less taken accurate measurements based on satellites. Intercontinental ballistic missiles would require precision. But NATO literally did not know where the Soviet Union was. “On a world wide scale, we are not sure of the position of North America in relation to the Eurasian continent.” Knowledge of anything in the Southern Hemisphere was even less accurate. The only decent data came from the Americas, Western Europe, Japan, and some of the former and current European colonial territories. The Soviets could target the West with accuracy, but the West could not do the same. Any kind of exact targeting of the Soviet Union would prove impossible before satellites could take comprehensive measurements. In the meantime, constant earth measurement from the air would prove essential. Fortunately, international scientific projects were providing that data.

The IGY had convinced scientists and military planners of the usefulness of synoptic data collection. If done in real time, or close to it, data collection could be automated and collected over a large territory, perhaps even globally. Individual scientists might never analyze the vast amounts of data, but the data could be fed into computers in order to monitor and predict environmental conditions. Already the Americans were working on an anti-submarine warfare “environmental prediction system.” It collected oceanographic information—to estimate sonar performance—and combined it with meteorological information to predict future oceanographic conditions.

Had the members of the Von Kármán Committee been military historians, there would have been little doubt about what they would cast as the “decisive moment” in the history of global strategy. Time and again they called to mind the changes brought about by the advent of earth-orbiting satellites. It would prove to be, they believed, a dividing line between military eras. It promised total monitoring of the global environment, a vision of the future that was pervasive across the range of sciences and military operations. By 1970, these NATO advisors predicted, scientists would be able to identify and track thunderstorms as they occurred all over the entire earth and to keep the earth’s radiation under constant surveillance. Old charts would be discarded in favor of a constantly refreshing set of data beamed down from the heavens. Automated data systems would be necessary to achieve accuracy of measurement and improved forecasting. As the committee put it: “The concept of inaccessible geographical areas is no longer valid—observations over enemy-held, oceanic and uninhabited areas are as easily made as elsewhere.” Reliance on existing charts and data, collected laboriously by error-prone humans, rarely uniform from country to country, seemed archaic. New methods of continuous, uniform data collection of the oceans, land, and space would provide the kind of mastery of the global environment that the Von Kármán committee envisioned.

Climate change as warfare

Aside from this unprecedented ability to forecast conditions and improve global accuracy, the NATO science advisors also predicted ambitious, large-scale manipulation of the environment. The brass ring of military geophysics was weather control. Scientists already had achieved modest results in increasing rainfall or dissipating fogs. But these successes required optimal conditions and certainly could not be projected over a large area or from a long distance. But what about climate control?

In a 1956 Fortune article, mathematician John von Neumann had suggested that militaries would be able to make large-scale changes to climate. He pointed out various ways to alter oceans and seas. One was to blanket ice sheets with blackening agents, to absorb more light and melt them. If it could be done to Greenland, its ice sheet alone would raise sea levels by about 10 feet “and cause great discomfort to most world ports.” Another scheme was to divert the Gulf Stream, which would severely change the climate of Northern Europe. Still another idea was to dam the Bering Strait. Such alterations would have clear, long-term effects on world climate. And these changes seemed possible. Reflecting on von Neumann’s predictions, the NATO group believed that an extraordinary tool lay in the hands of military planners: the hydrogen bomb. “It is perhaps true,” the committee concluded, “that means presently within man’s reach could be employed so as to alter global climate for long periods.”

Given the later controversy about the role of carbon dioxide in inducing global climate change, the focus on the hydrogen bomb might seem surprising. But the reason for this was simple. Advised by physicists, the defense establishments of NATO’s strongest members believed that in order for “synoptic scale” weapons to be feasible, man had to achieve physical power that was comparable to nature’s power. The only tool that seemed likely to provide that was the hydrogen bomb. Although professional meteorologists had insisted that hydrogen bomb tests had not created the extreme winters of 1954, 1958, and 1962, these military advisors were less adamant. They knew that the energies of nature were vast, but felt they might be shaped by man. It seemed that the Soviets were working hard on the problem. Canadian scientists repeated the oft-heard rumor that the Soviets were planning large-scale manipulation of the oceans, along with drastic modification of climate, by damming up the Bering Strait. The Canadians reasoned: surely the Russians had in mind the use of nuclear bombs?

NATO scientists found the prospects of such power over nature intriguing. They called it environmental warfare. “This kind of warfare has the peculiarity that it could look like our image of nuclear war, or could be so subtle that the ‘weapons’ and ‘battles’ are hard to identify.” The enemy might undertake a vast engineering project to change the climate of a whole region, “leading gradually to economic ruin and loss of strength.” This could be done even without declaring war.

Once again ecological vulnerability emerged as a crucial area in need of study for military purposes. The NATO science advisors did not yet understand their true vulnerability to what they called “living weapons.” But new data were coming in. Since the late 1950s, American engineers had planned to use thermonuclear explosions to excavate a harbor in Alaska—a project dubbed “Chariot,” part of the broader “Plowshare” program. Beforehand they put together what today might be called an environmental impact statement and discovered that the effect on the Eskimos’ diet might not be as negligible as originally assumed. For this and other reasons, the project was scrapped.

But that knowledge had been useful for military thinking. Scientists had traced the pathway of radioactivity through the food chain. NATO scientists now used the example of the Eskimos’ ecosystem to argue for more advanced knowledge of ecological warfare. Within that ecosystem, Eskimos lived interdependently with seals, otter, fish, caribou, and plankton. If the plankton were all killed, an Eskimo’s ecological community would be utterly destroyed. “At best he would have to move,” the group pointed out. “At worst he would die.” This kind of thinking could be tailored to particular regions: “The people of Asia depend on rice and a very few other crops. Something like a lethal rice-rust or blight could make life in Asia much more difficult and perhaps untenable.”

As a weapon system, ecological links went further than killing—they also promised biological coercion. Destruction of the enemy need not be the goal. Getting rid of plankton, for example, would make the Eskimos’ entire food system collapse and force them to be entirely dependent on food supplied from outside the region. To achieve this, toxic agents “may be developed to attack essential links in various ecological chains.” The aim would be to shape an existing interdependent web along new lines, “to force the ecology to accept dependence on some crop or animal which cannot live at all in the homeland.” Doing this would put the victim in an extremely disadvantageous position, “leading to a gradual loss of power and position and inevitable vassalage.”

Von Kármán died in 1963, shortly after the first of his committee reports was completed. As colleagues remembered his contributions to aeronautics and to scientific advising, his death lent the committee’s findings an extraordinary authority within NATO. The reports had the air of a final act of service, and with the chairman gone they read as a foreboding, Cassandra-like vision of the future that military planners could ignore only at their peril. This was especially true of subjects that the committee felt it did not yet fully understand.

Environmental warfare becomes real

Environmental warfare had captured the imagination of the committee but the results had been unsatisfying. It seemed in keeping with the direction of science—toward global, synoptic-scale activities. Yet it was unclear how it might shape weaponry. The experience of the Von Kármán Committee established “environmental warfare” as a distinct concept, and it was not long before NATO reconvened the members to look into the subject more fully. They realized that there were commonalities between the work on geophysics and the ongoing work on radiological, biological, and chemical weapons. Both involved alterations to the natural world with potentially devastating human consequences. Military technology seemed on the verge of an unprecedented ability to tap the forces of nature on a massive scale.

Thus in late 1962, NATO summoned scientists and military planners to Paris to hammer out what might legitimately come out of “environmental warfare” and what the long-term consequences might be. The man who tried to fill Von Kármán’s shoes was another Hungarian, nuclear physicist Edward Teller, who joined the group as a “special advisor.” Known widely as the father of the hydrogen bomb, Teller already was deeply committed to using nuclear explosions for massive earthmoving projects, such as the construction of harbors. He also saw great potential in developing novel uses of nuclear weapons in wartime. Along with Teller, committee members were drawn from national defense establishments and from the U.S. Advanced Research Projects Agency (ARPA).

The central question almost always remained the same: were natural forces susceptible to human influence on a large, even global, scale? In methodical fashion, these military planners broke down environmental warfare into distinct spheres of possibility, corresponding to the layers of the earth and its atmosphere as they extend into space: lithosphere and hydrosphere (land and oceans), troposphere (lower atmosphere), stratosphere and ionosphere (upper atmosphere), and exosphere (outer space). Some of the earlier “wildcat” ideas were quickly dispensed with as impractical, such as using hydrogen bombs to melt the polar ice caps. But other wildcat ideas seemed feasible, particularly using nuclear weapons as triggers for tsunamis in the oceans, or for altering the weather.

One only had to open a newspaper to see what natural catastrophes could accomplish. In 1958, in Alaska’s Lituya Bay, there was a landslide so powerful that it carried the energy equivalent of a one-kiloton explosion. In May 1960, a wall of water smashed the Chilean coast over a stretch of several hundred miles, with wave heights of 5.5 to 13.5 meters. The Chilean earthquake sent tsunami waves across a large area of the Pacific at speeds in excess of 400 miles per hour. Even as far away as Hawaii, low-lying areas were flooded. Thousands of Chileans were killed, and millions were left homeless. Reporters described the relentless devastation:
The quakes went on for all of the week, demolishing or damaging thousands of homes and other buildings, and burying some small communities under landslides. Whole villages were swept away by tsunamis as high as twenty-four feet. The quakes were so violent that mountains disappeared, new lakes were formed and the earth’s surface dropped as much as 1,000 feet in twenty-five miles. The worst quake, last Sunday, released energy of 240 megatons, equal to that of 1,200 atomic bombs of the type dropped on Hiroshima and far more than the 174 megatons released by all the nuclear explosions to date.
Noting deaths all over the Pacific Rim, the New York Times reported that the Chilean earthquake “gave tragic testimony that in this age of the conquest of the atom and of triumphs in outer space man is still helpless against the vast and still largely unpredictable forces that frequently go berserk in his immediate environment—hurricanes, volcanoes and earthquakes.”

NATO saw it differently. Environmental cataclysms could become part of the alliance’s arsenal, with the help of a well-placed nuclear explosion. The cascading effects of energy release from the existing instabilities of nature could be, quite literally, earth-shattering. The power over nature was tempting: “The large engineering capability which is provided by multi-megaton nuclear weapons might open up the possibility of changing the course of ocean streams which are known to affect climate and continents.” Narrow straits could indeed be dammed up, as some feared the Soviets planned for the Bering Strait. Peninsulas could be turned into islands, changing the patterns of water flow and mixing. With enough nuclear bombs, the sea floor in some areas might be reconfigured entirely.

Even weather control seemed poised to make a quantum leap forward with the nuclear bomb as a tool. “Real weather control,” NATO scientists argued, “would mean control of synoptic scale disturbances—the centers of high pressure and low pressure found on the daily weather maps.” Such large-scale systems seemed inherently susceptible to influence, despite the huge energies required to do it. The sun imparted energy into the air masses constantly, but only some of it became kinetic energy. Most of the energy was stored, ready to be released. The results could be quite violent, as in the case of cyclones. A relatively small release of energy—say, a nuclear bomb—could trigger a much larger release of natural energy.

One reason that such widespread and even long-term changes in the earth’s systems seemed feasible—at least in theory—was the growing realization of how serious an effect humans already were having upon the upper atmosphere. High in the sky, major effects seemed well within NATO’s grasp. Nuclear explosions could create electron clouds some 70–90 kilometers up, disrupting high-frequency communication. One of the leading researchers on electron cloud disruption, Jerome Pressman, had been advising the U.S. Army Signal Corps, the Air Force, and ARPA on this subject for years. He told the rest of the environmental warfare committee that even a single nuclear burst could disrupt long-distance communication over a stretch of a thousand kilometers. If nuclear weapons were exploded in the atmosphere as a defense against incoming missiles, the range of this electron cloud would be vast indeed. High-frequency communication equipment and long-distance radar systems might be rendered useless.

Out in space—the exosphere—NATO saw great promise in the radiation belts that American and Soviet satellites had measured during the International Geophysical Year. The Van Allen belts were actually giant regions of charged particles trapped by the earth’s magnetic field. They were sources of intense, persistent radiation that endangered any equipment or living thing in space. Although the Van Allen belts were natural phenomena, similar belts could be created artificially by exploding a nuclear weapon at an altitude of at least 400 kilometers. Large bombs at even higher altitudes would create an extraordinarily powerful radiation environment in space. The belts would cloak the earth, challenging any exit or entrance by missile, satellite, or spacecraft. Because the belts would be trapped by the earth’s magnetic field, there would be holes in the radiation cloak at the north and south geomagnetic poles.
Whoever controlled these entry points would have comparatively easy access to space. That would make the poles even more important as strategic regions.
In fact, manipulation of the Van Allen belts already had begun. In 1958 the United States discovered that its high-altitude tests of “small kilotonnage” had created electron shells around the earth, about 60 miles thick. Because the operation in which these tests occurred had been dubbed “ARGUS,” the creation of the shell became the “ARGUS effect.” Just a few months prior to these NATO meetings, the United States detonated an even larger explosion at high altitude—the “Starfish” experiment. As Edward Teller reported, “this is the first time that the Argus effect was demonstrated on a really big scale.” An immense number of electrons were caught in the earth’s magnetic field and “are forming now a new Van Allen belt greater in electron density than any of the known Van Allen belts.” He confided that the electrons had damaged the solar cells in American satellites.

“Why not just drop a bomb?”

Despite their fascination with these weapons, the committee members struggled with possibilities that defied the logic of nuclear warfare. The military significance of triggering natural catastrophes was not readily apparent. “If the weapon can be exploded a few miles offshore, it can probably be delivered on, or close to, the target itself, and a far larger proportion of the energy available would be expended on the target and not on long lengths of unimportant coast line.” The same argument could be made against any effort to influence the flow of ocean currents and thus modify the world’s climate. Why not just drop a bomb on a city? It seemed more logical.

On the other hand, there might be great value in environmental devastation in a total war. NATO advisors had already moved beyond “cities” as targets and had begun to imagine much larger swathes of territory. Aside from the blast and radioactive contamination, thermonuclear bombs could have wide-ranging horrific consequences. Disruptions of dams and levees would lead to widespread flooding. Drowning and starvation would result, posing a serious threat to those who managed to survive the bombs.

The most ghastly environmental threat was the prospect of large-scale fire. In Whole World on Fire (2004), Lynn Eden has written that military planners routinely ignored the consequences of huge firestorms caused by a nuclear explosion’s thermal radiation. She suggests that this led nuclear strategists to underestimate the catastrophic effects of nuclear explosions throughout the Cold War. While war plans typically focused on blast effects, not everyone ignored the totality of death and destruction from fires. Some military planners considered mass fire part of environmental warfare. In the early 1960s, scientists and military planners at the highest levels of NATO faced a stomach-churning analysis that cast such fires as a way of arming the countryside against the enemy even after his cities were destroyed.

These fires would instantaneously ignite a huge area due to the explosion’s initial thermal radiation, regardless of blast effects. Rather than just use bombs directly against cities, one could explode a large bomb of about 100 megatons high in the atmosphere, at about 80 kilometers. Doing so would maximize the amount of thermal radiation that would reach the earth. Such radiation would ignite flammable material instantly, over an area of nearly a million square kilometers. As a point of comparison, the largest recorded forest fire in the United States had occurred in 1871 in Wisconsin and Michigan; it claimed 1,683 lives and spread over 15,000 square kilometers. Setting fire to forests over an area of a million square kilometers would pose intractable problems to an enemy. Outside the bombed-out cities, the countryside would provide no shelter, no food, and no hope of survival.
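A rough check of the committee’s numbers (my own estimate, not NATO’s, assuming about a third of the yield arrives at the ground as thermal radiation) suggests they were at least internally consistent:

$$
\begin{aligned}
E_{\text{thermal}} &\approx \tfrac{1}{3}\times 100\ \text{Mt} \approx \tfrac{1}{3}\,(4.2\times10^{17}\ \text{J}) \approx 1.4\times10^{17}\ \text{J},\\
\Phi &\approx \frac{1.4\times10^{17}\ \text{J}}{10^{6}\ \text{km}^{2}} = \frac{1.4\times10^{17}\ \text{J}}{10^{12}\ \text{m}^{2}} \approx 1.4\times10^{5}\ \text{J/m}^{2} \approx 3\ \text{cal/cm}^{2},
\end{aligned}
$$

a fluence in the range commonly cited for igniting dry tinder and paper. Geometry cooperates too: from 80 kilometers up, the horizon lies about $\sqrt{2R_\oplus h}\approx 1{,}000$ kilometers away, so a million square kilometers sits comfortably within a single burst’s line of sight.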

A fire from thermal radiation would differ from a typical forest fire because it would not need to spread—instead, the whole area would go up in flames at the same time. Oxygen would rapidly deplete, leaving any survivors suffocating to death. It would be impossible to run from it. Rushes of air would create firestorms with “strong winds of up to hurricane force,” far more intense than the deadly firestorms created in German and Japanese cities during World War II. Edward Teller guessed that the energy released in a fire would exceed that of the nuclear explosion, roughly the equivalent of a thousand megatons. “This is the most violent and wide-spread environmental change which can be expected from a nuclear attack,” he said. If total war were the goal, fires from thermal radiation could achieve it on a continental scale.

These discussions, recorded for posterity in NATO meeting minutes, have a surreal feel to them. Scientists argued about whether hydrogen bombs were more effective as triggers of vast environmental events, or if they should just be dropped directly on their targets. Scientists quibbled over the extent of damage from a fire-raising weapon. Some doubted, for example, that hurricane-force winds would ensue. It was difficult to argue with the conclusion, however: “The immediate result would be beyond all experience.” But some insisted that it would only “likely” be beyond all experience.

Such intellectualized detachment from human experience reached new heights when the long-term ecological consequences of nuclear weapons were imagined. The NATO group recognized that using nuclear weapons in this way might have severe consequences for the earth in the long run. But while acknowledging that the effect on weather and climate might be significant, scientists had little data with which to generate specific predictions. As for the devastation of the land, NATO was confident that a succession of vegetation “would sooner or later re-establish itself, and over a few decades there would be some ecological recovery.”

The only thing not in doubt in these discussions was that maximizing human death was the principal goal. Which was better, Teller and his colleagues asked—drowning villages along the coast, igniting the countryside with thermal radiation, or simply laying waste to a city? Should humans be contaminated through the food chain, or beaten into submission through ecological dependence? While he praised the ingenuity of these wildcat ideas, Teller’s own preference was to bomb cities. If death and devastation were the goals, he reasoned, why not keep it simple? Mammals, including humans, were more sensitive to radioactivity than insects, seed plants, or bacteria. It made little sense to attempt to contaminate man through these less susceptible organisms when the bomb would do the trick. “Thus the most economic way to attack populations with nuclear radiation,” the committee concluded, “is to do so directly rather than through some element of their surroundings.”

For many in NATO, looking at the world as a zero-sum game between the nuclear-armed United States and the Soviet Union, environmental warfare seemed like an inefficient sideshow. As interesting as ocean manipulation and weather control might be, nuclear explosions would be required to produce them. In that case, presumably a real war would have begun, and the enemy could be bombed directly without resorting to exotic methods such as these. Even in the case of biological, radiological, and chemical weapons, changing the environment would be a more circuitous route than attacking directly.

In trying to imagine uses of environmental weapons, military analysts working with NATO confronted the same question that has stood at the center of environmental issues ever since: can human actions have long-lasting, detrimental consequences upon the earth? As an advocate of peacetime nuclear testing, Teller had reason to minimize the long-term impacts of human action, particularly nuclear fallout. He spoke at length to the committee about how some scientists had exaggerated these effects, and his point of view prevailed.

The NATO committee concluded that the dangers of sickness and disease from contamination “are no worse than the other hazards which would have to be faced by the survivors of a nuclear war.” As for the long-term genetic effects upon future generations, the committee fell back on the line that the ultimate effects could not be predicted with certainty.

Nevertheless, some on the committee were convinced that humans were capable of making large alterations to the environment. Throughout the Von Kármán reports were repeated references to unpredictable consequences of human action on the atmosphere. Increasing or decreasing the ozone concentration in the atmosphere was certainly possible, altering the amount of ultraviolet light reaching the earth. Deliberate creation of an ozone hole might confuse surveillance systems; degrade aircraft materials such as rubber, plastic, and glass; and harm humans and crops. Less purposeful might be the introduction of chemicals from rocket fuel or other sources, resulting in “large inadvertent changes” in atmospheric properties.

NATO concluded its assessment of environmental warfare with a warning that major changes might already be under way. “Much of the military planning of today assumes that the earth’s atmosphere will remain substantially as it is,” it wrote.


Reprinted from “Arming Mother Nature: The Birth of Catastrophic Environmentalism” by Jacob Darwin Hamblin with permission from Oxford University Press USA. Copyright © Oxford University Press 2013

Friday, April 12, 2013

In Texas, Police in Schools Criminalize 300,000 Students Each Year




Civil Liberties  


The "good guy with a gun" seems to do a lot more policing than protecting.

In Texas, hundreds of thousands of students are winding up in court for committing very serious offenses such as cursing or farting in class. Some of these so-called dangerous criminals (also known as teenagers) will face arrest and even incarceration, like the honors student who spent a night in jail for skipping class, or the 12-year-old who was arrested for spraying perfume on her neck. These cases have at least one thing in common: all were handled by special police officers walking a controversial beat, the hallways and classrooms of public schools.

As political pressure from both sides of the aisle mounts to increase police presence in American schools, evidence suggests adding armed guards will only thrust more disadvantaged youth into the criminal justice system. Civil rights groups say policing our schools will further the institutionalization of what's known as the "school-to-prison pipeline."

To understand the potential consequences of putting police inside public schools, we can take a look at Texas, where students face one of the most robust school-to-prison pipelines in the country. According to the youth advocacy group Texas Appleseed, school officers issued 300,000 criminal citations to students in 2010, some handed to children as young as six years old.

As the New York Times notes, Texas Appleseed and a local NAACP chapter filed a complaint in February against a school district with a particular knack for criminalizing children, especially minorities. The complaint says Bryan Independent School District, of Texas’ Brazos County, disproportionately ticketed black students for misdemeanors, potentially violating the Civil Rights Act of 1964. Black students accounted for 46 percent of tickets issued from 2011 to 2012, despite making up only 21 percent of the student body.

Most of the criminal citations levied against students were for “Class C” misdemeanors, compelling them to miss classes in order to attend court, and often to face additional disciplinary action from the district. As the complaint notes, “These students can then face sentences including fines, court costs, community service, probation and mandatory participation in ‘First Offender’ programs.”

The complaint also adds that the problems often don’t end there. If students fail to appear in court, or if their parents can’t afford to pay fines, then the state issues an arrest warrant for them when they turn 17. Thus, these tickets “can follow students past high school into their adult lives with many of the same consequences as a criminal conviction for a more serious offense, including having to report their convictions on applications for college, the military or employment.”

Advocacy groups add that many behavioral problems warranting tickets in Texas schools seem rather trivial for something that can lead to a criminal conviction. For example, some “Class C” misdemeanors under the state’s penal code include using profanity, making offensive gestures, creating “by chemical means” an “unreasonable odor,” and “making unreasonable noise in a public place.” In other words, yelling, farting, wearing Axe body spray and generally being a teenager are officially illegal in Texas.

Many commentators and several Democratic lawmakers scoffed when NRA executive vice president Wayne LaPierre suggested in the wake of the Newtown shooting that putting armed guards in schools is “the one thing that would keep people safe,” notoriously adding that “the only thing that stops a bad guy with a gun is a good guy with a gun.” Yet, not long after LaPierre’s press conference, the White House released a plan calling for an additional 1,000 “specially trained police officers that work in schools.” And just last week, an NRA task force released a report fleshing out its proposal to put armed guards in every school. The head of that task force, former GOP Congressman Asa Hutchinson, announced his intention to run for Arkansas governor days after the report was released.

"Obviously, we believe [armed guards] will make a difference in the various layers that make up school safety," said Asa Hutchinson in a news conference.

Several academics and judges dispute Mr. Hutchinson’s claim, agreeing with Texas Appleseed’s reports that police in schools turn them less into safe havens than into juvenile detention centers.

“There is no evidence that placing officers in the schools improves safety,” University of Maryland criminologist Denise C. Gottfredson told the Times. “And it increases the number of minor behavior problems that are referred to the police, pushing kids into the criminal system.”

Even Texas Supreme Court Chief Justice Wallace B. Jefferson called out his state for its role in the school-to-prison pipeline. "We are criminalizing our children for nonviolent offenses," he said in a biennial address on the state of the judiciary, referring to the 300,000 or so tickets issued to students in Texas schools each year.
 
Steven Hsieh is an editorial assistant at AlterNet and writer based in Brooklyn. Follow him on Twitter @stevenjhsieh.

Wednesday, April 3, 2013

Wikileaks Was Just a Preview: We're Headed for an Even Bigger Showdown Over Secrets

Rolling Stone 

POLITICS

U.S. Army Private Bradley Manning (Alex Wong/Getty Images)

I went yesterday to a screening of We Steal Secrets, Oscar-winning director Alex Gibney's brilliant new documentary about Wikileaks. The movie is beautiful and profound, an incredible story that's about many things all at once, including the Shakespearean narrative that is the life of Julian Assange, a free-information radical who has become an uncompromising guarder of secrets.

I'll do a full review in a few months, when We Steal Secrets comes out, but I bring it up now because the whole issue of secrets and how we keep them is increasingly in the news, to the point where I think we're headed for a major confrontation between the government and the public over the issue, one bigger in scale than even the Wikileaks episode.

We've seen the battle lines forming for years now. It's increasingly clear that governments, major corporations, banks, universities and other such bodies view the defense of their secrets as a desperate matter of institutional survival, so much so that the state has gone to extraordinary lengths to punish and/or threaten to punish anyone who so much as tiptoes across the informational line.

This is true not only in the case of Wikileaks – and especially the real subject of Gibney's film, Private Bradley Manning, who in an incredible act of institutional vengeance is being charged with aiding the enemy (among other crimes) and could, theoretically, receive a death sentence.



There's also the horrific case of Aaron Swartz, a genius who helped write the RSS specification at the age of 14 and later helped build Reddit, who earlier this year hanged himself after the government threatened him with 35 years in jail for downloading a bunch of academic documents from an MIT server. Then there's the case of Sergey Aleynikov, the Russian computer programmer who allegedly stole the high-frequency trading program belonging to Goldman Sachs (Aleynikov worked at Goldman), a program which prosecutors in open court admitted could, "in the wrong hands," be used to "manipulate markets."

Aleynikov spent a year in jail awaiting trial, was convicted, had his sentence overturned, was freed, and has since been re-arrested by a government seemingly determined to make an example out of him.



And most recently, there's the Matthew Keys case, in which a Reuters social media editor was charged by the government with conspiring with the hacker group Anonymous to alter a Los Angeles Times headline in December 2010. The change in the headline? It ended up reading, "Pressure Builds in House to Elect CHIPPY 1337," Chippy being the name of another hacker group accused of defacing a video game publisher's website.

Keys is charged with crimes that carry up to 25 years in prison, although the likelihood is that he'd face far less than that if convicted. Still, it seems like an insane amount of pressure to apply, given the other types of crimes (of, say, the HSBC variety) where stiff sentences haven't even been threatened, much less imposed.

A common thread runs through all of these cases. On the one hand, the motivations for these information-stealers seem extremely diverse: You have people who appear to be primarily motivated by traditional whistleblower concerns (Manning, who never sought money and was obviously initially moved by the moral horror aroused by the material he was seeing, falls into that category for me), you have the merely mischievous (the Keys case seems to fall in this area), there are those who either claim to be or actually are free-information ideologues (Assange and Swartz seem more in this realm), and then there are other cases where the motive might have been money (Aleynikov, who was allegedly leaving Goldman to join a rival trading startup, might be among those).

But in all of these cases, the government pursued maximum punishments and generally took zero-tolerance approaches to plea negotiations. These prosecutions reflected an obvious institutional terror of letting the public see the sausage factory locked behind the closed doors not only of the state, but of banks and universities and other such institutional pillars of society. As Gibney pointed out in his movie, this is a Wizard of Oz moment, where we are being warned not to look behind the curtain.

What will we find out? We already know that our armies mass-murder women and children in places like Iraq and Afghanistan, that our soldiers joke about smoldering bodies from the safety of gunships, that some of our closest diplomatic allies starve and repress their own citizens, and we may even have gotten a glimpse or two of a banking system that uses computerized insider-trading programs to steal, by manipulating markets like the NYSE, from everyone who has an IRA or a mutual fund or any stock at all.

These fervent, desperate prosecutions suggest that there's more awfulness under there, things that are worse, and a determination not to let us see what those things are. Most recently, we've seen that determination in the furor over Barack Obama's drone assassination program and the so-called "kill list" associated with it.

Weeks ago, Kentucky Senator Rand Paul – whom I've previously railed against as one of the biggest self-aggrandizing jackasses in politics – pulled a widely derided but, I think, absolutely righteous Frank Capra act on the Senate floor, executing a one-man filibuster of Obama's CIA nominee, John Brennan.

Paul had been mortified when he received a letter from Eric Holder refusing to rule out drone strikes on American soil in "extraordinary" circumstances like a 9/11 or a Pearl Harbor. Paul refused to yield until he extracted a guarantee that no American could be assassinated by a drone on American soil without first being charged with a crime.

He got his guarantee, but the way the thing is written doesn't fill one with anything like confidence. Eric Holder's letter to Paul reads like the legal disclaimer on a pack of unfiltered cigarettes:
Dear Senator Paul,
It has come to my attention that you have now asked an additional question: "Does the President have the authority to use a weaponized drone to kill an American not engaged in combat on American soil?" The answer to that question is no.
Sincerely,
Eric Holder
You could drive a convoy of tanker trucks through the loopholes in that letter. Not to worry, though: this past week, word came out via Congress (the White House won't tell us anything) that no Americans are on its infamous kill list. The National Journal's report on this story offered a similarly comical sort of non-reassurance:
The White House has wrapped its kill list in secrecy and already the United States has killed four Americans in drone strikes. Only one of them, senior al-Qaida operative Anwar al-Awlaki, was the intended target, according to U.S. officials. The others – including Awlaki's teenage son – were collateral damage, killed because they were too near a person being targeted.
But no more Americans are in line for such killings – at least not yet. "There is no list where Americans are on the list," House Intelligence Chairman Mike Rogers told National Journal. Still, he suggested, that could change.
"There is no list where Americans are on the list" – even the language used here sounds like a cheap Orwell knockoff (although, to be fair, so does V for Vendetta, which has unfortunately provided the model for the modern protest aesthetic). It's not an accident that so much of this story is starting to sound like farce. The idea that we have to beg and plead and pull Capra-esque stunts in the Senate just to find out whether or not our government has "asserted the legal authority" (this preposterous phrase is beginning to leak into news coverage with alarming regularity) to kill U.S. citizens on U.S. soil without trial would be laughable, were it not for the obvious fact that such lines are in danger of really being crossed, if they haven't been crossed already.

This morning, an Emory University law professor named Mary Dudziak wrote an op-ed in the Times in which she pointed out several disturbing aspects of the drone-attack policy. It's bad enough, she writes, that the Obama administration is considering moving the program from the CIA to the Defense Department (which, Dudziak notes, "would do nothing to confer legitimacy to the drone strikes. The legitimacy problem comes from the secrecy itself — not which entity secretly does the killing."). It's even worse that the administration is citing Nixon's infamous bombing of Cambodia as part of its legal precedent.
But beyond that, Obama's lawyers used bad information in their white paper:
On Page 4 of the unclassified 16-page "white paper," Justice Department lawyers tried to refute the argument that international law does not support extending armed conflict outside a battlefield. They cited as historical authority a speech given May 28, 1970, by John R. Stevenson, then the top lawyer for the State Department, following the United States' invasion of Cambodia.
Since 1965, "the territory of Cambodia has been used by North Vietnam as a base of military operations," he told the New York City Bar Association. "It long ago reached a level that would have justified us in taking appropriate measures of self-defense on the territory of Cambodia. However, except for scattered instances of returning fire across the border, we refrained until April from taking such action in Cambodia."
But, Dudziak notes, there is a catch:
In fact, Nixon had begun his secret bombing of Cambodia more than a year earlier. (It is not clear whether Mr. Stevenson knew this.) So the Obama administration's lawyers have cited a statement that was patently false.
Now, this "white paper" of Obama's is already of dubious legality at best. The idea that the President can simply write a paper expanding presidential power into extralegal assassination, without asking the explicit permission of, well, somebody, is absurd from the start. Add to that the complication of the paper being based in part on some half-assed, hastily cobbled-together, factually lacking precedent, and the Obama drone-attack rationale becomes like every rationale of blunt-force, repressive power ever written – plainly ridiculous, the stuff of bad comedy, like the Russian military superpower invading tiny South Ossetia cloaked in hysterical claims of self-defense.


The Wikileaks episode was just an early preview of the inevitable confrontation between the citizens of the industrialized world and the giant, increasingly secretive bureaucracies that support them. As some of Gibney's interview subjects point out in his movie, the experts in this field, the people who worked on information security in the Pentagon and the CIA, have known for a long time that the day would come when all of our digitized secrets would spill out somewhere.

But the secret-keepers got lucky with Wikileaks. They successfully turned the story into one about Julian Assange and his personal failings, and headed off the confrontation with the major news organizations that were, for a time, his allies.
But that was just a temporary reprieve. The secrets are out there, and everyone from hackers to journalists to U.S. senators is digging in search of them. Sooner or later, there's going to be a pitched battle, one where the state won't be able to peel off one lone Julian Assange or Bradley Manning and batter him into nothingness. Next time around, it'll be a Pentagon Papers-style constitutional crisis, in which the public's legitimate right to know is pitted head-to-head against presidents, generals and CEOs.

My suspicion is that this story will turn out to be less of a simplistic narrative about Orwellian repression than a mortifying journey of self-discovery. There are all sorts of things we both know and don't know about the processes that keep our society running. We know children in Asia are being beaten to keep our sneakers and furniture cheap, we know our access to oil and other raw materials is being secured only by the cooperation of corrupt and vicious dictators, and we've also known for a while now that the anti-terror program they say we need to keep our airports and reservoirs safe involves mass campaigns of extralegal detention and assassination.

We haven't had to openly ratify any of these policies because the secret-keepers have done us the favor of making these awful moral choices for us.
But the stink is rising to the surface. It's all coming out. And when it isn't Julian Assange the next time but The New York Times, Der Spiegel and The Guardian standing in the line of fire, the state will probably lose, just as it lost in the Pentagon Papers case, because those organizations will be careful to publish only material clearly in the public interest – there's no conceivable legal justification for keeping us from knowing the policies of our own country (although stranger things have happened).

When that happens, we'll be left standing face-to-face with the reality of how our state functions. Do we want to do that? We still haven't taken a very close look at even the Bradley Manning material, and my guess is that's because we just don't want to. There were thousands of outrages in those files, any one of which would have caused a My Lai-style uproar decades ago.

Did you hear the one about how American troops murdered four women and five children in Iraq in 2006, including a woman over 70 and an infant under five months old, with all of the children under five years old? All of them were handcuffed and shot in the head. We later called in an airstrike to cover it up, apparently. But it barely registered as a blip in the American consciousness.

What if we're forced to look at all of this for real next time, and what if it turns out we can't accept it? What if murder and corruption are what's holding it all together? I personally don't believe that's true – I believe it all needs to come out, that we need to rethink everything together, and that we can find a less totally evil way of living – but this is going to be the implicit argument from the secret-keeping side when this inevitable confrontation comes. They will say to us, in essence, "It's the only way. And you don't want to know." And a lot of us won't.
It's fascinating, profound stuff. We don't want to know, but increasingly it seems we can't not know, either. Sooner or later, something is going to have to give.