By the way, if you haven't read "The Day After Roswell" by Army Colonel Philip Corso, who served under President Eisenhower, it is the most definitive book I've ever read on the subject.
begin quote from:
http://www.galactic.no/rune/spesBoker/the_day_after_roswell.pdf
CHAPTER 12
The Integrated Circuit Chip: From the Roswell Crash Site to Silicon
Valley
With the night-vision image intensifier project under way at Fort Belvoir and the Project
Horizon team trying to swim upstream against the tide of civilian management of the U.S. space program, I
turned my attention to the next of the Roswell crash fragments that looked especially intriguing: the charred
semiconductor wafers that had broken off some larger device. I hadn’t made these my priorities at first, not
knowing what they really were, until General Trudeau asked me to take a closer look.
“Talk to some of the rocket scientists down at Alamogordo about these things, Phil,” he said.
“I think they’ll know what we should do with them.”
I knew that in the days immediately following the crash, General Twining had met with the Alamogordo
group of the Air Materiel Command and had described some of the debris to them. But I didn’t know how
detailed his descriptions were or whether they even knew about the wafers we had in our file.
“I want to talk to some of the scientists up here, too,” I said. “Especially, I want to see some of
the engineers from the defense contractors. Maybe they can figure out what the engineering
process is for these things.”
“Go over to Bell Labs, Phil,” General Trudeau also suggested. “The transistor came out of
their shop and these things look a lot like transistorized circuits.”
I’d heard that General Twining had worked very closely with both Bell Labs and Motorola on
communications research during the war, afterwards at the Alamogordo test site for V2 missile launches,
and after the Roswell crash. Whether he had brought them any material from the crash or showed them the
tiny silicon chips was a matter of pure speculation. I only know that the entire field of circuit miniaturization
took a giant leap in 1947 with the invention of the transistor and the first solid state components.
By the late 1950s, transistors had replaced the vacuum tube in radios and had turned the wall-sized wooden
box of the 1940s into the portable plastic radio you could hear blaring away at the shore on a hot July
Sunday. The electronics industry had taken a major technological jump in less than ten years, and I had to
wonder privately whether any Roswell material had gotten out that I didn’t know about prior to my taking
over Foreign Technology in 1961. I didn’t realize it at first when I showed those silicon wafers to General
Trudeau, but I was to become very quickly and intimately involved with the burgeoning computer industry
and a very small, completely invisible, cog in an assembly line process that fifteen years later would result
in the first microcomputer systems and the personal computer revolution. Over the course of the years
since I joined the army in 1942, my career took me through the stages of vacuum-tube-based devices, like
our radios and radars in World War II; to component chassis, large circuitry units that, if they went down,
could be changed in sections; then to smaller sections; and finally to tiny transistors and transistorized
electronic components. The first army computers I saw were
room-sized, clanking vacuum tube monsters that were always breaking down and, by today’s standards,
took an eternity to calculate even the simplest of answers. They were simply oil-filled data pots. But they
amazed those of us who had never seen computers work before.
At Red Canyon and in Germany, the tracking and targeting radars we used were controlled by new
transistorized chassis computers that were compact enough to fit onto a truck and travel with the battalion.
So when I opened up my nut file and saw the charred, matte gray, quarter-sized, cracker-shaped silicon
wafers with the gridlines etched onto them like tiny printed lines on the cover of a matchbook, I could make
an educated guess about their function even though I’d never seen anything of the like before. I knew,
however, that our rocket scientists and the university researchers who worked with the development
laboratories at Bell, Motorola, and IBM would more than understand the primary function of these chips and
figure out what we needed to do to make some of our own.
But first I called Professor Hermann Oberth for basic background on what, if any, development might
have taken place after the Roswell crash. Dr. Oberth knew the Alamogordo scientists and probably received
secondhand the substance of the conversations General Twining had with his Alamogordo group in the
hours after the retrieval of the vehicle. And if General Twining described some of the debris, did he describe
these little silicon chips? And if he did, in those months when the ENIAC - the first working computer - was
just cranking up at the Aberdeen Ordnance Testing Grounds in Maryland, what did the scientists make of
those chips?
“They saw these at the Walker Field hangar,” Dr. Oberth told me. “All of them at Alamogordo
flew over to Roswell with General Twining to oversee the shipment to Wright Field.”
Oberth described what happened that day after the crash when a team of AMC rocket scientists pored over
the bits and pieces of debris from the site. Some of the debris was packed for flight on B-29s. Other
material, especially the crates that wound up at Fort Riley, was loaded onto deuce-and-a-halfs for the drive.
Dr. Oberth said that years later, von Braun had told him how those scientists, who literally had to stand in
line to have their equations processed by the experimental computer in Aberdeen, Maryland, were in awe of
the microscopic circuitry etched into the charred wafer chips that had spilled out of the craft.
Von Braun had asked General Twining whether anyone at Bell Labs was going to be contacted about this
find. Twining seemed surprised at first, but when von Braun told him about the experiments on solid state
components - material whose electrons don’t need to be excited by heat in order to conduct current -
Twining became intrigued. What if these chips were components of a very advanced solid state circuitry?
von Braun asked him. What if one of the reasons the army could find no electronic wiring on the craft were
the layers of these wafers that ran throughout the ship? These circuit chips could be the nervous system of
the craft, carrying signals and transmitting commands just like the nervous system in a human body.
General Twining’s only experience had been with the heavily insulated vacuum tube devices from World
War II, where the multistrand wires were covered with cloth. He’d never seen metallic printed chips like
these before. How did they work? he’d asked von Braun.
The German scientist wasn’t sure, although he guessed they worked on the same principle as the
transistors that laboratories were trying to develop to the point where they could be manufactured
commercially. It would completely transform the electronics industry, von Braun explained to General
Twining, nothing short of a revolution. The Germans had been desperately trying to develop circuitry of this
sort during the war, but Hitler, convinced the war would be over by 1941, told the German computer
researchers that the Wehrmacht had no need for computers that had a development timetable greater than
one year. They’d all be celebrating victory in Berlin before the end of the year.
But the research into solid state components that the Germans had been doing and the early work at Bell
Labs was nothing compared to the marvel that Twining had shown von Braun and the other rocket
scientists in New Mexico. Under the magnifying glass, the group thought they saw not just a single solid
state switch but a whole system of switches integrated into each other and comprising what looked like an
entire circuit or system of circuits. They couldn’t be sure because no one had ever seen anything even
remotely like this before.
But it showed them an image of what the future of electronics could be if a way could be found to
manufacture this kind of circuit on Earth. Suddenly, the huge guidance control systems necessary to control
the flight of a rocket, which, in 1947, were too big to be squeezed into the fuselage of the rocket, could be
miniaturized so that the rocket could have its own automatic guidance system. If we could duplicate what
the EBEs had, we, too, would have the ability to explore space. In effect, the reverse engineering of solid
state integrated circuitry began in the weeks and months after the crash even though William Shockley at
Bell Labs was already working on a version of his transistor as early as 1946.
In the summer of 1947, the scientists at Alamogordo were only aware of the solid state circuit research
under way at Bell Labs and Motorola. So they pointed Nathan Twining to research scientists at both
companies and agreed to help him conduct the very early briefings into the nature of the Roswell find. The
army, very covertly, turned some of the components over to research engineers for an inspection, and by
the early 1950s the transistor had been invented and transistorized circuits were now turning up in
consumer products as well as in military electronics systems. The era of the vacuum tube, the single piece
of eighty-year-old technology upon which an entire generation of communications devices, including
television and digital computers, was built, was now coming to a close with the discovery in the desert of an
entirely new technology.
The radio vacuum tube was a legacy of nineteenth century experimentation with electric current. Like many
historic scientific discoveries, the theory behind the vacuum tube was uncovered almost by chance, and
nobody really knew what it was or cared much about it until years later. The radio vacuum tube probably
reached its greatest utility from the 1930s through the 1950s, until the technology we discovered at Roswell
made it all but obsolete.
The principle behind the radio vacuum tube, first discovered by Thomas Edison in the 1880s while he was
experimenting with different components for his incandescent lightbulb, was that current, which typically
flowed in either direction across a conductive material such as a wire, could be made to flow in only one
direction when passed through a vacuum. This directed flow of current, called the “Edison effect,” is the
scientific principle behind the illumination of the filament material inside the vacuum of the incandescent
lightbulb, a technology that has remained remarkably the same for over a hundred years.
But the lightbulb technology that Edison discovered back in the 1880s, then put aside only to experiment
with it again in the early twentieth century, also had another equally important function. Because the flow of
electrons across the single filament wire went in only one direction, the vacuum tube was also a type of
automatic switch. Excite the flow of electrons across the wire and the current flowed only in the direction
you wanted it to. You didn’t need to throw a switch to turn on a circuit manually because the vacuum tube
could do that for you.
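The one-way behavior described here can be sketched as a toy model in code. This is purely illustrative, not real tube physics; the function name and units are my own invention.

```python
def ideal_diode(voltage):
    """Toy model of the 'Edison effect': current flows in one direction only.

    Returns a current (arbitrary units) proportional to voltage when
    forward-biased, and zero when reverse-biased -- the tube acting as
    an automatic switch.
    """
    return voltage if voltage > 0 else 0.0

print(ideal_diode(5.0))   # forward bias: conducts -> 5.0
print(ideal_diode(-5.0))  # reverse bias: blocks  -> 0.0
```

The asymmetry is the whole point: no one has to throw a switch, because the device itself only passes current one way.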
Edison had actually discovered the first automatic switching device, which could be applied to hundreds of
electronic products from the radio sets that I grew up with in the 1920s to the communications networks and
radar banks of World War II and to the television sets of the 1950s. In fact, the radio tube was the single
component that enabled us to begin the worldwide communications network that was already in place by
the early twentieth century.
Radio vacuum tubes also had another important application that wasn’t discovered until experimenters in
the infant science of computers first recognized the need for them in the 1930s and then again in the
1940s. Because they were switches, opening and closing circuits, they could be programmed to reconfigure
a computer to accomplish different tasks. The computer itself had, in principle, remained essentially the
same type of calculating device that Charles Babbage first invented in the 1830s. It was a set of internal
gears or wheels that acted as counters and a section of “memory” that stored numbers until it was their turn
to be processed. Babbage’s computer was operated manually by a technician who threw mechanical
switches in order to input raw numbers and execute the program that processed the numbers.
The simple principle behind the first computer, called by its inventor the “Analytical Engine,” was that the
same machine could process an infinite variety and types of calculations by reconfiguring its parts through
a switching mechanism. The machine had a component for inputting numbers or instructions to the
processor; the processor itself, which completed the calculations; a central control unit, or CPU, that
organized and sequenced the tasks to make sure the machine was doing the right job at the right time; a
memory area for storing numbers; and finally a component that output the results of the calculations to a
type of printer: the same basic components you find in all computers even today.
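Those five components can be sketched as a toy machine. The class and instruction names below are my own illustration, not Babbage's terminology (he called the processor the "mill" and the memory the "store").

```python
import operator

# The "mill": the arithmetic the processor can perform.
OPS = {"add": operator.add, "sub": operator.sub,
       "mul": operator.mul, "div": operator.truediv}

class ToyEngine:
    """A toy sketch of the Analytical Engine's five components."""
    def __init__(self):
        self.store = {}                      # memory area for storing numbers

    def run(self, program, inputs):
        self.store.update(inputs)            # input component: raw numbers in
        results = []
        for op, a, b, dest in program:       # control unit: sequence the tasks
            self.store[dest] = OPS[op](self.store[a], self.store[b])
            results.append((dest, self.store[dest]))
        return results                       # output component: the "printer"

engine = ToyEngine()
print(engine.run([("add", "u", "v", "w"), ("mul", "w", "v", "z")],
                 {"u": 2, "v": 3}))
# [('w', 5), ('z', 15)]
```

The same machine adds, subtracts, multiplies, or divides depending entirely on the program fed to it, which is the reconfigurability Babbage was after.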
The same machine could add, subtract, multiply, or divide and even store numbers from one arithmetical
process to the next. It could even store the arithmetical computation instructions themselves from job to job.
And Babbage borrowed a punch card process invented by Joseph Jacquard for programming weaving
looms. Babbage’s programs could be stored on series of punch cards and fed into the computer to control
the sequence of processing numbers. Though this may sound like a startling invention, it was Industrial
Revolution technology that began in the late eighteenth century for the purely utilitarian challenge of
processing large numbers for the British military. Yet, in concept, it was an entirely new principle in machine
design that very quietly started the digital revolution.
Because Babbage’s machine was hand powered and cumbersome, little was done with it through the
nineteenth century, and by the 1880s, Babbage himself would be forgotten. However, the practical
application of electricity to mechanical appliances and the delivery of electrical power along supply grids,
invented by Thomas Edison and refined by Nikola Tesla, gave new life to the calculation machine. The
concept of an automatic calculation machine would inspire American inventors to devise their own
electrically powered calculators to process large numbers in a competition to calculate the 1890 U.S.
Census.
The winner of the competition was Herman Hollerith, whose electrically powered calculator was a monster
device that not only processed numbers but displayed the progress of the process on large clocks for all to
see. He was so successful that the large railroad companies hired him to process their numbers. By the
turn of the century his company, the Computing-Tabulating-Recording Company, had become the
single largest developer of automatic calculating machines. By 1929, when Hollerith died, his company had
become the automation conglomerate, IBM.
Right about the time of Hollerith’s death, a German engineer named Konrad Zuse approached some of the
same challenges that had confronted Charles Babbage a hundred years earlier: how to build his own
version of a universal computing machine that could reconfigure itself depending upon the type of
calculation the operator wanted to perform. Zuse decided that instead of working with a machine that
operated on the decimal system, which limited the types of arithmetic calculations it could perform, his
machine would use only two numbers, 0 and 1, the binary system.
This meant that he could process any type of mathematical equation through the opening or closing of a
series of electromagnetic relays, switches that would act as valves or gates either letting current through or
shutting it off. These relays were the same types of devices that the large telephone companies, like the
Bell system in the United States, were using as the basis of their networks. By marrying an electrical power
supply and electric switches to the architecture of Babbage’s Analytical Engine and basing his
computations in a binary instead of a decimal system, Zuse had come up with the European version of the
first electrical digital computer, an entirely new device. It was just three years before the German invasion of
Poland and the outbreak of World War II.
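Zuse's insight, that any arithmetic reduces to relays opening and closing, can be sketched with a full adder built from gate-like operations. This is a generic illustration of binary addition, not a model of any particular Zuse machine.

```python
# Each "relay" either passes current (1) or blocks it (0). A full adder
# built from such gates shows how arithmetic reduces to switching.
def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add_binary(x, y, width=8):
    """Add two non-negative integers one bit at a time, relay-style."""
    carry, result = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add_binary(22, 19))  # 41
```

Only two states per switch are needed, which is why the binary system mapped so naturally onto electromagnetic relays.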
In the United States at about the same time as Zuse was assembling his first computer in his parents’ living
room, Harvard mathematics professor Howard Aiken was trying to reconstruct a theoretical version of
Babbage’s computer, also using electromagnetic relays as switching devices and relying on a binary
number system. The difference between Aiken and Zuse was that Aiken had academic credentials and his
background as an innovative mathematician got him into the office of Thomas Watson, president of IBM, to
whom he presented his proposal for the first American digital computer. Watson was impressed, authorized
a budget of $1 million, and, right before the attack on Pearl Harbor, the project design was started up at
Cambridge, Massachusetts. It was then moved to IBM headquarters in New York during the war.
Because of their theoretical ability to calculate large sets of numbers in a relatively short period of time,
digital computers were drafted into the war effort in the United Kingdom as code-breaking devices. By
1943, at the same time that IBM’s first shiny stainless steel version of Aiken’s computer was up and running
in Endicott, New York, the British were using their dedicated cryptanalytic Colossus computer to break
the German codes and decipher the code-creating ability of the German Enigma - the code machine that
the Nazis believed made their transmissions indecipherable to the Allies.
Unlike the IBM-Aiken computer at Harvard and Konrad Zuse’s experimental computer in Berlin, the
Colossus used radio vacuum tubes as relay switches and was therefore hundreds of times faster than any
experimental computer using electromagnetic relays. The Colossus was a true breakthrough because it
married the speed of vacuum tube technology with the component design of the Analytical Engine to create
the first modern-era digital computer.
The British used the Colossus so effectively that they quickly felt the need to build more of them to process
the increasingly large volume of encrypted transmissions the Germans were sending, ignorant of the fact
that the Allies were decoding every word and outsmarting them at every turn. I would argue even to this day
that the technological advantage the Allies enjoyed in intelligence gathering apparatus, specifically code
breaking computers and radar, enabled us to win the war despite Hitler’s initial successes and his early
weapon advantages. The Allies’ use of the digital computer in World War II was an example of how a
superior technological advantage can make the difference between victory and defeat no matter what kinds
of weapons or numbers of troops the enemy is able to deploy.
The American and British experience with computers during the war and our government’s commitment to
developing a viable digital computer led to the creation, in the years immediately following the war, of a
computer called the Electronic Numerical Integrator and Calculator, or ENIAC. ENIAC was the brainchild of
Howard Aiken and one of our Army R&D brain trust advisers, the mathematician John von Neumann.
Although it operated on a decimal instead of a binary system and had a very small memory, it relied on
radio vacuum tube switching technology. For its time it was the first of what today are called “number
crunchers.”
When measured against the way computers developed over the years since its first installation, especially
the personal computers of today, ENIAC was something of a real dinosaur. It was loud, hot, cumbersome,
fitful, and required the power supply of an entire town to keep it going. It couldn’t stay up for very long
because the radio tubes, always unreliable even under the best working conditions, would blow out after
only a few hours’ work and had to be replaced. But the machine worked, it crunched the numbers it was
fed, and it showed the way for the next model, which reflected the sophisticated symbolic architectural
design of John von Neumann.
Von Neumann suggested that instead of feeding the computer the programs you wanted it to run every time
you turned it on, the programs themselves could be stored in the computer permanently. By treating the
programs themselves as components of the machine, stored right in the hardware, the computer could
change between programs, or the routines of subprograms, as necessary in order to solve problems. This
meant that larger routines could be broken into subroutines, which themselves could be organized into
templates to solve similar problems. In complex applications, programs could call up other programs again
and again without the need of human intervention and could even change the subprograms to fit the
application. Von Neumann had invented block programming, the basis for the sophisticated engineering and
business programming of the late 1950s and 1960s and the great-great-grandmother of today’s
object-oriented programming.
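The stored-program idea can be sketched in a few lines: routines live in the machine's own memory next to the data, so one routine can call another without a human reloading anything. The routine names here are illustrative, not from any historical machine.

```python
# "Memory" holds the programs themselves as stored components.
# Each routine takes the memory (m) and an input (x).
memory = {
    "double":   lambda m, x: x * 2,
    "square":   lambda m, x: x * x,
    # A routine that calls other routines stored in the same memory,
    # with no human intervention between the steps:
    "pipeline": lambda m, x: m["square"](m, m["double"](m, x)),
}

print(memory["pipeline"](memory, 3))  # (3 * 2) ** 2 = 36
```

Because the subroutines are data in memory, the machine can swap or recombine them between jobs, which is the flexibility von Neumann's architecture bought.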
By 1947, it had all come together: the design of the machine, the electrical power supply, the radio vacuum
tube technology, the logic of machine processing, von Neumann’s mathematical architecture, and practical
applications for the computer’s use. But just a few years shy of the midpoint of the century, the computer
itself was the product of eighteenth and nineteenth century thinking and technology. In fact, given the
shortcomings of the radio tube and the enormous power demands and cooling requirements to keep the
computer working, the development of the computer seemed to have come to a dead end.
Although IBM and Bell Labs were investing huge sums of development money into designing a computer
that had a lower operational and maintenance overhead, it seemed, given the technology of the digital
computer circa 1947, that there was no place it could go. It was simply an expensive-to-build,
expensive-to-run, lumbering elephant at the end of the line. And then an alien spacecraft fell out of the skies over
Roswell, scattered across the desert floor, and in one evening everything changed.
In 1948 the first junction transistor - a microscopically thin silicon sandwich of n-type silicon, in which some
of the atoms have an extra electron, and p-type silicon, in which some of the atoms have one less electron -
was devised by physicist William Shockley. The invention was credited to Bell Telephone Laboratories,
and, as if by magic, the dead end that had stopped the development of the dinosaur-like ENIAC generation
of computers melted away and an entirely new generation of miniaturized circuitry began.
Where the radio tube circuit required an enormous power supply to heat it up, because heat generated the
electron flow, the transistor required very low levels of power and no warm-up time because the transistor
amplified the stream of electrons that flowed into its base. Because it required only a low level of current, it
could be powered by batteries. Because it didn’t rely on a heat source to generate current and it was so
small, many transistors could be packed into a very small space, allowing for the miniaturization of circuitry
components. Finally, because it didn’t burn out like the radio tube, it was much more reliable.
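The transistor's role as a current-controlled switch can be sketched with a deliberately crude model: a small base current lets a much larger collector current flow, capped by the supply. The gain and supply figures are arbitrary illustrations, not real device parameters.

```python
def transistor(base_current, gain=100.0, supply=10.0):
    """Toy model of a transistor as amplifier/switch.

    Collector current is the base current amplified by the gain,
    limited by the supply -- no heater, no warm-up time.
    """
    return min(base_current * gain, supply)

print(transistor(0.5))  # 10.0 -- saturated: the switch is fully "on"
print(transistor(0.0))  # 0.0  -- no base current: the switch is "off"
```

A tiny input controlling a larger output is what let battery-powered, densely packed circuits replace the hot, hungry tube.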
Thus, within months after the Roswell crash and the first exposure of the silicon wafer technology to
companies already involved in the research and development of computers, the limitations on the size and
power of the computer suddenly dropped like the removal of a roadblock on a highway and the next
generation of computers went into development. This set up for Army R&D, especially during the years I
was there, the opportunity for us to encourage that development with defense contracts calling for the
implementation of integrated circuit devices into subsequent generations of weapons systems.
More than one historian of the microcomputer age has written that no one before 1947 foresaw the
invention of the transistor or had even dreamed about an entirely new technology that relied upon
semiconductors, which were silicon-based and not carbon-based like the Edison incandescent tube. Bigger
than the idea of a calculating machine or an Analytical Engine or any combination of the components that
made up the first computers of the 1930s and 1940s, the invention of the transistor and its natural evolution
to the silicon chip of integrated circuitry was beyond what anyone could call a quantum leap of technology.
The entire development arc of the radio tube, from Edison’s first experiments with filament for his
incandescent lightbulb to the vacuum tubes that formed the switching mechanisms of ENIAC, lasted about
fifty years. The development of the silicon transistor seemed to come upon us in a matter of months. And,
had I not seen the silicon wafers from the Roswell crash with my own eyes, held them in my own hands,
talked about them with Hermann Oberth, Wernher von Braun, or Hans Kohler, and heard the reports
from these now dead scientists of the meetings between Nathan Twining, Vannevar Bush, and researchers
at Bell Labs, I would have thought the invention of the transistor was a miracle. I know now how it came
about.
As history revealed, the invention of the transistor was only the beginning of an integrated circuit technology
that developed through the 1950s and continues right through to the present. By the time I became
personally involved in 1961, the American marketplace had already witnessed the retooling of Japan and
Germany in the 1950s and Korea and Taiwan in the late 1950s through the early 1960s. General Trudeau
was concerned about this, not because he considered these countries our economic enemies but because
he believed that American industry would suffer as a result of its complacency about basic research and
development.
He expressed this to me on many occasions during our meetings, and history has proved him to be correct.
General Trudeau believed that the American industrial economy enjoyed a harvest of technology in the
years immediately following World War II, the effects of which were still under way in the 1960s, but that it
would soon slow down because R&D was an inherently costly undertaking that didn’t immediately
contribute to a company’s bottom line. And you had to have a good bottom line, General Trudeau always
said, to keep your stockholders happy or else they would revolt and throw the existing management team
right out of the company. By throwing their efforts into the bottom line, Trudeau said, the big American
industries were actually destroying themselves just like a family that spends all its savings.
“You have to keep on investing in yourself, Phil,” the General liked to say when he’d look
up from his Wall Street Journal before our morning meetings and remark about how stock
analysts always liked to place their value on the wrong thing.
“Sure, these companies have to make a profit, but you look at the Japanese and the Germans
and they know the value of basic research,” he once said to me.
“American companies expect the government to pay for all their research, and that’s what you
and I have to do if we want to keep them working. But there’s going to come a time when we
can’t afford to pay for it any longer. Then who’s going to foot the bill?”
General Trudeau was worrying about how the drive for new electronics products based upon miniaturized
circuitry was creating entirely new markets that were shutting out American companies. He said that it was
becoming cheaper for American companies to have their products manufactured for them in Asia, where
companies had already retooled after the war to produce transistorized components, than for American
companies, which had heavily invested in the manufacturing technology of the nineteenth century, to do it
themselves.
He knew that the requirement for space exploration, for challenging the hostile EBEs in their own territory,
relied on the development of an integrated circuit technology so that the electronic components of
spacecraft could be miniaturized to fit the size requirements of rocket-propelled vehicles. The race to
develop more intelligent missiles and ordnance also required the development of new types of circuitry that
could be packed into smaller and smaller spaces. But retooled Japanese and German industries were the
only ones able to take immediate advantage of what General Trudeau called the “new electronics.”
For American industry to get onto the playing field the basic research would have to be paid for by the
military. It was something General Trudeau was willing to fight for at the Pentagon because he knew that
was the only way we could get the weapons only a handful of us knew we needed to fight a skirmish war
against aliens only a handful of us knew we were fighting.
Arthur Trudeau was a battlefield general engaged in a lonely military campaign that national policy and
secrecy laws forbade him even to talk about. And as the gulf of time widened between the Roswell crash
and the concerns over postwar economic expansion, even the people who were fighting the war alongside
General Trudeau were, one by one, beginning to die away. Industry could fight the war for us, General
Trudeau believed, if it was properly seeded with ideas and the money to develop them. By 1961, we had
turned our attention to the integrated circuit.
Government military weapons spending and the requirements for space exploration had already heavily
funded the transistorized component circuit. The radars and missiles I was commanding at Red Canyon,
New Mexico, in 1958 relied on miniaturized components for their reliability and portability. New generations
of tracking radars on the drawing boards in 1960 were even more sophisticated and electronically intelligent
than the weapons I was aiming at Soviet targets in Germany. In the United States, Japanese and
Taiwanese radios that fit into the palm of your hand were on the market.
Computers like ENIAC, once the size of a small warehouse, now occupied rooms no larger than closets
and, while still generating heat, no longer blew out because of overheated radio vacuum tubes.
Minicomputers, helped by government R&D funding, were still a few years away from market, but were
already in a design phase. Television sets and stereophonic phonographs that offered solid state
electronics were coming on the market, and companies like IBM, Sperry-Rand, and NCR were beginning to
bring electronic office machines onto the market. It was the beginning of a new age of electronics, helped,
in part, by government funding of basic research into the development and manufacture of integrated circuit
products.
But the real prize, the development of what actually had been recovered at Roswell, was still a few years
away. When it arrived, again spurred by the requirements of military weapons development and space
travel, it caused another revolution.
The history of the printed circuit and the microprocessor is also the history of a technology that allowed
engineers to squeeze more and more circuitry into a smaller and smaller space. It’s the history of the
integrated circuit, which developed throughout the 1960s, evolved into large-scale integration by the early
1970s, very-large-scale integration by the middle 1970s, just before the emergence of the first real personal
computers, and ultra-large-scale integration by the early 1980s. Today’s 200-plus-megahertz, multigigabyte
hard drive desktop computers are the results of the integrated circuit technology that began in the 1960s
and has continued to the present. The jump from the basic transistorized integrated printed circuit of the
1960s to large-scale integration was made possible by the development of the microprocessor in 1971.
Once the development process of engineering a more tightly compacted circuit had been inspired by the
invention of the transistor in 1948, and fueled by the need to develop better, faster, and smaller computers,
it continued on a natural progression until the engineers at Intel developed the first microprocessor, a four
bit central processing unit called the 4004, in 1972.
This year marked the beginning of the microcomputer industry, although the first personal microcomputers
didn’t appear on the market until the development of Intel’s 8080ª. That computer chip was the heart of the
Altair computer, one of the first machines offered with a version of the high-level programming language
BASIC, which allowed non-engineers to program the machine and create applications for it. Soon
companies like Motorola and Zilog had their own microprocessors on the market, and by 1977 the MOS
Technology 6502-powered Apple II had arrived, joined by the Zilog Z80-based Radio Shack TRS-80, the
Commodore PET, the Atari, and the Heathkit.
Operationally, at its very heart, the microprocessor shares the same binary processing functions and large
arrays of digital switches as its ancestors, the big mainframes of the 1950s and 1960s and the
transistorized minis of the late 1960s and early 1970s. Functionally, the microprocessor also shares the
same kinds of tasks as Charles Babbage’s Analytical Engine of the 1830s: reading numbers, storing
numbers, logically processing numbers, and outputting the results. The microprocessor just puts everything
into a much smaller space and moves it along at a much faster speed. In 1978, Apple Computer began
selling one of the first floppy disk systems for home computers for data and program storage, kicking the
microcomputer revolution into a higher gear. Not only could users input data via a keyboard or tape
cassette player, they could store relatively large amounts of data, such as documents or mathematical
projections, on transportable, erasable, and interchangeable Mylar disks that the computer was able to
read. Now the computer reached beyond the electronics hobbyist into the workplace.
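The four functions the passage attributes to both Babbage’s Analytical Engine and the modern microprocessor — reading numbers, storing numbers, logically processing them, and outputting the results — can be illustrated with a toy interpreter. This is a hypothetical sketch for illustration only, not anything from the book; the instruction names (`read`, `add`, `print`) are invented:

```python
def run(program, inputs):
    """Execute a tiny list of (op, *args) instructions against a small store.

    Illustrates the four functions named in the text:
    reading, storing, processing, and outputting numbers.
    """
    store = {}            # storing numbers
    it = iter(inputs)
    output = []
    for op, *args in program:
        if op == "read":  # reading numbers from the input stream
            store[args[0]] = next(it)
        elif op == "add":  # logically processing numbers
            store[args[2]] = store[args[0]] + store[args[1]]
        elif op == "print":  # outputting the results
            output.append(store[args[0]])
    return output

# A four-instruction "program": read two numbers, add them, emit the sum.
result = run(
    [("read", "a"), ("read", "b"), ("add", "a", "b", "c"), ("print", "c")],
    [2, 3],
)
```

Whether the machine is room-sized or a fingernail-sized chip, this read/store/process/output loop is the same; only the scale and speed differ.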
By the end of the year, MicroPro’s introduction of WordStar, the first widely successful microcomputer word
processor, and Personal Software’s release of VisiCalc, the very first electronic spreadsheet, so transformed
the workplace that the desktop computer became a necessity for any young executive on his or her way up
the corporate ladder. And by 1984, with the introduction of the Apple Macintosh and its graphical user
interface, not only the workplace but the whole world looked like a very different place than it did in the
early 1960s.
Even Dr. Vannevar Bush’s concept of research organized not around a linear outline but around associative
links between ideas embedded in a body of text became a reality with the release of
a computer program by Apple called HyperCard.
It was as if, from 1947 to 1980, a fundamental paradigm shift took place in humankind’s ability to process
information. Computers themselves almost became something like a silicon-based life form,
inspiring the carbon-based life forms on planet Earth to develop them, grow them, and even help them
reproduce. Computer-directed process control programs are now in place in virtually all major industries,
software writes software, and neural-network-based expert systems learn from their own experience in
the real world. Current experiments to grow almost microscopically thin silicon-based chips
in the weightless environment of Earth orbit may be the forerunners of a time when automated orbital
factories routinely grow and harvest new silicon material for microprocessors more sophisticated than we
can even imagine at present.
If all of this were true, could it not be argued that the silicon wafers we recovered at Roswell were
the real masters and space travelers, and the EBE creatures their hosts or servants? Once implanted
successfully on Earth, our culture having reached a point of readiness through its development of the first
digital computers, would not the natural development stream, starting from the invention of the transistor,
have carried us to the point where we achieve a symbiotic relationship with the silicon material that carries
our data and enables us to become more creative and successful?
Maybe the Roswell crash, which helped us develop the technological basis for the weapons systems to
protect our planet from the EBEs, was also the mechanism for successfully implanting a completely alien
non-humanoid life form that survives from host to host like a virus, a digital Ebola that we humans will carry
to another planet someday. Or what if an enemy wanted to implant the perfect spying or sabotage
mechanism into a culture?
Then the implantation of the microchip based circuit into our technology by the EBEs would be the perfect
method. Was it implanted as sabotage or as something akin to the gift of fire? Maybe the Roswell crash in
1947 was an event waiting to happen, like poisoned fruit dropping from the tree into a playground. Once
bitten, the poison takes effect.
“Hold your horses, Phil,” General Trudeau would say when I would speculate too much.
“Remember, you’ve got a bunch of scientists you need to talk to, and the people at Bell Labs
are waiting to see your report when you’ve finished talking to the Alamogordo group.”
It was 1961 and the miniaturization of computer and electronic circuitry had already begun, but my report to
the general and appointments he was arranging for me at Sperry-Rand, Hughes, and Bell Labs were for
meetings with scientists to determine how their respective companies were proceeding in applying
miniaturized circuitry to designs for weapons systems. The inspiration for microcircuitry had fallen out of
the sky at Roswell and set the development of digital computers off in an entirely new direction. It was my
job now to use the process of weapons development, especially the development of guidance systems for
ballistic missiles, to drive the application of microcircuitry to these new generations of
weapons.
General Trudeau and I were among the first scouts in what would be the electronic battlefield of the 1980s.
“Don’t worry, General, I’ve got my appointments all set up,” I told him. I knew how carried
away I could get, but I was an intelligence officer first, and that meant you start with a blank
page and fill it in. “But I think the people at Bell Labs have already seen these things before.”
And they actually had.