Robert Mannell
One of the earliest machines
designed to assist people in calculations was the abacus, which is still
being used some 5,000 years after its invention.
In 1642 Blaise Pascal (a famous
French mathematician) invented an adding machine based on mechanical gears in
which numbers were represented by the cogs on the wheels.
In the 1830's the Englishman Charles Babbage
invented a "Difference Engine" made out of brass and
pewter rods and gears, and also designed a further device which he called an
"Analytical Engine". The design of the Analytical Engine contained the five key
characteristics of modern computers:-
- An input device
- Storage for numbers waiting to be processed
- A processor or number calculator
- A unit to control the task and the sequence of its calculations
- An output device
Augusta Ada Byron (later Countess of
Lovelace) was an associate of Babbage who has become known as the first computer
programmer.
Around 1890 an American, Herman Hollerith,
developed the first electrically driven tabulating machine. It utilised
punched cards and metal rods which passed through the holes to close an
electrical circuit and thus cause a counter to advance. This machine was able
to complete the count of the 1890 U.S. census in 6 weeks, compared with the 7
1/2 years taken to count the 1880 census by hand.
In 1936 Howard Aiken of Harvard
University convinced Thomas Watson of IBM to invest $1 million in the
development of an electromechanical version of Babbage's analytical engine. The
Harvard Mark 1 was completed in 1944 and was 8 feet high and 55 feet long.
At about the same time (the late
1930's) John Atanasoff of Iowa State University and his assistant Clifford
Berry built the first digital computer that worked electronically, the ABC
(Atanasoff-Berry Computer). This machine was basically a small calculator.
In 1943, as part of the British war
effort, a series of vacuum tube based computers (named Colossus) were developed
to crack German secret codes. The Colossus Mark 2 series consisted
of 2,400 vacuum tubes.
John Mauchly and J. Presper Eckert
of the University of Pennsylvania developed these ideas further by proposing a
huge machine consisting of 18,000 vacuum tubes. ENIAC (Electronic Numerical
Integrator And Computer) was born in 1946. It was an enormous machine with an
equally large power requirement and two major disadvantages. Maintenance was extremely
difficult, as the tubes broke down regularly and had to be replaced, and
overheating was a constant problem. The most important limitation,
however, was that every time a new task needed to be performed the machine had
to be rewired. In other words, programming was carried out with a soldering
iron.
In the late 1940's John von Neumann
(at the time a special consultant to the ENIAC team) developed the EDVAC (Electronic
Discrete Variable Automatic Computer) which
pioneered the "stored program concept". This allowed programs to be
read into the computer and so gave birth to the age of general-purpose
computers.
The Generations of Computers
It used to be quite popular to refer
to computers as belonging to one of several "generations" of
computer. These generations are:-
The First Generation (1943-1958): This generation is often described as starting with the
delivery of the first commercial computer, the UNIVAC, to the US Bureau of the
Census in 1951. This generation lasted until about the end of the 1950's (although some
stayed in operation much longer than that). The main defining feature of the
first generation of computers was that vacuum tubes were used as
internal computer components. Vacuum tubes are generally about 5-10 centimeters
in length and the large numbers of them required in computers resulted in huge
and extremely expensive machines that often broke down (as tubes failed).
The Second Generation (1959-1964): In the late 1940's Bell Labs developed the transistor.
Transistors were capable of performing many of the same tasks as vacuum tubes
but were only a fraction of the size. The first transistor-based computer was
produced in 1959. Transistors were not only smaller, enabling computer size to
be reduced, but they were faster, more reliable and consumed less electricity.
The other main improvement of this
period was the development of computer languages. Assembler languages, or
symbolic languages, allowed programmers to specify instructions in words
(albeit very cryptic ones), which were then translated into a form that the
machines could understand (binary code: series of 0's and 1's). Higher-level
languages also came into being during this period. Whereas assembler
languages have a one-to-one correspondence between their symbols and actual
machine functions, a single higher-level language command often represents a complex
sequence of machine instructions. Two higher-level languages developed during this
period (Fortran and Cobol) are still in use today, though in much more
developed forms.
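As a rough illustration (the snippet below is invented for this article, not taken from any historical machine), a single statement in a higher-level language such as C stands for a whole sequence of machine-level operations, each of which an assembler programmer would have had to write out individually:

    /* Illustrative sketch only: a single higher-level statement in C.
       An assembler programmer would instead write each machine-level step
       (load price, load quantity, multiply, load tax, add, store total)
       as a separate instruction. */
    #include <stdio.h>

    int main(void) {
        int price = 20;
        int quantity = 3;
        int tax = 5;

        /* One high-level statement = a complex sequence of machine codes. */
        int total = price * quantity + tax;

        printf("total = %d\n", total);
        return 0;
    }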
The Third Generation (1965-1970): By 1965 the integrated circuit (IC) had been
developed, allowing a complete circuit of hundreds of components to be
placed on a single silicon chip 2 or 3 mm square. Computers using these IC's
soon replaced transistor-based machines. Again, one of the major advantages was
size, with computers becoming more powerful and at the same time much smaller
and cheaper. Computers thus became accessible to a much larger audience. An
added advantage of smaller size is that electrical signals have much shorter
distances to travel and so the speed of computers increased.
Another feature of this period was
that computer software became much more powerful and flexible, and for the first
time more than one program could share the computer's resources at the same
time (multi-tasking). The majority of programming languages used today are
often referred to as 3GL's (3rd generation languages) even though some of them
originated during the 2nd generation.
The Fourth Generation (1971-present): The boundary between the third and fourth generations is
not very clear-cut at all. Most of the developments since the mid 1960's can be
seen as part of a continuum of gradual miniaturisation. In 1970 large-scale
integration (LSI) was achieved, in which the equivalent of thousands of integrated
circuits was crammed onto a single silicon chip. This development again
increased computer performance (especially reliability and speed) whilst
reducing computer size and cost. Around this time the first complete
general-purpose microprocessor became available on a single chip. In
1975 Very Large Scale Integration (VLSI) took the process one step
further. Complete computer central processors could now be built into one chip.
The microcomputer was born. Such chips are far more powerful than ENIAC was,
yet are only about 1 cm square, whilst ENIAC filled a large room.
During this period Fourth
Generation Languages (4GL's) have come into existence. Such languages are a
step further removed from the computer hardware in that their commands
resemble natural language. Many database languages can be described as 4GL's. They
are generally much easier to learn than 3GL's.
The Fifth Generation (the future): The "fifth generation" of computers was defined
by the Japanese government in 1980, when it unveiled an optimistic ten-year
plan to produce the next generation of computers. This was an interesting plan
for two reasons. Firstly, it was not at all clear what the fourth
generation was, or even whether the third generation had finished yet. Secondly,
it was an attempt to define a generation of computers before they had come into
existence. The main requirements of the 5G machines were that they incorporate
the features of Artificial Intelligence, Expert Systems, and Natural
Language. The goal was to produce machines that are capable of performing
tasks in similar ways to humans, are capable of learning, and are capable of
interacting with humans in natural language, preferably using both speech
input (speech recognition) and speech output (speech synthesis). Such goals are
obviously of interest to linguists and speech scientists as natural language
and speech processing are key components of the definition. As you may have
guessed, this goal has not yet been fully realised, although significant
progress has been made towards various aspects of these goals.
Parallel Computing
Up until recently most computers
were serial computers. Such computers had a single processor chip containing a
single processor. Parallel computing is based on the idea that if more than one
task can be processed simultaneously on multiple processors then a program
would be able to run more rapidly than it could on a single processor. The
supercomputers of the 1990s, such as the Cray computers, were extremely
expensive to purchase (usually over $1,000,000) and often required cooling by
liquid cooling, so they were also very expensive to run. Clusters of networked
computers (e.g. a Beowulf cluster of PCs running Linux) have been, since 1994, a
much cheaper solution to the problem of fast processing of complex computing
tasks. By 2008, most new desktop and laptop computers contained more than one
processor on a single chip (e.g. the Intel "Core 2 Duo" released in
2006 or the Intel "Core 2 Quad" released in 2007). Having multiple
processors does not necessarily mean that parallel computing will work automatically.
The operating system must be able to distribute programs between the processors
(e.g. recent versions of Microsoft Windows and Mac OS X can do this). An
individual program will only be able to take advantage of multiple processors
if the language it is written in can distribute tasks within the
program across multiple processors. For example, OpenMP supports parallel
programming in Fortran and C/C++.
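As a minimal sketch of what this looks like in practice (the program below is a made-up example, not part of the original article, and assumes a C compiler with OpenMP support, e.g. gcc with the -fopenmp flag), a single OpenMP directive asks the runtime to split the iterations of a loop across the available processor cores:

    /* Minimal OpenMP sketch (illustrative example; assumes an OpenMP-capable
       C compiler, e.g. "gcc -fopenmp sum.c -o sum"). */
    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;

        /* The pragma asks OpenMP to divide the loop iterations among the
           available processor cores; the reduction clause combines each
           core's partial sum safely. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++) {
            sum += (double)i;
        }

        printf("sum = %.0f using up to %d threads\n", sum, omp_get_max_threads());
        return 0;
    }

Without the pragma the same loop still runs correctly, just on a single processor, which is why languages and libraries of this kind are needed to turn multiple processors into an actual speed advantage.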