HISTORY OF COMPUTERS



From the earliest times, people have needed to carry out calculations. The first steps involved the development of counting and calculation aids such as the counting board and the abacus.

Pascal (1623-62), the son of a tax collector and a mathematical genius, designed the first mechanical calculator (the Pascaline), based on gears. It could perform addition and subtraction.

 

Leibniz (1646-1716) was a German mathematician who built the first calculator able to perform multiplication and division. It was not reliable, owing to the limited accuracy of the parts of the day.


Babbage (1791-1871) was a British inventor who designed an ‘Analytical Engine’ incorporating the ideas of a memory and card input/output for data and instructions. Once again, the technology of the day did not permit the complete construction of the machine.

 

Babbage is largely remembered because of the work of Augusta Ada, Countess of Lovelace, who was probably the first computer programmer.

 

Burroughs (1855-98) introduced the first commercially successful mechanical adding machine; a million had been sold by 1926.

Hollerith developed an electromechanical punched-card tabulator to tabulate the data for the 1890 U.S. census. Data was entered on punched cards and could be sorted according to the census requirements, and the machine was powered by electricity. He formed the Tabulating Machine Company, which later became part of International Business Machines (IBM). IBM is still one of the largest computer companies in the world.

 

Aiken (1900-73), a Harvard professor, built the Harvard Mark I computer (51 ft long) in 1944 with the backing of IBM. It was based on relays (which operate in milliseconds) rather than gears, and required 3 seconds for a multiplication.

 

Eckert and Mauchly designed and built the ENIAC in 1946 for military computations. It used vacuum tubes (valves), which were completely electronic (operating in microseconds), as opposed to the relay, which was electromechanical.

 

It weighed 30 tons, used 18,000 valves and required 140 kilowatts of power. It was 1,000 times faster than the Mark I, performing a multiplication in 3 milliseconds. ENIAC was a decimal machine and could not be reprogrammed without manually altering its setup.

 

Atanasoff had built a specialised computer in 1941 and was visited by Mauchly before the construction of the ENIAC. He later sued Mauchly, in a case that was decided in his favour in 1973!

 

Von Neumann was a scientific genius and a consultant on the ENIAC project. He formulated plans with Mauchly and Eckert for a new computer (EDVAC), which was to store programs as well as data.

 

This is called the stored program concept, and von Neumann is credited with it. Almost all modern computers are based on this idea and are referred to as von Neumann machines.

 

He also concluded that the binary system was more suitable for computers, since switches have only two states. He went on to design his own general-purpose computer at Princeton.

 

Alan Turing was a British mathematician who also made significant contributions to the early development of computing, especially to the theory of computation. He developed an abstract theoretical model of a computer, called a Turing machine, which is used to capture the notion of computability, i.e. which problems can and which cannot be computed. Not all problems can be solved on a computer.

 

Note: A Turing machine is an abstract model and not a physical computer.

 

From the 1950s, the computer age took off in full force. The years since then have been divided into periods, or generations, based on the technology used.

 

First Generation Computers (1951-58): Vacuum Tubes

 

These machines were used in business for accounting and payroll applications. Valves were unreliable components that generated a lot of heat (heat is still a problem in computers), and the machines had very limited memory capacity. Magnetic drums were developed to store information, and magnetic tapes were developed for secondary storage.

 

They were initially programmed in machine language (binary). A major breakthrough was the development of assemblers and assembly language.

Second Generation (1959-64): Transistors

 

The development of the transistor revolutionized the computer. Invented at Bell Labs in 1947, transistors were much smaller, more rugged, cheaper to make and far more reliable than valves.

 

Core memory was introduced and disk storage was also used. The hardware became smaller and more reliable, a trend that still continues.

 

Another major feature of the second generation was the use of high-level programming languages such as Fortran and Cobol. These revolutionized the development of software for computers. The computer industry experienced explosive growth.

 

Third Generation (1965-71): Integrated Circuits (ICs)

 

ICs were again smaller, cheaper, faster and more reliable than transistors. Speeds went from the microsecond range to the nanosecond (billionth of a second) and picosecond (trillionth of a second) range. ICs were used for main memory despite the disadvantage of being volatile. Minicomputers were developed at this time.

 

Terminals replaced punched cards for data entry and disk packs became popular for secondary storage.

 

IBM introduced the idea of a compatible family of computers, the System/360 family, easing the problem of upgrading to a more powerful machine.

 

Substantial operating systems were developed to manage and share computing resources, and time-sharing operating systems appeared. These greatly improved the efficiency of computers.

Computers had by now pervaded most areas of business and administration.

 

The number of transistors that can be fabricated on a chip is referred to as the scale of integration (SI). Early chips had SSI (small SI), with tens to a few hundred transistors. Later chips were MSI (medium SI), with hundreds to a few thousand. Then came LSI chips (large SI), in the thousands range.

 

Fourth Generation (1971 - ): VLSI (Very Large SI)

 

VLSI allowed the equivalent of tens of thousands of transistors to be incorporated on a single chip. This led to the development of the microprocessor: a processor on a chip.

 

Intel produced the 4004, which was followed by the 8008, 8080, 8088, 8086 and so on. Other companies developing microprocessors included Motorola (6800, 68000), Texas Instruments and Zilog.

 

Personal computers were developed and IBM launched the IBM PC based on the 8088 and 8086 microprocessors.

 

Mainframe computers have also grown in power. Memory chips are in the megabit range, and a single VLSI chip has enough transistors to build 20 ENIACs.

 

Secondary storage has also evolved at a fantastic rate, with storage devices holding gigabytes (1000 MB = 1 GB) of data.

 

On the software side, more powerful operating systems are available such as Unix. Applications software has become cheaper and easier to use. Software development techniques have vastly improved.

 

Fourth-generation languages (4GLs) make the development process much easier and faster.

 

[Languages are also classified by generation, from machine language (1GL) and assembly language (2GL) to high-level languages (3GL) and 4GLs.]

 

Software is often developed as application packages. VisiCalc, a spreadsheet program, was the pioneering application package and the original killer application.

 

Killer application: A piece of software that is so useful that people will buy a computer to use that application.

 

Fourth Generation Continued (1990s): ULSI (Ultra Large SI)

ULSI chips have millions of transistors per chip, e.g. the original Pentium had over 3 million, and this has more than doubled in more recent versions. This has allowed the development of far more powerful processors.

 

The Future

 

Developments are still continuing. Computers are becoming faster, smaller and cheaper. Storage units are increasing in capacity.

 

Distributed computing is becoming popular and parallel computers with large numbers of CPUs have been built.

 

The networking of computers and the convergence of computing and communications is also of major significance.

 

From Silicon to CPUs!

 

One of the most fundamental components in the manufacture of electronic devices, such as a CPU or memory, is a switch. Computers are constructed from thousands to millions of switches connected together. In modern computers, components called transistors act as electronic switches.
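The idea that computers are built from connected switches can be sketched in a few lines of Python. This is purely an illustrative model, not anything from the original notes: a transistor is treated as an on/off boolean, a NAND gate is modelled as two switches in series, and every other gate is then built from NAND alone.

```python
# Illustrative model: a transistor as an on/off switch, and logic gates
# (the building blocks of CPUs and memory) built from such switches.

def nand(a: bool, b: bool) -> bool:
    """Two switches in series: the output is low only when both are on."""
    return not (a and b)

# NAND is universal: NOT, AND and OR can all be built from it.
def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))
```

Real chips wire millions of such switches together; the principle, however, is no more than this.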

 

A brief look at the history of computing reveals a movement from mechanical to electromechanical to electronic to solid-state electronic components being used as switches to construct ever more powerful computers, as illustrated below:

Electromechanical: Relays
Electronic: Vacuum tubes (valves)
Solid state: Transistors
Integrated Circuits (ICs): n transistors
(n ranges from less than 100 for SSI ICs to millions for ULSI ICs)

Figure 1: Evolution of switching technology

 

Transistors act as electronic switches, i.e. they allow information to pass, or not to pass, under certain conditions. The development of integrated circuits (ICs) allowed a number of transistors to be constructed on a single piece of silicon (the material from which ICs are made).

 

ICs are also called silicon chips, or simply chips. The number of transistors on a chip is indicated by its level of integration.

No. of Transistors          Integration level               Abbreviation   Example
2 - 50                      small-scale integration         SSI
50 - 5,000                  medium-scale integration        MSI
5,000 - 100,000             large-scale integration         LSI            Intel 8086 (29,000)
100,000 - 10 million        very large-scale integration    VLSI           Pentium (3 million)
10 million - 1000 million   ultra large-scale integration   ULSI           Pentium III (30 million)
1000 million -              super large-scale integration   SLSI

 

Moore’s Law

 

The number of transistors on an IC will double every 18 months.

 

(Gordon Moore, co-founder of Intel, made this prediction in 1965.)

 

This prediction has proved very reliable to date and it seems likely that it will remain so over the next ??? years.
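As a rough sketch of what this doubling implies, the projection can be computed directly. The starting figure used below (2,300 transistors on the Intel 4004 in 1971) comes from the general historical record, not from these notes, and real chips track the law only approximately.

```python
# Sketch of Moore's Law: transistor count doubles every 18 months (1.5 years).

def transistors(start_count: int, start_year: float, year: float) -> int:
    """Projected transistor count after (year - start_year) years,
    assuming one doubling every 1.5 years."""
    doublings = (year - start_year) / 1.5
    return round(start_count * 2 ** doublings)

# Projecting forward from the 4004 (2,300 transistors, 1971):
for y in (1971, 1980, 1990, 2000):
    print(y, transistors(2_300, 1971, y))
```

Running this shows how quickly exponential growth compounds: a few decades of doublings turn thousands of transistors into hundreds of millions.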

 

Chip Fabrication

 

Silicon chips have a surface area of similar dimensions to a thumbnail (or smaller) and are three-dimensional structures composed of microscopically thin layers (perhaps as many as 20) of insulating and conducting material on top of the silicon. The manufacturing process is extremely complex and expensive.

 

Silicon is a semiconductor, which means that it can be altered to act either as a conductor, allowing electricity to flow, or as an insulator, preventing the flow of electricity. Silicon is first processed into circular wafers, and these are then used in the fabrication of chips. The silicon wafer goes through a long and complex process which results in the circuitry for a semiconductor device, such as a microprocessor or RAM, being developed on the wafer. Note that each wafer contains from several to hundreds of the particular device being produced. Figure 3 illustrates an 8-inch silicon wafer containing microprocessor chips.

 Figure 3: A single silicon wafer can contain a large number of microprocessors

The percentage of functioning chips on a wafer is referred to as its yield. Yields vary substantially depending on the complexity of the device being produced, the feature size used and other factors. While manufacturers are slow to release actual figures, yields as low as 50% have been reported, and 80-90% yields are considered very good.

A single short circuit, caused by two wires touching, in a chip with over 30 million transistors is enough to cause chip failure!

 

Feature Size

 

The feature size refers to the size of a transistor or the width of the wires connecting transistors on the chip. One micron (one thousandth of a millimetre) was once a common feature size.

 

State-of-the-art chips use sub-micron feature sizes, from 0.25 microns (1997) down to 0.13 microns (2001), i.e. 250 to 130 nanometres.

 

The smaller the feature size, the more transistors there are available on a given chip area.

 

This allows more devices, microprocessors for example, to be obtained from a single silicon wafer. It also means that a given microprocessor will be smaller, run faster and use less power than its predecessor built with a larger feature size. Since more of these smaller chips can be obtained from a single wafer, each chip costs less, which is one of the reasons for cheaper processor chips.
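A simple assumed model (not stated in the notes, but the usual first-order approximation) is that transistor density scales with the inverse square of the feature size, since transistors shrink in both dimensions. On that assumption, the shrink from 0.25 to 0.13 microns fits nearly four times as many transistors in the same area:

```python
# Assumed first-order model: transistor density ~ 1 / (feature size)^2.

def density_gain(old_feature_um: float, new_feature_um: float) -> float:
    """Approximate factor by which transistor density increases
    when the feature size shrinks."""
    return (old_feature_um / new_feature_um) ** 2

print(round(density_gain(0.25, 0.13), 2))  # → 3.7
```

The same factor applies in reverse to chip area: a design that needed a whole die at 0.25 microns needs only about a quarter of it at 0.13 microns.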

 

In addition, a reduced feature size makes it possible to build more complex microprocessors, such as the Pentium III, which uses around 30 million transistors.

 

Die Size

An obvious way to increase the number of transistors on a chip is to increase the area of silicon used for each chip - the die size.

 

However, this can lead to problems. Assume that a fixed number of faults occur randomly on the silicon wafer illustrated in Figure 3. A single fault will render an individual chip useless.

 

The larger the die size of the individual chip, the greater the waste, in terms of area of silicon, when a fault arises on a chip.

 

For example, if a wafer were to contain 40 chips and ten faults occur randomly, then up to 10 of the 40 chips may be useless, giving up to 25% wastage.

 

On the other hand, if there are 200 chips on the wafer, 10 faults give at most 5% wastage. Hence there is a trade-off between die size and yield: a larger die size leads to a lower yield.
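The worst-case arithmetic above can be sketched directly. The model assumes each fault lands on a different chip, the pessimistic case used in the example:

```python
# Worst-case wastage model: a fixed number of random faults per wafer,
# each fault ruining at most one chip.

def worst_case_wastage(chips_per_wafer: int, faults: int) -> float:
    """Fraction of chips lost if every fault lands on a different chip."""
    ruined = min(faults, chips_per_wafer)
    return ruined / chips_per_wafer

print(worst_case_wastage(40, 10))   # → 0.25 (25% wastage)
print(worst_case_wastage(200, 10))  # → 0.05 (5% wastage)
```

Doubling the number of dies per wafer halves the worst-case wastage for the same number of faults, which is exactly the trade-off the text describes.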
