English Language Texts
   This page contains the second part of an English text on the "History of the Computer", which groups A and B1 studying with Yu. A. Tsapaeva are to translate orally into Russian.
   If you wish, you can download an archived Word version of this English text (11,674 bytes, RAR 2.9).
   Use of Punched Cards by Hollerith
   A step towards automated computing was the development of punched cards, which were first successfully used with computers in 1890 by Herman Hollerith and James Powers, who worked for the US Census Bureau. They developed devices that could read the information that had been punched into the cards automatically, without human help. Because of this, reading errors were reduced dramatically, work flow increased, and, most importantly, stacks of punched cards could be used as easily accessible memory of almost unlimited size. Furthermore, different problems could be stored on different stacks of cards and accessed when needed.
   These advantages were seen by commercial companies and soon led to the development of improved punch-card-based computers created by International Business Machines (IBM), Remington (yes, the same people that make shavers), Burroughs, and other corporations. These computers used electromechanical devices in which electrical power provided mechanical motion - like turning the wheels of an adding machine. Such systems included features to: feed in a specified number of cards automatically; add, multiply, and sort; and feed out cards with punched results.
   As compared to today's machines, these computers were slow, usually processing 50 to 220 cards per minute, each card holding about 80 decimal numbers (characters). At the time, however, punched cards were a huge step forward. They provided a means of I/O and memory storage on a huge scale. For more than 50 years after their first use, punched card machines did most of the world's business computing and a considerable amount of the computing work in science.
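The card storage described above can be sketched as a simple data structure. This is a minimal Python illustration; the layout and the helper names `punch_card` and `read_card` are assumptions for clarity, not the actual Hollerith punch code:

```python
# A simplified sketch of an 80-column punched card: each column holds one
# character as a set of punched rows, and a stack of cards acts as
# sequential storage. Digits get a single punch in the matching row
# (a simplification of the real Hollerith encoding).

ROWS = 12       # rows per column on a standard card
COLUMNS = 80    # characters per card

def punch_card(text: str) -> list:
    """Encode up to 80 characters, one per column, as punched-row sets."""
    card = [set() for _ in range(COLUMNS)]
    for col, ch in enumerate(text[:COLUMNS]):
        if ch.isdigit():
            card[col].add(int(ch))   # one punch in the digit's row
    return card

def read_card(card: list) -> str:
    """Decode a card back to its digit string (blank columns skipped)."""
    out = []
    for punches in card:
        if len(punches) == 1:
            out.append(str(next(iter(punches))))
    return ''.join(out)

stack = [punch_card("19171854"), punch_card("00420077")]  # a two-card "memory"
print([read_card(c) for c in stack])  # ['19171854', '00420077']
```

Reading a stack back in order is what let punched cards serve as cheap, practically unlimited sequential memory.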
   Electronic Digital Computers
   The start of World War II produced a large need for computer capacity, especially for the military. New weapons were made for which trajectory tables and other essential data were needed. In 1942, John P. Eckert, John W. Mauchly, and their associates at the Moore School of Electrical Engineering of the University of Pennsylvania decided to build a high-speed electronic computer to do the job. This machine became known as ENIAC (Electronic Numerical Integrator And Calculator).
   The size of ENIAC's numerical "word" was 10 decimal digits, and it could multiply two of these numbers at a rate of 300 per second by finding the value of each product from a multiplication table stored in its memory. ENIAC was therefore about 1,000 times faster than the previous generation of relay computers.
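The table-lookup multiplication described above can be sketched in Python. The names `PRODUCT_TABLE` and `table_multiply` are hypothetical, and this models only the idea of reading single-digit products from a stored table, not ENIAC's circuitry:

```python
# Multiply two numbers using only table lookups and addition: single-digit
# products come from a precomputed table, and multi-digit results are built
# by shifting each partial product into position and summing.

# Precompute the 10x10 single-digit multiplication table.
PRODUCT_TABLE = [[a * b for b in range(10)] for a in range(10)]

def table_multiply(x: int, y: int) -> int:
    """Multiply two non-negative integers digit by digit via table lookup."""
    result = 0
    for i, dx in enumerate(reversed(str(x))):      # digits of x, least first
        for j, dy in enumerate(reversed(str(y))):  # digits of y, least first
            partial = PRODUCT_TABLE[int(dx)][int(dy)]
            result += partial * 10 ** (i + j)      # shift into position
    return result

print(table_multiply(1234, 5678))  # 7006652
```

Looking products up rather than computing them by repeated addition is part of what made ENIAC's 300 multiplications per second possible.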
   ENIAC used 18,000 vacuum tubes, occupied about 1,800 square feet of floor space, and consumed about 180,000 watts of electrical power. It had punched-card I/O, 1 multiplier, 1 divider/square rooter, and 20 adders using decimal ring counters, which served as adders and also as quick-access (0.0002-second) read-write register storage. The executable instructions making up a program were embodied in the separate "units" of ENIAC, which were plugged together to form a "route" for the flow of information.
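How a chain of decimal ring counters can both store a number and add to it may be clearer from a toy Python model. The classes `RingCounter` and `DecimalRegister` are illustrative assumptions, not ENIAC's actual wiring:

```python
# A toy model of a decimal ring counter: one of ten stages is "on",
# incoming pulses advance it, and wrapping past 9 emits a carry pulse
# to the next counter in the chain.

class RingCounter:
    def __init__(self) -> None:
        self.position = 0  # which of the ten stages is active

    def pulse(self, n: int) -> int:
        """Advance by n pulses; return the number of carries produced."""
        total = self.position + n
        self.position = total % 10
        return total // 10

class DecimalRegister:
    """A chain of ring counters: each digit's carry pulses the next."""
    def __init__(self, digits: int) -> None:
        self.counters = [RingCounter() for _ in range(digits)]

    def add(self, value: int) -> None:
        for counter in self.counters:       # least significant digit first
            carry = counter.pulse(value % 10)
            value = value // 10 + carry

    def read(self) -> int:
        return sum(c.position * 10 ** i for i, c in enumerate(self.counters))

reg = DecimalRegister(10)   # ENIAC's words were 10 decimal digits
reg.add(987)
reg.add(45)
print(reg.read())  # 1032
```

The same hardware acts as an adder (it accumulates pulses) and as a register (the active stages hold the current value), which is why the text says the counters served both roles.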
   These connections had to be redone after each computation, together with presetting function tables and switches. This "wire your own" technique was inconvenient (for obvious reasons), and with only some latitude could ENIAC be considered programmable. It was, however, efficient in handling the particular programs for which it had been designed.
   ENIAC is commonly accepted as the first successful high-speed electronic digital computer (EDC) and was used from 1946 to 1955. A controversy developed in 1971, however, over the patentability of ENIAC's basic digital concepts, the claim being made that another physicist, John V. Atanasoff, had already used basically the same ideas in a simpler vacuum-tube device he had built in the 1930s while at Iowa State College. In 1973 the courts found in favor of the company using the Atanasoff claim.
   The Modern Stored-Program EDC
   Fascinated by the success of ENIAC, the mathematician John von Neumann undertook, in 1945, an abstract study of computation that showed that a computer should have a very simple, fixed physical structure and yet be able to execute any kind of computation by means of a properly programmed control, without the need for any change in the unit itself.
   Von Neumann contributed a new awareness of how practical, yet fast, computers should be organized and built. These ideas, usually referred to as the stored-program technique, became essential for future generations of high-speed digital computers and were universally adopted.
   The stored-program technique involves many features of computer design and function besides the one that it is named after. In combination, these features make very-high-speed operation attainable. A glimpse may be provided by considering what 1,000 operations per second means. If each instruction in a job program were used once in consecutive order, no human programmer could generate enough instructions to keep the computer busy.
   Arrangements must be made, therefore, for parts of the job program (called subroutines) to be used repeatedly in a manner that depends on the way the computation goes. Also, it would clearly be helpful if instructions could be changed as needed during a computation to make them behave differently. Von Neumann met these two needs by providing a special type of machine instruction, called a conditional control transfer - which allowed the program sequence to be stopped and restarted at any point - and by storing all instructions together with data in the same memory unit, so that, when needed, instructions could be arithmetically changed in the same way as data.
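The two ideas above - a conditional control transfer, and instructions stored in the same memory as data - can be sketched with a toy stored-program interpreter in Python. The instruction set here is invented for illustration and is not von Neumann's actual design:

```python
# A toy stored-program machine: instructions and data share one memory
# list, so a conditional jump can redirect control, and instructions
# could in principle be altered like any other memory cell.

def run(memory: list) -> list:
    """Execute tuples in memory; ('JGZ', addr, target) is the
    conditional control transfer: jump to target if memory[addr] > 0."""
    pc = 0
    while True:
        op = memory[pc]
        if op[0] == 'HALT':
            break
        elif op[0] == 'ADD':            # memory[dst] += memory[src]
            _, dst, src = op
            memory[dst] += memory[src]
        elif op[0] == 'SUB':            # memory[dst] -= memory[src]
            _, dst, src = op
            memory[dst] -= memory[src]
        elif op[0] == 'JGZ':            # conditional control transfer
            _, addr, target = op
            if memory[addr] > 0:
                pc = target
                continue
        pc += 1
    return memory

# Program and data in the same memory: loop 5 times, adding 3 each pass.
memory = [
    ('ADD', 6, 7),    # 0: total += step
    ('SUB', 5, 8),    # 1: count -= one
    ('JGZ', 5, 0),    # 2: if count > 0, jump back to instruction 0
    ('HALT',),        # 3: stop
    None,             # 4: unused
    5,                # 5: count
    0,                # 6: total
    3,                # 7: step
    1,                # 8: one
]
print(run(memory)[6])  # 15
```

Without the `JGZ` instruction, the three-line loop body would have to be written out five times; the conditional transfer is what lets one stored copy of a subroutine be reused as the computation demands.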
   As a result of these techniques, computing and programming became faster, more flexible, and more efficient. Regularly used subroutines did not have to be reprogrammed for each new program but could be kept in "libraries" and read into memory only when needed. Thus, much of a given program could be assembled from the subroutine library.
   The all-purpose computer memory became the assembly place in which all parts of a long computation were kept, worked on piece by piece, and put together to form the final results. The computer control survived only as an "errand runner" for the overall process. As soon as the advantage of these techniques became clear, they became a standard practice.
   The first generation of modern programmed electronic computers to take advantage of these improvements was built in 1947. This group included computers using Random-Access Memory (RAM), which is a memory designed to give almost constant access to any particular piece of information. These machines had punched-card or punched-tape I/O devices and RAMs of 1,000-word capacity with access times of 0.5 microseconds (0.5×10⁻⁶ seconds). Some of them could perform multiplications in 2 to 4 microseconds. Physically, they were much smaller than ENIAC: some were about the size of a grand piano and used only 2,500 electron tubes, far fewer than the earlier ENIAC required. The first-generation stored-program computers needed a lot of maintenance, reached probably about 70 to 80% reliability of operation (ROO), and were used for 8 to 12 years. They were usually programmed in machine language (ML), although by the mid-1950s progress had been made in several aspects of advanced programming. This group of computers included EDVAC and UNIVAC, the first commercially available computers.