In this tutorial on the history of operating systems, we will look at the successive generations of computers to see what their operating systems were like.
An English mathematician named Charles Babbage (1791-1871) designed the first true digital computer.
Babbage spent most of his life trying to build his Analytical Engine, but he never got it working properly, because it was purely mechanical and the technology of his day could not produce the precision parts it needed. Needless to say, the Analytical Engine had nothing resembling an operating system.
Babbage realised that he would need software for his Analytical Engine, so he hired Ada Lovelace, who is regarded as the world's first programmer. The programming language Ada is named after her.
Here you will learn about the first generation (1945-1955) of computers: vacuum tubes and plugboards.
After Babbage's unsuccessful efforts, little progress was made in constructing digital computers until the 1940s.
Around the mid-1940s, scientists and engineers such as Howard Aiken at Harvard, John von Neumann at the Institute for Advanced Study in Princeton, J. Presper Eckert and John William Mauchly at the University of Pennsylvania, and Konrad Zuse in Germany all succeeded in building calculating engines.
The first of these machines used mechanical relays, which were very slow, with cycle times measured in seconds. Relays were later replaced by vacuum tubes. These machines were enormous, filling entire rooms with tens of thousands of vacuum tubes, yet they were still millions of times slower than even the cheapest personal computers (PCs) available today.
In those days, all programming was done in absolute machine language, often by wiring up plugboards to control the machine's basic functions. Programming languages were unknown; even assembly language did not exist yet, and no one had ever heard of an operating system.
Vacuum tubes would also frequently burn out during runs lasting more than a few hours. In short, a lot of problems occurred during calculations on these first generation computers.
Here you will learn about the second generation (1955-1965) of computers: transistors and batch systems.
In the mid-1950s, the introduction of the transistor changed the picture radically. Computers became reliable enough that they could be manufactured and sold to paying customers with the expectation that they would continue to function long enough to get useful work done.
These machines, later called mainframes, were locked away in specially air-conditioned computer rooms, with staffs of professional operators to run them. In those days, no one except big corporations, major government agencies, and universities could afford such expensive computers.
To run a job (a program or set of programs) on one of these computers, a programmer would first write the program on paper (in FORTRAN or assembler), then punch it onto cards. He would then bring the card deck down to the input room, hand it to one of the operators, and wait until the output was ready.
These second generation computers were used mostly for scientific and engineering calculations, programmed in FORTRAN and assembly language. Typical operating systems of the era were FMS (the FORTRAN Monitor System) and IBSYS, IBM's operating system for the 7094.
Here you will learn about the third generation (1965-1980) of computers: ICs and multiprogramming.
The third generation operating systems produced by computer manufacturers satisfied most of their customers. These systems also popularized several important techniques, the most important of which was multiprogramming.
On the 7094, when the current job paused to wait for a tape or other Input/Output (I/O) operation to complete, the CPU (Central Processing Unit) simply sat idle until the I/O finished. With heavily CPU-bound scientific calculations, I/O is infrequent, so this wasted time was not significant. With commercial data processing, however, the I/O wait time can often be 80 to 90 percent of the total time, so something had to be done to avoid having the CPU sit idle so much.
The solution that evolved was to partition memory into several pieces, with a different job in each partition. While one job was waiting for its I/O to complete, another job could be using the CPU.
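To see why multiprogramming helps so much, consider a simple probabilistic sketch (an illustration, not part of the original tutorial): if each job spends a fraction p of its time waiting for I/O, and n independent jobs are kept in memory, the CPU is idle only when all n jobs are waiting at once, giving an approximate utilization of 1 - p^n.

```python
# A rough back-of-the-envelope model of multiprogramming (an assumption
# for illustration, not taken from the tutorial): with n jobs in memory,
# each waiting for I/O a fraction p of the time, the CPU idles only when
# all n jobs wait simultaneously.

def cpu_utilization(p: float, n: int) -> float:
    """Approximate CPU utilization with n jobs, each I/O-bound a fraction p."""
    return 1 - p ** n

# With an 80% I/O wait (the commercial data processing case in the text),
# one job keeps the CPU busy only 20% of the time, but a few jobs in
# memory raise utilization dramatically:
for n in (1, 2, 4, 8):
    print(f"n={n}: utilization ≈ {cpu_utilization(0.8, n):.2f}")
```

This model assumes the jobs' I/O waits are independent, which is optimistic, but it captures why putting even a handful of jobs in memory made such a difference on third generation machines.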
Third generation operating systems were well suited for big scientific calculations and massive commercial data processing runs, but they were still basically batch systems.
Here you will learn about the fourth generation (1980-present): personal computers (PCs). With the development of Large Scale Integration (LSI), chips containing thousands of transistors on a square centimeter of silicon, the age of the personal computer dawned. Personal computers were initially called microcomputers.