History of Operating Systems

In this operating systems history tutorial, we will look at the successive generations of computers to see what their operating systems were like. For background on the computer generations themselves, refer to the separate article on that topic.

Charles Babbage (1792-1871), an English mathematician, designed the first true digital computer.

Babbage spent most of his life trying to build his Analytical Engine, but he never got it to work properly: it was purely mechanical, and the technology of his day could not produce the precision parts it required. Needless to say, the Analytical Engine did not have an operating system.

After a while, Babbage realized he would need software for his Analytical Engine, so he hired Ada Lovelace, often regarded as the world's first programmer, to write it. The programming language Ada was later named after her.

Operating Systems in the First Generation

Following Charles Babbage's unsuccessful efforts, little progress was made in building digital computers until around the time of World War II.

Around the mid-1940s, researchers such as Howard Aiken at Harvard, John von Neumann at the Institute for Advanced Study in Princeton, J. Presper Eckert and John Mauchly at the University of Pennsylvania, and Konrad Zuse in Germany all succeeded in building calculating engines.

The first of these machines used mechanical relays, but they were extremely slow, with cycle times measured in seconds. Relays were later replaced by vacuum tubes. The resulting machines were enormous, filling entire rooms with tens of thousands of vacuum tubes, and they were still millions of times slower than even today's cheapest personal computers (PCs). Programming languages were unknown at the time (even assembly language did not exist), and nobody had heard of an operating system.

In those days, vacuum tubes would frequently burn out after running for only a few hours. In short, numerous issues arose when performing calculations on first-generation computers.

Operating Systems in the Second Generation

The introduction of the transistor in the mid-1950s altered people's perceptions of computers.

Once transistors were introduced, computers became reliable enough to be manufactured and sold to paying customers with the expectation that they would keep working long enough to get useful work done.

These machines, later dubbed mainframes, were housed in specially designed, air-conditioned computer rooms and run by staffs of professional operators. Back then, only large corporations, major government agencies, and universities could afford such expensive computers.

To run a job on one of these computers, a programmer would first write the program on paper (in FORTRAN or assembler) and then punch it onto cards. He would then carry the card deck down to the input room, hand it to one of the operators, and wait until the output was ready.

Second-generation computers were used mostly for scientific and engineering calculations, with programs typically written in FORTRAN or assembly language.

Typical operating systems of this era were FMS (the FORTRAN Monitor System) and IBSYS, IBM's operating system for the 7094.

Operating Systems in the Third Generation

Third-generation operating systems satisfied most of their customers reasonably well. They also popularized several key techniques, the most important of which was multiprogramming.

On the 7094, when the current job paused to wait for a tape or some other Input/Output operation to finish, the CPU (Central Processing Unit) simply sat idle until the I/O completed. With heavily CPU-bound scientific calculations, I/O is infrequent, so this wasted time is insignificant.

In commercial data processing, however, the Input/Output wait time can be as much as 80 to 90 percent of the total time.

To avoid leaving the CPU idle for so much of the time, something had to be done.

The solution was to partition memory into several pieces, with a different job in each partition. While one job was waiting for its Input/Output to complete, another job could be using the CPU.
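
The benefit of multiprogramming can be illustrated with a simple probabilistic model that is commonly used for it (it is not from this tutorial): if each job spends a fraction p of its time waiting for Input/Output and n jobs are kept in memory at once, the CPU sits idle only when all n jobs are waiting at the same time, so CPU utilization is roughly 1 - p^n. The short Python sketch below works through this approximation; the function name and the 80 percent I/O-wait figure are just illustrative assumptions.

# A minimal sketch of the classic multiprogramming approximation:
# with n jobs in memory, each waiting for I/O a fraction p of its
# time, the CPU is idle only when all n jobs wait simultaneously,
# so CPU utilization is approximately 1 - p**n.

def cpu_utilization(io_wait_fraction: float, num_jobs: int) -> float:
    """Approximate CPU utilization with num_jobs jobs kept in memory."""
    return 1.0 - io_wait_fraction ** num_jobs

if __name__ == "__main__":
    p = 0.80  # assumed I/O-wait fraction, typical of commercial workloads
    for n in (1, 2, 4, 8):
        print(f"{n} job(s) in memory -> ~{cpu_utilization(p, n):.0%} CPU utilization")

With a single job in memory the CPU is busy only about 20 percent of the time, while keeping four or eight jobs in memory pushes utilization to roughly 60 and 80 percent, which is why multiprogramming mattered so much on third-generation machines.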

The third-generation operating systems were well suited for large-scale scientific calculations and massive commercial data processing runs, but they remained batch systems.

Operating Systems in the Fourth Generation

With the development of Large Scale Integration (LSI) circuits, chips containing thousands of transistors on a square centimetre of silicon became possible, and the era of the personal computer began.

Personal computers (PCs) were originally known as microcomputers.

When it comes to operating systems of the fourth generation, some of the most well-known are MS-DOS, Microsoft Windows, UNIX, Linux, and macOS.
