The Evolution of Operating Systems

Fast and efficient digital transactions are taken for granted these days, whether they're over-the-counter purchases, hiccup-free database searches, or smooth downloads from the Web. None of this would be possible if the operating environments of our computers hadn't gone through radical changes since their inception.

Operating, On…?

Concealing the hardware, ideally. An operating system is essentially a layer of software which manages the workings of the hardware beneath. It distributes system resources between programs and users.

Open Shops

Back in the mid-1950s, there was no system as such. Computers were operated by their users, manually, in what's known as an open shop set-up. For example, users of IBM's first computer (the 701) might be allocated 15 minutes to set up and complete a computing operation; ten of these would typically be spent just setting up the equipment.

First, Batch

Operating systems took a great leap forward when it was realized that computers could use software to manage their workloads.

Idle time was reduced by having users set up their data and programs on punch cards offline before submitting them to the computer room for execution. With faster computers, satellite workstations were set up and batch processing became the norm.

Operators would collect users' punched cards and input several jobs onto magnetic tape. Tapes would be mounted at a fast tape station connected to the main computer, and jobs would then be run one at a time, sequentially from the tape. Running jobs would record their output on a different tape, which would be taken to a satellite station and printed on a line printer there.

It was slow. Many hours might pass before the output from a single job appeared, and jobs could only be completed consecutively from the tapes.
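
To picture how rigid that was, here's a minimal Python sketch of the batch idea: the "input tape" is just a queue of jobs run strictly one after another, with everything they produce collected on an "output tape" for printing later. The job names and workloads are hypothetical, purely for illustration.

from collections import deque

def run_batch(input_tape):
    """Run every job on the tape, one at a time, in the order submitted."""
    output_tape = []
    while input_tape:
        name, job = input_tape.popleft()    # take the next job off the tape
        result = job()                      # it runs to completion; nothing else can
        output_tape.append((name, result))  # results go to a separate output tape
    return output_tape                      # later carried to the line printer

# Hypothetical jobs, standing in for users' punched-card programs.
input_tape = deque([
    ("payroll", lambda: sum(range(1000))),
    ("inventory", lambda: max(7, 3, 9)),
])

for name, result in run_batch(input_tape):
    print(f"{name}: {result}")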

Multi-Programming

The 1960s saw the emergence of hardware interrupts, data channels, large core memory, and random access to secondary storage. A single processor could now switch between several programs, getting on with one while another waited for its input/output operations to finish, in a scheme called multi-programming.
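
As a rough illustration (and not how any 1960s machine was actually programmed), the Python sketch below uses a thread to stand in for a program that is blocked waiting on an input/output transfer, while the processor gets on with another program in the meantime. Both "programs" are hypothetical.

import threading
import time

def io_bound_program():
    """Stands in for a program waiting on a data-channel transfer."""
    print("program A: waiting on an I/O transfer...")
    time.sleep(1)  # the simulated transfer
    print("program A: I/O finished, resuming")

def cpu_bound_program():
    """Keeps the processor busy while program A is blocked."""
    total = sum(x * x for x in range(2_000_000))
    print(f"program B: finished its computation ({total}) while A was waiting")

io_job = threading.Thread(target=io_bound_program)
io_job.start()
cpu_bound_program()  # runs while program A is blocked on "I/O"
io_job.join()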

With secondary storage, large buffers of jobs could be held on a single computer fitted with drums or disks, in a scheme known as spooling. Tapes were no longer required, and priority jobs could be preferentially scheduled.
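
Here's a minimal sketch of the spooling idea, assuming nothing fancier than a priority queue standing in for the drum or disk: jobs are buffered as they arrive, and the highest-priority one is picked to run next rather than strictly the oldest. The job names and priority numbers are made up for illustration.

import heapq

spool = []   # stands in for the drum or disk buffer
counter = 0  # tie-breaker so equal priorities keep their arrival order

def submit(priority, name):
    """Buffer a job; a lower number means a higher priority."""
    global counter
    heapq.heappush(spool, (priority, counter, name))
    counter += 1

submit(5, "student compile")
submit(1, "payroll run")
submit(3, "report print")

while spool:
    priority, _, name = heapq.heappop(spool)  # best-priority job runs next
    print(f"running {name} (priority {priority})")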

Time, to Share

Spooling gave way to time-sharing, a scheme by which a computer would interact with many users at the same time as they sent service requests to it from remote consoles. With large secondary storage capacity (so separate input/output units for cards or tape weren’t required), users had the experience of using a fast, local computer.
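
The scheduling idea behind time-sharing can be sketched in a few lines: each user's session gets a fixed slice of processor time in turn, so everyone sees steady progress instead of waiting for somebody else's job to finish. The user names, work amounts, and quantum size below are hypothetical.

from collections import deque

QUANTUM = 3  # units of work each session gets per turn

# Hypothetical sessions: (user, units of work still to do)
sessions = deque([("alice", 7), ("bob", 4), ("carol", 9)])

while sessions:
    user, remaining = sessions.popleft()
    done = min(QUANTUM, remaining)  # run for at most one time slice
    remaining -= done
    print(f"{user}: ran {done} units, {remaining} left")
    if remaining > 0:
        sessions.append((user, remaining))  # back of the queue for another turn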

The Unix time-sharing system (developed at Bell Labs between 1969 and 1971) was rewritten in C in 1973. By the mid-1980s, Unix had set the standard for time-sharing systems.

Concurrent Issues

Operating systems like 1975's Solo, a single-user system for developing Pascal programs, established that an operating system could be written in a secure programming language, with no features dictated by the hardware.

The Personal Touch

Hardware costs came down in the 1970s, as microprocessors and semiconductor memory technology made it easier to construct the first personal computers.

Xerox PARC developed the Alto, which had a bitmap display, an Ethernet interface and a mouse, and shipped with 64K of memory and a 2.5 MB removable disk pack. Its single-user operating system was developed between 1973 and 1976, but could only execute one process at a time.

Kudos, or…?

The Microsoft Disk Operating System (MS-DOS), developed by Tim Paterson, also began as a single-user, single-task system. It's known to some as QDOS (the Quick and Dirty Operating System), but in fairness to it, MS-DOS made the File Allocation Table (FAT) file system a standard and led to the development of a wide range of software applications.

Launching applications, managing files, and managing allocated memory were MS-DOS's core functions. It was optimized for Intel's 8088 and 8086 processors of the early 1980s, and did not natively support third-party graphics displays or printers.

Memory was also a problem, as early PCs reserved only 640 KB of memory for DOS and DOS applications.

Opened, with Windows

Microsoft’s Windows started life as a Graphical User Interface (GUI) – an extension to MS-DOS, rather than an OS in its own right.

Windows 2.0 had a limited multi-tasking capability, a sort of co-operative switching system in which an application would continue to run until it relinquished control back to Windows, so other applications could take a turn. The Windows 3.x range hosted a set of APIs that allowed limited support for 32-bit applications. But the links to MS-DOS still remained.
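
To see what co-operative switching means in practice, here's a minimal Python sketch in which each "application" is a generator that keeps running until it voluntarily yields control back to a simple dispatcher. If one application never yields, nothing else gets a turn, which is exactly the weakness of the scheme. The applications shown are hypothetical.

def app(name, steps):
    """A toy application that hands control back after every step."""
    for i in range(steps):
        print(f"{name}: step {i + 1}")
        yield  # the application chooses when to give control back

def run_cooperatively(apps):
    """Keep offering the processor to each application until all have finished."""
    apps = list(apps)
    while apps:
        for a in apps[:]:
            try:
                next(a)          # let the application run until its next yield
            except StopIteration:
                apps.remove(a)   # this application has finished

run_cooperatively([app("WRITE", 2), app("PAINT", 3)])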

Apple’s, from the Same Tree?

More like a similar, but separate, evolutionary strand. Apple's Macintosh operating system was graphically based, like Windows, and pioneered pop-up menus, menu bars and mouse-initiated file operations.

Introduced in 1984 with a flat file system, the Mac OS graduated to multi-tasking in 1986. Its main barrier to widespread adoption was that it was (and still is, to a large extent) hardware-specific, and it shipped with only a few applications.

OS/2, Too

IBM's OS/2 had multi-tasking and multi-threading built in from the outset. It also provided crash protection and a standardized set of interfaces for adding operating system extensions, such as device and networking support.

Fitting New Windows

1995's Windows 95 saw Microsoft begin breaking its OS's links with MS-DOS. Elements of the now-familiar Windows experience, like the Start Menu and Plug and Play hardware support, appeared.

Windows NT (first released in 1993) and the subsequent Windows XP broke the MS-DOS link even further, and introduced the NTFS file system. The OS has continued to evolve up to today's Windows 10 flavor.

Perhaps Microsoft’s greatest achievement was to make the operating system a consumer commodity, easy to install and configure, and not tied to the hardware you buy.

The Open-Source Movement

1982 saw the University of Newcastle add a software layer to Unix, which made each computer both a file server and a user client.

Unix United was a five-user system, a forerunner of network file systems like Sun Microsystems' NFS. In both, users on their client machines could access files from servers as if they were local.

Unix was a relatively expensive OS, but a popular one in academic circles. In 1991, Linus Torvalds began developing Linux, a Unix derivative and the OS which gave rise to the still-strong open-source software movement. Productivity applications and operating systems coded and tweaked by independent developers (often working in collaboration) are now available from numerous sources, free of charge.

Distributed Systems

Advances in network technology in the 1980s gave rise to distributed operating systems. These use resources from several disparate computers, as if they were acting as one. Parts of a program can even be run simultaneously, on different machines.

The Berkeley Open Infrastructure for Network Computing (BOINC) is an example of this. Its "virtual super-computer", put together from spare resources contributed by an informal global network of computers, has powered projects including the Search for Extra-Terrestrial Intelligence (SETI).
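
As a purely local stand-in for that idea, the sketch below splits a large task into independent work units, farms them out to separate workers (processes here, where a real distributed system would use remote machines), and combines the partial results. The "analysis" being done is hypothetical.

from concurrent.futures import ProcessPoolExecutor

def analyse(chunk):
    """Stands in for the expensive analysis a volunteer machine might run."""
    return sum(x * x for x in chunk)

def main():
    data = list(range(1_000_000))
    # Split the task into independent work units.
    chunks = [data[i:i + 250_000] for i in range(0, len(data), 250_000)]
    with ProcessPoolExecutor() as pool:  # workers stand in for remote machines
        partials = list(pool.map(analyse, chunks))
    print("combined result:", sum(partials))

if __name__ == "__main__":
    main()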

Embedded, Now

The growth of mobile technology has made it necessary to embed operating systems in all manner of devices. Phones, tablets, and phablets are running on the likes of Android, iOS, Windows Mobile, Palm OS, Symbian and Linux.

Virtual, Too?

Virtual machines, created through software emulation, make it possible for an operating system to exist in a notional layer, separate from the hardware it's governing. This may ultimately lead to a scenario where a separate, local OS for each device is no longer necessary.

William Thompson is the Marketing Manager at Power Admin, a server monitoring software business in the Kansas City area. You can find him on Google+ and Twitter. William has been a professional in website design, digital marketing and 3D/graphic design for over 20 years.

