Explain different types of Operating Systems

What is an Operating System? Explain different types of Operating Systems. [PU Fall 2017, Spring 2014]

An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs.

System software is computer software designed to provide a platform for other software. Examples of system software include operating systems, computational science software, game engines, industrial automation software, and software-as-a-service applications.

In contrast to system software, software that allows users to do things like create text documents, play games, listen to music, or surf the web is called application software.

Types of OS: 

Sequential, Batch, Multiprogramming (multitasking), Multiprocessing (multiprocessor), Time Sharing, Real Time, Distributed, Embedded, and Kernel (as listed in the syllabus)

  1. Sequential OS: Think of sequential access as opposed to random access. In computer science, sequential access means that a group of elements (such as data in a memory array, in a disk file, or on magnetic tape storage) is accessed in a predetermined, ordered sequence. Sequential access is sometimes the only way of accessing the data, for example if it is on a tape. It may also be the access method of choice, for example if all that is wanted is to process a sequence of data elements in order.
  2. Batch OS:  The users of a batch operating system do not interact with the computer directly. Each user prepares his job on an off-line device like punch cards and submits it to the computer operator. To speed up processing, jobs with similar needs are batched together and run as a group. The programmers leave their programs with the operator and the operator then sorts the programs with similar requirements into batches.

    The problems with Batch Systems are as follows −

    1. Lack of interaction between the user and the job.
    2. CPU is often idle, because the speed of the mechanical I/O devices is slower than the CPU.
    3. Difficult to provide the desired priority.
  3. Multiprogramming OS: 

    To overcome this underutilization of the CPU and main memory, multiprogramming was introduced. Multiprogramming is the interleaved execution of multiple jobs by the same computer. In a multiprogramming system, when one program is waiting for an I/O transfer, another program is ready to utilize the CPU, so it is possible for several jobs to share the CPU's time. It is important to note that multiprogramming does not mean executing jobs at the same instant of time. Rather, it means that a number of jobs are available to the CPU (placed in main memory) and a portion of one is executed, then a segment of another, and so on.
    A program in execution is called a “process”, “job” or “task”. The concurrent execution of programs improves the utilization of system resources and enhances the system throughput compared to batch and serial processing. In this system, when a process requests an I/O operation, the CPU time is meanwhile assigned to another ready process. So when a process switches to an I/O operation, the CPU is not left idle.

  4.  Multiprocessor Operating System: 

    It refers to the use of two or more central processing units (CPUs) within a single computer system. These multiple CPUs are in close communication, sharing the computer bus, memory and other peripheral devices. Such systems are referred to as tightly coupled systems. They are used when very high speed is required to process a large volume of data, in environments like satellite control, weather forecasting, etc.
    Many multiprocessing systems are based on the symmetric multiprocessing (SMP) model, in which each processor runs an identical copy of the operating system and these copies communicate with one another as needed. Others use asymmetric multiprocessing, in which each processor is assigned a specific task and a master processor controls the system; this scheme defines a master-slave relationship. Multiprocessor systems can save money compared to multiple single-processor systems because the processors can share peripherals, power supplies and other devices. The main advantage of a multiprocessor system is getting more work done in a shorter period of time. Moreover, multiprocessor systems prove more reliable: if one processor fails, the system does not halt, it only slows down.

  5. Time Sharing Operating System: 

    Time-sharing is a technique which enables many people, located at various terminals, to use a particular computer system at the same time. Time-sharing (or multitasking) is a logical extension of multiprogramming. The processor's time, shared among multiple users simultaneously, is termed time-sharing.

    The main difference between Multiprogrammed Batch Systems and Time-Sharing Systems is that in case of Multiprogrammed batch systems, the objective is to maximize processor use, whereas in Time-Sharing Systems, the objective is to minimize response time.

    Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently that the user can receive an immediate response. For example, in transaction processing, the processor executes each user program in a short burst or quantum of computation. That is, if n users are present, each user gets a time quantum in turn. When the user submits a command, the response time is a few seconds at most.

    The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of time. Computer systems that were designed primarily as batch systems have been modified into time-sharing systems.

    Advantages of time-sharing operating systems are as follows −

    • Provides the advantage of quick response.
    • Avoids duplication of software.
    • Reduces CPU idle time.

    Disadvantages of Time-sharing operating systems are as follows −

    • Problem of reliability.
    • Question of security and integrity of user programs and data.
    • Problem of data communication.
  6. Real Time Operating System: 

    A real-time system is defined as a data processing system in which the time interval required to process and respond to inputs is so small that it controls the environment. The time taken by the system to respond to an input and display the required updated information is termed the response time. So in this method, the response time is much shorter than in ordinary online processing.

    Real-time systems are used when there are rigid time requirements on the operation of a processor or the flow of data, and they can be used as a control device in a dedicated application. A real-time operating system must have well-defined, fixed time constraints, otherwise the system will fail. Examples include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.

  7. Distributed OS: 

    Distributed systems use multiple central processors to serve multiple real-time applications and multiple users. Data processing jobs are distributed among the processors accordingly.

    The processors communicate with one another through various communication lines (such as high-speed buses or telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in a distributed system may vary in size and function, and are referred to as sites, nodes, computers, and so on.

    The advantages of distributed systems are as follows −

    • With resource sharing facility, a user at one site may be able to use the resources available at another.
    • Speeds up the exchange of data with one another via electronic mail.
    • If one site fails in a distributed system, the remaining sites can potentially continue operating.
    • Better service to the customers.
    • Reduction of the load on the host computer.
    • Reduction of delays in data processing.
  8. Embedded OS: An embedded operating system is an operating system for embedded computer systems. This type of operating system is typically designed to be resource-efficient and reliable. Resource efficiency comes at the cost of losing some functionality or granularity that larger computer operating systems provide, including functions which may not be used by the specialized applications they run. Depending on the method used for multitasking, this type of OS is frequently considered to be a real-time operating system.
  9. Kernel:  A kernel is the core component of an operating system. Using interprocess communication and system calls, it acts as a bridge between applications and the data processing performed at the hardware level.

    When an operating system is loaded into memory, the kernel loads first and remains in memory until the operating system is shut down. The kernel is responsible for low-level tasks such as disk management, task management, and memory management. (A tiny example of user code calling into the kernel through a system call follows right after this list.)
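To make that "bridge between applications and the hardware" bit less hand-wavy, here is a minimal sketch (assuming a POSIX-ish system; the message text and file name are just made up for illustration) of user code asking the kernel to do something on its behalf. write() is the thin library wrapper around the kernel's write system call:

#include <string.h>   // strlen()
#include <unistd.h>   // write(), STDOUT_FILENO -- POSIX

int main(void)
{
    const char *msg = "hello from user space\n";

    // write() traps into the kernel; it is the kernel's code (driver and all)
    // that actually pushes these bytes out to the terminal on our behalf.
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}

Every printf, file read, or network packet you ever touch eventually bottoms out in a handful of system calls like this one.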

What the fuck is a process?

Well, what the fuck is a program and how the fuck is it related to a process?

Before we go into what the fuck a process is, let’s try to understand what the fuck a program is first. 

A set of instructions. 

Well now, that wasn’t very clear, was it? Let’s try to kinda redefine it.. I say “A program is a set of instructions that a computer can follow to achieve a goal.” But it’s still a little jargony, isn’t it? Think of it as a recipe, a handbook, or a manual.. a set of instructions… now it clicks, doesn’t it? It’s basically a programmer’s written note to the computer that says.. do this.. do this.. and do that after that… and finally do that and that’s how you get the citizenship of Saudi Arabia.
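To make that concrete, here is about the smallest "set of instructions" you can hand a computer. It's a throwaway C example I'm making up on the spot, not anything from the syllabus:

#include <stdio.h>

// A complete (if useless) program: just a recipe the computer follows,
// one instruction after another, top to bottom.
int main(void)
{
    printf("Step 1: preheat the oven.\n");
    printf("Step 2: ???\n");
    printf("Step 3: profit.\n");
    return 0;
}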

But here’s the catch. A program is merely a set of instructions. You have to run it to reach the goal. And that’s where a process comes in, but more on that later. Think of a program as a book named “How to get inside a girl’s pants in 10 steps” and a process as the action you take, based on the book, to get inside her pants.

A process is a program in execution.

For example, when you write a program in Java and compile it, the compiler spits out a bytecode thingy (a .class file). The original source code and the bytecode thingy are both programs. When you actually run that bytecode on the JVM, it becomes a process.

A process is an ‘active’ entity as opposed to a program, which is considered a ‘passive’ entity. A single program can create many processes when it is run multiple times: for example, when we open a .exe or binary file multiple times, many instances begin (many processes are created).
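You can watch this happen with a tiny POSIX sketch (the sleep is only there so each instance sticks around long enough for you to start another one; everything else is just illustration):

#include <stdio.h>
#include <unistd.h>   // getpid(), sleep() -- POSIX

int main(void)
{
    // One program on disk, but every running instance is its own process,
    // so the kernel hands each instance a different process ID.
    printf("I am process %d\n", (int)getpid());
    sleep(30);        // stick around so you can launch a second instance
    return 0;
}

Compile it once, run it in two terminals, and you get two different PIDs: one program, two processes.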

Process models (Uniprogramming, Multiprogramming, Multiprocessing)

Uniprogramming => Only one process at a time.

Multiprogramming => Multiple processes at a time, i.e. a computer running more than one program at a time (like running MS Word and Google Chrome simultaneously to be able to copy Wikipedia for the group project report).

Multiprocessing => A system with multiple processors, i.e. a computer using more than one CPU at a time.

 Multitasking – Tasks sharing a common resource (like a single CPU).

As the name itself suggests, multitasking refers to the execution of multiple tasks (say processes, programs, threads etc.) at a time. But.. But.. hold on a minute…

Isn’t it the same as multiprogramming? How does it differ from multiprogramming?

Multitasking is a logical extension of multiprogramming. The major difference is this: a multiprogramming system context-switches mainly when the running job has to wait (say, for I/O), whereas multitasking is based on time-sharing, where the CPU's time is chopped into small slices and handed out to the tasks in turn whether they are waiting or not. So, basically, a multitasking system divides the CPU's time to be able to do what it does.
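Here's a rough, made-up sketch of that "dividing the CPU's time" idea: a toy round-robin scheduler handing out fixed time slices to three pretend tasks. The task names, burst times and quantum are all invented for illustration, and a real scheduler is far hairier than this:

#include <stdio.h>

int main(void)
{
    // Each "task" just needs some amount of CPU time (arbitrary units).
    const char *name[]  = { "MS Word", "Google Chrome", "Spotify" };
    int remaining[]     = { 5, 3, 4 };
    const int ntasks    = 3;
    const int quantum   = 2;          // the time slice each task gets per turn
    int done = 0, clock = 0;

    while (done < ntasks) {
        for (int i = 0; i < ntasks; i++) {
            if (remaining[i] <= 0)
                continue;             // this task has already finished
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            remaining[i] -= slice;
            clock += slice;
            printf("t=%2d  ran %-13s for %d unit(s)\n", clock, name[i], slice);
            if (remaining[i] == 0)
                done++;
        }
    }
    return 0;
}

Because the slices are tiny and come around again quickly, every task appears to be running "at the same time", which is exactly the illusion a time-sharing system sells you.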

K.. I’m bored now. In the next article, we’ll probably look at the states of a process and the process control block, which is basically how a process looks inside memory. But for now… hasta la vista, amigos.

 

What the fuck is Peterson’s algorithm? 

As we all know, when two or more processes need to share a single resource, conflict occurs. Remember fighting with your mom for that TV remote back in the days when you watched TV? You still do? ..hah.. what a loser. So, moving on, to solve this conflict we kinda need to use a thing called mutual exclusion, which basically means TV serials are your mom’s time and you are not allowed near that TV when she’s watching.

Peterson’s algorithm is a mutual exclusion algorithm that allows two or more processes to share a single-use resource without conflict. 

The algorithm is named after Gary L. Peterson, who published it in 1981. His original formulation only covered two processes (i.e. you and your mom, because I needed an opportunity to say “your mom”), but it can be generalized for more than two processes. So the algorithm kinda works for two or more processes, which means your dad and your mom’s boyfriend can join in too if they wish to.

Note: Dekker’s algorithm also tries to solve the issue between you and your mom, but Peterson’s solution is simpler, because he knows that you’re a dumbfuck.

So, how does this algorithm thingy work? 

Here’s a little rote memorization trick for you because you’re too dumb to understand anything. The algorithm uses two shared variables: a boolean array flag[2] (both entries initially false) and an integer turn. Setting flag[0] = true indicates that P0 wants to enter the critical section. Entrance to the critical section is granted to P0 if P1 does not want to enter its own critical section, or if P1 has given priority to P0 by setting turn to 0.

P0:      flag[0] = true;
P0_gate: turn = 1;
         while (flag[1] == true && turn == 1)
         {
             // busy wait
         }
         // critical section
         ...
         // end of critical section
         flag[0] = false;
P1:      flag[1] = true;
P1_gate: turn = 0;
         while (flag[0] == true && turn == 0)
         {
             // busy wait
         }
         // critical section
         ...
         // end of critical section
         flag[1] = false;

P1 and P0 can never be in the critical section at the same time. If P0 is in its critical section, then flag[0] is true. In addition, either flag[1] is false (meaning P1 has left its critical section), or turn is 0 (meaning P1 is just now trying to enter the critical section but is waiting), or P1 is at label P1_gate (trying to enter its critical section after setting flag[1] to true, but before setting turn to 0 and busy waiting). So if both processes were in their critical sections at once, the state would have to satisfy flag[0] = true, flag[1] = true, turn = 0 and turn = 1 all at the same time. No state can satisfy both turn = 0 and turn = 1, so there can be no state where both processes are in their critical sections.
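If you want to actually see it not explode, here is a minimal sketch of the same two-process algorithm using POSIX threads and C11 atomics. The worker/lock/unlock names, the shared counter and the iteration count are just my made-up test harness, not part of Peterson's original presentation; the seq_cst atomics are there because on modern hardware the textbook's plain loads and stores get reordered and the proof above quietly stops applying:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

// Shared state for Peterson's algorithm (two threads standing in for P0 and P1).
atomic_bool flag[2];
atomic_int  turn;
long counter = 0;                       // the shared resource being protected

static void lock(int self)
{
    int other = 1 - self;
    atomic_store(&flag[self], true);    // "I want in"
    atomic_store(&turn, other);         // "...but you go first if you also want in"
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                               // busy wait
}

static void unlock(int self)
{
    atomic_store(&flag[self], false);   // leave the critical section
}

static void *worker(void *arg)
{
    int self = *(int *)arg;
    for (int i = 0; i < 100000; i++) {
        lock(self);
        counter++;                      // critical section
        unlock(self);
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}

Compile it with something like gcc -pthread peterson.c; the final count should come out to exactly 200000, and if you rip out the lock()/unlock() calls you'll watch it come up short, because the two threads trample each other in the critical section.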