Multi-Tasking Theory

  Author: Chris Miceli
  Dated: 2018-12-06
  Uploaded: 2019-01-09

Abstract

This report outlines prevalent concepts and associated issues regarding concurrency and parallelism at various levels of computer design, including operating systems, databases and programming languages. Processor scheduling and memory management techniques are examined, taking into account the criteria required for deadlock to occur and the methods available for recovery. Characteristics of parallel programming are discussed in relation to user experience, and an evaluation is provided of the support for concurrency in modern programming languages.

 

Introduction

The concepts and advancements behind concurrency have allowed programmers to accomplish tasks more effectively and in less time. They have also introduced both theoretical and practical issues, requiring advanced techniques to be adapted and implemented at various levels in order to achieve a stable computing environment. These range from low-level operating system algorithms to the refactoring of program libraries to gain functionality and improve effectiveness.

 

Concurrency

“Concurrency is the interleaving of processes in time to give the appearance of simultaneous execution” (University of Western Australia, 2018). When provided with sufficient resources, a processor is able to imitate parallel processing, despite only executing a command from one program at any given time (Algonquin College, 2003). Concurrent processes are executed on threads, leaving it up to the operating system to implement processor scheduling and memory management techniques to ensure synchronisation between them.

 

Processes & Threads

Processes and threads are the two base units of execution in concurrent programming (Oracle, 2017). A process exists within its own execution environment and maintains a private set of run-time resources within its address space (Shan He, 2013). A program may comprise multiple processes, each of which runs one or more threads. A thread is responsible for processing a section of code, and often works in conjunction with other simultaneously running threads to complete a larger task (Peter M. Chen, 2014). It is important for multithreaded applications to implement synchronisation techniques in order to maintain resource integrity.
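
To illustrate, the following minimal Go sketch divides a larger task between two simultaneously running workers (goroutines, which the Go runtime schedules onto operating-system threads); the data and worker count are illustrative.

  package main

  import (
      "fmt"
      "sync"
  )

  func main() {
      data := []int{3, 1, 4, 1, 5, 9, 2, 6}
      parts := 2
      results := make([]int, parts) // one private slot per worker: no lock needed

      var wg sync.WaitGroup
      chunk := len(data) / parts
      for i := 0; i < parts; i++ {
          wg.Add(1)
          go func(i int) { // each worker processes one section of the data
              defer wg.Done()
              for _, v := range data[i*chunk : (i+1)*chunk] {
                  results[i] += v
              }
          }(i)
      }
      wg.Wait() // synchronisation point: block until every worker has finished
      fmt.Println("total:", results[0]+results[1])
  }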

 

Processor Scheduling

A running process can be viewed as moving through five specific process states during its lifetime, as depicted in Image A below.

[Image Here]

The scheduler is a component of the operating system responsible for moving processes between these process states to facilitate the illusion of parallel processing (Jurgen Borg, 2014). Depending on the application, it may employ one of various techniques to determine the optimal order for tasks to be processed in, such as first come first serve, shortest job first, round robin and priority & multilevel queue scheduling. See Appendix A for an overview of these techniques.

 

Memory Management

A dynamic memory allocator is used to request and return memory blocks from a pool of allocatable memory known as the heap, often managed as fixed-size segments (Steven Saunders, 2002). This, coupled with a garbage collector, a tool used to efficiently manage the storage of memory blocks, effectively handles shared memory between running tasks. Techniques employed to facilitate shared memory management include segmentation, paging and contiguous memory partitioning; refer to Appendix B for information on these techniques. Many programming languages, such as Java and Ruby, handle low-level memory management automatically, while others, such as C, leave it to the programmer to implement (Stephen Gilmore, 2007).
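
As a brief illustration of these mechanics in a managed language, the Go sketch below allocates a large block on the heap and then asks the garbage collector to reclaim it once the last reference is dropped; the 64 MiB size is arbitrary.

  package main

  import (
      "fmt"
      "runtime"
  )

  func main() {
      var m runtime.MemStats

      data := make([]byte, 64<<20) // dynamic allocator reserves 64 MiB on the heap
      _ = data

      runtime.ReadMemStats(&m)
      fmt.Printf("heap in use after allocation: %d MiB\n", m.HeapAlloc>>20)

      data = nil   // drop the last reference: the block becomes garbage
      runtime.GC() // request a collection cycle

      runtime.ReadMemStats(&m)
      fmt.Printf("heap in use after collection: %d MiB\n", m.HeapAlloc>>20)
  }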

 

DBMS Consistency

When multiple concurrent users are writing to a database, it is important that the management system is able to overcome issues of consistency so that data is accurately maintained. This is achieved through concurrency control: the implementation of a transactional model whereby scenarios requiring concurrent resource access are systematically addressed, preserving their integrity (Fandom, 2018). Many models exist to address this requirement; however, the SSER (Strictly Serialisable) model is considered a baseline for comparison (Marc Shapiro; Pierre Sutra, 2018). In heavy multitasking systems such as DBMSs, resources need to be shared between a set of transactions (processes), allowing the possibility for deadlock to occur.

Deadlock

The concept of deadlock involves a situation where one or more transactions in a set cannot progress further due to a dependence on a resource which is locked (claimed) by another transaction (Sriman Boora, 2015). For two or more processes to be involved in deadlock, four conditions must hold simultaneously:

  1. Mutual exclusion: A resource is defined as being non-sharable.
  2. Hold and wait: A process holding one or many resources requests more resources.
  3. No pre-emption: Processes hold a lock on resources until they have finished working.
  4. Circular wait: Each process waits to acquire a resource which is held by another process.

(MengChu Zhou; Maria Pia Fanti, 2005)
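
These four conditions can be reproduced in a few lines. In the Go sketch below, two goroutines each hold one mutex (hold and wait) and then block on the other's (circular wait); mutexes are non-sharable and are not preempted, so neither can progress, and the Go runtime aborts with a fatal "all goroutines are asleep - deadlock!" error.

  package main

  import (
      "sync"
      "time"
  )

  func main() {
      var a, b sync.Mutex
      var wg sync.WaitGroup
      wg.Add(2)

      go func() { // transaction 1: claims a, then requests b
          defer wg.Done()
          a.Lock()
          time.Sleep(100 * time.Millisecond) // force the unlucky interleaving
          b.Lock()
          b.Unlock()
          a.Unlock()
      }()

      go func() { // transaction 2: claims b, then requests a
          defer wg.Done()
          b.Lock()
          time.Sleep(100 * time.Millisecond)
          a.Lock()
          a.Unlock()
          b.Unlock()
      }()

      wg.Wait() // never returns: every goroutine is blocked
  }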

The resource allocation graphs below depict two instances of deadlock.

[Image Here]

Not all situations where processes are competing for resources can be classified as deadlock, as illustrated in Image C.

[Image Here]

(Jurgen Borg, 2014) 

 

Prevention & Recovery

Deadlock prevention aims to eliminate the possibility of at least one of the required conditions occurring, usually through the implementation of constraints in the system's design (Johns Hopkins University, 2016). The number of restrictions typically required makes prevention inefficient for concurrent workloads.

An alternative is deadlock avoidance, a technique whereby the operating system aims to maintain a safe state, as opposed to preventing unsafe states through constraint. This is achieved by utilising a data structure containing all resources to predict whether an allocation will result in deadlock (Colorado State University, 2003). Regardless, deadlock can still occur due to unforeseen circumstances, so many concurrent systems implement detection and recovery.
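
A well-known avoidance scheme of this kind is the banker's algorithm. The Go sketch below implements its safe-state test; the matrices are illustrative textbook values, not drawn from this report's sources.

  package main

  import "fmt"

  // isSafe reports whether, given free resources avail, each process's current
  // allocation and remaining need, some ordering lets every process finish.
  func isSafe(avail []int, alloc, need [][]int) bool {
      work := append([]int(nil), avail...)
      done := make([]bool, len(alloc))
      for finished := 0; finished < len(alloc); {
          progress := false
          for p := range alloc {
              if done[p] {
                  continue
              }
              ok := true
              for r := range work {
                  if need[p][r] > work[r] {
                      ok = false
                      break
                  }
              }
              if ok { // p can run to completion and release everything it holds
                  for r := range work {
                      work[r] += alloc[p][r]
                  }
                  done[p] = true
                  finished++
                  progress = true
              }
          }
          if !progress {
              return false // the remaining processes are stuck: unsafe state
          }
      }
      return true
  }

  func main() {
      avail := []int{3, 3, 2}
      alloc := [][]int{{0, 1, 0}, {2, 0, 0}, {3, 0, 2}, {2, 1, 1}, {0, 0, 2}}
      need := [][]int{{7, 4, 3}, {1, 2, 2}, {6, 0, 0}, {0, 1, 1}, {4, 3, 1}}
      fmt.Println("safe state:", isSafe(avail, alloc, need)) // true
  }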

The identification of deadlock is achieved through an algorithm which analyses the resource allocation data structure. Methods of recovery include process abortion, process rollback, successive termination and successive preemption of resources, applied to the transactions involved in deadlock in an attempt to recover (Abraham Silberschatz; Greg Gagne; Peter Baer Galvin, 2013).
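
One common form of such an algorithm reduces the resource allocation data to a wait-for graph and searches it for a cycle, as in the minimal Go sketch below (the transaction names are illustrative).

  package main

  import "fmt"

  // hasCycle runs a depth-first search over a wait-for graph in which an edge
  // T1 -> T2 means "T1 waits for a resource held by T2".
  func hasCycle(waitsFor map[string][]string) bool {
      const (
          unvisited = iota
          inStack
          finished
      )
      state := map[string]int{}
      var dfs func(n string) bool
      dfs = func(n string) bool {
          state[n] = inStack
          for _, m := range waitsFor[n] {
              if state[m] == inStack {
                  return true // back edge: a circular wait exists
              }
              if state[m] == unvisited && dfs(m) {
                  return true
              }
          }
          state[n] = finished
          return false
      }
      for n := range waitsFor {
          if state[n] == unvisited && dfs(n) {
              return true
          }
      }
      return false
  }

  func main() {
      g := map[string][]string{"T1": {"T2"}, "T2": {"T3"}, "T3": {"T1"}}
      fmt.Println("deadlock detected:", hasCycle(g)) // true: T1 -> T2 -> T3 -> T1
  }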

 

Parallel Programming

Parallel programming follows a modular design technique which makes use of many execution units, such as threads, to allow the simultaneous processing of the tasks comprising a program (Richard Gerber; Andrew Binstock, 2010). Executing tasks simultaneously using multi-threading is a natural fit because many events in the world do not occur serially but in parallel, from our interactions with a computer program to the simulation of weather patterns and planetary movements (Blaise Barney, 2018). See Appendix C for a comparison between serial and parallel environments. Modularisation allows a complex program to be constructed from simpler, often abstracted procedures, making it ideal for combining the many simple standalone functions of a word processor (Ian Foster, 1995). Multi-threaded applications often see performance benefits on single-processor computers, and are essential on multi-processor machines if one wants to make efficient use of all available resources in order to save time and solve problems of greater complexity. Threads also allow for simple resource sharing within the same address space as well as powerful synchronisation techniques, both of which benefit the operation of a multi-threaded word processor (University of Alberta, 2004).

On single-processor machines, multithreading can enhance the user experience in situations where some tasks take much longer to execute than others, and where user interaction requires a simultaneous update of the application's interface (Tamra Kerns, 1998). Such is the case in a word processing application, where the program spends comparatively long periods waiting for user input, and where the interface must be updated concurrently once input is detected.
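
This pattern can be sketched in Go: a worker goroutine performs a slow task (a mock spell check, purely illustrative) while the main loop remains free to handle simulated user input.

  package main

  import (
      "fmt"
      "time"
  )

  func spellCheck(doc string, done chan<- string) {
      time.Sleep(300 * time.Millisecond) // stand-in for a long-running task
      done <- "spell check of " + doc + " complete"
  }

  func main() {
      done := make(chan string)
      go spellCheck("report.txt", done)

      keystrokes := time.Tick(100 * time.Millisecond) // simulated user input
      for {
          select {
          case <-keystrokes:
              fmt.Println("keystroke handled; interface updated") // UI stays responsive
          case msg := <-done:
              fmt.Println(msg)
              return
          }
      }
  }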

 

Support for Language Concurrency

A significant amount of research has been carried out into effective designs for concurrent programming languages, as well as methods of porting both object-oriented (OO) and non-OO programs (Michael Papathomas, 1995). Some early methods include language extension and library recreation, which have since been replaced by more modern approaches involving advanced algorithmic analysis and reconstruction (Simon Kågström, 2008). The majority of modern programming languages make a great effort to integrate concurrency concepts beyond basic threading capabilities (Benjamin Erb, 2012). A notable language in this regard is JavaScript; once thought of as purely a web-based language, it has since evolved to incorporate a variety of fundamental concurrency features (Alex Handy, 2017).

 

[Image Here]

When we analyse Karl Rupp's collection of microprocessor trend data, plotted in Image D above, we can see that clock frequency (MHz) has stagnated in recent years while, over a similar timeframe, the number of logical cores has multiplied. As a result, many programming languages have either had to adjust their internal mechanics over time to accommodate and take advantage of these changes, or be replaced by alternatives which do (Alex Crichton, 2017). This has led to a wide variety of concurrency-oriented languages being developed, each addressing the needs of a growing potential market requiring enhanced support for the future of computing (Chris Dannen, 2014).

Research and development into concurrent programming languages has managed to take what were once undetectable runtime errors and, through the implementation of advanced techniques, identify them to the programmer at compile time, before distribution. This was previously a challenge because concurrency errors are logical rather than syntactical; however, modern languages such as Go and Rust have introduced a great deal of support for concurrency features, allowing the programmer to both write and refactor code that is free of subtle bugs (Rust, 2018). As a result, Rust in particular is beginning to replace a significant amount of C and C++ development in the industry, and is becoming popular among web developers looking for a more convenient language which also offers performance improvements (Mitch Pronschinske, 2016).
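
Go's channels illustrate the style such languages encourage: rather than having many threads mutate shared state, the classic source of these subtle bugs, ownership of data is passed between goroutines. The sketch below is race-free, which can be verified with Go's built-in race detector (go run -race).

  package main

  import "fmt"

  func main() {
      results := make(chan int)
      for i := 1; i <= 3; i++ {
          go func(n int) {
              results <- n * n // communicate the value instead of sharing memory
          }(i)
      }
      total := 0
      for i := 0; i < 3; i++ {
          total += <-results // only the receiving goroutine ever touches total
      }
      fmt.Println("sum of squares:", total) // 14
  }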

 

Conclusion

Since the speed of processing units has come to a comparative standstill in recent years, programmers have been required to improve responsiveness and functionality by taking advantage of the increasing number of logical cores. This has contributed to a growing requirement for programmers to thoroughly understand and implement concurrency concepts, further expanding both the amount of information available and the effectiveness of the methods used to teach these abstract concepts.

 

References

 


Appendices

 

Appendix A: Processor Scheduling Techniques

Using a First Come First Serve (FCFS) scheduling approach, processes are moved to the CPU in the order in which they arrive in the ready state. This is a non-preemptive approach, meaning that once a process is given access to the CPU, it retains it until it terminates or enters a waiting state. This technique suffers from a long average turnaround time.

 

Scenario 1 

[Image Here]

Avg Turnaround: (20 + 25 + 35) / 3 = 26.7 ms

 

Scenario 2 

[Image Here]

Avg Turnaround: (10 + 15 + 35) / 3 = 20 ms 
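
The calculation behind these figures can be sketched in Go; the burst times are inferred from the turnaround sums above, assuming every process arrives at time zero.

  package main

  import "fmt"

  // fcfsAvgTurnaround runs bursts in arrival order; each process's turnaround
  // is the total service time of everything dispatched up to and including it.
  func fcfsAvgTurnaround(bursts []int) float64 {
      elapsed, total := 0, 0
      for _, b := range bursts {
          elapsed += b
          total += elapsed
      }
      return float64(total) / float64(len(bursts))
  }

  func main() {
      fmt.Printf("%.1f ms\n", fcfsAvgTurnaround([]int{20, 5, 10})) // Scenario 1: 26.7
      fmt.Printf("%.1f ms\n", fcfsAvgTurnaround([]int{10, 5, 20})) // Scenario 2: 20.0
  }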

 

The Shortest Job First (SJF) algorithm analyses all the processes in the ready state and dispatches the one with the smallest service (burst) time.

 

[Image Here]

Avg Turnaround: (5 + 15 + 35) / 3 = 18.3 ms

 

The difficulty with the SJF algorithm lies in knowing the length of a process's service time before it actually executes. One approach is to estimate the service time by considering various probability factors and the process's past behaviour. Getting the prediction wrong may quickly degrade the algorithm's performance. Moreover, trying to guess a process's service time contributes substantially to the scheduling algorithm's overhead, making it unsuitable for short-term scheduling.

This algorithm is often implemented as a non-preemptive algorithm; however, a preemptive version (known as shortest remaining time first) interrupts the currently running process whenever a process enters the ready state with a service time shorter than the running process's remaining time.
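
Under the same zero-arrival-time assumption, non-preemptive SJF can be sketched as FCFS applied to the bursts sorted in ascending order, reproducing the 18.3 ms average above.

  package main

  import (
      "fmt"
      "sort"
  )

  func main() {
      bursts := []int{20, 5, 10}
      sort.Ints(bursts) // dispatch the smallest service time first

      elapsed, total := 0, 0
      for _, b := range bursts {
          elapsed += b // turnarounds become 5, 15 and 35 ms
          total += elapsed
      }
      fmt.Printf("avg turnaround: %.1f ms\n", float64(total)/3) // 18.3
  }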

 

Round Robin is a preemptive, starvation-free scheduling method which distributes processing time equitably among all ready processes through the definition of a time slice (or time quantum): an amount of time each process is granted before being preempted and returned to the ready state, allowing another process its turn. The preempted process will eventually be given another time slice on the CPU, and the procedure repeats until the process has received all the time it needs and terminates.

 

Case 1 

[Image Here]

Time Quantum: 10ms

Context Switch Time: 2ms

Turnaround Time: 35ms

 

Case 2 

[Image Here]

Time Quantum: 5ms

Context Switch Time: 2ms

Turnaround Time: 47ms

 

If the time quantum is large relative to the average process service time, round robin scheduling approximates first come first serve scheduling. If, on the other hand, the quantum is extremely small, the approach is called processor sharing and in theory creates the appearance that each of the n processes has its own processor running at 1/n the speed of the real processor. In reality, the time taken by the dispatcher to perform the context switch must also be taken into account, especially if it is comparable to the time quantum: frequent calls to the dispatcher increase the overhead on the system, lengthening the average turnaround time and degrading the scheduler's performance.
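
The interplay between the quantum and context-switch overhead can be sketched as follows; the burst times, quantum and switch cost here are illustrative rather than those of the cases above.

  package main

  import "fmt"

  // rrTotalTime runs the bursts in time slices of quantum ms, charging
  // switchCost ms of dispatcher overhead on every change of running process.
  func rrTotalTime(bursts []int, quantum, switchCost int) int {
      remaining := append([]int(nil), bursts...)
      clock, left := 0, len(remaining)
      for left > 0 {
          for i := range remaining {
              if remaining[i] == 0 {
                  continue
              }
              slice := quantum
              if remaining[i] < slice {
                  slice = remaining[i]
              }
              clock += slice
              remaining[i] -= slice
              if remaining[i] == 0 {
                  left--
              }
              if left > 0 {
                  clock += switchCost // dispatcher overhead on each switch
              }
          }
      }
      return clock
  }

  func main() {
      fmt.Println(rrTotalTime([]int{15, 10}, 10, 2), "ms") // larger quantum: 29 ms
      fmt.Println(rrTotalTime([]int{15, 10}, 5, 2), "ms")  // smaller quantum: 33 ms
  }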

 

A Priority & Multilevel Queue Scheduling algorithm associates an execution priority with each process, allocating CPU time to the process with the highest priority. These algorithms are often preemptive, whereby a higher-priority process will interrupt a currently running, lower-priority process.

A common issue with priority scheduling is indefinite blocking, or starvation. In a heavily loaded computer system, a steady stream of high-priority processes can leave some low-priority processes waiting indefinitely. A solution to this problem is ageing, a technique whereby the priority of a process is increased after it has been waiting in the system's ready queue for some duration of time.
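
Ageing can be sketched in a few lines of Go: each scheduling tick boosts the effective priority of every waiting process, so even the lowest-priority process is eventually dispatched. All values here are illustrative.

  package main

  import "fmt"

  type process struct {
      name     string
      priority int // higher value = higher priority
      waited   int // ticks spent in the ready queue
  }

  // pick selects the process with the highest effective priority, which grows
  // the longer a process has been waiting.
  func pick(ready []process) int {
      best := 0
      for i, p := range ready {
          if p.priority+p.waited > ready[best].priority+ready[best].waited {
              best = i
          }
      }
      return best
  }

  func main() {
      ready := []process{{"low", 1, 0}, {"high", 10, 0}}
      for tick := 0; tick < 12; tick++ {
          i := pick(ready)
          fmt.Printf("tick %2d: dispatch %s\n", tick, ready[i].name)
          for j := range ready {
              if j != i {
                  ready[j].waited++ // ageing: waiting processes gain priority
              }
          }
          ready[i].waited = 0
      }
  }

Around the ninth tick, the low-priority process is finally dispatched despite the high-priority process still being ready, rather than starving indefinitely.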

Multilevel queue scheduling is an evolution of priority scheduling, requiring that the ready queue be partitioned into several priority-unique queues, each with its own scheduling algorithm. This determines how processes of equal priority are handled. The diagram below depicts this situation.

[Image Here]

(University of Illinois at Chicago, 2006)

 

Appendix B: Memory Management Techniques

 

Contiguous Memory Partitioning assigns each process a fixed-size partition based on the program's size and data requirements. The memory map resulting from such a scheme is illustrated in the diagram below. The 'lowest' partition is usually reserved for the operating system.

[Image Here]

Partitioning requires that the operating system maintain two pieces of information about a process: the base memory address and the length of the partition (the limit). This information is retained in a partition table.
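
The translation a partition table implies can be sketched as follows; the base and limit values are illustrative.

  package main

  import (
      "errors"
      "fmt"
  )

  type partition struct {
      base, limit int // start address and length of the partition
  }

  // translate maps a logical address to a physical one, faulting on any
  // access beyond the partition's limit.
  func translate(p partition, logical int) (int, error) {
      if logical < 0 || logical >= p.limit {
          return 0, errors.New("address outside partition: protection fault")
      }
      return p.base + logical, nil
  }

  func main() {
      p := partition{base: 4000, limit: 1200}
      fmt.Println(translate(p, 100))  // 4100 <nil>
      fmt.Println(translate(p, 1500)) // 0 address outside partition: protection fault
  }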

 

Paging is a memory management scheme through which a program can be contained in non-contiguous memory areas. The concept is illustrated in the diagram below.

[Image Here]

With paging, a process is logically divided into pages; likewise, main memory is divided into frames. A page from the process is stored in any available frame in main memory. The operating system maintains a local page table for each process, which maps its pages to corresponding frames in main memory. 
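
The corresponding lookup can be sketched as follows, assuming an illustrative 1 KiB page size and a hypothetical page table.

  package main

  import "fmt"

  const pageSize = 1024

  // translate splits a logical address into a page number and an offset, then
  // swaps the page number for the frame recorded in the page table.
  func translate(pageTable []int, logical int) int {
      page := logical / pageSize
      offset := logical % pageSize
      frame := pageTable[page]
      return frame*pageSize + offset
  }

  func main() {
      pageTable := []int{5, 2, 7}             // page 0 -> frame 5, page 1 -> frame 2, ...
      fmt.Println(translate(pageTable, 1030)) // page 1, offset 6 -> 2*1024 + 6 = 2054
  }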

 

Segmentation is also non-contiguous, but it logically divides a program into segments of variable length, which are stored in memory wherever there is enough free space. A segment table is maintained for each process, recording each segment's number, base memory address, and limit (length). This is illustrated below.

[Image Here]

(Jurgen Borg, 2014)
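
Translation under segmentation differs from paging in that each segment carries its own limit, which is checked before the base is added; the table values below are illustrative.

  package main

  import (
      "errors"
      "fmt"
  )

  type segment struct {
      base, limit int
  }

  // translate resolves a (segment, offset) pair against the segment table,
  // faulting when the offset exceeds the segment's recorded length.
  func translate(table []segment, seg, offset int) (int, error) {
      if seg >= len(table) || offset < 0 || offset >= table[seg].limit {
          return 0, errors.New("segmentation fault")
      }
      return table[seg].base + offset, nil
  }

  func main() {
      table := []segment{{base: 1400, limit: 1000}, {base: 6300, limit: 400}}
      fmt.Println(translate(table, 1, 53))  // 6353 <nil>
      fmt.Println(translate(table, 1, 500)) // 0 segmentation fault
  }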

 

Appendix C: Serial vs Parallel Environments

 

Serial Payroll Program 

[Image Here]

Parallel Payroll Program

[Image Here]

(Blaise Barney, 2018)



