Multi-Tasking Theory
Author: Chris Miceli
Dated: 2018-12-06
Uploaded: 2019-01-09
Abstract
This report outlines prevalent concepts and associated issues regarding concurrency and parallelism at various levels of computer design, including operating systems, databases and programming languages. Processor scheduling and memory management techniques are described, along with the conditions required for deadlock to occur and the methods available for recovery. Characteristics of parallel programming are discussed in relation to user experience, and an evaluation is then provided of the support for concurrency in modern programming languages.
Introduction
The concepts and advancements behind concurrency have allowed programmers to accomplish tasks more effectively and in less time. They have also introduced both theoretical and practical issues, requiring the adaptation and implementation of advanced techniques at various levels to achieve a stable computing environment. These range from low-level operating system algorithms to the refactoring of program libraries to gain functionality and improve effectiveness.
Concurrency
“Concurrency is the interleaving of processes in time to give the appearance of simultaneous execution” (University of Western Australia, 2018). When provided with sufficient resources, a processor is able to imitate parallel processing, despite only executing a command from one program at any given time (Algonquin College, 2003). Concurrent processes are executed on threads, leaving it up to the operating system to implement processor scheduling and memory management techniques to ensure synchronisation between them.
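As a minimal illustration of this interleaving (in Rust, used for all code sketches in this report), two threads started by a program run concurrently, and the operating system's scheduler, not the program, decides the order in which their output appears:

```rust
use std::thread;

// Two threads whose prints may interleave in any order: the scheduler
// decides when each thread runs, giving the appearance of simultaneity.
fn main() {
    let worker = thread::spawn(|| {
        for i in 0..3 {
            println!("worker: step {i}");
        }
    });
    for i in 0..3 {
        println!("main:   step {i}");
    }
    worker.join().unwrap(); // wait for the worker before exiting
}
```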
Processes & Threads
Processes and threads exist as the two base units of execution in concurrent programming (Oracle, 2017). A process exists within its own execution environment and maintains a private set of run-time resources within its address space (Shan He, 2013). A program may contain multiple processes, each distributed between one or more threads. A thread is responsible for processing various sections of code, and is often used in conjunction with other simultaneously running threads in order to complete a larger task (Peter M. Chen, 2014). It is important for multiprocessing applications to implement synchronisation techniques in order to maintain resource integrity.
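A minimal sketch of one such synchronisation technique follows, assuming a mutex-guarded shared counter; without the lock, the two threads could interleave their read-modify-write steps and lose updates:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The counter is shared between threads; the mutex serialises access.
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..2).map(|_| {
        let counter = Arc::clone(&counter);
        thread::spawn(move || {
            for _ in 0..1000 {
                *counter.lock().unwrap() += 1; // one thread at a time
            }
        })
    }).collect();

    for h in handles { h.join().unwrap(); }
    assert_eq!(*counter.lock().unwrap(), 2000); // integrity maintained
}
```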
Processor Scheduling
A running process can be viewed as moving through five specific states (new, ready, running, waiting and terminated) during its lifetime, as depicted in Image A below.
[Image Here]
The scheduler is a component of the operating system responsible for moving processes between these process states to facilitate the illusion of parallel processing (Jurgen Borg, 2014). Depending on the application, it may employ one of various techniques to determine the optimal order for tasks to be processed in, such as first come first serve, shortest job first, round robin and priority & multilevel queue scheduling. See Appendix A for an overview of these techniques.
Memory Management
A dynamic memory allocator is used to request or return memory blocks from a pool of allocatable segmented memory blocks known as a heap, each of which is set at a fixed size (Steven Saunders, 2002). This, coupled with a garbage collector, a tool used to efficiently manage the storage of memory blocks, effectively handles shared memory between running tasks. Techniques employed to facilitate shared memory management include segmentation, paging and contiguous memory partitioning; refer to Appendix B for information on these techniques. Many programming languages, such as Java and Ruby, handle low-level memory management automatically; a few, such as C, leave it to the programmer to implement (Stephen Gilmore, 2007).
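As a small illustration of heap allocation (a hedged sketch; note that Rust sits between these extremes, reclaiming memory through compile-time ownership rules rather than a garbage collector):

```rust
fn main() {
    // Request a block from the heap via the global allocator.
    let block = Box::new([0u8; 1024]); // a 1 KiB buffer on the heap
    println!("first byte: {}", block[0]);
    // No explicit free is written: when `block` goes out of scope the
    // memory is returned to the allocator automatically, so the leaks
    // possible in manually managed languages such as C cannot occur here.
}
```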
DBMS Consistency
When multiple concurrent users are writing to a database, it is important that the management system is able to overcome issues of consistency so that data is accurately maintained. This is achieved through concurrency control: the implementation of a transactional model whereby scenarios requiring concurrent resource access are systematically addressed, preserving their integrity (Fandom, 2018). Many models exist to address this requirement; however, the SSER (Strictly Serialisable) model is considered the baseline for comparison (Marc Shapiro; Pierre Sutra, 2018). In heavily multitasking systems such as DBMSs, resources must be shared between a set of transactions (processes), creating the possibility for deadlock to occur.
Deadlock
The concept of deadlock describes a situation where one or more transactions in a set cannot progress further due to a dependence on a resource which is locked (claimed) by another transaction (Sriman Boora, 2015). For two or more processes to be involved in deadlock, four conditions must hold simultaneously (a code sketch illustrating them follows the list):
- Mutual exclusion: A resource is defined as being non-sharable.
- Hold and wait: A process holding one or many resources requests more resources.
- No pre-emption: Processes hold a lock on resources until they have finished working.
- Circular wait: Each process waits to acquire a resource which is held by another process.
(MengChu Zhou; Maria Pia Fanti, 2005)
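All four conditions are easy to satisfy by accident. In the minimal Rust sketch below (the sleeps are illustrative, merely making the unfortunate interleaving near-certain), two threads each hold one mutex while waiting for the other's, and the program never terminates:

```rust
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    let a = Arc::new(Mutex::new(0));
    let b = Arc::new(Mutex::new(0));

    let (a1, b1) = (Arc::clone(&a), Arc::clone(&b));
    let t1 = thread::spawn(move || {
        let _ga = a1.lock().unwrap();             // mutual exclusion: hold A
        thread::sleep(Duration::from_millis(50));
        let _gb = b1.lock().unwrap();             // hold and wait: wait for B
    });
    let t2 = thread::spawn(move || {
        let _gb = b.lock().unwrap();              // hold B (no pre-emption)
        thread::sleep(Duration::from_millis(50));
        let _ga = a.lock().unwrap();              // circular wait: wait for A
    });

    t1.join().unwrap();                           // never returns: deadlock
    t2.join().unwrap();
}
```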
The resource allocation graphs below depict two instances of deadlock.
Not all situations where processes are competing for resources can be classified as deadlock, as illustrated in Image C.
(Jurgen Borg, 2014)
Prevention & Recovery
Deadlock prevention aims to eliminate the possibility of at least one of the required conditions occurring, usually through the implementation of constraints in the system’s design (Johns Hopkins University, 2016). The number of restrictions typically in place results in prevention being inefficient for concurrency.
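One common constraint of this kind eliminates the circular-wait condition by imposing a global ordering on locks. The illustrative Rust sketch below (not a general-purpose mechanism) has both threads acquire locks in the same fixed order, so no cycle can ever form:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical sketch: give every lock a global rank and always acquire
// lower-ranked locks first, ruling out circular wait by construction.
fn main() {
    let a = Arc::new(Mutex::new(0)); // rank 1
    let b = Arc::new(Mutex::new(0)); // rank 2

    let handles: Vec<_> = (0..2).map(|i| {
        let (a, b) = (Arc::clone(&a), Arc::clone(&b));
        thread::spawn(move || {
            // Both threads honour the order a -> b, so no cycle can form.
            let mut ga = a.lock().unwrap();
            let mut gb = b.lock().unwrap();
            *ga += i;
            *gb += i;
        })
    }).collect();

    for h in handles { h.join().unwrap(); }
}
```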
An alternative is deadlock avoidance, a technique whereby the operating system aims to maintain a safe state, as opposed to preventing unsafe states through constraint. This is achieved by utilising a data structure containing all resources to predict whether an allocation will result in deadlock (Colorado State University, 2003). Regardless, there is still a possibility that deadlock can occur due to unforeseen circumstances, so many concurrent systems implement detection and recovery.
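The classic such data structure is the one maintained by the Banker's algorithm. Below is a minimal Rust sketch of its safety test, using a standard textbook example for the matrices (the values are illustrative); a real avoidance scheme would run this check before granting every request:

```rust
// Safety test at the heart of the Banker's algorithm: a state is safe if
// every process can finish in some order given the currently free resources.
fn is_safe(available: &[u32], allocation: &[Vec<u32>], need: &[Vec<u32>]) -> bool {
    let mut work = available.to_vec();
    let mut finished = vec![false; allocation.len()];

    loop {
        // Find an unfinished process whose remaining need fits in `work`.
        let candidate = (0..allocation.len()).find(|&p| {
            !finished[p] && need[p].iter().zip(&work).all(|(n, w)| n <= w)
        });
        match candidate {
            Some(p) => {
                // Simulate it finishing and releasing what it holds.
                for (w, a) in work.iter_mut().zip(&allocation[p]) {
                    *w += a;
                }
                finished[p] = true;
            }
            // Safe if and only if every process could finish.
            None => return finished.iter().all(|&f| f),
        }
    }
}

fn main() {
    let available = vec![3, 3, 2];
    let allocation = vec![
        vec![0, 1, 0], vec![2, 0, 0], vec![3, 0, 2], vec![2, 1, 1], vec![0, 0, 2],
    ];
    let need = vec![
        vec![7, 4, 3], vec![1, 2, 2], vec![6, 0, 2], vec![0, 1, 1], vec![4, 3, 1],
    ];
    println!("state is safe: {}", is_safe(&available, &allocation, &need));
}
```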
The identification of deadlock is achieved through the use of an algorithm which analyses the resource allocation data structure. Methods of recovery include process abortion, process rollback, successive termination and successive preemption of resources, which are applied to transactions involved in deadlock in an attempt to recover (Abraham Silberschatz; Greg Gagne; Peter Baer Galvin, 2013).
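For single-instance resources, this analysis reduces to finding a cycle in a wait-for graph, since such a cycle is exactly a deadlocked set of transactions. A small, hypothetical Rust sketch (graph encoding and edges are illustrative):

```rust
// Deadlock detection as cycle-finding in a wait-for graph, where an edge
// p -> q means "transaction p waits for a resource held by transaction q".
fn has_cycle(n: usize, edges: &[(usize, usize)]) -> bool {
    let mut adj = vec![Vec::new(); n];
    for &(p, q) in edges {
        adj[p].push(q);
    }

    // 0 = unvisited, 1 = on the current DFS path, 2 = fully explored.
    fn dfs(v: usize, state: &mut [u8], adj: &[Vec<usize>]) -> bool {
        state[v] = 1;
        for &w in &adj[v] {
            if state[w] == 1 || (state[w] == 0 && dfs(w, state, adj)) {
                return true; // a back edge closes a cycle: deadlock
            }
        }
        state[v] = 2;
        false
    }

    let mut state = vec![0u8; n];
    (0..n).any(|v| state[v] == 0 && dfs(v, &mut state, &adj))
}

fn main() {
    // T0 waits on T1, T1 waits on T2, T2 waits on T0: a deadlocked set.
    println!("deadlock detected: {}", has_cycle(3, &[(0, 1), (1, 2), (2, 0)]));
}
```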
Parallel Programming
Parallel programming follows a modular design technique which makes use of many execution units, such as threads, to allow the simultaneous processing of the tasks comprising a program (Richard Gerber; Andrew Binstock, 2010). It is helpful to execute tasks simultaneously using multi-threading, as many events in the world do not occur serially but in parallel; from our interactions with a computer program to the simulation of weather patterns and planetary movements (Blaise Barney, 2018). See Appendix C for a comparison between serial and parallel environments. Modularisation allows for the construction of a complex program through the combination of simpler, often abstracted procedures, making it ideal for combining the many simple standalone functions of a word processor (Ian Foster, 1995). Multi-threaded applications often see performance benefits on single-processor computers, and are essential on multi-processor machines if one wants to make efficient use of all available resources in order to save time and solve problems of greater complexity. Threads also allow for simple resource sharing within the same address space as well as powerful synchronisation techniques, both of which benefit the operation of a multi-threaded word processor (University of Alberta, 2004).
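As a small, hedged example of this decomposition (the four-way split is arbitrary), a large summation can be divided into chunks, with one thread per chunk and the partial results combined at the end:

```rust
use std::thread;

fn main() {
    let data: Vec<u64> = (1..=1_000_000).collect();
    let workers = 4;
    let chunk = data.len() / workers;

    // Scoped threads let each worker borrow its own slice of the data.
    let total: u64 = thread::scope(|s| {
        data.chunks(chunk)
            .map(|part| s.spawn(move || part.iter().sum::<u64>()))
            .collect::<Vec<_>>()          // start all workers first
            .into_iter()
            .map(|h| h.join().unwrap())   // then combine their results
            .sum()
    });

    assert_eq!(total, 1_000_000 * 1_000_001 / 2);
}
```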
On single-processor machines, multithreading can enhance the user experience in situations where some tasks take much longer to execute than others, or where user interaction requires a simultaneous update of the application’s interface (Tamra Kerns, 1998). Such is the case in a word processing application, where the program waits a comparatively long time for user input and, when input is detected, must update the interface concurrently.
Support for Language Concurrency
A significant amount of research has been carried out into effective designs for concurrent programming languages, as well as methods of porting both object-oriented (OO) and non-OO programs (Michael Papathomas, 1995). Some early methods include language extension and library recreation, which have since been replaced by more modern approaches involving advanced algorithmic analysis and recreation (Simon Kågström, 2008). The majority of modern programming languages make a great effort to integrate concurrency concepts beyond basic threading capabilities (Benjamin Erb, 2012). A remarkable language in this regard is JavaScript; once thought of as merely a web-based language, it has since evolved to incorporate a variety of fundamental concurrency features (Alex Handy, 2017).
When we analyse Karl Rupp’s collection of microprocessor trend data, plotted in Image D, we can see that clock frequency (MHz) has stagnated over recent years while, over a similar timeframe, the number of logical cores has multiplied. As a result, many programming languages have had either to adjust their internal mechanics over time to accommodate and take advantage of these changes, or to be replaced by alternatives which do (Alex Crichton, 2017). This has led to a wide variety of concurrency-oriented languages being developed, each addressing the needs of a growing market requiring enhanced support for the future of computing (Chris Dannen, 2014).
Research and development into concurrent programming languages has managed to take what were once undetectable runtime errors and, through the implementation of advanced techniques, surface them to the programmer at or before compile time, well ahead of distribution. This was previously a challenge because concurrency errors are logical rather than syntactical; however, modern languages such as Go and Rust have introduced a great deal of support for concurrency features, allowing the programmer to both write and refactor code that is free of subtle bugs (Rust, 2018). As a result, Rust in particular is beginning to replace a significant amount of C and C++ development in the industry, and is becoming popular among web developers looking for a more convenient language which also offers performance improvements (Mitch Pronschinske, 2016).
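Rust's standard channels are one minimal illustration of this support: ownership of a message moves between threads, so a whole class of shared-state mistakes is rejected by the compiler rather than surfacing at runtime (a hedged sketch, not drawn from the cited sources):

```rust
use std::sync::mpsc;
use std::thread;

// Message passing moves ownership of data between threads, so sharing
// bugs become compile-time errors instead of subtle runtime failures.
fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        let report = String::from("result: 42");
        tx.send(report).unwrap();
        // `report` has been moved into the channel; any further use of it
        // here would be rejected by the compiler, not fail at runtime.
    });

    println!("{}", rx.recv().unwrap());
}
```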
Conclusion
Since the speed of processing units has come to a comparative standstill in recent years, programmers have been required to improve responsiveness and functionality by taking advantage of the growing number of logical cores. This has contributed to the increasing need for programmers to thoroughly understand and implement concurrency concepts, in turn expanding both the information available on the subject and the effectiveness of the methods used to teach it.
References
- University of Western Australia (2018). ‘Lecture 9: Concurrency in Operating Systems’. Available at: http://teaching.csse.uwa.edu.au/units/CITS2230/handouts/Lecture09/lecture9.pdf Accessed on November 19th, 2018
- Chalmers University (2010). ‘Resource Allocation and Deadlock Handling’. Available at: http://www.cse.chalmers.se/edu/year/2010/course/EDA092/SLIDES/10-ra-dl-10.pdf Accessed on November 19th, 2018
- Fandom (2018). ‘Concurrency Control’. Available at: http://databasemanagement.wikia.com/wiki/Concurrency_Control Accessed on November 19th, 2018
- Oracle (2017). ‘Processes and Threads’. Available at: https://docs.oracle.com/javase/tutorial/essential/concurrency/procthread.html Accessed on November 19th, 2018
- Peter M. Chen (2014). ‘Threads’. Available at: https://web.eecs.umich.edu/~pmchen/eecs482/handouts/threads.pdf Accessed on November 20th, 2018
- Shan He (2013). ‘SSC - Concurrency and Multi-threading Basic concepts’. Available at: https://www.cs.bham.ac.uk/~szh/teaching/ssc/lecturenotes/Concurrency/Lecture1_BasicConcepts.pdf Accessed on November 20th, 2018
- Steven Saunders (2002). ‘Parallel Memory Allocation’. Available at: https://parasol.tamu.edu/~rwerger/Courses/689/spring2002/day-3-ParMemAlloc/3parallelMemoryAllocation.ppt Accessed on November 20th, 2018
- Stephen Gilmore (2007). ‘Advances in Programming Languages: Memory management’. Available at: http://homepages.inf.ed.ac.uk/stg/teaching/apl/handouts/memory.pdf Accessed on November 20th, 2018
- Jurgen Borg (2014) ‘Module 4: Operating Systems’ Presented at St Martins College, Malta on December 8th 2015, Unpublished
- Abraham Silberschatz; Greg Gagne; Peter Baer Galvin (2009). ‘Operating System Concepts, Eighth Edition’, Chapter 5. John Wiley & Sons. Available at: http://www.uobabylon.edu.iq/download/M.S%202013-2014/Operating_System_Concepts,_8th_Edition%5BA4%5D.pdf Accessed on November 20th, 2018
- University of Illinois at Chicago (2006). ‘CPU Scheduling’. Available at: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/5_CPU_Scheduling.html Accessed on November 20th, 2018
- Sriman Boora (2015). ‘DBMS | Introduction to TimeStamp and Deadlock Prevention Schemes’. Available at: https://www.geeksforgeeks.org/dbms-introduction-timestamp-deadlock-prevention-schemes/ Accessed on November 22nd, 2018
- Marc Shapiro; Pierre Sutra (2018). ‘Database consistency models’. Available at: https://pages.lip6.fr/Marc.Shapiro/papers/DBconsistency-Springer2018-authorversion.pdf Accessed on November 22nd, 2018
- Smitha Dinesh Semwal (2017). ‘Deadlock in DBMS’. Available at: https://www.geeksforgeeks.org/deadlock-in-dbms/ Accessed on November 23rd, 2018
- University of Washington (2013). ‘4 Conditions for Deadlock’. Available at: https://courses.cs.washington.edu/courses/cse451/98au/Lectures/9-deadlock/tsld003.htm Accessed on November 23rd, 2018
- MengChu Zhou; Maria Pia Fanti (2005). ‘Deadlock Resolution in Computer-Integrated Systems’ Marcel Dekker/CRC Press. Available at: https://books.google.co.uk/books?hl=en&lr=&id=Rs60QVayLiwC&oi=fnd&pg=PR11&dq=conditions+for+deadlock+computing&ots=g6r_8rSLdC&sig=tEY8t0zW3WpRUWzkx-lhAN-ftWA#v=onepage&q=mutual%20exclusion&f=false Accessed on November 23rd, 2018
- Johns Hopkins University (2016). ‘The Difference Between Deadlock Prevention and Deadlock Avoidance’. Available at: https://www.cs.jhu.edu/~yairamir/cs418/os4/tsld011.htm Accessed on November 23rd, 2018
- Colorado State University (2003). ‘Deadlock Avoidance’. Available at: http://www.cs.colostate.edu/~cs551/CourseNotes/Deadlock/DDAvoidance.html Accessed on November 23rd, 2018
- Abraham Silberschatz; Greg Gagne; Peter Baer Galvin (2013). ‘Operating System Concepts, Ninth Edition’, Chapter 7. John Wiley & Sons. Available at: http://iips.icci.edu.iq/images/exam/Abraham-Silberschatz-Operating-System-Concepts---9th2012.12.pdf Accessed on November 23rd, 2018
- Algonquin College (2003). ‘Operating Systems : Overview of Concurrent Processing’. Available at: http://elearning.algonquincollege.com/coursemat/pincka/dat2343/lectures.f03/23-OS-Concurrent-Processing.htm Accessed on November 23rd, 2018
- Omer F Rana (1997). ‘Deadlock Recovery’. Available at: http://users.cs.cf.ac.uk/O.F.Rana/os/lectureos12/node9.html Accessed on November 23rd, 2018
- Richard Gerber; Andrew Binstock (2010). ‘Fundamental Concepts of Parallel Programming’. Available at: http://www.drdobbs.com/parallel/fundamental-concepts-of-parallel-program/225200750 Accessed on November 23rd, 2018
- Tamra Kerns (1998). ‘The Advantages of Multithreaded Applications’. Available at: https://www.evaluationengineering.com/the-advantages-of-multithreaded-applications Accessed on November 23rd, 2018
- University of Alberta (2004). ‘AIX Version 4.3 General Programming Concepts: Writing and Debugging Programs’. Available at: https://sites.ualberta.ca/dept/chemeng/AIX-43/share/man/info/C/a_doc_lib/aixprggd/genprogc/benefits_of_threads.htm#D3A4479953manu Accessed on November 23rd, 2018
- Ian Foster (1995). ‘1.3 A Parallel Programming Model’. Available at: https://www.mcs.anl.gov/~itf/dbpp/text/node9.html Accessed on November 23rd, 2018
- Blaise Barney (2018). ‘Introduction to Parallel Computing’. Available at: https://computing.llnl.gov/tutorials/parallel_comp/ Accessed on November 23rd, 2018
- Michael Papathomas (1995). ‘Concurrency in Object-Oriented Programming Languages’. Available at: http://scg.unibe.ch/archive/oosc/PDF/Papa95aOBC.pdf Accessed on November 24th, 2018
- Simon Kågström (2008). ‘Tools, techniques, and trade-offs when porting large software systems to new environments’. Blekinge Institute of Technology. Available at: https://grahn.cse.bth.se/Kagstrom_diss.pdf Accessed on November 24th, 2018
- Alex Handy (2017). ‘Newly Released ECMAScript 8 Sets the Stage for Concurrent JavaScript Processing’. Available at: https://thenewstack.io/newly-released-ecmascript-8-sets-stage-concurrent-javascript-processing/ Accessed on November 24th, 2018
- Rust (2018). ‘The Rust Programming Language. Chapter 16 Fearless Concurrency’ Second Edition. Available at: https://doc.rust-lang.org/book/second-edition/ch16-00-concurrency.html Accessed on November 24th, 2018
- Benjamin Erb (2012). ‘Concurrent Programming for Scalable Web Architectures’. Available at: http://berb.github.io/diploma-thesis/community/092_progtrends.html Accessed on November 24th, 2018
- Mitch Pronschinske (2016). ‘5 emerging programming languages with a bright future’. Available at: https://techbeacon.com/5-emerging-programming-languages-bright-future Accessed on November 25th, 2018
- Alex Crichton (2017). ‘Rust Concurrency Explained’. [Code Drive 2017]. Available at: https://www.youtube.com/watch?v=Dbytx0ivH7Q Accessed on November 25th, 2018
- Karl Rupp (2018). ‘42 Years of Microprocessor Trend Data’. Available at: https://www.karlrupp.net/2018/02/42-years-of-microprocessor-trend-data/ Accessed on November 25th, 2018
- Chris Dannen (2014). ‘Why Does The World Need More Programming Languages?’. Available at: https://www.fastcompany.com/3031443/why-does-the-world-need-more-programming-languages Accessed on November 25th, 2018
Further Reading
- Rick Greenwald; Robert Stackowiak; Jonathan Stern (1999). ‘Multiuser Concurrency’. Available at: http://pages.cs.wisc.edu/~anhai/courses/764-sp07-anhai/oracle.locking.htm Accessed on November 19th, 2018
- Ben-Gurion University (2016). ‘Concurrency: Threads’. Available at: https://www.cs.bgu.ac.il/~spl181/wiki.files/Lecture7_Threads.pptx Accessed on November 19th, 2018
- Université Libre De Bruxelles (2008). ‘Course Notes on Databases and Database Management Systems’. Available at: https://cs.ulb.ac.be/public/_media/teaching/infoh303/dbmsnotes.pdf Accessed on November 19th, 2018
- Mark Handley (2004). ‘Deadlock Avoidance’. Available at: http://nrg.cs.ucl.ac.uk/mjh/3005/2004/10-deadlock-avoidance.pdf Accessed on November 19th, 2018
- Dr. Gul N. Khan (2015). ‘Multitasking and Real-time Scheduling’. Available at: https://www.ee.ryerson.ca/~courses/ee8205/lectures/Multi-Tasking.pdf Accessed on November 19th, 2018
- Roderick Bauer (2017). ‘What’s the Diff: Programs, Processes, and Threads’. Available at: https://www.backblaze.com/blog/whats-the-diff-programs-processes-and-threads/ Accessed on November 20th, 2018
- Sarah Diesburg (2016). ‘Concurrency: Threads, Address Spaces, and Processes’. Available at: http://www.cs.uni.edu/~diesburg/courses/cs3430_sp18/sessions/s02/s02_concurrency.pdf Accessed on November 20th, 2018
- Central Connecticut State University (2017). ‘Dynamic Memory Allocation’. Available at: http://chortle.ccsu.edu/assemblytutorial/chapter-33/ass33_3.html Accessed on November 20th, 2018
- Jepsen (2018). ‘Strict Serializability’. Available at: https://jepsen.io/consistency/models/strict-serializable Accessed on November 22nd, 2018
- Kwei-Jay Lin (1989). ‘Consistency Issues in Real-Time Database Systems’. Available at: https://www.computer.org/csdl/proceedings/hicss/1989/1912/02/00048069.pdf Accessed on November 22nd, 2018
- Mark Grechanik; B. M. Mainul Hossain; Ugo Buy (2013). ‘Testing Database-Centric Applications For Causes of Database Deadlocks’. Available at: https://www.semanticscholar.org/paper/Testing-Database-Centric-Applications-for-Causes-of-Grechanik-Hossain/867604da5e8d8915c63059dcea2ff467e8cf06f0 Accessed on November 23rd, 2018
- Microsoft (2018). ‘About Processes and Threads’. Available at: https://docs.microsoft.com/en-gb/windows/desktop/ProcThread/about-processes-and-threads Accessed on November 23rd, 2018
- Shameem Akhter; Jason Roberts (2006). ‘Multi-Core Programming: Increasing Performance through Software Multi-threading’ Edition 1 - Intel Corporation. Available at: http://www.ics.ele.tue.nl/~heco/courses/ACA/Multi-CoreProgramming-Ch1_Akhter_Roberts.pdf Accessed on November 23rd, 2018
- QNX Software Systems (2008). ‘Programming Overview’. Available at: https://www.qnx.com/developers/docs/6.4.1/neutrino/prog/overview.html Accessed on November 23rd, 2018
- Google (2007). ‘Introduction to Parallel Programming and MapReduce’. Available at: https://courses.cs.washington.edu/courses/cse490h/07wi/readings/IntroductionToParallelProgrammingAndMapReduce.pdf Accessed on November 24th, 2018
- Michael B. Feldman (1990). ‘Language and System Support for Concurrent Programming’. Available at: https://resources.sei.cmu.edu/asset_files/CurriculumModule/1990_007_001_15818.pdf Accessed on November 24th, 2018
- Matthew J. Sottile; Timothy G. Mattson; Craig E. Rasmussen (2009). ‘Introduction to Concurrency in Programming Languages’ Version 20131120. Taylor & Francis Group. Available at: https://www.taylorfrancis.com/books/9781420072143 Accessed on November 24th, 2018
- Joe Armstrong (2003). ‘Concurrency Oriented Programming in Erlang’. Available at: https://guug.de/veranstaltungen/ffg2003/papers/ffg2003-armstrong.pdf Accessed on November 24th, 2018
- Peter A. Buhr; Glen Ditchfield (1992). ‘Adding Concurrency to a Programming Language’. Available at: https://plg.uwaterloo.ca/usystem/pub/uSystem/AddingConcurrency.pdf Accessed on November 24th, 2018
- Marlon Rodrigues (2018). ‘Handling Concurrency with Async/Await in JavaScript’. Available at: https://blog.vanila.io/handling-concurrency-with-async-await-in-javascript-8ec2e185f9b4 Accessed on November 24th, 2018
- Karl Rupp (2015). ‘40 Years of Microprocessor Trend Data’. Available at: https://www.karlrupp.net/2015/06/40-years-of-microprocessor-trend-data/ Accessed on November 25th, 2018
- John C. Mitchell (1996). ‘Advanced Programming Languages: Concurrent Object-Oriented Programming Languages’. Available at: ftp://cs.stanford.edu/cs/theory/jcm/cs358-96/overview.tex Accessed on November 25th, 2018
Appendices
Appendix A: Processor Scheduling Techniques
Using a First Come First Serve scheduling approach, processes are moved to the CPU in the order in which they arrive in the ready state. This is a non-preemptive approach, meaning that once a process is given access to the CPU, it keeps it until it terminates or enters a waiting state. This technique suffers from a long average turnaround time.
Scenario 1
[Image Here]
Avg Turnaround: (20 + 25 + 35) / 3 = 26.7 ms
Scenario 2
[Image Here]
Avg Turnaround: (10 + 15 + 35) / 3 = 20 ms
The Shortest Job First (SJF) algorithm analyses all the processes in the ready state and dispatches the one with the smallest service (burst) time.
[Image Here]
Avg Turnaround: (5 + 15 + 35) / 3 = 18.3 ms
The difficulty with the SJF algorithm lies in knowing the length of a process’ service time before it actually executes. One approach is to predict an estimate of the service time by considering various probability factors and the process’ past performance. Getting the prediction wrong may quickly lead to a deterioration of the SJF algorithm. Moreover, trying to guess a process’ service time will contribute substantially to the scheduling algorithm’s overhead, making it unsuitable for use in short-term scheduling.
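One standard way to form this estimate (described in the Silberschatz, Gagne and Galvin text cited in the references) is an exponential average of measured bursts:

τₙ₊₁ = α·tₙ + (1 − α)·τₙ

where tₙ is the length of the nth measured burst, τₙ is the previous prediction, and 0 ≤ α ≤ 1 weights recent behaviour against history. For example, with α = 0.5, a previous prediction of 10 ms and a measured burst of 6 ms, the next prediction is 0.5·6 + 0.5·10 = 8 ms.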
This algorithm is often implemented as a non-preemptive algorithm; however, a preemptive version interrupts the currently running process whenever a process enters the ready state with a service time shorter than the time remaining for the running process.
Round Robin is a preemptive, starvation-free scheduling method involving the distribution of processing time equitably among all ready processes through the definition of a time slice (or time quantum); an amount of time each process is granted before being preempted and returned to the ready state, allowing another process its turn. Eventually the preempted process will be given another time slice on the CPU, repeating the procedure until the process eventually gets all the time it needs and terminates.
Case 1
[Image Here]
Time Quantum: 10ms
Context Switch Time: 2ms
Turnaround Time: 35ms
Case 2
[Image Here]
Time Quantum: 5ms
Context Switch Time: 2ms
Turnaround Time: 47ms
If the time quantum is large relative to the average process service time, round robin scheduling approximates first come first serve scheduling. On the other hand, if the quantum is extremely small, the approach is called processor sharing and in theory creates the appearance that each of the n processes has its own processor running at 1/n the speed of the real processor. In reality, the time taken for the dispatcher to perform the context switch must also be taken into account, especially if it is comparable to the time quantum: frequent calls to the dispatcher increase the overhead on the system and lengthen the average turnaround time, degrading the scheduler’s performance.
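To make the overhead concrete: assuming a context switch follows every time slice, each quantum q of useful work costs q + s of real time, where s is the switch time, so the fraction of the processor lost to switching is s / (q + s). In Case 1 above this is 2 / (10 + 2) ≈ 17%, while the smaller quantum of Case 2 raises it to 2 / (5 + 2) ≈ 29%, consistent with the longer 47 ms turnaround.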
A Priority & Multilevel Queue Scheduling algorithm associates an execution priority with each process, allocating CPU time to the process with the highest priority. These algorithms are often preemptive, whereby a higher priority process will interrupt a currently running, lower-priority process.
A common issue with priority scheduling is indefinite blocking, or starvation. In a heavily loaded computer system, a steady stream of high-priority processes can leave some low-priority processes waiting indefinitely. A solution to this problem is ageing, a technique whereby the priority of a process is increased after it has been waiting in the system’s ready queue for a duration of time.
Multilevel queue scheduling is an evolution of priority scheduling, requiring that the ready queue be partitioned into several priority-unique queues, each having its own scheduling algorithm. This manages how equal priority processes are handled. The diagram below depicts this situation.
[Image Here]
(University of Illinois at Chicago, 2006)
Appendix B: Memory Management Techniques
Contiguous Memory Partitioning assigns each process a fixed-size partition based on the program’s size and data requirements. The memory map resulting from such a scheme is illustrated in the diagram below. The ‘lowest’ partition is usually reserved for the operating system.
[Image Here]
Partitioning requires that the operating system maintains two pieces of information about a process: the base memory address and the length of the partition (the limit). This information is retained in a partition table.
Paging is a memory management scheme through which a program can be contained in non-contiguous memory areas. The concept is illustrated in the diagram below.
[Image Here]
With paging, a process is logically divided into pages; likewise, main memory is divided into frames. A page from the process is stored in any available frame in main memory. The operating system maintains a local page table for each process, which maps its pages to corresponding frames in main memory.
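A hypothetical sketch of this page-to-frame translation, assuming 4 KiB pages and a per-process page table indexed by page number (the table contents below are illustrative):

```rust
const PAGE_SIZE: usize = 4096;

fn translate(logical: usize, page_table: &[usize]) -> Option<usize> {
    let page = logical / PAGE_SIZE;   // which page the address falls in
    let offset = logical % PAGE_SIZE; // position within that page
    // Look up the frame holding this page; None models a page fault.
    page_table.get(page).map(|&frame| frame * PAGE_SIZE + offset)
}

fn main() {
    // Page 0 lives in frame 5, page 1 in frame 2, page 2 in frame 7.
    let page_table = [5, 2, 7];
    // Logical address 0x2ABC = page 2, offset 0xABC -> physical 0x7ABC.
    println!("{:?}", translate(0x2ABC, &page_table).map(|a| format!("{:#x}", a)));
}
```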
Segmentation is also non-contiguous, but it logically divides a program into segments of variable length, which are stored in memory wherever there is enough free space. A segment table is maintained for each process, recording each segment’s number, base memory address and limit (length). This is illustrated below.
[Image Here]
(Jurgen Borg, 2014)
Appendix C: Serial vs Parallel Environments
Serial Payroll Program
[Image Here]
Parallel Payroll Program
[Image Here]
(Blaise Barney, 2018)