What we will cover…
 Processes
Process Concept
Process Scheduling
Operations on Processes
Interprocess Communication
Communication in Client-Server Systems (Reading Materials)
 Threads
Overview
Multithreading Models
Threading Issues
What is a process
 An operating system executes a variety of programs:
Batch system – jobs
Time-shared systems – user programs or tasks
Single-user Microsoft Windows or Macintosh OS
• User runs many programs
• Word processor, web browser, email
 Informally, a process is just one such program in execution, making progress in a sequential fashion
Similar to any high-level language program (C/C++/Java code, etc.) written by users
 However, formally, a process is something more than just the program code (text section)!
Process in Memory
 In addition to the text section, a process includes:
program counter
contents of the processor’s registers
stack
Contains temporary data
Method parameters
Return addresses
Local variables
data section
While a program is a passive entity, a process is an active entity
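A minimal C sketch (not from the slides) of where these pieces live in a running program; the names are made up for illustration, and the heap (for memory allocated at run time) is shown alongside the sections listed above.

#include <stdio.h>
#include <stdlib.h>

int counter = 0;                    /* data section: global variable                   */

int square(int x) {                 /* compiled instructions live in the text section  */
    int result = x * x;             /* stack: local variable in this function's frame  */
    return result;                  /* the return address is also kept on the stack    */
}

int main(void) {
    int *p = malloc(sizeof *p);     /* heap: memory allocated at run time              */
    *p = square(5);                 /* the program counter tracks the next instruction */
    counter++;
    printf("%d %d\n", *p, counter);
    free(p);
    return 0;
}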
Process State
 As a process executes, from creation to termination, it goes through various “states”
new: The process is being created
running: Instructions are being executed
waiting: The process is waiting for some event to occur
ready: The process is waiting to be assigned to a processor
terminated: The process has finished execution
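As a rough illustration (not part of the original slides), the five states can be written down as a small C enum; the names below are hypothetical and simply mirror the list above.

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Example transition: when the event a process is waiting for occurs,
   the process moves from waiting back to ready. */
enum proc_state after_event_occurs(enum proc_state s) {
    return (s == WAITING) ? READY : s;
}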
Diagram of Process State
Process Control Block (PCB)
 A process carries a lot of information
 A system has many processes
 How does the OS manage all this process information?
Each process is represented by a Process Control Block
 a table full of information for each process
Process state
Program counter
CPU registers
CPU scheduling information
Memory-management information
I/O status information
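A hedged sketch of how a PCB might look as a C struct; the field names and sizes are invented for illustration (real kernels keep considerably more state), but each field corresponds to one of the items listed above.

struct pcb {
    int            pid;               /* process identifier                           */
    int            state;             /* new, ready, running, waiting, or terminated  */
    unsigned long  program_counter;   /* address of the next instruction to execute   */
    unsigned long  registers[16];     /* saved contents of the CPU registers          */
    int            priority;          /* CPU-scheduling information                   */
    void          *page_table;        /* memory-management information                */
    int            open_files[16];    /* I/O status information (e.g. open files)     */
    struct pcb    *next;              /* link field used by the OS scheduling queues  */
};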
Process Control Block (PCB)
CPU Switch From Process to Process
Process Scheduling
 In a multiprogramming environment, there will be many processes
many of them ready to run
many of them waiting for some other event to occur
 How does the OS manage them all?
 Queuing
Job queue – set of all processes in the system
Ready queue – set of all processes residing in main memory, ready and waiting to execute
Device queues – set of processes waiting for an I/O device
 Processes migrate among these various queues
A Representation of Process Scheduling
OS Queue structure (implemented with a linked list; sketched below)
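The original figure is not reproduced here, but the idea can be sketched in C: each PCB carries a link field, and a queue is just a head/tail pair threaded through those links. The structure and function names below are simplified and hypothetical.

#include <stdlib.h>

struct pcb {                      /* stripped-down PCB; a real one holds much more */
    int         pid;
    struct pcb *next;             /* link field used to chain PCBs into a queue    */
};

struct queue {
    struct pcb *head;
    struct pcb *tail;
};

/* Append a PCB at the tail, e.g. when a process becomes ready. */
void enqueue(struct queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p;
    else         q->head = p;
    q->tail = p;
}

/* Remove the PCB at the head, e.g. when the CPU scheduler dispatches it. */
struct pcb *dequeue(struct queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}

int main(void) {
    struct queue ready = { NULL, NULL };
    struct pcb a = { 1, NULL }, b = { 2, NULL };
    enqueue(&ready, &a);
    enqueue(&ready, &b);
    struct pcb *next = dequeue(&ready);   /* process to be dispatched next */
    return next ? 0 : 1;
}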
Schedulers
 A process migrates among various queues
 Often there are more processes than can be executed immediately
Stored in mass-storage devices (typically, disk)
Must be brought into main memory for execution
 OS selects processes in some fashion
 Selection process carried out by a scheduler
 Two schedulers in effect…
Long-term scheduler (or job scheduler) – selects which processes should be brought into memory
Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU
Schedulers (Cont)
 The short-term scheduler is invoked very frequently (milliseconds), so it must be fast
 The long-term scheduler is invoked very infrequently (seconds, minutes), so it may be slow
 The long-term scheduler controls the degree of multiprogramming
 Long-term scheduler has another big responsibility
 Processes can be described as either:
 I/O-bound process – spends more time doing I/O than computations; many short CPU bursts
 CPU-bound process – spends more time doing computations; few very long CPU bursts
 The long-term scheduler should pick a good balance between the two types of processes
Addition of Medium Term Scheduling
Context Switch
 All of the process scheduling described so far comes with a trade-off
 When the CPU switches to another process, the system must save the state of the old process and load the saved state for the new process via a context switch
 Context-switch time depends on hardware support
 Context-switch time is pure overhead; the system does no useful work while switching
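The real save/restore is done inside the kernel, but the idea can be imitated at user level with the ucontext routines available in glibc; the sketch below is only an analogy, with made-up names, showing one execution context being saved while another is loaded.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, work_ctx;
static char work_stack[64 * 1024];          /* stack for the second context */

static void worker(void) {
    printf("worker: running\n");
    /* returning here resumes uc_link, i.e. main_ctx */
}

int main(void) {
    getcontext(&work_ctx);                  /* initialise from the current context */
    work_ctx.uc_stack.ss_sp   = work_stack;
    work_ctx.uc_stack.ss_size = sizeof work_stack;
    work_ctx.uc_link          = &main_ctx;  /* where to continue when worker returns */
    makecontext(&work_ctx, worker, 0);

    printf("main: switching context\n");
    swapcontext(&main_ctx, &work_ctx);      /* save main's state, load worker's state */
    printf("main: back after the switch\n");
    return 0;
}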
Interprocess Communication
 Concurrent processes within a system may be independent or cooperating
 A cooperating process can affect or be affected by other processes, including by sharing data
 Reasons for cooperating processes:
Information sharing – several users may be interested in a shared file
Computation speedup – break a task into subtasks and work in parallel
Convenience
 Need InterProcess Communication (IPC)
 Two models of IPC
Shared memory
Message passing
Communications Models
Shared-memory
Message-passing
Shared memory: Producer-Consumer Problem
 Paradigm for cooperating processes
 a producer process produces information that is consumed by a consumer process
 IPC implemented by a shared buffer
 unbounded-buffer places no practical limit on the size of the buffer
 bounded-buffer assumes that there is a fixed buffer size
• More practical
• Let’s design!
Bounded-Buffer – Shared-Memory Solution design
 Three steps in the design problem:
1. Design the buffer
2. Design the producer process
3. Design the consumer process
1. Shared buffer (implemented as circular array with two logical pointers: in and out)
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
Bounded-Buffer – Producer & Consumer process design
2. Producer design
item nextProduced;   /* the item just produced */
while (true) {
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- no free buffers */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}
3. Consumer design
item nextConsumed;   /* the item just removed */
while (true) {
    while (in == out)
        ; /* do nothing -- nothing to consume */
    /* remove an item from the buffer */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
}
Shared Memory design
 The previous design is correct, but can only use BUFFER_SIZE-1 elements! (The buffer is treated as full when ((in + 1) % BUFFER_SIZE) == out, which always leaves one slot empty.)
 Exercise for you: design a solution where BUFFER_SIZE items can be in the buffer
Part of Assignment 1
Interprocess Communication – Message Passing
 Processes communicate with each other without resorting to shared memory
 IPC facility provides two operations:
send(message) – message size fixed or variable
receive(message)
 If P and Q wish to communicate, they need to:
 establish a communication link between them
 exchange messages via send/receive
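One concrete way to realise send()/receive() on Linux is POSIX message queues. The sketch below is illustrative only: the queue name /demo_queue is made up, and the same process both sends and receives purely to keep the example short (normally P would send and Q would receive). Older systems may need linking with -lrt.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
    mqd_t mq = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *msg = "hello";
    mq_send(mq, msg, strlen(msg) + 1, 0);              /* send(message)    */

    char buf[128];
    ssize_t n = mq_receive(mq, buf, sizeof buf, NULL); /* receive(message) */
    if (n >= 0) printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_queue");
    return 0;
}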
Direct Communication
 Processes must name each other explicitly:
 send (P, message) – send a message to process P
 receive(Q, message) – receive a message from process Q
 Properties of communication link
 A link is associated with exactly one pair of communicating processes
 Between each pair there exists exactly one link
 Symmetric (both sender and receiver must name the other to communicate)
 Asymmetric (the receiver is not required to name the sender)
Indirect Communication
 Messages are sent to and received from mailboxes (also referred to as ports)
Each mailbox has a unique id
 Processes can communicate only if they share a mailbox
 Properties of communication link
 Link established only if processes share a common mailbox
 A link may be associated with many processes
 Each pair of processes may share several communication links
 Link may be unidirectional or bi-directional
Communications in Client-Server Systems
 Socket connection
Sockets
 A socket is defined as an endpoint for communication
 Concatenation of IP address and port
 The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8
 Communication takes place between a pair of sockets
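A minimal C sketch of creating a socket endpoint and connecting it to the slide's example address 161.25.19.8:1625; the address is purely illustrative (nothing is actually listening there), so connect() is expected to fail outside a matching server.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);       /* one communication endpoint */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in server;
    memset(&server, 0, sizeof server);
    server.sin_family = AF_INET;
    server.sin_port   = htons(1625);                /* port in network byte order */
    inet_pton(AF_INET, "161.25.19.8", &server.sin_addr);

    /* connect() pairs this socket with the server's socket 161.25.19.8:1625 */
    if (connect(fd, (struct sockaddr *)&server, sizeof server) < 0)
        perror("connect");

    close(fd);
    return 0;
}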
Socket Communication
Threads
 The process model discussed so far assumed that a process was a sequentially executed program with a single thread of control
 Increased scale of computing
 putting pressure on programmers; challenges include:
• Dividing activities
• Balance
• Data splitting
• Data dependency
• Testing and debugging
 Think of a busy web server!
Single and Multithreaded Processes
Benefits
 Responsiveness
 Resource Sharing
 Economy
 Scalability
Multithreaded Server Architecture
Concurrent Execution on a Single-core System
Parallel Execution on a Multicore System
User and Kernel Threads
 User threads: Thread management done by a user-level threads library
 Kernel threads: Supported by the Kernel
 Windows XP
 Solaris
 Linux
 Mac OS X
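On these systems the kernel's thread support is usually reached through a threads library such as pthreads. A minimal sketch is shown below (compile with -lpthread; the worker function is made up); on Linux each pthread created this way is backed by a kernel thread.

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t tid[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);   /* spawn the threads       */
    for (int i = 0; i < 4; i++)
        pthread_join(tid[i], NULL);                         /* wait for each to finish */
    return 0;
}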
Multithreading Models
 Many-to-One
 One-to-One
 Many-to-Many
Many-to-One
 Many user-level threads mapped to a single kernel thread
 Examples:
 Solaris Green Threads
 GNU Portable Threads
One-to-One
 Each user-level thread maps to a kernel thread
 Examples
 Windows NT/XP/2000
 Linux
Many-to-Many Model
 Allows many user-level threads to be mapped to many kernel threads
 Allows the operating system to create a sufficient number of kernel threads
 Solaris prior to version 9
Threading Issues
 Thread cancellation of a target thread
 Dynamic, unbounded use of threads
Thread Cancellation
 Terminating a thread before it has finished
 General approaches:
Asynchronous cancellation terminates the target thread immediately
• Problems?
Deferred cancellation allows the target thread to periodically check if it should be cancelled
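A hedged pthreads sketch of deferred cancellation: the cancellation request only takes effect when the target thread reaches a cancellation point such as pthread_testcancel(). The names are illustrative; compile with -lpthread.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *worker(void *arg) {
    int oldtype;
    (void)arg;
    /* deferred cancellation is the pthreads default; set it explicitly for clarity */
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &oldtype);
    for (;;) {
        /* ... perform one unit of work ... */
        pthread_testcancel();      /* safe point: honour a pending cancellation here */
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    sleep(1);
    pthread_cancel(tid);           /* request cancellation of the target thread       */
    pthread_join(tid, NULL);       /* the thread exits at its next cancellation point */
    printf("worker cancelled\n");
    return 0;
}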
Dynamic usage of threads
 Create a thread as and when needed
 Disadvantages:
Amount of time needed to create a thread
 Nevertheless, this thread will be discarded once it has completed its work; no reuse
 No bound on the total number of threads created in the system
• may result in severe resource scarcity
Solution: Thread Pools
 Create a number of threads in a pool where they await work
 Advantages:
Usually faster to service a request with an existing thread than create a new thread
 Allows the number of threads in the application(s) to be bound to the size of the pool
 Almost all modern operating systems provide kernel support for threads: Windows XP, Mac OS X, Linux…
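A minimal, illustrative thread-pool sketch using pthreads (not a production implementation): a fixed set of workers created up front pulls jobs from a shared queue, so requests reuse existing threads and the total thread count is bounded by the pool size. All names and sizes are made up, and shutdown/error handling is omitted for brevity. Compile with -lpthread.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE 4
#define QUEUE_MAX 16

typedef void (*job_fn)(int);

static job_fn jobs[QUEUE_MAX];
static int    job_args[QUEUE_MAX];
static int    q_head, q_tail, q_count;
static pthread_mutex_t q_lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_nonempty = PTHREAD_COND_INITIALIZER;

/* Workers block until a job is queued, run it, then go back to waiting. */
static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&q_lock);
        while (q_count == 0)
            pthread_cond_wait(&q_nonempty, &q_lock);
        job_fn fn = jobs[q_head];
        int    a  = job_args[q_head];
        q_head = (q_head + 1) % QUEUE_MAX;
        q_count--;
        pthread_mutex_unlock(&q_lock);
        fn(a);                        /* service the request with an existing thread */
    }
    return NULL;
}

/* Submit a job to the pool (for simplicity, drops the job if the queue is full). */
static void submit(job_fn fn, int arg) {
    pthread_mutex_lock(&q_lock);
    if (q_count < QUEUE_MAX) {
        jobs[q_tail]     = fn;
        job_args[q_tail] = arg;
        q_tail = (q_tail + 1) % QUEUE_MAX;
        q_count++;
        pthread_cond_signal(&q_nonempty);
    }
    pthread_mutex_unlock(&q_lock);
}

static void print_request(int id) { printf("handled request %d\n", id); }

int main(void) {
    pthread_t pool[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_create(&pool[i], NULL, worker, NULL);   /* threads created once, up front */
    for (int i = 0; i < 8; i++)
        submit(print_request, i);                       /* requests reuse pooled threads  */
    sleep(1);                                           /* crude wait; no clean shutdown  */
    return 0;
}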