A thread pool is a technique in multithreaded programming used to manage and optimize the use
of threads in an application. Because creating and destroying threads is resource-intensive,
the application creates a number of threads at process startup and places them into a pool; the
threads sit in the pool and wait for work:
- when a server receives a request/task, it wakes a thread from the pool (if one is available) and passes it the request/task
- once the thread completes its service, it returns to the pool
Benefits:
- reduce thread creation overhead
- limit the number of threads => avoid crashing resources under load (db, server)
- separate the task to be performed from thread creation => we can use different strategies to
schedule the task: execute it after a delay, or periodically.
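A minimal sketch of these ideas using Java's standard ExecutorService and ScheduledExecutorService (the pool sizes, task counts, and delays are arbitrary values chosen for illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ThreadPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // Fixed pool: 4 worker threads are created up front and reused for every task.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 10; i++) {
            final int taskId = i;
            pool.submit(() -> System.out.println(
                    "Task " + taskId + " handled by " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);

        // Scheduling strategies: run a task once after a delay, or periodically.
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        scheduler.schedule(() -> System.out.println("runs once after 1s"),
                1, TimeUnit.SECONDS);
        scheduler.scheduleAtFixedRate(() -> System.out.println("runs every 2s"),
                0, 2, TimeUnit.SECONDS);
        Thread.sleep(5000);
        scheduler.shutdown();
    }
}
```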
A race condition is a situation that occurs in concurrent or parallel programming when the outcome
of a program depends on the relative timing or interleaving of multiple threads or processes. It
arises when two or more threads or processes access shared data or resources in a way that leads
to unexpected and non-deterministic behavior.
In short: when two or more threads update a shared resource simultaneously, the shared resource
is left in an uncertain state.
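A small Java illustration (the counter and loop count are made-up example values): two threads increment an unsynchronized shared counter, and the final result depends on how their read-modify-write steps interleave.

```java
public class RaceConditionDemo {
    private static int counter = 0; // shared, unsynchronized state

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read-modify-write is not atomic, so updates can be lost
            }
        };
        Thread t1 = new Thread(increment);
        Thread t2 = new Thread(increment);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000, but usually less: the outcome depends on thread interleaving.
        System.out.println("counter = " + counter);
    }
}
```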
A spin lock is a lock that can be held by at most one thread at a time. If another thread tries to
acquire a spin lock while it is already held, that thread busy-loops (spins), waiting for the lock to become available.
A spin lock is only suitable for short hold times, because while a section is locked the other
threads keep spinning and do nothing useful => a waste of processor resources.
You cannot sleep while holding a spinlock.
E.g: thread A acquires spin lock L, then sleeps. Thread B tries to acquire L, but because L is held
by A, B has to wait. If the computer has only one core, we get a deadlock: B waits for A to release L,
while A is sleeping and cannot wake up because B is using the processor for spinning.
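A minimal spin-lock sketch in Java built on AtomicBoolean (a user-space illustration of the idea, not a kernel spinlock; Thread.onSpinWait() requires Java 9+):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Busy-loop (spin) until we atomically flip the flag from false to true.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint to the CPU that we are in a spin-wait loop
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```

The critical section guarded by such a lock should be kept short, and the holder must not sleep or block while holding it, for the reasons described above.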
A semaphore is a synchronization primitive used in concurrent programming to control access to a shared
resource by multiple threads or processes.
When a thread attempts to acquire an unavailable semaphore, the semaphore places the task onto a wait
queue and puts the task to sleep. The processor is then free to execute other code. When the semaphore
becomes available, one of the tasks on the wait queue is awakened so that it can then acquire the semaphore.
E.g:
When a person (thread) reaches the door and it is locked, instead of spinning he/she puts their
name on a list, grabs a ticket, and goes to sleep.
When the person inside the room leaves, they check the list, call the first name on it, wake that
person up, and hand over the key.
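The same door-and-key idea, sketched with Java's built-in java.util.concurrent.Semaphore initialized with one permit (class and thread names are invented for the example):

```java
import java.util.concurrent.Semaphore;

public class RoomDemo {
    // One permit = one key: at most one person (thread) inside the room at a time.
    private static final Semaphore key = new Semaphore(1);

    public static void main(String[] args) {
        Runnable person = () -> {
            try {
                key.acquire();               // door locked? the thread sleeps instead of spinning
                try {
                    System.out.println(Thread.currentThread().getName() + " is inside the room");
                    Thread.sleep(100);       // sleeping while holding a semaphore is allowed
                } finally {
                    key.release();           // leaving the room wakes up one waiting thread
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        for (int i = 0; i < 3; i++) {
            new Thread(person, "person-" + i).start();
        }
    }
}
```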
Counting semaphore: a semaphore that allows more than one holder at a time. It maintains a number
called count, which is the number of available resources.
It supports 2 functions:
down() => acquire the semaphore and decrease count by 1 (count -= 1)
up() => release the semaphore and increase count by 1 (count += 1)
When count == 0, a thread calling down() goes to sleep and waits to be woken up.
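A minimal counting-semaphore sketch in Java using synchronized, wait(), and notify() (a simplified illustration of down()/up(), not how an OS kernel implements it):

```java
public class CountingSemaphore {
    private int count;

    public CountingSemaphore(int initialCount) {
        this.count = initialCount; // number of available resources
    }

    // down(): acquire the semaphore; sleep while no resource is available.
    public synchronized void down() throws InterruptedException {
        while (count == 0) {
            wait();        // thread sleeps on the wait queue; the processor runs other code
        }
        count -= 1;
    }

    // up(): release the semaphore and wake up one sleeping thread, if any.
    public synchronized void up() {
        count += 1;
        notify();          // the awakened thread re-checks count and acquires the semaphore
    }
}
```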
Virtual memory is a technique that allows the execution of processes that are not completely in memory.
Benefit: programs can be larger than physical memory (RAM).
It separates logical memory (the address space seen by the program) from physical memory (RAM) => a large
virtual memory can be provided for programmers even when only a smaller physical memory is available.