ECS 150: Project #3 - TPS & Semaphore Implementation

Implementation:

  • Semaphore
  • Thread Private Storage (TPS)

Semaphore

Semaphores are a way to control access to common resources shared by multiple threads.

We implement the semaphore as follows, with a data structure containing:

  • a count of the number of available resources
  • a queue holding the waitlist of blocked threads
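
As a rough sketch (the field names are our own, queue_t is from our earlier queue project, and we omit the usual includes such as <stdlib.h> and <pthread.h>):

    struct semaphore {
        size_t count;      /* number of available resources */
        queue_t waitlist;  /* threads blocked waiting for a resource */
    };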

For sem_t sem_create(size_t count): we allocate space for the semaphore struct, assign the initial value with sem->count = count, and initialize the waitlist queue.
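
A minimal sketch of this, assuming queue_create() from our queue API:

    sem_t sem_create(size_t count)
    {
        struct semaphore *sem = malloc(sizeof(struct semaphore));
        if (sem == NULL)
            return NULL;
        sem->count = count;              /* initial resource count */
        sem->waitlist = queue_create();  /* empty waiting list */
        return sem;
    }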

For int sem_destroy(sem_t sem): we destroy the waitlist and then free the semaphore. We handle the edge cases here: we return an error if sem == NULL or if the waitlist is not empty (threads are still blocked on the semaphore).
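
Sketched under the same assumptions, using queue_length() and queue_destroy():

    int sem_destroy(sem_t sem)
    {
        /* Error out if sem is NULL or threads are still blocked on it */
        if (sem == NULL || queue_length(sem->waitlist) > 0)
            return -1;
        queue_destroy(sem->waitlist);
        free(sem);
        return 0;
    }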

For int sem_down(sem_t sem): if there is no available resource (count == 0), we call pthread_self() to get the current thread's TID, put that thread in the waitlist, and block it. Otherwise we decrement count, meaning this thread grabs one resource.

For int sem_up(sem_t sem): it just does the opposite of down. This function unblocks the first thread in the queue and hands it the resource; if no thread is waiting, it increments count to give the resource back.

The hardest part of this phase was determining where to put the critical sections to avoid race conditions. Initially we thought only the update of count could cause a race, but we later found out that the enqueue and dequeue operations could also race.

We chose to put the critical section at the beginning of down and up, so that giving and grabbing resources cannot happen at the same time.
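
Here is a sketch of both functions with that placement. We assume enter_critical_section()/exit_critical_section() helpers and thread_block()/thread_unblock() primitives like the ones provided with the project; the exact names are our assumption:

    int sem_down(sem_t sem)
    {
        if (sem == NULL)
            return -1;

        enter_critical_section();
        if (sem->count == 0) {
            /* No resource available: enqueue ourselves and block until
             * a sem_up() hands the resource to us */
            queue_enqueue(sem->waitlist, (void*)pthread_self());
            thread_block();
        } else {
            sem->count--;  /* grab one resource */
        }
        exit_critical_section();
        return 0;
    }

    int sem_up(sem_t sem)
    {
        if (sem == NULL)
            return -1;

        enter_critical_section();
        void *item;
        if (queue_dequeue(sem->waitlist, &item) == 0) {
            /* Hand the resource directly to the first waiting thread */
            thread_unblock((pthread_t)item);
        } else {
            sem->count++;  /* nobody waiting: release the resource */
        }
        exit_critical_section();
        return 0;
    }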

That’s all for Part 1.

TPS

After reading through sections 2.1 to 2.3, we combined 2.1 and 2.2 when implementing TPS. First we wrote a data structure for each thread which contains the thread's TID and the address returned by mmap(). Initially we tried to use our queue to implement the TPS, using the iterate function to find the address for a given TID. But that did not work out, because of pointer issues caused by enqueue() and dequeue() taking void *, which took us too much time to debug. It was also hard to find out where things went wrong and how to fix them, because we had to go into queue.c and check its contents. So we finally gave up on this approach and chose a linked list instead.

2.1 to 2.2

By writing a linked list just for the tpsNode, we can easily find out where things go wrong and fix them.
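
A sketch of the node and the global list head (names are ours):

    struct tpsNode {
        pthread_t tid;         /* owning thread */
        void *addr;            /* page returned by mmap() */
        struct tpsNode *next;  /* next node in the global list */
    };

    static struct tpsNode *tpsList;  /* global list of all TPS areas */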

So first of all, in int tps_init(int segv) we create the global linked list.

Then we go to tps_create(void): each thread that calls tps_create(void) calls mmap() to map its page of memory. We store the TID as well as the address of the mapping into our data structure and put it into the linked list (the TPS list). We give mmap() the PROT_NONE permission and unlock the page later using mprotect() in read and write; that is how we combine 2.1 and 2.2.
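
A sketch of tps_create() under these assumptions, where TPS_SIZE stands for the page-size constant from the project header (we omit the <sys/mman.h> and friends includes):

    int tps_create(void)
    {
        /* Map one anonymous page with no permissions; read/write will
         * temporarily unlock it with mprotect() */
        void *addr = mmap(NULL, TPS_SIZE, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (addr == MAP_FAILED)
            return -1;

        struct tpsNode *node = malloc(sizeof(struct tpsNode));
        if (node == NULL) {
            munmap(addr, TPS_SIZE);
            return -1;
        }
        node->tid = pthread_self();
        node->addr = addr;
        node->next = tpsList;  /* push onto the global list */
        tpsList = node;
        return 0;
    }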

For int tps_destroy(void), we first find the list node for the calling thread's TID, then call munmap() to free that memory. After removing it from the linked list, we free the data structure. Nothing hard here.
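
A sketch of the unlink, using a pointer-to-pointer walk over our list:

    int tps_destroy(void)
    {
        struct tpsNode **pp = &tpsList;

        /* Find the calling thread's node */
        while (*pp != NULL && !pthread_equal((*pp)->tid, pthread_self()))
            pp = &(*pp)->next;
        if (*pp == NULL)
            return -1;  /* this thread has no TPS */

        struct tpsNode *node = *pp;
        *pp = node->next;              /* remove from the list */
        munmap(node->addr, TPS_SIZE);  /* free the mapped page */
        free(node);
        return 0;
    }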

int tps_read(size_t offset, size_t length, char *buffer) and int tps_write(size_t offset, size_t length, char *buffer) basically do the same thing.

Read just memcpy()s from the TPS address into the buffer; write memcpy()s from the buffer to the TPS address. To do either, we first find the address of the current thread's page in the TPS list. Then we unlock it by calling mprotect() with PROT_READ or PROT_WRITE, memcpy() the content depending on whether it is a read or a write, and after the memcpy() we use mprotect() to lock up the memory again. The offset and length just follow the memcpy() rules; nothing tricky here.
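
A sketch of the read side; write mirrors it with PROT_WRITE and the memcpy() arguments swapped. findNode() stands for a hypothetical lookup of a node by TID in our list:

    int tps_read(size_t offset, size_t length, char *buffer)
    {
        struct tpsNode *node = findNode(pthread_self());
        if (node == NULL || buffer == NULL || offset + length > TPS_SIZE)
            return -1;

        mprotect(node->addr, TPS_SIZE, PROT_READ);  /* unlock */
        memcpy(buffer, (char*)node->addr + offset, length);
        mprotect(node->addr, TPS_SIZE, PROT_NONE);  /* lock again */
        return 0;
    }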

For clone, we initially used memcpy(): we find the target node by its TID, then memcpy() its memory into our own page. That is how we finished 2.2.
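
A sketch of this 2.2 version of clone, reusing the hypothetical findNode():

    int tps_clone(pthread_t tid)
    {
        struct tpsNode *src = findNode(tid);
        if (src == NULL || findNode(pthread_self()) != NULL)
            return -1;  /* no target, or we already have a TPS */

        if (tps_create() < 0)
            return -1;
        struct tpsNode *dst = findNode(pthread_self());

        /* Unlock both pages just long enough to copy */
        mprotect(src->addr, TPS_SIZE, PROT_READ);
        mprotect(dst->addr, TPS_SIZE, PROT_WRITE);
        memcpy(dst->addr, src->addr, TPS_SIZE);
        mprotect(dst->addr, TPS_SIZE, PROT_NONE);
        mprotect(src->addr, TPS_SIZE, PROT_NONE);
        return 0;
    }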

2.3

As for 2.3, we changed the memcpy() in clone to simply assign the address pointer to the cloning thread, so those threads share the same memory, which saves space. But then we encountered the copy-on-write problem.

To solve this, we added a function that searches for the address in write. If we want to write to a shared memory address, we allocate new space for it with mmap() and write there. For example, suppose we have three threads t1, t2, t3, where t1 has its own memory and t2 and t3 share it with t1. If we want to write on t1, we check whether its page is shared with others by calling the find-address function; if it is, we mmap() new space for t1 to write on. t1 gets new space while t2 and t3 see no changes. The same happens if we write on t2 or t3.
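
A sketch of that check at the top of the write path, with a hypothetical countSharers() that walks the list and counts how many nodes point at the same page:

    /* node is the calling thread's tpsNode, found earlier in tps_write() */
    if (countSharers(node->addr) > 1) {
        /* Page is shared: give this thread its own private copy */
        void *copy = mmap(NULL, TPS_SIZE, PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        mprotect(node->addr, TPS_SIZE, PROT_READ);
        memcpy(copy, node->addr, TPS_SIZE);  /* duplicate the content */
        mprotect(node->addr, TPS_SIZE, PROT_NONE);
        node->addr = copy;  /* the other sharers keep the old page */
    }
    /* ...then the normal mprotect(PROT_WRITE)/memcpy() write path runs */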

This is a lazy, copy-on-write style of allocation: new memory is allocated only when it is actually needed.

Lastly

We added the segv handler to catch TPS protection faults. By calling findData() we check through the TPS list to see whether the faulting address p_fault belongs to a TPS area, and if so print an error message. That way we can tell a TPS protection fault apart from a normal segfault.
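
A sketch of the handler and its installation in tps_init(); findData() is our hypothetical lookup of a TPS node by page address, and we assume TPS_SIZE is a power of two (we omit the <signal.h>, <stdio.h>, and <stdint.h> includes):

    static void segv_handler(int sig, siginfo_t *si, void *context)
    {
        /* Page-align the faulting address and see if it is a TPS page */
        void *p_fault = (void*)((uintptr_t)si->si_addr & ~(TPS_SIZE - 1));
        if (findData(p_fault) != NULL)
            fprintf(stderr, "TPS protection error!\n");

        /* Restore the default handler so the program still crashes */
        signal(sig, SIG_DFL);
        raise(sig);
    }

    int tps_init(int segv)
    {
        tpsList = NULL;
        if (segv) {
            struct sigaction sa;
            sigemptyset(&sa.sa_mask);
            sa.sa_flags = SA_SIGINFO;
            sa.sa_sigaction = segv_handler;
            sigaction(SIGBUS, &sa, NULL);
            sigaction(SIGSEGV, &sa, NULL);
        }
        return 0;
    }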

Testing:

We wrote a simple test to check the copy-on-write behavior and the segv handler. It works fine.
