418 Final Project

Updated Schedule

The remainder of our schedule is as follows:

Completed Work / Progress

So far, we have completed the goals we designated for the first two weeks of the schedule. We have implemented two different allocators: the initial single-heap malloc implementation with one global lock, and the multiple-arena implementation of malloc. We have also begun developing the testing suite that we will use to benchmark and analyze these allocators.
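
To make the contrast concrete, below is a minimal sketch (not our actual code) of the single-heap, single-lock design: every malloc and free call serializes on one global mutex, which is the bottleneck we expect to measure. The bump-style allocation from a fixed buffer is only a placeholder to keep the sketch self-contained; our implementation manages a real free list.

    /* Sketch: single heap guarded by one global lock (illustrative only). */
    #include <pthread.h>
    #include <stddef.h>

    #define HEAP_SIZE (1 << 20)

    static unsigned char heap[HEAP_SIZE];
    static size_t heap_top = 0;
    static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;

    void *my_malloc(size_t size) {
        void *block = NULL;
        pthread_mutex_lock(&heap_lock);       /* all threads contend on this one lock */
        size = (size + 15) & ~(size_t)15;     /* 16-byte alignment */
        if (heap_top + size <= HEAP_SIZE) {
            block = &heap[heap_top];
            heap_top += size;
        }
        pthread_mutex_unlock(&heap_lock);
        return block;
    }

    void my_free(void *ptr) {
        (void)ptr;   /* the real version takes the lock and returns the block to the free list */
    }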

We have completed the majority of tasks in our schedule, though we are slightly behind on some of the tasks we assigned to be completed over Thanksgiving Break. Going forward, this shouldn’t be a problem since we will be back on campus and able to work together. We plan to complete the testing suite this week and will not be working on compare-and-swap, as it falls under the “nice to have” category of deliverables. We anticipate that we will be able to complete all of our initial goals for the project. Since we elected to present at the earlier poster slot, it is unlikely that we will be able to complete the “nice to have” stretch goals before the presentation. With this in mind, our current plan for the poster session is to focus on completing the “plan to achieve” goals.

Poster Session Plan

For our poster session, our plan is to generate graphs comparing the performance of our different allocator versions. With our testing suite consisting of multiple applications that frequently allocate and free memory, we can compare the performance of these allocators by analyzing the trace generated by each version. We plan to compare the single-heap, single-lock version; the multiple-arena version; and the multiple-arena version with thread-local caches. Comparing these allocators on different kinds of applications will allow us to generate graphs that show the performance of our implementation and the parallelism benefits it provides. We expect these graphs to show low, perhaps slightly negative, speedup for the single-lock implementation and good (slightly sub-linear) speedup for the more scalable multi-arena and thread-local-cache implementations as thread counts increase.
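
As a rough illustration of the kind of benchmark we have in mind (assumed names and parameters, not our actual test suite), the sketch below spawns a configurable number of threads that each perform a fixed number of allocate/free pairs and reports throughput. Running it against each allocator version at increasing thread counts would produce the data points for the speedup graphs.

    /* Sketch: simple allocation-throughput benchmark (illustrative only). */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define OPS_PER_THREAD 1000000

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < OPS_PER_THREAD; i++) {
            void *p = malloc(64);   /* swap in the allocator under test here */
            free(p);
        }
        return NULL;
    }

    int main(int argc, char **argv) {
        int nthreads = (argc > 1) ? atoi(argv[1]) : 4;
        pthread_t *tids = malloc(nthreads * sizeof(pthread_t));

        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < nthreads; i++)
            pthread_create(&tids[i], NULL, worker, NULL);
        for (int i = 0; i < nthreads; i++)
            pthread_join(tids[i], NULL);
        clock_gettime(CLOCK_MONOTONIC, &end);

        double secs = (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("%d threads: %.3f s (%.1f Mops/s)\n",
               nthreads, secs, nthreads * OPS_PER_THREAD / secs / 1e6);
        free(tids);
        return 0;
    }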

Issues

In the last two weeks of the project, we anticipate that finishing the thread-local cache implementation will be a challenge. We may also have difficulty developing a testing suite that covers a sufficient variety of memory allocation patterns.
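
For reference, here is a rough sketch of the thread-local cache design we are working toward (an assumed structure, subject to change): each thread keeps a private free list for one small size class so that matching allocate/free pairs never touch the shared arena locks. arena_malloc and arena_free are placeholder declarations standing in for the shared multi-arena path; the real allocator would keep one list per size class and recover block sizes from headers rather than passing a size to free.

    /* Sketch: per-thread cache of small blocks (assumed design, illustrative only). */
    #include <stddef.h>

    #define CACHED_SIZE  64   /* only blocks of this size class are cached here */
    #define CACHE_LIMIT  64   /* cap the list so each thread cannot hoard memory */

    typedef struct cached_block {
        struct cached_block *next;
    } cached_block_t;

    static __thread cached_block_t *local_list = NULL;
    static __thread int local_count = 0;

    void *arena_malloc(size_t size);   /* placeholder: allocate from a shared arena */
    void  arena_free(void *ptr);       /* placeholder: return a block to its arena */

    void *cached_malloc(size_t size) {
        if (size == CACHED_SIZE && local_list != NULL) {
            cached_block_t *b = local_list;   /* pop: the list is thread-private, no lock */
            local_list = b->next;
            local_count--;
            return b;
        }
        return arena_malloc(size);            /* miss: fall back to the shared arenas */
    }

    void cached_free(void *ptr, size_t size) {
        if (size == CACHED_SIZE && local_count < CACHE_LIMIT) {
            cached_block_t *b = ptr;          /* push onto the thread-private list */
            b->next = local_list;
            local_list = b;
            local_count++;
        } else {
            arena_free(ptr);                  /* cache full or wrong size: go to the arena */
        }
    }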