Beyond
March 13, 2026 - Part 1
I'm taking an Operating Systems course this semester. Professor DZ has made it interactive and interesting so far, and it's given me a much deeper understanding of how computers work under the hood. Threads, scheduling, synchronization, memory management: they look like mere exam topics from the surface, but when you really dive into them, you see how they shape the way we build software and systems.
We've been given a semester-long project called "psirver". It is essentially a user-space C++ server on a POSIX system that executes and monitors Python scripts over HTTP. It has forced me to think about process control, signal handling, and fault isolation in a way that most assignments never did.
I've enjoyed building psirver, but I wanted a second project where I could make my own architectural tradeoffs from day one. So I settled on a simple web crawler: small enough to finish quickly, but deep enough to explore concurrency and resource management.
I gave myself a 24-hour time constraint to keep momentum high and decisions practical. The goal is not to build the biggest crawler on day one, but to design something correct, measurable, and easy to extend without rewriting everything later.
— Montasir