University Life - Study

Don’t diss the dissertation!

Computer Lab for GPU programming

It’s that time of year when all the undergraduates flee the city to enjoy their four-month summer break and lecturers frantically start addressing the research they’ve been neglecting all year. It’s also the time when Master’s students make a start on their dissertation (theoretically). The Master’s dissertation (in Computer Science at least) starts after the exams and runs until the middle of September. A student can choose a dissertation topic proposed by a lecturer, or even propose their own!

So what counts as a legitimate dissertation topic? Well, if it involves programming and is an active research area, you can make a dissertation from it. If you want to program an indestructible virus, then we’re not going to stop you (I mean, sure, you’ll have to sign a few disclaimers, meet with the ethics committee and accept that you are making yourself completely unemployable, but other than that, you’re good to go!).

In order to propose your own topic, the first thing you need to do is identify what area you’re interested in. This could be Security, Social Networks, Games, High Performance Computing or many more. Once you’ve done that, it is worth speaking to a lecturer who is also interested in that topic (if you don’t know who to speak to, ask your personal tutor for advice). The lecturer can tell you what the current research in that area is focusing on and suggest some topics for you to mull over (with a glass of wine, of course). Alternatively, if you already have a specific topic in mind, it is still worth speaking to the lecturer involved in that area, as they can help you turn it into a proper dissertation topic. Since I have quite a strong interest in High Performance Computing, I thought I’d suggest a topic in that domain. Not only does a project in this domain teach you to write more efficient code, but you also become scarily good at speeding up computers (particularly if they run Windows). So much so, that I actually find it quite therapeutic (and am campaigning to have it recognised as a form of mindfulness).

The Project

Nvidia Graphics Card (Photo Credit: GBPublic, https://www.flickr.com/photos/gbpublic/)

Currently most (if not all!) of the computers you can buy come with dual-core, quad-core or generally multi-core processors. The reason for this is that chip makers are struggling to make individual processors any faster, so to compensate they’re simply putting more of them on a single chip. Now you might think that moving from a dual-core to a quad-core processor will translate to a computer that’s twice as fast, but you’d be wrong. You see, you can’t just throw more processors at a problem and expect it to be solved faster: unless the software is written to make use of those additional processors, you’re not going to notice a speed-up (another limitation is that processors shouldn’t be thrown). Now that may seem like a small inconvenience, but it’s really not (much like the Welsh weather). The programming techniques used to spread work across multiple processors can be quite tricky to get right and aren’t what most programmers are used to.

Is this that big a deal though? Well, the processor is what bumps up the cost of your device (similar to how alcohol bumps up a student’s debt). That’s essentially where all your money is going and yet, for the most part, these additional processors are just sitting idle, whirring away happily like a student doing a degree in wine tasting (perhaps a few too many references to alcohol in this post). Therefore one project idea could be to take algorithms/software that runs on one processor (and is known to have performance problems), re-write it to run on multiple processors and then measure the speed-up (or, er, slow-down, as was the case with me). There’s a sketch of that single-processor starting point just below.
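To make that concrete, here’s a minimal, made-up sketch of the sort of single-processor starting point I’m talking about. Vector addition is just an arbitrary stand-in for whatever algorithm you actually pick, and the array size is invented for the example; the point is the shape of the code: one plain loop, timed so you have a baseline to compare your parallel version against.

```cuda
// Serial baseline (plain C++ host code; compiles fine under nvcc too).
// One core does all the work, no matter how many cores the machine has.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 24;  // ~16 million elements -- an arbitrary example size
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i)   // one element at a time, on one core
        c[i] = a[i] + b[i];
    auto end = std::chrono::steady_clock::now();

    std::chrono::duration<double, std::milli> ms = end - start;
    std::printf("Serial CPU time: %.2f ms\n", ms.count());
    return 0;
}
```

Nothing clever is going on there, and that’s rather the point: this is what most existing code looks like, and the dissertation is about what happens when you try to do better.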

Compute Unified Device Architecture (CUDA). A language for GPGPU. Photo Credit: Wikimedia Commons (https://commons.wikimedia.org/wiki/File:NVIDIA-CUDA.jpg)

It doesn’t end there though. Most computers also come with a graphics card nowadays. Now, while graphics cards are great for games (not that I would know), they can also be used to help the processor with its general day-to-day processing that has nothing to do with graphics. This is known as General Purpose computing on Graphics Processing Units, or GPGPU (learn that acronym, it sounds very impressive when casually slipped into conversations). Unlike CPUs, Graphics Processing Units (GPUs) can have hundreds of cores (as opposed to two, four or eight) and can therefore be a huge help in speeding up software/algorithms that wouldn’t normally think to use the graphics card. However, again, GPGPU is very difficult to implement and so is not used anywhere near as often as it should be. The sketch below gives a taste of what it involves.
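To give a flavour of what GPGPU actually looks like, here is a minimal CUDA sketch that redoes the vector addition from the earlier baseline on the GPU. It’s deliberately bare-bones (no error checking, and the array size is again just an example), but it shows the core idea: instead of one loop on one core, you launch a huge grid of lightweight threads and each one handles a single element.

```cuda
// Minimal CUDA sketch: the same vector addition, but every element is
// handled by its own GPU thread rather than one CPU core looping.
#include <cstdio>
#include <cuda_runtime.h>

// The kernel: runs on the GPU, one thread per element.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n)
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 24;                  // same example size as the serial version
    size_t bytes = n * sizeof(float);

    // Allocate and fill the host (CPU) arrays.
    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs across.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough threads to cover every element.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vecAdd<<<blocks, threadsPerBlock>>>(da, db, dc, n);
    cudaDeviceSynchronize();

    // Copy the result back and spot-check one value.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    std::printf("c[0] = %.1f (expected 3.0)\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}
```

Whether this actually beats the serial loop depends on whether the arithmetic per element outweighs the cost of copying the data to and from the card; for something as trivial as an addition it often doesn’t, which is exactly the kind of trade-off (and potential “slow-down”) the dissertation gets to analyse.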

So there you have it: start thinking about what interests you and what the current state of research is in that area, because if you propose your own topic you’re much more likely to enjoy it and take the diss out of dissertation (yup, I just went there). Since we’re on the topic, you know what else was produced from a Master’s dissertation at Cardiff University? IMDb.