
This code is a mess. Let’s start from scratch again …


[This was originally published on #AltDevBlogADay.]

I have heard this sentence many times, and I have even said it myself more than once. It is pretty common that programmers want a clean and nice code base. They want to understand what is happening at first glance, and they want the feeling that the code meets their quality expectations.

I also do.

But there is a serious problem which is often overlooked when we talk about ‘throwing away’ code and starting from scratch.

Where does this messy code come from?

Code does not get ‘messy’ and war-torn by itself. Nor is it – normally – the fault of some stupid programmer who has no clue what he is doing. I admit this might happen from time to time, but I have never worked with anybody who fits that description.

These are the two main reasons for code to become ‘messy’.

Bugfixes and the handling of corner-cases

There are lots of issues that are found and fixed during the lifetime of a code base. All the small and big fixes add up to code that is not really what one would call ‘clean’. But this does not mean that the code is bad. I would even say it is exactly the opposite of bad. The functionality is tested, proven and able to handle real-world data thrown at it. This ‘messy’ code is your safe haven, and you can rely on it doing what you would expect.

Design/Focus changes

The code was written under different base assumptions, which are not valid any more. Design or focus changes forced a strong shift and required the code to adapt in ways that did not really fit its original design. This quickly leads to code that is hard to understand as a whole and therefore hard to maintain. The additional complexity introduced by this can even spread into the toolchain, which also makes the life of the users of the system miserable.

What to do with the mess?

The most important thing is to realize why the code is in the shape it is in. It is crucial that this is approached with the right mindset. You should always assume that the implementation, the bugfixes and the extensions of the code were done by someone who had a clear picture of what was going on and a clear understanding of what needed to be done. It might sound obvious, but always assume the best knowledge and the best intent. Only then are you able to judge the code objectively.

When you know what state the code is really in, and when you understand all the interdependencies, you can decide how to refactor it.

If there is no serious flaw in the design, and it was not developed with different base assumptions and a different goal than what it has grown into, you should really think twice before changing it at all. Is there really a pressing reason to change it? Your decision should not be based on how much you like the code or how you judge the ‘elegance’ of the solution. The sole reason for the existence of the code is to deliver a specific functionality. And if this functionality is not suffering from the ‘ugliness’, don’t put it at stake. Accept the fact that it might not be perfect, but it does the job. In the end, we are not writing code for the sake of writing code. We are building software. If the software functions properly, we did our job well. No one cares if there is code somewhere that does not adhere to the personal standards of a programmer, right?

Should you realize that the code was developed against different requirements and was afterwards altered to somehow mirror the changes to these requirements, the situation might be a bit more complicated. But even then, you need to keep in mind that even this code is not necessarily shitty.

Whatever you think the right action is … throwing away the code is usually the wrong one. We are always tempted to start from scratch, because we love to implement things, and it is the most fun when you have a clean start. It is also by far easier to write new code than to read old code.

But no matter how hard you try, you will be doomed to fix all the small bugs, issues and corner-cases later on, again. All the things that have already been fixed in the existing code need to be found again by QA and fixed by you. There is no way you can fix all these issues on the fly while re-implementing the functionality. Because of that, even a crappy implementation that has been around for some time has proven its right to exist and should therefore be refactored rather than thrown away. You want to keep as much of the juice that made the code do its job as possible. And usually there is enough of it worth saving.

Conclusion

The last motorcycle I had was over 15 years old. It had a lot of small quirks, but I knew every single one of them. I knew how she behaved in every situation. I knew how to handle her when riding in different weather conditions. I could do the service while drunk … with my eyes closed.

The same is true for old, ‘messy’ code. It is not beautiful, and it has its scratches and its quirks. But you know them, and you know how to use the code to get your job done. Everything you need to be able to do has been done already. You can rely on it to do what you expect.

Do not throw away this intimate relationship just for aesthetic reasons. The new one will also have its issues and problems, but you first need to find all of them and learn how to handle them.


Memory allocation pitfalls on multi-core CPUs


[This was originally published on #AltDevBlogADay. Go there if you want to read a lot of awesome stuff from awesome dudes … ]

Although it is less and less common nowadays, there are still “Thread-Safe Memory Allocators” in use. What do I mean by this? A standard, single-core-based allocator that uses a simple locking mechanism on top to avoid race-conditions.
I am usually a big fan of “The simplest solution”(tm), but this one unfortunately leads to two big problems on multi-core architectures and therefore doesn’t really qualify as a ‘solution’ at all.
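
To make it concrete, here is a minimal sketch of what I mean ( a made-up example; malloc stands in for the underlying single-core allocator ):

```cpp
#include <cstddef>
#include <cstdlib>
#include <mutex>

// A 'Thread-Safe Memory Allocator' in the above sense: a single-threaded
// allocator made 'safe' by wrapping every call in one global lock.
class LockedAllocator
{
public:
    void* Allocate( std::size_t size )
    {
        std::lock_guard<std::mutex> lock( m_mutex ); // every thread serializes here
        return std::malloc( size );                  // stand-in for the real allocator
    }

    void Free( void* ptr )
    {
        std::lock_guard<std::mutex> lock( m_mutex ); // ... and here
        std::free( ptr );
    }

private:
    std::mutex m_mutex;
};
```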

Thread contention

I think it is pretty obvious that thread contention is bound to happen. When one thread is accessing the allocator ( allocating or releasing memory ), all other threads that are trying to do the same are blocked. It does not matter how fast the allocator is; it will never be fast enough not to introduce contention and block other threads. This has an impact on performance, especially in standard high-level gameplay code. As high-level gameplay code tends to use the allocator a lot ( creating/destroying objects, growing/shrinking dynamic arrays, etc. ), this is a recipe for throwing away clock-cycles. For no gain at all. And I am not talking about a few nanoseconds here; depending on the number of runtime allocations, this can add up faster than one might expect.

False Cache-Sharing

This is the more serious issue, and it is not that obvious to see. Two threads are working on data in memory areas that are mapped to the same cache-line. This is not a theoretical problem, but a situation that is not that unlikely to happen. The probability of running into it increases with the amount of allocator contention. There is a good chance that a non-thread-aware allocator returns consecutive memory areas for consecutive allocations. If these allocation requests are coming from different threads, false cache-sharing is waiting to happen.

Example

Thread_A resides on CPU0
Thread_B on CPU1.

Both threads are doing totally unrelated calculations and both of them are allocating some memory.
Let’s assume both get a chunk of memory from the same cache-line.

This situation is called ‘false sharing’ or – what is even more fitting – ‘cache line ping-pong’. We have now created the biggest nightmare ( at least performance-wise ) for the cache-coherency protocol.

Thread_A writes to its memory.
– This invalidates Thread_B’s cache-line.
– The cache-line of Thread_A must be written back to memory …
– … and read back again into the cache of Thread_B.

The same applies if Thread_B is modifying its memory area.
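
To make this tangible, here is a minimal sketch of the scenario ( nothing from a real code base; the iteration count and the assumption that the two allocations land in one cache-line are made up for illustration ):

```cpp
#include <cstdint>
#include <cstdio>
#include <thread>

int main()
{
    // Two consecutive small allocations - with a non-thread-aware allocator
    // there is a good chance they end up in the same cache-line.
    volatile std::uint32_t* counterA = new std::uint32_t( 0 );
    volatile std::uint32_t* counterB = new std::uint32_t( 0 );

    std::thread threadA( [counterA] {          // 'Thread_A resides on CPU0'
        for ( int i = 0; i < 100000000; ++i )
            *counterA = *counterA + 1;         // every write invalidates Thread_B's line
    } );
    std::thread threadB( [counterB] {          // 'Thread_B on CPU1'
        for ( int i = 0; i < 100000000; ++i )
            *counterB = *counterB + 1;         // ... and vice versa
    } );

    threadA.join();
    threadB.join();
    std::printf( "%u %u\n", *counterA, *counterB );
    return 0;
}
```

Timing this against a variant where each counter is padded out to a full cache-line ( typically 64 bytes ) is an easy way to see the cost of the ping-pong for yourself.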

If you are interested in more details and also some performance impact measurements, check out ‘Analysis of False Cache Line Sharing Effects on Multicore CPUs’.

[Update]

As I was asked in the comments section of ADBAD what I would propose as a solution, here is my answer:

My preferred solution would be to disallow dynamic allocations at runtime completely, but that might be a bit drastic 🙂

So I’d rather go with this answer:

Instead of using a ‘thread-safe allocator’, which introduces the problems mentioned above, using what I like to call a ‘Thread-Aware Allocator’ should be the way to go.

Each thread gets its own big blob of memory and the management is done on a per-thread basis. This reduces the thread-contention to the situations where a new memory chunk is needed.
As every thread is allocating from its own memory-blob, the chances of false sharing for the reason described above are minimized.
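
Here is a rough sketch of that idea ( all names, the blob size and the missing handling of frees, alignment and over-sized allocations are made up for illustration ):

```cpp
#include <cstddef>
#include <cstdlib>
#include <mutex>

static const std::size_t kBlobSize = 1024 * 1024; // 1 MiB per thread ( arbitrary )

// The only place where threads can contend: fetching a fresh blob.
static void* AllocateBlobFromSystem()
{
    static std::mutex s_lock;
    std::lock_guard<std::mutex> lock( s_lock );
    return std::malloc( kBlobSize );
}

// Each thread bump-allocates from its own blob; the common path touches
// no shared state at all.
void* ThreadAwareAlloc( std::size_t size )
{
    static thread_local char*       t_blob = 0;
    static thread_local std::size_t t_used = kBlobSize; // forces a blob on first use

    if ( t_used + size > kBlobSize ) // first call or current blob exhausted
    {
        t_blob = static_cast<char*>( AllocateBlobFromSystem() );
        t_used = 0;
    }

    void* result = t_blob + t_used;
    t_used += size;
    return result;
}
```

As a side effect, allocations made by different threads now come from different blobs, so they no longer end up right next to each other in the same cache-line.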

One well documented example is the Intel TBB Scalable Allocator. ( It starts a few pages down … search for ‘SCALABLE MEMORY ALLOCATION’. )
[/Update]

Further reading

[1] Analysis of False Cache Line Sharing Effects on Multicore CPUs
[2] Concurrency Hazards: False Sharing
[3] For more details on caches, read this excellent post by Luke Hutchinson.

Who killed Anti-Portals


[This was originally published on #AltDevBlogADay. Go there if you want to read a lot of awesome stuff from awesome dudes …

Check out the comments on this post on ADBAD, especially those from Christina Ann Coffin ( http://altdevblogaday.com/2011/08/24/who-killed-anti-portals/ ). ]

Yesterday, I had a small chat with a former coworker that threw me back in time. It was about Anti-Portals.
Yeah, I know, you heard that term back in the day, but as it is so long ago …

… here is a small reminder of what the hell an Anti-Portal is

An Anti-Portal is just a plane placed in the world which tells you that everything behind it is not visible. To make use of it, you generate a plane through the player’s point of view for every edge of the anti-portal. You end up with a frustum that allows you to easily check whether an object or scene-partitioning node is occluded. Normal Portals work exactly the same way, but they define the visible area instead of the occluded one. They were used for doorways and such.
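
To illustrate, here is a rough sketch of such a test against a bounding sphere ( not from any actual engine; the winding and plane conventions are assumptions made for this example ):

```cpp
#include <cmath>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; }; // Dot( n, p ) + d < 0 -> p is behind the plane

static Vec3  Sub( Vec3 a, Vec3 b )   { return Vec3{ a.x - b.x, a.y - b.y, a.z - b.z }; }
static float Dot( Vec3 a, Vec3 b )   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Cross( Vec3 a, Vec3 b ) { return Vec3{ a.y * b.z - a.z * b.y,
                                                    a.z * b.x - a.x * b.z,
                                                    a.x * b.y - a.y * b.x }; }
static Vec3  Normalize( Vec3 a )     { float s = 1.0f / std::sqrt( Dot( a, a ) );
                                       return Vec3{ a.x * s, a.y * s, a.z * s }; }

// One plane per anti-portal edge, going through the player's point of view.
static Plane PlaneThroughEye( Vec3 eye, Vec3 edgeA, Vec3 edgeB )
{
    Vec3 n = Normalize( Cross( Sub( edgeA, eye ), Sub( edgeB, eye ) ) );
    return Plane{ n, -Dot( n, eye ) };
}

// A sphere is occluded if it lies fully behind the anti-portal's own plane
// ( normal assumed to face the viewer ) and fully inside the frustum built
// from the edge planes ( vertices wound so the edge normals point outwards ).
bool IsSphereOccluded( Vec3 eye, const Vec3* verts, int numVerts,
                       Plane portalPlane, Vec3 center, float radius )
{
    if ( Dot( portalPlane.n, center ) + portalPlane.d > -radius )
        return false; // not fully behind the occluder itself

    for ( int i = 0; i < numVerts; ++i )
    {
        Plane p = PlaneThroughEye( eye, verts[i], verts[( i + 1 ) % numVerts] );
        if ( Dot( p.n, center ) + p.d > -radius )
            return false; // sphere sticks out of the occlusion frustum
    }
    return true;
}
```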

What the hell has happened to Anti-Portals?

After that chat yesterday, I realized for the first time that this technique has silently disappeared and is dead by now. I haven’t heard of anyone using anti-portals since around 2004 or so. Why? Although we have other ways to handle occlusion nowadays, I cannot see what harm a few well-placed anti-portals could do. But I can see that they could be used perfectly well to reject a bunch of objects at almost no cost. Of course this only makes sense for large occluders, but hey, why not? You will almost always be able to find some of these.

But now, let’s ask the most important question … whose fault was it?

Who killed Portals / Anti-Portals? 🙂

If you have any clues that might help to find the murderer, please share them with us. Maybe we can catch that bastard before occlusion queries silently disappear.

Why the hell am I writing this?

First and foremost, to commemorate the fallen ones and to find that reckless criminal. And second, because this topic instantly reminded me of a long forgotten ( at least by me ) internet site: Flipcode! Although the site has been down since 2005, the archives are still there and it is still a lot of fun to read that stuff again.