
Who killed Anti-Portals


[This was originally published on #AltDevBlogADay. Go there if you want to read a lot of awesome stuff from awesome dudes …

Check out the comments on this post on ADBAD, especially those from Christina Ann Coffin ( http://altdevblogaday.com/2011/08/24/who-killed-anti-portals/ ) ]

Yesterday, I had a small chat with a former coworker that threw me back in time. It was about Anti-Portals.
Yeah, I know, you heard that term back in the day, but since it was so long ago …

… a small reminder of what the hell an Anti-Portal is

An Anti-Portal is just a plane placed in the world which tells you that everything behind it is not visible. To make use of it, you generate a plane through the player’s point of view for every edge of the portal. You end up with a frustum that lets you easily check whether an object or scene-partitioning node is occluded. Normal portals work exactly the same way, but they define the visible area instead of the occluded one. They were used for doorways and such.
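
To make this concrete, here is a minimal C++ sketch of the idea. All names and math helpers are invented for illustration; it assumes a convex portal polygon whose vertices are wound consistently as seen from the eye ( flip the plane sign if your winding differs ).

#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(const Vec3& a, const Vec3& b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(const Vec3& a, const Vec3& b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3  normalize(const Vec3& v) {
    const float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Plane in the form dot(normal, p) + d = 0; negative distance means "inside".
struct Plane { Vec3 normal; float d; };

// One plane per polygon edge, each going through the eye. Together they
// bound the occluded volume behind the anti-portal. ( For a complete test
// you would also add the plane of the portal polygon itself, so that only
// objects fully behind the occluder are rejected. )
std::vector<Plane> buildOcclusionFrustum(const Vec3& eye, const std::vector<Vec3>& verts)
{
    std::vector<Plane> frustum;
    const std::size_t n = verts.size();
    for (std::size_t i = 0; i < n; ++i) {
        const Vec3& a = verts[i];
        const Vec3& b = verts[(i + 1) % n];
        // plane through the eye and the edge (a, b)
        const Vec3 normal = normalize(cross(sub(a, eye), sub(b, eye)));
        frustum.push_back({ normal, -dot(normal, eye) });
    }
    return frustum;
}

// A bounding sphere is occluded if it lies completely inside every plane.
bool isOccluded(const std::vector<Plane>& frustum, const Vec3& center, float radius)
{
    for (const Plane& p : frustum)
        if (dot(p.normal, center) + p.d > -radius) // sphere pokes out of this plane
            return false;
    return true;
}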

What the hell has happened to Anti-Portals?

After that chat yesterday, I realized for the first time that this technique disappeared silently and is dead by now. I haven’t heard of anyone using anti-portals since around 2004 or so. Why? Although we have other ways to handle occlusion nowadays, I cannot see what harm a few well-placed anti-portals could do. But I can see that they could be used to reject a whole bunch of objects at almost no cost. Of course this only makes sense for large occluders, but hey, why not? You will almost always be able to find some of these.

But now, let’s ask the most important question … whose fault was it?

Who killed Portals / Anti-Portals? 🙂

If you have any clues that might help to find the murderer, please share them with us. Maybe we can catch that bastard before occlusion queries silently disappear.

Why the hell am I writing this?

First and foremost, to commemorate the fallen ones and to find that reckless criminal. And second, because this topic instantly reminded me of a long-forgotten ( at least by me ) website: Flipcode! Although the site has been down since 2005, the archives are still up and it is still a lot of fun to read that stuff again.

What you need to give up when going data oriented


[This was originally published on #AltDevBlogADay. Go there if you want to read a lot of awesome stuff from awesome dudes …]

This post is not about the performance advantages of data oriented design, as this has already been covered pretty extensively by much smarter guys ( see links below ).

What I want to talk about are the prejudices that I always hear when people start to defend their holy objects.
Everyone and his mother is constantly reiterating the advantages of OOP – productivity, maintainability and code-reusability. Should we really sacrifice all these good things just in favor of faster execution times?

What are the advantages of OOP everyone is so keen to protect?

I think this list should cover most of the claimed benefits:

– Encapsulation
– Inheritance
– Polymorphism
– Modularity
– Code-Reusability
– Elegance
– Extensibility

Let’s go through this list and take a look at what we really need to sacrifice.

Encapsulation:

Hiding implementation-specific data and functionality. This is something you should aim for in any programming paradigm, no matter if you call it OOP, DOD or ADHD. It is definitely nothing you should give up. And you don’t need to. Claiming that encapsulation only works with OOP is just plain wrong, as it has been done in C since the dawn of time. So this still applies to DOD and is nothing you need to give up.
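
A minimal sketch of the classic C-style way to get encapsulation without classes, using an opaque struct ( all names invented for illustration ):

#include <cstdlib>

// Public interface ( would live in stack.h ): the struct is opaque,
// so callers can never touch the implementation details.
struct Stack;
Stack* stack_create(int capacity);
void   stack_destroy(Stack* s);
void   stack_push(Stack* s, int value);
int    stack_pop(Stack* s);

// Implementation ( would live in stack.cpp ): the only place that
// knows the actual layout.
struct Stack {
    int* data;
    int  size;
    int  capacity;
};

Stack* stack_create(int capacity) {
    Stack* s = static_cast<Stack*>(std::malloc(sizeof(Stack)));
    s->data = static_cast<int*>(std::malloc(sizeof(int) * capacity));
    s->size = 0;
    s->capacity = capacity;
    return s;
}

void stack_destroy(Stack* s) { std::free(s->data); std::free(s); }
void stack_push(Stack* s, int v) { if (s->size < s->capacity) s->data[s->size++] = v; }
int  stack_pop(Stack* s) { return s->size > 0 ? s->data[--s->size] : 0; }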

Inheritance:

The ability to inherit a class’ data and functionality to extend/alter its behavior.
Inheriting data does not really fit well into the DOD concept. It obfuscates the way your data is organized and forces you to group the data by object, not by usage pattern. This is definitely a thing you would need to give up, at least to some extent.
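
To illustrate the difference ( all names invented ), here is data grouped by object versus data grouped by usage pattern:

#include <cstddef>
#include <vector>

// Grouped by object ( the OOP way ): every field travels together,
// even if a given system only ever touches one or two of them.
struct Entity {
    float position[3];
    float velocity[3];
    int   health;
    char  name[32];
};

// Grouped by usage pattern ( the DOD way ): the movement system streams
// over exactly the data it needs, and nothing else pollutes the cache.
struct MovementData {
    std::vector<float> posX, posY, posZ;
    std::vector<float> velX, velY, velZ;
};

void integrate(MovementData& m, float dt) {
    for (std::size_t i = 0; i < m.posX.size(); ++i) {
        m.posX[i] += m.velX[i] * dt;
        m.posY[i] += m.velY[i] * dt;
        m.posZ[i] += m.velZ[i] * dt;
    }
}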

Polymorphism:

That is a really nice concept of OOP. It releases you from thinking about what happens when you call a method on any given object. If you want to update your 10000 entities, you just iterate over a list and call update() each time. Everything is handled for you. Yeah, it is the most inefficient way you could handle this … but it works. It is convenient and it saves you a lot of headaches. Unfortunately, it is highly overused.
You do not really want that much polymorphism if you design your code around your data. This does not mean there is no space left for it, but you have to decide whether it makes sense instead of using it as the default way of programming.
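
As a sketch of the data-oriented alternative ( all types invented ): keep homogeneous arrays and update each in a tight, non-virtual loop instead of calling a virtual update() per object.

#include <vector>

struct Particle { float pos, vel; };
struct Turret   { float angle, targetAngle; };

// The "which type is this?" decision is made once per array,
// not once per object via a vtable lookup.
void updateParticles(std::vector<Particle>& ps, float dt) {
    for (Particle& p : ps) p.pos += p.vel * dt;
}

void updateTurrets(std::vector<Turret>& ts, float dt) {
    for (Turret& t : ts) t.angle += (t.targetAngle - t.angle) * dt;
}

void updateWorld(std::vector<Particle>& ps, std::vector<Turret>& ts, float dt) {
    updateParticles(ps, dt); // predictable, linear memory access
    updateTurrets(ts, dt);
}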

Modularity:

You put all data and functionality into one parent entity … a class, for example. The ‘class’ in this case is an arbitrarily defined scope for a ‘module’. A module could as well be a library, a subfolder in your source tree, a matching pair of header and implementation files, or even a block of code marked by some fancy comment ( maybe even including ASCII art ). So why is modularity attributed to OOP? You do not have to give up any modularity when moving away from an OOP model. In the end, a module is defined by a description of the data and the transforms that can be applied to it. Modularity is perfectly possible and encouraged in data oriented programming as well.

Code Reusability:

This should never be what drives your implementation decisions. But apart from that, code reuse is achieved by calling a function, right? Is it ‘better’ code reuse if a class provides you the ’reusable’ functions to call? “But you can reuse entire objects, you $%*§$!”, I hear you saying. Can you? How often – in a real-world application – have you reused an object to do something you didn’t already have in mind when writing that class originally? I think there aren’t that many occurrences, apart from the obvious cases. There is nothing that stops you from reusing your non-OOP code. Just because it is not modeled after an object does not mean that it cannot be used elsewhere. You can reuse as much code as makes sense, so nothing to give up here.

Elegance:

What the hell is this supposed to mean? What is elegance in code? Is it achieved by modelling your code base after the small chunks of code Erich G. & Friends have taught you? Is it elegant if you are layering one abstraction over another? The definition of elegant code is a bit subjective, so I find it hard to use this to support any programming model.
My personal definition: “Elegant code does the job it is supposed to do ( and nothing more ) in an efficient way and can be understood by you or some other coder six months later without jumping through 37 files.”
So, to reverse this argument – and to clarify how subjective ‘elegance’ is – object-oriented design encourages programmers to write non-elegant code according to my definition ( which for sure is the only correct one 😉 ).

Extensibility:

I see extensibility of code as tied pretty tightly to my definition of elegance. If you are able to modify any given code and can understand it and the possible side effects without needing to understand the 33 abstraction layers underneath you, you have pretty extensible code.
The requirement is not to have a system in place that gives you the possibility to derive some classes and re-implement some functions. The requirement is that you are able – in a short timeframe – to extend the code by the needed functionality without causing havoc some thousand lines away.

Conclusion:

If you look back – at my highly biased post – there is not really a lot you are giving up in favor of faster execution times. But the most important advantage of data oriented design is that it discourages over-engineering. You are not writing code for the sake of creating a code-temple for your ego, you are writing code to perform an operation on your data set. So, by starting to be awesome and concentrating on your data, you are elevating your code to new levels of readability, maintainability and extensibility … and you get faster execution times as a nice side effect. Do not forget the street cred you get from doing the right thing …

Further reading:

Pitfalls of Object Oriented Programming
Typical C++ Bullshit
Practical Examples in Data Oriented Design

Stupid quoting is the root of all evil


If I had received a beer for every time I heard (or read) someone quoting Knuth’s “Premature optimization is the root of all evil”, I would have died of cirrhosis of the liver long ago … twice.

Why does this quote drive me so mad?

First of all, no one I have ever met who used this quote to back up his stupid point had even read the paper it originates from.

Structured Programming with go to Statements

Yeah, really, that was the title. And the entire article was about optimizing the shit out of stuff.

Why is no one quoting the title of this paper? Maybe I should do this whenever someone claims how evil ‘goto’ is.

Sorry that you are not able to use the available tools without screwing up your code base and falling into spaghetti mode, moron! Have you ever heard a carpenter say: “Dude, I don’t use saws. That is fricking dangerous. I could hurt myself.”?

Goto is as evil as virtual when it falls into the wrong hands.

But that is not the point … I’m getting sidetracked 🙂

Never ignore performance considerations

To be clear: I know the importance of profiling to identify your bottlenecks and your critical path. I would never argue against that. Optimize only where your profiler tells you it makes sense.

But that does not mean you can just not give a crap about the rest of the code. Keep one thing in mind: there is no non-performance-critical code in a game, ever. None. You don’t need to optimize the hell out of everything, but you need to think about the performance implications of your code in every single case. There should never be an exception to this.

When you start to don’t care, Baby Jesus will hate you.

What this gives you in the end is a bit too much cost for almost everything that is going on. You are losing time on trivial things all around your code base, but you are not able to nail it down and optimize it properly, as it is spread everywhere. And every single optimization will give you almost non-measurable improvements. But the sheer amount of small inefficiencies adds up and costs you a considerable amount of execution time.

Unfortunately, when you are at this point, there is no chance of improving this ‘death by a thousand papercuts’ situation anymore. You will not have the resources to spend precious programmer time on such minor improvements. It is just not enough bang for the buck.

Do not ever ignore performance considerations! This will bite you in the ass in the long run and you will have to suffer in other areas. In the worst case you will even be forced to scale down some features to meet the performance criteria. And for what?
Just for the fact that you followed a totally outdated quote that is used out of context and interpreted wrongly.

And stop quoting stuff you have no clue about.

Ironic as I am, I will finish this post with another quote from another awesome programmer 🙂

"My point is, that you should fire anyone quoting anything from this paper without pointing out, that all this is obsolete, because compilers changed a lot since the age of dinosaurs ;-)"

Git Stuff

Since I started working at Nokia, I have had the pleasure of working with a ‘Distributed Version Control System’. As I had mostly used Perforce before, the switch was both a blessing and a curse.

I have to admit that I had massive problems getting used to it in the beginning. But by now, git and I are BFFs … at least until random shit starts to happen again :).
Yeah, I know that this is my fault and not git’s. It is just so damn easy to do something wrong. Git is far from a submit-and-run VCS like Perforce, but that is a fair price for being able to branch whenever you want without days of integration pain.

I do not want to go into too much detail here, as there are more than enough very good tutorials out there. If you are new to DVCS, check out Joel’s brilliant article.

Here are some (hopefully) useful tips for working with git.

Git in Dropbox

That one is pretty obvious, but extremely useful, especially for private projects. You can push your local repo to your Dropbox and it automatically gets synced to every PC you use Dropbox on.

# go to your Dropbox and create your project directory
$ cd ~/Dropbox
$ mkdir my_project
$ cd my_project

# now initialize your git repo with
$ git --bare init

# As you have your remote-repo prepared, go to your local repository.
$ cd ~/dev/my_project

# First, you need to introduce the remote location to git 
# this adds the specified path as the remote named 'origin'
# but you could as well name it 'Dropbox' or 'whatever'
$ git remote add origin file:///home/user/Dropbox/my_project

# git is set up, so push it to the remote ( 'origin' or whatever
# name you have used ). 
$ git push origin master

Done, you now have your repo on your Dropbox. If you are on another PC and want to access it, just clone it from there and you are set. You can use this like you would use any git server.
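
For example ( assuming the same paths as above, with /home/user as the home directory on the second PC as well ):

# on the other PC, clone straight from the synced Dropbox folder
$ git clone file:///home/user/Dropbox/my_project ~/dev/my_project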

Save the history with rebase

As your local repo is basically a branch of the remote repo, the default behavior of git pull is a merge. There is nothing really wrong with this, but if you work on larger projects with lots of contributors, it makes your history really hard to read.

You can avoid this quite easily by using rebase instead: git pull --rebase.
The main difference is the way the merge happens. With rebase, your commits are ‘removed’, the remote changes are applied, and after that your changes are applied on top of the remote changes. This keeps the history linear and makes it human-readable again.
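
If you want this behavior by default, recent git versions let you configure it ( a sketch; check what your git version supports ):

# make every git pull rebase instead of merge
$ git config --global pull.rebase true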

Interactive Rebase FTW!

An interactive rebase allows you to modify already-committed changes. Let’s say you are prototyping something. Instead of waiting for a good state to commit your changes, you can commit as often as you want. When you are ready to push, you can do an interactive rebase and squash commits together, remove them completely or change the commit messages.
So, you have been prototyping a feature and realized that you needed to refactor a bit of old code in the process. Let’s assume you now have 5 small check-ins. 2 changes are small refactorings and the other 3 are iterations of the feature you are prototyping. You realize that it would make more sense to have only 2 commits: one for the refactoring, and one for your feature.

# you need to tell interactive rebase which commits you are interested in
# ( in our case these are the last 5 commits )
$ git rebase -i HEAD~5

This will put you into rebase mode, where you can select what you want to do with these changes.

pick 5c6bb74 some refactoring
pick 91dbdfa other refactoring
pick 3080d61 iteration 1
pick 4e4f56a iteration 2
pick 1890f70 iteration 3

# Rebase a37f00c..1890f70 onto a37f00c
#
# Commands:
#  p, pick = use commit
#  r, reword = use commit, but edit the commit message
#  e, edit = use commit, but stop for amending
#  s, squash = use commit, but meld into previous commit
#  f, fixup = like "squash", but discard this commit's log message
#  x, exec = run command (the rest of the line) using shell
#
# If you remove a line here THAT COMMIT WILL BE LOST.
# However, if you remove everything, the rebase will be aborted.
#

You can now alter the changes. In this case we want to group them and change their
commit messages. The result could look like this:

reword 5c6bb74 some refactoring          # changes the commit message
fixup 91dbdfa other refactoring          # groups this commit with the previous
reword 3080d61 iteration 1               # changes the commit message
fixup 4e4f56a iteration 2                # groups this commit with the previous
fixup 1890f70 iteration 3                # groups this commit with the previous

After you have done this, you will be prompted for the commit messages of the two rewords. When you are finished, you have only two commits left, and they have proper change descriptions. You can now push this without having a bad conscience. This is what the history looks like now:

$ git log
commit 70f40f9504e5721c7bce32fe9a8c792cddce6acf
Author: Martin Zielinski 
Date:   Thu Jul 7 23:50:14 2011 +0200

    feature xyz

commit 4e47d572508b1109097f73959fe7be02e23ee437
Author: Martin Zielinski 
Date:   Thu Jul 7 23:49:22 2011 +0200

    refactoring old code

Hello world!


I hope that I find the time to blog about some technical stuff, especially programming and game-programming related topics. Topics covering optimization for game consoles as well as mobile platforms might also find their way onto this blog. And I will for sure also do what I am best at: complaining and ranting :).

But don’t expect too much, I don’t either.