As a matter of fact, when you look under the hood of an object, all you see is a structure with a few pointers to functions. You don't need an object-oriented programming language to produce such code; C will do just fine, thank you very much.
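For what it's worth, here is a minimal sketch of such an "object" in plain C; the 'Shape' type and its functions are made up purely for illustration:

    #include <stdio.h>

    /* An "object": a structure holding some data plus a few pointers to functions. */
    typedef struct Shape {
        double width, height;
        double (*area)(const struct Shape *self);
        void   (*print)(const struct Shape *self);
    } Shape;

    static double rect_area(const Shape *self) {
        return self->width * self->height;
    }

    static void rect_print(const Shape *self) {
        printf("rect %gx%g, area %g\n",
               self->width, self->height, self->area(self));
    }

    int main(void) {
        Shape r = { 3.0, 4.0, rect_area, rect_print };
        r.print(&r);   /* a "method call" is just a call through a pointer */
        return 0;
    }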
BTW, structures are not very efficient. A structure usually gives rise to a frenzy of alignment problems that can only be solved with fillers. Actually, you don't really need structures, since they can also be represented by a host of loosely related arrays. It doesn't make much difference whether you write 'row.column' or 'column[row]', except that arrays of the same type are much easier to handle for both the compiler and the memory allocator.
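As a hedged sketch of what I mean (the 'Record', 'id' and 'value' names are just examples), the same data can live in one structure per row or in a couple of plain arrays:

    #include <stdio.h>

    #define ROWS 4

    struct Record { int id; double value; };   /* the structure version  */

    int    id   [ROWS];                        /* ...or parallel arrays  */
    double value[ROWS];

    int main(void) {
        struct Record table[ROWS];

        table[2].value = 1.5;    /* 'row.column' ...          */
        value[2]       = 1.5;    /* ... versus 'column[row]'  */

        printf("%g %g\n", table[2].value, value[2]);
        return 0;
    }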
Basically, there are only one-dimensional arrays, or do you really believe that by magic memory is wrapped into matrices and cubes? The compiler conveniently translates your 'array[x][y]' notation into a real offset like 'array + ((x * size) + y)', where 'size' is the length of one row.
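You can check that claim yourself; a tiny example (the dimensions are arbitrary):

    #include <assert.h>

    int main(void) {
        int array[4][5];                 /* 4 rows of 5 ints, one flat block   */
        int x = 2, y = 3;
        int *flat = &array[0][0];

        /* The "two-dimensional" access is just an offset computation. */
        assert(&array[x][y] == flat + (x * 5 + y));
        return 0;
    }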
There are a lot of fancy datatypes, but basically they are just arrays of arrays. As a matter of fact, there are only two real datatypes, a word and a byte. A pointer is usually a word. The different pointer types are just there to let the compiler do some work for you, so you don't have to remember what the real size of, say, a character or an integer is. Thus, for a 'char *p', 'p + 1' translates into the address in 'p' plus 'sizeof (char)'.
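A tiny check of that pointer arithmetic, using 'int' instead of 'char' since 'sizeof (char)' is always 1 and would hide the point:

    #include <stdio.h>

    int main(void) {
        int a[2];
        int *p = &a[0];

        /* p + 1 is really (char *)p + sizeof(int): the pointer type only
           tells the compiler how many bytes one step covers. */
        printf("%d\n", (char *)(p + 1) == (char *)p + sizeof(int));  /* prints 1 */
        return 0;
    }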
I wonder why people need a 'typeof()' operator: after all those declarations written to let the compiler do most of the work, they seem to have forgotten halfway through what type 'p' actually was. After all the help their object-oriented compiler gave them, they have completely lost track of what they were doing. So, if it doesn't help, what good is it?
As a matter of fact (hush!) there is no such thing as a pointer. It's just an address; in other words, it is a variable that holds a location in memory. A NULL pointer is just a variable that points to the very first byte in memory (address 0), and by (compiler) convention that is not a valid address. A NULL string is a pointer to a byte in memory that holds just a terminator, which is 0 by convention. Note that these are just conventions; you could define another convention which might work just as well or even better.
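The difference between the two conventions in a few lines of C (the variable names are mine, just for illustration):

    #include <stdio.h>

    int main(void) {
        char *null_pointer = NULL;   /* no valid address at all (0 by convention) */
        char *empty_string = "";     /* a valid address holding one 0 byte        */

        if (null_pointer == NULL)
            printf("nothing to point at\n");
        if (empty_string != NULL && empty_string[0] == '\0')
            printf("a string that holds just a terminator\n");
        return 0;
    }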
Usually, I don't need much more than a stack (where I push my parameters, all words) and a way to allocate an array of words or bytes. When I push a byte on the stack, it is expanded to a word, and when I store a value from the stack into a byte, its most significant bits are lost. When I'm done, I release the allocation. I don't need a paranoid garbage collector to clean up my mess, thank you. Even less do I need a compiler to do the 'instantiation' or 'destruction' of 'objects'; I'm quite capable of allocating or deallocating a bunch of bytes or words myself.
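In plain C that whole lifecycle is a handful of lines (the size of 1024 words is an arbitrary example):

    #include <stdlib.h>
    #include <stdio.h>

    int main(void) {
        size_t n = 1024;

        /* Grab a bunch of words, use them, give them back -- no garbage
           collector, no constructors, no destructors. */
        long *words = malloc(n * sizeof *words);
        if (words == NULL)
            return 1;

        for (size_t i = 0; i < n; i++)
            words[i] = (long)i;

        printf("last word: %ld\n", words[n - 1]);
        free(words);
        return 0;
    }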
BTW, the latter seems a lot easier. Not only that, but my programs seem to run a lot faster and are a lot smaller than most others. I can forgive C that it is typed, so that I am forced to do some casts to get rid of those ugly warnings. It still seems odd that I have to prove to a dumb machine that I know what I am doing. The problem is that with C++ I have to figure out what it is doing, by using ugly things like class browsers. Normally, I don't even need a debugger, so why the heck should I use something as hideous as a class browser? When I'm really in trouble, the assembly switch of a C compiler does wonders, even though most people have forgotten it's in there.
In short, I'm still quite happy with C. It seems a better conceived language than C++ is. I can read it easily (without trying to remember what '<<' or '++' means on Tuesdays) and it seems to realize very well that sometimes the world is not an object.