30 years of C

30 years ago I learned C by reading the only book available on it, The C Programming Language by Kernighan and Ritchie. I didn’t have access to a Unix system (I’d accepted my first job, but it didn’t start for about a month) so I just read the book cover-to-cover and wrote out the exercises on paper. When I started the job I had a quarter-inch stack of paper to type in and try out.

Years later I got used to hearing people complain about how hard K&R was to learn from and I found it difficult to relate. C seemed to be designed for the way my brain worked; pointers were a natural expression of what the machine was doing, structs and unions seemed obvious, the control structures were simple, and only declarations were hard to understand.

My next readings after K&R were Kernighan and Plauger’s Software Tools and an illicit photocopy of Lions’ Commentary on Unix 6th Edition. In other words, I had a heavy dose of the Unix philosophy of development early in my career, and that pretty much ruined me for IBM shops and serious AI hacking.

My first “real” programming in college was in Pascal, which seemed horribly crippled and awkward to me (the typing scheme — especially around strings — was broken, the I/O stunk, and amongst a hundred other small irritations, you had to invent extra condition variables to exit loops). I read Kernighan’s paper Why Pascal is Not My Favorite Programming Language and heartily agreed with it. When I worked for Apple a few years later I was dismayed that it was a Pascal shop, until I found that Apple had extended Pascal with many of the things that C had, just so Pascal could be a real systems programming language.


My first experience with C++ was at Apple, using AT&T’s CFront, which was a preprocessor that converted C++ source code into vanilla C. My first C++ program wasn’t a “hello, world,” it was a simple use of multiple inheritance. It was too simple to fail, yet it kept blowing up. I took it to the local C++ guru (the guy who’d ported CFront to the Mac) who spent some time going over it, digging into the CFront output (which was a hash of generated code that looked a lot like Perl).

“CFront bug,” he finally declared, “It’s not moving the base pointer around correctly.” This was a harbinger; my very first program had found a bug in the compiler. The whole time I was at Apple I learned to mistrust whole swaths of C++ features. I think this made me a better C++ programmer.

Over the years, I’ve found that the various teams I’ve worked with have converged to pretty much the safe subset of C++ that I decided on, and a while ago I read the Google C++ coding guidelines and they were bang-on as well. No exceptions, use of “interface” inheritance, no RTTI, no operator overloading, and so on. Templates changed the landscape here; I haven’t done any serious development with C++ templates, other than to make use of someone else’s class libraries, but in my opinion templates are pretty much a disaster unless you’re an expert. Template meta-programming is a great example of people being clever without being responsible.

Most of the systems programming I’m doing now is in C, and I think that the simplicity of the language and the inability to fool yourself into thinking that things are free results in a better product. (You probably hear echoes of 1970s assembly language programmers here. I’m aware of that.)


I spent about two years working in Java. When I started using C# I literally thought, “Oh thank God, the nightmare is over.” A lot of this was probably tool-based (I had a debugger that worked and could handily work with projects even of substantial size, while the Java environments I’d had to use were buggy and usually collapsed around 20-30K lines of code). Some of the differences were in the support libraries; native interop was incredibly easy, for instance, and the I/O system wasn’t thread-centric and blocking. It wasn’t so much about the core language as it was about the support, and I can’t help but think that Sun continued to blow its opportunities here, for years.


I’ll leave you with three rules I’ve learned, the hard way. There are lots of things I could have included; these are the desert island ones:

1. Either leave the existing brace style the hell alone and live with it, or completely re-write the code. Going the middle road leaves two unhappy parties, leaving it alone or replacing it leaves just one. And while you should err on the side of just living with what’s already there, you shouldn’t be shy about cleaning up a train wreck.

2. Good programs do not contain spelling or grammatical errors. I think this is probably a result of fractal attention to detail; in great programs things are correct at all levels, down to the periods at the ends of sentences in comments.

“Aw, c’mon! You’re kidding!” You might think that nit-picking like this is beneath you, whereupon I will start pointing out errors in your code. It was embarrassing the first couple times this happened to me.

3. Crack open your current project. Now, delete as much stuff as you can and still have it work. Done? Okay, now toss out more, because I know that your first pass was too timid. Pretend that you’re paying cash for every line of code. If your project incorporates code by other people, wade into their stuff, too. You’ll be amazed how much can come out.

Don’t leave unused code hanging around because it might be useful someday. If it’s not being used right now, remove it, because even just sitting there it’s costing you. It costs to compile (you have to fix build breaks in stuff you’re not even using), it costs to ignore it in searches, and the chances are pretty good that if you do go to use it someday, you’ll have to debug it, which is really expensive.

It’s tremendously freeing to zap the fat in a system, and after a while it’s addictive. Read all code with an eye towards “What is superfluous?” and you’ll be amazed at how much unused, half-written and buggy crap is loose in the world, and how much better off we are without it.

Author: landon

My mom thinks I'm in high tech.

55 thoughts on “30 years of C”

  1. As for number 3, the ability to go back and find that unused code if you need it again is what source control tools are for. And if you aren’t using source control, well, there’s your problem.

  2. I am glad to see C# getting some recognition as a true, valid programming language. Many people I work with see it as a useless basic language that isn’t for real programming. I do admit C++ is best for large-scale pumping of data, but for one-off tools and utilities C# more than fits the bill. Glad to see such a great programmer has the same thoughts about it. Also, I love the 3 rules…I find them so true!

  3. “I am glad to see C# getting some recognition as a true, valid programming language. Many people I work with see it as a useless basic language that isn’t for real programming.”

    This is mostly Microsoft’s “aura” rubbing off. C# is very decent and F# brilliant.

  4. That’s one thing I found all too common. Too many times there’s just too much code for nothing.

    Many times developers just write until they get it working and stop there. They don’t go through the last step of refactoring to clean up the code, extract common code, etc.

    This is especially true for consultants that work solely on projects as they never have to build a 2nd, 3rd, etc. version. It makes a big difference.

  5. hehe nice post about C
    about rule 2, well…. don’t forget worse is better.
    as far as comments are easily understanble, they are ok

  6. I like the “fractal attention” idea. Programs should be “fractally readable”, that is, the code should have a clear structure at any level of detail, from the directory tree down to the for loop, and contain comments for each level. A list of files at the same directory level doesn’t help readability, nor does (sometimes) a 5-line for loop without a one-line comment.

  7. Great article! I think your bit about Java is a bit unfair though. I am not sure when you touched it, but these days there are numerous quality tools available.

    I love the three rules. I have believed 2 and 3 for some time, and just recently came to the conclusion presented in 1.

    @ chrisl – ohh the irony is almost too much.

  8. The point about comments is very perceptive. The way I look at it is: if a developer cannot be bothered to punctuate and spell their comments correctly, what chance does the code have of being correct?
    This is true of open and closed source code.

  9. chrisl: “hehe nice post about C
    about rule 2, well…. don’t forget worse is better.”

    I’m so sick of that saying. Worse is worse is worse. It is not better.

    chrisl: “as far as comments are easily understanble, they are ok”

    I think that the grammar and spelling in a code comment correspond to the level of attention that the author spent on the code itself. For instance, if I saw a code comment that was written as badly as your reply above, I would have to assume that the code itself was poorly written as well, and I would review it with a more suspicious eye.

  10. MOV AL, #6Ah
    That was the first line of code I ever wrote, in assembly language for god’s sake. My father was a sadistic man. “Hey dad, I want to make a game.” “OK son, read this book and have fun.” (The book was Introduction to Machine and Assembly Language: Systems/360/370.) I was only eight years old at the time, and my father had forgotten to mention C had been around for about 15 years. After toiling and learning how to properly address the hardware in that big box on the desk, dad introduced me to C.

    It really made sense, especially after dealing with my old ASM-H(ell). Funny, though, that there were still tasks, especially with old serial interfaces, where I would usually use assembly because I just wasn’t able to express things quite as well in C at that point. But I digress; point 3 was always important with my old assembly code, mainly because you are already dealing with an ungodly number of lines for even the most menial tasks. I wish I could have learned C back in 1979. Problem is, I wasn’t even born until midway through that year.

  11. @chrisl: Please don’t take the following unkindly.

    I work on code from many different sources. There is a strong correlation between typos in a codebase’s comments and bugs and poor structure in the code itself.

    Now, I know some good developers who cannot write well, or spell worth a hoot. I know one guy who is damn near blind. Guess what? Their code is impeccable. It’s better than mine. So are their comments. It’s all there.

    By letting one thing slide, you invite another. By admitting sloppiness in one corner, the whole room is put in a shabby light. I want to make buildings of stone and steel that the people living in them will respect; I don’t want to build something and have it turn into a by-the-hour hotel.

  12. I write C for a living.

    I was a C++ programmer for over a decade, and when I learned C in college I instantly fell in love because it got rid of all the hidden shit the compiler did that I couldn’t debug easily.

    The numbered list epitomizes the code I write every single day for anything that isn’t a throwaway script.

    Article resonates very well, bravo.

  13. landon: its ok, no unkindness taken 😉

    the phrase “worse is better” is no supposed to be taken 100% literal, it just a way to imply that this approach, the worse is better approach, which is the Unix approach (also called new jersey style) is opposed to the MIT style, the one called “the right thing”
    Richard gabriel, the one who created the “worse is better” phrase said “I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach.”
    Then he says that even with that, he thinks is a better approach in software design.
    the whole concept is about simplicity, about to focus in fast solutions that solves 90% of the cases instead of trying to create perfect code. in some cases, trying to create perfect code, is a Programmer Machismo situation.
    there is no perfect code, and once you got your software solving 90% of the cases, you have to be very careful with how much ‘perfection’ you add from there, as handling the other 10% usually can bloat your software (most of the time, that 10% are cases that almost never happen).
    And bloating software introduces more bugs, which also need to be fixed, and if they try to fix them trying to do the ‘right thing’, the code keeps growing and growing. Its a vicious cycle.
    If I was going to name the ‘worse is better’ philosophy, I would call it “Imperfect is better” (or, since perfect code is impossible: “Imperfection is your only choice, embrace it”)

    Oh! I’m not trying to offend anyone here, and I really hope none felt offended, also I’m aware most of you have already read about the ‘worse is better’ and ‘unix philosophy’ ideas.

    I just don’t think if I type in a comment something like “here I’m trying to…” instead of “here im tryin to…” would turn me in a better programmer.

  14. oh! but don’t take me wrong, I don’t have anything against good spelling and I think is a good thing that everyone should try to achieve.
    I only say, don’t get distracted with spelling

  15. “have converged to pretty much the safe subset of C++ ”

    and that, my dear hacker friend and twin soul, is why to this day I have not the slightest respect or appreciation for *any* OO language. Not just C++.

    Yes, that includes Java and its demented relatives.

    C doesn’t need improvement. Any problems with it are exclusively caused by what’s between the chair and the keyboard.

    BTW: anyone relying on an IDE to do their programming fully deserves to be using Java!

    On 2: any program where the ratio of lines of comments to lines of code is not higher than 1:1 is not a C program.

    On 3, one of my golden rules when I programmed in C went like this: don’t look at the code for two days. If on the third day there is any portion of it that is not clear in your mind on first read, throw it away. If it turns out it is really needed, recode it properly. Otherwise, leave it off.

    Amen to this post!

  16. Great post! I’ve followed a similar career path and I have to agree wholeheartedly with your three rules. I’ve had to deal with way too much code where the code style changed several times (within the same source file!) just because each programmer who touched it was oblivious to the style of the existing code.

    I’ve never found it necessary to restrict myself to a “safe” subset of C++. For example, I’ve found that using exceptions judiciously can lead to much leaner and clearer code than returning error codes. The key word here is “judiciously”. I’ve seen a lot of painful abuse of exceptions.

    I haven’t written C code in a long time. I do miss the ability to hold the entire language (and most of the standard runtime library) in my head. On the other hand, the ability to get rid of most pre-processor usage is priceless.

  17. Ahhh. Memories of a C application I wrote for one of my previous employers … old co-worker called me on one of my code comments around a particularly ugly code block which started…

    // hack

    True to the description, it was an eyesore and a programmatic improbability … but it worked and was happiest if it was left alone.

    On C#, your description was right on target. It obviously will not generate the most efficient code in the world, but imo, its cost savings in development time more than make up for its other deficiencies … for most business cases it is “good enough”.

  18. “C doesn’t need improvement. Any problems with it are exclusively caused by what’s between the chair and the keyboard.” A telltale sign of a mediocre programmer is thinking that his favorite language can’t be improved. Guess what, it can, and big time.

    Guess what again, it won’t, because of backwards compatibility you’ll be stuck with C89.

  19. Great post, I love C and I wish I got to use it more in the working world than I have so far. One comment on the C#/Java:

    “I had a debugger that worked, and could handily work with projects even of substantial size, while the Java environments I’d had to use were buggy and usually collapsed around 20-30K lines of code”

    From an OO perspective I would berate my team if I found something that size. I’m not sure if it’s a change of coding philosophy that came with OO, but IMO large code files (i.e., classes in this case) are terribly hard to maintain and are usually a byproduct of not following OO design especially when using something like C# or Java.

  20. @Jordan: We had a largish application, and maybe 80KSLOC of Java implementing a whole server and suite of applications in several hundred source files. The development platforms we had at the time were completely inadequate to the task; they were slow, buggy, and crashed a lot.

  21. Great post!

    I was reading Stroustrup’s book on the C++ language, and what struck me about it was, this is a guy who you can trust to use all these clever structures and programming tools correctly… but very few others will be able to do so, and so it’s better to avoid them altogether.

    The nastiest stuff I’ve seen for comprehension has been Javascript… since every browser has different standards, and there’s a lot of security concerns, JS production code tends to be a bunch of indirect and ambiguous calls to global variables and functions.

  22. @Adamantyr: JavaScript is, IMHO, a great programming language (with warts, of course). The issues you mention aren’t related to any inherent problem with JS itself (except maybe packaging). Crap code can be written in any language; beautiful code can be written in JS. Probably the reason there’s so much crap JS is just that there are so many non-professional programmers using it, plus professionals who just don’t want to take the time to learn its idioms.

  23. Just celebrated my 21 yrs of C programming – meaning, I started right as ANSI-fication came out. I am privileged to have worked almost my entire career in C, although I own some Java and C++ as well. I used to like to play in Perl, now instead play in Python…

    I’ve shrugged at the C99 changes (and later annexes) but I do find myself fascinated by the new stuff Apple has extended C with… namely blocks/closures, and the GCD library to try to ease programming for multiprocessors. Landon, have you looked at those? Any thoughts?

  24. @JohnH: ANSI C was a huge improvement. Well, they blew it with trigraphs. But on the whole a FAR better language.

    I looked briefly at blocks/closures and GCD, but I see no opportunity to make use of them. I like the idea of very efficient CPU dispatch.

  25. Excellent, those 3 rules. Mind you, in my projects, rule 3 never applies. If it’s dead, it’s gone immediately. The earlier comment about source control applies.

    As for rule 1 – yes, you are correct, this is the stuff of religious wars. I find some styles simply impossible to read. For me, code needs to be like a piece of art – everything there, and beautiful in every way. That means the comments are meaningful, properly spelled, and punctuated. The code actually does what the comments say, and the code is easy to scan – by eye – with good layout, good use of whitespace, and consistency. Do those things and the maintainer’s job is easier. Don’t do them, and the maintainer (which might often be YOU) wants to find the author and strangle them.

    I completely agree about C++. You can be far too clever, and especially on embedded systems C++ is to be avoided at all costs. It’s too easy to be “clever” and have all sorts of crap going on under the hood.

    Plain and in-yer-face is preferable.

    10 years of writing Ada (Ada 83) was great for discipline. These days I write like I’d write Ada, in every language. Clear, simple, less arcane. All good things.

  26. C# is one of the most beautiful languages I have seen in a long while… for many purposes. Yes, it is a bit more expensive in terms of execution speed, but what you gain in productivity you can spend on a little better server.

    Admittedly, it will never replace assembly or C for embedded systems, but it is just so versatile and keeps getting better with every release. Most issues I ever run into (usually a lack of an API for something I need) are already in the works to be fixed in the next version by the time I hit them.

    I am getting really into the concept of Silverlight and LINQ for a lot of what I do. After dealing with a hugely complex Flash program, Silverlight sounds like a dream come true.

  27. @steve maina: And Java probably /should/ be taught. But not to exclusion.

    I could write a whole lot on Java and education, but what it really comes down to is: The people I interview who don’t know something low-level do the poorest, /even when I have them demonstrate in the language they’re strongest in/.

  28. I agree wholeheartedly with point 2, due to my own experience. Most of the bugs I introduce into my own code have been due to the fact that I can’t type worth a damn. Careful proofreading is my only defense against such defects.

  29. 30 years of C?

    Does this mean we can stop now?

    Can the pain finally end?

    Can we have integers that really are integers and not integers modulo 2 to the power of (something indefinite-1)?

    Can we have chars that are signed, or unsigned? I mean, can’t we just decide!

    Can we finally admit “unsigned” is just a storage space optimization?

    Can we have garbage collection please? That stuff was done and sorted a decade ago!

    Can we have a sane way of specifying, packing and unpacking serialized data?

    Can we all just stop using C!

    Pretty Please!

  30. @DadHacker: I love your list of three! They resonate with my own philosophy: Good programmers are lazy programmers! Huh? Lazy programmers don’t bother to reformat code that’s formatted neatly but differently. They don’t make the code harder to read by writing their own updates in a different style. Lazy programmers are lazy enough to make their comments sufficiently clear, accurate, and verbose to remove the need to reread, parse, and understand code that was written months or years before or by someone else. Spelling and punctuation count if you want to be understood, especially in a multi-programmer global development environment. Lazy programmers won’t code around dead code. It’s easier (lazier) to remove it. Lazy programmers don’t write the same 12 line block of code 20 times, they encapsulate the block in a function call or appropriate object.

    @John Carter: No, we cannot get rid of ‘C’. Of all of the languages I code in, and there are many of those, I mostly write in ‘C’. It’s not a comfort thing. I first wrote software in FORTRAN in 1970 then went on to learn to code rather proficiently in C, Basic, COBOL, Pascal, Modula2 (still my language of ‘comfort’ and preference, though I haven’t been able to write a line of it in 15 years), C++, Ada, AWK, Perl, ksh, FORTRAN-99, Embedded SQL within several languages, three different 4th generation languages, etc. In all that time, when the job has to be done correctly the first time, be maintainable and extendable, and run efficiently with a minimum of resources consumed, I write it in C.

    If we get rid of C, we get rid of many of the other languages as well. Java interpreters and compilers aren’t written in Java. C# interpreters aren’t written in C#. Nor are the byte code engines for those languages. COBOL compilers aren’t written in COBOL (though that one I’ve seen done at least). They are all written in C or C++. Hmm, maybe there’s a hint there.

    Does ‘C’ have problems? Sure. The 64bit standard is brain dead! Int should have been made 64bit and equivalent to long long int and long should have been left at 32 bit. Far less well written code would have been broken when porting to 64bit environments that way. No development environment is perfect.

    Garbage collection is not always a good thing. It’s a panacea for programmers too undisciplined to delete objects that are no longer needed. It enables bad code. Are there applications for a garbage collecting allocator? Sure. There are applications for nuclear radiation sterilization too, but it’s not the most efficient way to preserve milk and I don’t need an NRS unit in my kitchen. I know where to find one if I ever have to preserve a pastrami for posterity. Similarly, I know where to find a garbage collecting memory allocator for my C or C++ code when I need it as well – and yes, I’ve used such things in C and C++ projects when it was the right tool for the job.

    To me, as a polylingual programmer, that last is the watchword. Use the right tool for the job at hand. I struggle consciously to resist the syndrome Maslow described when he said: “… it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.” To that end, I like to keep as many different tools in my box as possible. Don’t take my screwdriver away!

  31. @MP: John Carter has some very valid points. (Also, I have shipped “real products” using VB as a core, in fact, on the server side of a kick-ass messaging system, in places where performance did not matter).

    On the other hand, C isn’t going anywhere in systems software, not soon anyway. Maybe when we get commercial JIT-to-the-metal operating systems (personally I’d love to work on bare hardware using C#).

  32. When I was a tyke the only compiler you could get access to was Borland’s. And at 600 quid (1982/83) there was no chance.

    Had to whack away at zx and BBC basic. Alas……..

  33. @Juan Kerr: Remember BDS (“Brain Damage Software”) C?

    I ran across the manual for it the other day. I understand the source code for it is online.

    It was pretty good for the time. CP/M, 64K address space and 250K disks.

  34. #1 – When I lead a team, I’m fairly tolerant of different coding styles as long as they are not too far out in left field. However, I do insist on 2 rules. First, be consistent and use the same formatting style everywhere. Second, if you have to fix a bug in someone’s code, adhere to their coding style unless you are assuming responsibility for that code unit.

    #2 – Poor grammar and spelling in comments indicates a general lack of attention to details. If I cannot trust you to follow such a simple rule as ending a sentence with a period, how can I trust you with much more complex rules?

    #3 – I agree, but I would also expand this to include commented out code. It is OK to comment out a block of code when you’re trying to fix a problem, but always clean up after the debugging is completed. Lots of commented out code makes your source difficult to read.

    I’m not as paranoid about C++. I find operator overloading to be quite useful, but ONLY if it is intuitive. For example, creating a class for complex numbers and then overloading the arithmetic operators so they work the same way as they do for the native numeric types is a Good Thing. Overloading the + operator to concatenate string objects is OK. Both of these are intuitive to someone reading your code. Unfortunately, many programmers take overloading too far and practically invent their own version of APL or Forth.

    Templates are great for creating type-safe containers. I only use them as Stroustrup suggests in his book, i.e. as THIN veneers over a base class that does all the real work. Unfortunately, many programmers put implementation code in their templates, which leads to bloated binaries. As you might have guessed, I’m not a huge fan of the STL.

    Multiple Inheritance has its place, but those places are few and far between. I always look for a way to refactor my object model to avoid it if possible. I’m reminded of the description of goto in K&R: “It should be used rarely if at all”.

    Exceptions are very useful for exceptional situations where there is no point in continuing; however, they are expensive and should not be used for control flow, such as to break out of a loop. Documenting which exceptions you might throw results in a clear contract between your code and whoever uses it. Simply declaring that you throw some generic base class is not that helpful. If you catch an exception, either do something or leave a comment explaining why you did nothing. Unfortunately, many programmers missed these memos.

  35. With about 25 years of C experience and about 20 years of C++, I prefer C++. I think of it as a power tool. If you have to screw in 200 screws, are you going to use a hand tool or a power tool? It sounds like a lot of C proponents would use the hand tool. A professional mechanic uses air tools.

    In the wrong hands those tools can cause damage, but a professional can get a lot more work done with those tools than without them.

    Container classes, iterators, RAII, and smart pointers are examples of mechanisms that, when used correctly, can simplify your code and help you focus on solving the real problem rather than on the mechanics.

    I’ve seen people spend huge amounts of time implementing and debugging data structures in C where C++ containers can solve the problem with practically zero effort. I also see a lot of common C programming errors, e.g. returns without proper cleanup, buffer management errors, thread safety issues, that are almost completely solved by applying the right C++ techniques.

    C++ naturally puts data and the operations on that data in one place. A lot of C programmers end up emulating that with clumsy and error prone mechanisms such as pointers to structures or pointers to functions that are passed to global functions.

  36. I started over 30 years ago doing machine code and then assembler on paper.
    All of my friends in High School thought I was weird!

  37. “no operator overloading”

    Really? For the std::cin and std::cout (and anything std::) garbage, I agree, but some things just come really naturally when operator overloading is used. For example, appending a string to a string class instance, nothing beats:

    TempStr += "Silly wabbit, ";
    TempStr += "Trix are for quids! ";
    TempStr += 15;
    printf("%s\n", *TempStr);

    That’s a simple example. You can get WAY fancier than that. The equivalent C code is a nightmare and std::string is crap and inflexible.

    Also, templates are useful IF you know how to use them correctly. Boost and the std:: library do it wrong. Everyone who thinks the Boost and std:: libraries are good needs to get their head checked (because you’re wrong).

    Beyond that, there are two key areas where C++ kicks C’s butt: Sorting and the ‘virtual’ keyword. qsort() in C is fast, but C++ templates (even std::sort) allow for a minimum 25% increase in performance over even the fastest qsort() implementation due to template inlining and I’ve got a sort C++ template implementation that is 30% faster than most std::sort library implementations (basically, my implementation kicks your qsort()’s butt by a roughly 40% increase in performance). Your C sorting routines can’t beat that without significant and very painful custom coding work.

    I switched to C++ years ago when I ran into a situation where I needed an arbitrary function callback. My TCP/IP socket code was getting very ugly very fast (I was attempting to dynamically inject a SSL layer into a socket interface) and I knew C++ had the answer to my problem: The ‘virtual’ keyword. You create a base class that defines all the functions you need defined and then the ‘virtual’ keyword is applied liberally to that class. Then you create a derived class that implements some or all of the ‘virtual’ functions. Then you pass the base class around to code that calls the base class functions and, magically, the derived functions are called. The C++ code was literally 200 lines of clean code. The C code was thousands of lines, I never finished, and it was buggy as hell. Interestingly, applying it to the concept of multithreading worked similarly well – the base/derived class run() function made it drop-dead simple to create a new thread. The closest you can come in C is a bunch of ugly code with void pointers all over the place with a ton of casting. Been there, done that, hated it.

    You haven’t really used C++, IMO. Try it again. But don’t use std:: anything or Boost. Build your own library. Building your own library sucks for a while but is ultimately totally worth it.

  38. @mitch: I think we actually agree.

    I’ve /really/ used C++. Templates, exceptions, RTTI and overloaded whazoos. And I couldn’t stand the result; it was a horrible, undebuggable mess. So what I learned was that to write really big programs you had to be conservative, and that you needed to learn the distinction between “writing good abstractions” and “saving keystrokes” — some people think they’re the same thing, and they’re not.

    Careful, considered overloading is okay if you stay on top of things so that they don’t get totally wild. But even “safe” stuff can get out of hand. I’ve come to believe that C++’s biggest mistake was introducing exceptions. It’s incredibly hard to write good, exception-safe code, and keep it that way over time.

    What I find is that library-level stuff is best left pretty primitive; make all the error handling and checking explicit and don’t use exceptions, be really thoughtful about how you acquire memory.

    I regularly write driver-level code in C++ with virtual functions. I don’t have a problem with that. But they’re really not doing much more than encapsulation and providing a callback environment, like your TCP stack.

    Templates . . . are useful. But again, my touchstone is “is this just someone saving some keyboard time?”
