That is not your daddy’s OS

I swear to God, the next time I see someone declaring that they’ve written a new operating system in a browser, or (worse) that the browser somehow _is_ the operating system they’re writing applications on, I’m going to find a rock, tie a typewritten “e-mail” to it with a piece of string (“XML”), declare the rock a “TCP/IP packet,” and throw it through the moron’s window (“port”).

I’ve lived through this already. Except that the “operating systems” in question weren’t browsers, they were implementations of BASIC on 8-bit microcomputers. Remember the days of chained-together BASIC programs that used PEEK and POKE and CALL to bend the system to their will? People did write very large applications in BASIC for systems like the Apple ][, the Atari, and IBM-PC: Accounting packages, point-of-sale systems, video games (lots of those), you name it, and every one I’ve seen the source to was an unmaintainable hash. It’s what you get when you hack in BASIC.

It’s important to remember how far up the food chain you are. Why? So that when someone comes to you and complains that your “scheduler” is O(N**2) and that the “context switch” overhead is on the order of a millisecond, you have someone else to blame. Because if you’re on the hook for that, and you think the primitives you’re working with are at the level of an OS, you’re pretty much sunk.

I used to worry about the “kids” who know just Java and who wouldn’t even know where to start if you gave them a raw CPU and a handful of memory. Now I’m worried about a new crop of “software engineers” who may never write a line of code that isn’t executed by an interpreter, who somehow think that the Javascript they’re writing is it, rather than what it really is: A hothouse flower supported by a vast infrastructure of sophisticated components, and that’s before you get to the damned browser.

Declaring that a minnow is a whale doesn’t make it so. So, fine: You can call your browser an OS if I can call my Emacs session, well, anything I please.

Author: landon

My mom thinks I'm in high tech.

42 thoughts on “That is not your daddy’s OS”

  1. There’s a weird conflation of “Operating System” and “Immediate Runtime Environment” that seems to be popular right now. I think that it’s because A)”Operating System” is a proven marketable good, whereas runtime environments are just a thing that you have wrapped around you, and B)OH MY GOD IT’S A PRETTY WEBAPP CLOUD COMPUTING 9.99990 TODAY THE FUUUUUUTURE!!!111!!

    Am I the only one who doesn’t find the idea of pushing more software into “the cloud” (quotation marks denote sarcasm) to be the high and mighty end goal of everything? Sometimes a thin terminal onto an amorphous blob of computing resources isn’t what you’re looking for, you know.

  2. As a hobbyist ruby (on CGI, not Rails) & javascript developer, I am well aware of the architecture that supports the insanity that is a web browser. It boggles my mind that so many pieces can work together and not break all that often. When software crashes and takes my data with it, I don’t scream with anger; I am saddened at my loss, but understand why it had to be. So much architecture, so many layers – and it makes me happy when they work, not angry when they don’t. It’s an amazing, crazy, insane system, but I love it.

    It seems pretty clear that one major direction computing is heading is the idea that faster hardware is best used to ease software development, to add more and more convenience layers. It’s an attitude that benefits me, both as a coder and a user. The crazy amounts of architecture in Mac OS X let indie software developers really compete with software giants. Thus all this fancy modern hardware gets me freedom of choice by making more competition possible, and by making complex tweaks that slightly improve something’s usability less of a pain to implement – and thus more likely to get done.

    The web browser surely is no operating system, but is a venue for a weird kind of application regardless. I can’t imagine many ways an application could be made more indirect or more abstracted from the way the host system actually works than web applications. Still, I won’t be trading in iCal for Google Calendars any time soon. 🙂

    I guess all of this rant is just to say that, as someone who until a few months ago had never written compiled code that runs straight on a processor, I know quite a lot about the architecture software really runs on when all the layers are stripped off. I don’t need to know it… but I picked up a lot through the years. I certainly appreciate the insane amount of architecture involved in making modern web apps work, and I don’t think any reasonable javascript developer would feel differently.

    Still, while your handful of RAM is a nice offer, I think I’ll stick with my architecture, and if I need to build something straight on a microcontroller, I’ll use some architecture there too. A C compiler, a USB serial adaptor, a microcontroller SoC perhaps. The days of Woz, and perhaps yours too, of writing code on paper and manually transcribing it to hex are gone, and I appreciate every minute of it. 🙂

    Finally, more stories about atari years please! They are really great. 🙂

  3. I like software that runs entirely on my box. Gmail’s search feature is the cat’s meow. So fast… I never lose anything… I don’t have to organize it… But I’d rather have all of my email locally stored in text files – something I could grep, awk, etc. against.

    The more I work with computers, the less I like the dependencies associated with those gigantic frameworks. The Java applications I have to use at work are completely sucky – they’re extremely slow, and the designers don’t seem to have heard of a datagrid control with cut-and-paste functionality. .Net 3.5 is a pain to install; I don’t trust regular people to get it set up. Perl, on the other hand, is ubiquitous and doesn’t need an IDE so much. (I think half the OOP craze is about discoverability via Intellisense[tm] and very little about design.) Perl is seeming saner all the time.

    I did try a little C and have dabbled in some assembly language… but mostly I code within my little sandboxes. Learning a few more languages every year helps to think outside of the box, but a real education would be better. I am sick of browser apps, though. And I hate everyone’s lame GUIs, too. I am on the verge of becoming a crank who refuses to do anything that can’t be done from M-x shell…

  4. I work for a small software company in Bellevue, Washington (near Microsoft’s main campus, for those who don’t know). We used to be able to find qualified C++ programmers easily. Now we absolutely have to go through headhunters, and even then the pickings are slim. And many of those that do come through haven’t programmed in C++ for years. It seems that a lot of C++ programmers think C# is the new C++ and that it’s a replacement. It’s difficult to explain to them that C++ and C# solve two different kinds of problems, even though you could use either to solve many problems better suited to the other.

    Interpreted languages are not the language for a backup product and unless significant changes are made they never will be.

  5. Yeah, finding a good C/C++ programmer is tough these days. I don’t know what happened to them all. Maybe they retired?

    @Jenna: My belabored point is that people who think that a browser is an OS are (as @William so succinctly put it) conflating “operating system” and “runtime.” Browsers are run-time environments; you’re not going to be able to do a good job of memory management, networking or disk I/O handling in a browser, and the people who think they are at the same level of sophistication by writing JS are fooling themselves.

    The underlying stuff matters.

    It’s getting harder to find people who care about it, or who can work at that level.

    So what happens when the last low level programmer dies? 🙂 I’m reminded of that Isaac Asimov short story about the society that rediscovers calculation by hand, as a secret weapon in a war of some kind…

  6. Most of what your typical crap developer does today is desktop database apps (in some flavor of VB), web versions of the same (in one of a dozen languages), and manipulating text (with Unix-type tools). There’s no pressure pushing your average developer close to the metal… So much of it is throwaway code that only has to be “good enough.”

    I remember PEEK and POKE from my Atari 800 days, and I don’t miss them. (Woo-hoo! M.U.L.E. with 4 joysticks!) J.D. Casten’s code was incomprehensible to me. I don’t recall needing to drop down into the Win32 API since .Net came out. (Can’t complain about that…) The fact that PCs “won” is very depressing to me, though. What an awful machine! I prefer to code as if the hardware didn’t exist – or rather, as if it were a given.

    The thing I’m wishing I really understood at this point is compilers and (I think) grammars.

  7. I recently had cause to write microcode for a 6502; it was a surprisingly enjoyable experience. Even more so since I hadn’t done it in a very, very long time, and none of the guys I work with thought the system in question was salvageable.

    I think that people forget that developing the next whizz-bang net application isn’t the only place computing is used – it is still extremely common to come across some older architecture in a lot of industrial systems, even with the advent of Java. Sadly, I also suspect that the low-level stuff is not viewed as sexy.

    That said I also don’t think it’ll be lost. People said that about COBOL back when I was at university and a few years afterwards people were employing us left right and centre on silly money to bring legacy systems up to standard for the year 2000.

  8. I’m entering the last year of my undergraduate studies in CS, and I’m glad that my CS department chose to switch the intro CS courses from Java back to C++ (the reason they changed to Java in the first place was the AP Computer Science test).

    Just the other day I was talking with one of my fellow undergraduate summer research interns participating in Notre Dame’s ERWiN REU program (Experimental Research on Wireless Networks) and he told me that he’s done very little C++, let alone any pure C, as most of his courses are done in Java.

    Unlike him, I’ve been programming in C++ since teaching myself sophomore year of high school, so I feel somewhat smug that I’m able to handle my research project’s programming tasks with a minimal learning curve… as I’m having to cross-compile and write efficient algorithms pertaining to my research in pure C, to be run on small ARM devices running Linux.

    It’s encouraging to know that there are actually companies out there having trouble finding people who know C/C++, as I was starting to get the impression that I might need to bite the .NET and Java bullet soon.

  9. Oddly, I think the reason many of us abandoned C++ is that it’s gotten way too bloated as a language. I think the rot set in with the STL and the “new” style casts; the syntax is just plain ugly, and the lack of a standard interface to other languages (mostly due to name mangling) means that I think it is destined for a slow, lingering death.

    I still love classic C (not C++) but I’ve moved my programming life on to C#.

  10. Dougie,

    It’s not microcode. Microcode exists within the processor and is, in essence, signals that control the processor parts. Microcode is used to implement complex instructions in processors. Many notable processors have/had writable microcode stores you could use to make it do interesting tricks, like implementing a new instruction or even emulating a totally different instruction set. The 6502 was not one of them.

    What you wrote was probably assembly language code.

    And I agree. The 6502 is a surprisingly delightful processor to program. It’s also quite refreshing to be able to code without worrying about a complex operating system.

  11. I’m a C programmer – a little rusty (it’s not far from 10 yrs since I coded C commercially) – although I’ve progged in many languages since then.

    The trouble I have is not having a degree at all; I left school at 18 and went straight into coding & maintaining large systems for a 50+ employee software house.

    You’d throw my CV away through lack of degree instantly.

    That’s where we’ve gone.

  12. @John: I myself am a college drop-out (about three years of school before leaving for Atari). After maybe ten years the fact that I don’t have a degree ceased to matter.

    I don’t think I’ll ever work at Google, though. My position has been that if someone doesn’t hire me _only_ because I don’t have “that ticket,” it’s their loss.

    I never throw a resume away for lack of a degree. I throw them away for other reasons… 🙂

  13. I agree completely. Even the Java VM appropriately does not have the gall to call itself an operating system. The first few times I heard the term “web OS” I was completely confused, thinking that there was something I didn’t get. Now I just roll my eyes.

  14. If all software ends up written in a browser (which I don’t think will happen, personally), then what IS the operating system? The real operating system is the browser’s rendering and javascript engines and so forth.

    I think you’re right, though, that the term shouldn’t come to mean either programming languages or browsers. I think people are just misusing the word because it’s easy to. Perhaps ‘platform’ is a better word, since even code written for a desktop operating system has several platforms it uses as the actual base for applications. If I write a word processor, I’m not really writing it FOR Mac OS X; I’m writing it on top of, and for, an application API (Cocoa on OS X, .NET on Windows, etc.).

    Anyway, I propose ‘platform’. Carry on 🙂

  15. Overall, this sounds like strawman ranting to me. I pay pretty close attention to blogs and feeds about programming, and I can’t recall seeing too many claims about OSes written in JavaScript, or in a web browser. The Internet being what it is, cranks abound, of course, and you’re welcome to rant, but…

    I think the meme that’s getting a lot of play is that the web is a pretty effective platform for building distributed applications, and I believe that stands up to scrutiny.

  16. @monkpow: Strawman ranting? Not really.

    My next door neighbor was a marketing guy for a “browser OS” company. Kept having me over to his house to show it off. They were in negotiations with certain large software companies.

    “XML this and that. Write your applications in XML. See, we have a desktop, and a file browser….” Toy applications included a small word processor and a game or two.

    Basically what they had was an application framework in Javascript. Hey, feel free to call that an operating system . . . but I’m not going to buy it on that basis.

  17. A browser isn’t an operating system, but both as a web application user and a web application developer there are things I expect of my browser (and of the browsers my customers are using) that are definitely reminiscent of what I expect an operating system to support. I think it’s reasonable to want a browser that doesn’t need to be restarted a few times a day as a matter of course. I think it’s reasonable to expect a properly-implemented page for an app to be able to continue working for days on end in a long-lived browser instance without generating resource issues beyond its control (i.e., issues that are the browser’s problem).

    Making an analogy is a useful shorthand, but you’re right that simply referring to a browser as “an operating system” glosses over a long list of significant differences between a browser and a real operating system. One could use a more generic term like “application platform”, but those are generally pretty weak and spineless. Maybe the best thing to do is take a cue from Ted Nelson and come up with an entirely new word.

  18. I totally agree with you. I’m now fully programming in an interpreted environment, but I used to work solely in ANSI C (!!!!), writing software for signal processing hardware environments.

    The efficiency issues that most web developers encounter are nowhere near what you face when you’re actually counting bytes to make sure you have enough memory to hold your current buffer, or when you have to write a complicated buffer-swapping scheme just to reduce your memory footprint. And that’s not even touching on real-time latency issues in signal processing environments.

  19. Can’t do interpreted languages if you don’t know C. You must know pointers, this OO is just syntactic sugar, do you have any idea how slow GC is?

    — rinse

    You can’t just use C without knowing assembly. Your fancy functions become goto statements in machine code. You have to know what the compiler is doing; putting the variables so far apart will slow things down because of the L1 cache size, and what if the compiler is not optimizing the branches and leaving no-ops?

    — rinse

    You can’t just write assembly without knowing what goes on behind the scenes. What if you don’t have an assembler? Here’s your computer, one switch per bit, press here to reset, press here to go to the next memory address. You have to know how to assemble by hand.

    — rinse

    You can’t assume the processor works, you have to know how to build it. Here are some AND/NAND gates, do the rest… build memory, build an ALU, build the datapath.

    — rinse

    You just assume the gates work? You have to know what happens behind the scenes – how do they work? Here’s some semiconductor material, build the gates.

    — rinse

    Semiconductor material? Hah, what if you did not have it… here, build some vacuum tubes.

    — rinse

    Electricity: what if you did not have it? How would you compute?

    (OK, this last one is a stretch. But Ada & Alan did fine with paper and pencil.)

  20. In the above post I simply wanted to point out that it’s possible to separate each layer, and to have separate people specializing in each. Given an environment, you have to know how best to utilize it, along with a minimal understanding of the layers below.

  21. I agree wholeheartedly! Man, am I sick of all the B.S. associated with “contemporary computer science”. We used to have software architects, software engineers, and programmers. Now the industry is stuffed with script kiddies and programmers who are constantly blogging about things they clearly don’t understand – always “inventing” “new” things that aren’t nearly as good as the “old” things, simply because they don’t know their history, do any serious study, or strive for excellence. Oh, how I wish for the days to return when skilled software engineers would bend the system to their will with genius and creativity.

  22. PeterI – I couldn’t agree more. C++ is a horrible language. Its syntax is so huge and complex that even compiler guys think it is a PITA to write a decent parser for it. C on the other hand is just pure ecstasy to program in.

    That said, I use C++ at times, but I have cast-in-stone rules for myself when using C++:
    – Just C with classes.
    – Single inheritance.
    – Templates: only STL ones.
    – No operator or function overloading.
    – No exception handling crap. It is much harder to debug that shyt.
    – No user-defined templates and no nested templates at all.
    – No auto_ptr and other Boost-style stuff.
    – No fancy stuff like virtual constructors, etc.
    – No other behind-the-back voodoo of C++, such as the recipes given in the Effective C++ and More Effective C++ books.

    IMHO C++ lost one major principle of software development, i.e. KISS (Keep It Simple, Stupid).

    C and Assembly programmer forever.

  23. Right now, the only languages I know are those which need interpreters to run. However, I am about to start my freshman year in college for computer science, and I will be studying under great names in computer science. I tend to learn quickly, I just usually need a mentor to be there and help me along the way.

    What I hate most is when I meet a developer who is, suffice it to say, “stupider” than I am. I may not know the syntax of the languages that they do, such as COBOL (yeah, kids, it’s still used on mainframes) or C++, yet I understand concepts which they don’t. It’s absolutely ridiculous and utterly pisses me off.

    Another thing which pisses me off is when I go to apply for a job (such as writing PHP CMSes), and as soon as they find out that I’m 17 I get the response of “Hah, go apply at McDonald’s, kid.” I’ve spent years behind a computer. Not playing video games, certainly not on social networking sites, but programming. Learning how computers work – and right now it’s sending me to college, and paying for it completely.

    Terribly sorry for ranting, I just love speaking with people who’re on the same wavelength as myself.

  24. As a certified Old Timer, I’ve seen it all, too. Calling the inside of a browser an “Operating System” is stretching it, to be sure. But, more and more, we “live” inside the browser. The web apps are becoming more powerful, and, today, anything that isn’t networked is pretty much useless.

    Today, the “assembly language” of the Web is JavaScript. Learn it well.

    In the same way that C became “portable assembly language”, JavaScript and Web Browsers enable completely virtual workspaces. Soon, it won’t matter which OS you run, as long as it supports Firefox 6.0 (or whatever) and all of the fancy features it will have. No longer will we be chained to one particular computer, we’ll be able to move to any available computer, and we’ll be able to ask for whatever computing resources we need from the cloud.

    It really will be better. And, just as we don’t require radio and TV actors to understand how to build a radio or TV in order to _use_ one, in the future so much of the computer’s guts will be hidden (details known only to an ever-shrinking priesthood) that what will be known as “the computer” will be the browser window.

    I can’t wait.

  25. Alright, so perhaps I am just a young and unfortunate victim of the new age of CS, but I don’t see the point in writing modern programs with anything but a C-style language. All my problems can be fixed with C++/C#. Again, I may just be showing off my noobness here, but I don’t see the point of developing “an operating system” in a browser… does the browser not need an operating system to run in the first place? And seriously, why not just develop for the OS that the browser itself is running in? It may be extra work, but in the end would it not be worth it? Anyways, I appreciate all of your input, as I acknowledge that you are all far more experienced than I, and I certainly have no issue with bowing to your knowledge… I know I still have much to learn about this world of Computer Science, and I know that correction from the best is the only way to go 🙂

  26. I’m a Junior CS major at a pretty well respected Tech school. I read things on the web all the time about students graduating and not knowing anything else besides Java. And not having the faintest clue about memory management.

    Where are these students coming from? During the past few semesters I’ve had homework and projects in Assembly, Java, C, C++, PHP, Smalltalk, etc.. Not only did we have to learn these languages, we had to learn the foundations they were built on. This helps to pick the right tool for the right job. Don’t other schools have classes in Operating Systems/Program Design/Language Theory/etc.?

    If I had to change my curriculum to mimic what people claim that new CS grads are taking, I don’t think that it would take more than a few semesters of work to graduate.

  27. So you have JavaScript, which is run in an interpreter, closely linked to a browser, which (thank God) is run directly by the machine. A machine for which every “atomic” instruction can be further broken down into micro-operations and all the fun hardware stuff the programming manuals never tell you about. It’s a simple model of a very big and complex computational tower. What really makes your mind hurt is when you’ve programmed in every part of the spectrum: from the very low-level processor assembler, where you control where every bit of data is placed and how it’s used; through C, where, although still very low-level, you already feel a disconnection from the machine; on to the high-level languages – Ruby, Python, Lisp, etc. – where there almost isn’t a “machine” to talk about (when is the last time you worried that all those list look-ups would wreck the cache coherency of your perl script? :P); and finally you realize what one more layer will add to the process. The question, of course, is: if we’re running from the machine, what are we heading toward that’s opposed to it?

  28. What about EyeOS?

    “eyeOS is a new kind of Operating System, where everything resides on a web browser. With eyeOS, you will have your desktop, applications and files always with you, from your home, your college, your office or your neighbour’s house. Just open a web browser, connect to your eyeOS System and access your personal desktop and all your stuff just like you left it last time.”

  29. Great Blog, I love it! The comments too…
    I would be interested in what the old timers would recommend in terms of a programming language to learn so that bad habits are not ingrained in my programming soul.

    Agree with above – more Atari stories please!

    Julian F

  30. About the only thing that might accurately be called a browser O/S that I can think of is Inferno (having people like Ritchie and Thompson on the developer list lends some serious credibility).

    That it runs as a stand-alone O/S on a broad swathe of hardware (embedded and upwards), as well as in a browser plug-in, helps considerably.

  31. JulianF Says:
    “Great Blog, I love it! The comments too…
    I would be interested in what the old timers would recommend in terms of a programming language to learn so that bad habits are not ingrained in my programming soul.”

    Helpful Answer
    I don’t think you can really answer that question. I used to think that Perl was for obscurantist posers, but I actually worked on a project that used Perl and it was quite OK. For a variety of fairly horrible reasons we needed to generate and parse code, and the guy that wrote the Perl did it in quite a C-like way.

    Of course Perl being Perl doesn’t have civilized things like function prototypes or strong typing so we had to be a bit careful about passing arrays around. But I don’t think you can say that people will necessarily be ruined by writing in a scripting language.

    That said, I’ve always enjoyed C and C++ – used, as someone else put it, as C with classes, not “look, I can find a use for operator overloading”. Though actually I think smart pointers and scope-based cleanup are OK too: you have a destructor that will be called when something goes out of scope, so why not use that feature? But it is possible to write awful code in C++ if you try to use all the features to show what a genius you are, just like it is possible to write awful code in Perl if you try to be terse. Or in assembler, if you try to be clever.

    But it’s possible to write stuff that is a joy to read in C++, Perl, assembler, or even English.

    I think the lesson is don’t try to be a god when you code. Write code that you’d be happy to go in and change on your first day at a new job.

    But I think to really understand what is going on you need to be familiar with C/C++. And also know how to debug those languages in assembler on the machine you work on.

    Unhelpful Answer
    (That’s hacker God talk for “your question has no answer, n00b”)

  32. Hi.
    First I have to introduce myself a little: I’m French, so I’m sorry if I make stupid English mistakes; I’ll try to be understood. And I’m one of these “software engineers” who has never used anything other than PHP, JavaScript, ActionScript, and a little TI-BASIC on my calculator in high school.

    I’m not here to troll (oops, I shouldn’t have said that; now you’re going to think I am), but my colleagues and I have the (bad) habit of trolling about the very subject of this article. In the web startup I work for, we have a division full of C/C++ guys who program what we web developers call “heavy software”. Every time we talk about the languages we use, they start to rant about the absence (not true, but still) of variable types in PHP and JS.

    Every time I hear them rant, I only think about two things:
    – since JS and PHP are written in C, it’s a little bit the C community’s fault that these languages commit these “sins”.
    – thank God (or whoever you want) there are guys like me who don’t know C, Fortran, Lisp or I don’t know what, because otherwise there would be no web applications whatsoever, or they would be delivered in twice the time.

    You can do pretty much everything with interpreted languages (except OSes – you’re really right about that), and you can do everything with compiled languages, but often for twice the price.

    I’ll finish with two short examples of what I’m saying.

    – This week, I had to add quotes in some 200 XML files, almost every two lines. XML is evil; even more so if it’s to talk to a C application, as in this case.
    It took me 15 lines of code (written in about 15 minutes) and a simple regular expression in PHP to do the job. I didn’t even have to launch my browser; I just ran the script from the command line. A colleague of mine told me it would have required twice the work to do it in C.

    – Last month, I gave the Adobe AIR runtime a try. With only JS, CSS and HTML, a well-known Flash video player library (and basic SQL to store some local data), talking in JSON to a good server API (written in PHP), I wrote a video player which installs itself in “one” click, automatically updates itself when I publish a new version, and “only” uses 40MB of RAM and some 5% of CPU when playing a video.
    It took me 15 hours, half of it spent trying to understand the shitty Adobe documentation for AIR. It’s doable in C, but in how much time (given you use VLC or something like that to play the FLV videos)?

    On a completely different note, I love this blog. I admire guys like you who had to deal with “challenging” hardware (almost like browsers are challenging platforms to write for). Keep up the good work, sir.

  33. Lipsy, I just finished a grad-level course on regular languages. Less practical (e.g. learning how to use RLs in Perl) and more theoretical (underlying concepts). It was so difficult I constantly found myself wanting to stick a hot poker in my eye. Everyone did terribly. I ended up with a B+, but I barely cracked 60% on any test, unscaled! I say God bless those before us who created the VS C++ compiler!

  34. I once took one of those script-kiddie Perl programmers and exposed him to the raw bring-up of a new board (Musenki, PPC 8245 CPU; the first step was hacking the bootloader to work, the second step was working around the reversed flash bus).

    The whole process blew his mind.

    After Musenki, he went to another Linux hardware startup (Agenda, who built a PDA).

  35. @Mike – I guess this is kind of at the heart of what I was saying… I don’t believe that it necessarily will be better. I’m not saying that we shouldn’t have web applications, or even that it’s bad to make it possible to push any application onto a remote platform using web services. I just am not convinced that pushing all computing and user data onto “the cloud” (I also just HATE that nomenclature) is desirable. Some fences are there because they need to be, and some just mark comfort zones and prejudices, but some of those comfort zones are still valuable to those holding them. In short, I’m not sure I’m really rooting for a world in which there’s no hard boundary between my data and the net.

    @Landon – Slightly offtopic here: I have a question. Every single person I’ve read or heard speak on the subject of developing programming skill and knowledge agrees that, in addition to programming and reading about programming, you should examine other code in a thorough manner. I can find lots of suggestions for which books to look at, and if I look real hard I can find particular problems recommended as enlightening, but what programs should someone look at when they’re ready to start trying to grok actual, production code?

  36. @William – Code reading is a skill. You have to practice it, because that’s mostly what you do in an existing system that is more than 100K lines or so.

    I read _Lions’ Notes on Unix_, which is pretty dated but still a good read. Also, Kernighan and Plauger’s _Software Tools_. I enjoyed these, but they’re quite old. We’re not on PDP-11s any more.

    Modern stuff: bits of the Linux kernel, probably. The 4.x BSD Unix sources are better, because there’s a good “implementation of” book by McKusick et al. on the internals.

    Books like _Apache Server Commentary_ are pretty good, too. I think there were others in this series as well (_The Linux Core Kernel Commentary_).

    Knuth’s “Literate Programming” books (e.g., _The Stanford GraphBase_ and _TeX: The Program_) are okay, but you’ll not encounter this in industry.

    At work: walk through some code with a cow-orker. Read it first, make notes, and then run through it “paragraph by paragraph,” paraphrasing to your cow-orker what the code is doing, blow by blow. It’s amazing how much this has helped my understanding of complex systems; something about being on the spot with another person.

  37. Agree with many comments above – I took a few CS classes at Carnegie Mellon but ended up graduating in Bio. Java classes were widely regarded as a joke by the department, and I believe there were only two in the entire curriculum. Those guys were really, really hardcore.

    I firmly believe that you can’t understand what you’re doing unless you comprehend where it came from; I think assembly and C courses should be absolutely mandatory.

    By the way, I wonder how Chrome shakes up opinions given above 🙂
