H2CO3's tech rants

Let’s Stop Bashing C

This blog post is a quick reply to Let’s Stop Copying C.

To begin with, I agree with most of what Eevee wrote. However, I think she went a little too far and presented some of her personal opinions as if they were facts. I think that as programmers, we need to be more honest about these kinds of things, so here goes my rebuttal of some of the points she made.

What’s Wrong with Integer Division?

The point the author makes about integer division is that it can confuse beginners. Sure enough, it can! But what can’t confuse beginners? After all, they are… beginners. This is consequently not a great argument against integer division, or any other language feature, for that matter.

However, the behavior of integer division is very useful. I would argue that in most cases, one expects it to behave just as it does. The reason for it is simple: when one is working with integers, one often has a problem related to integers in the first place. And we don’t want to involve nasty floating-point operations when, say, trying to zip a list of indices with their corresponding array elements. Performance is not even the point; floating-point numbers are much more complex (and a lot harder to use correctly even for experienced programmers) than integers, and above all, they have different semantics. I think that if one wants to escape the nice and friendly realm of closed operations, one should indicate it explicitly, via a cast – exactly how it’s done in C. Or maybe have another operator that does floating-point division, I don’t mind that either.

What’s Wrong with Increment/Decrement?

Bashing the increment and decrement operators has become quite popular since Swift took the side of omitting them. After all, if Apple does it, it can only be good, right?

Unfortunately, Eevee seems to fall for the extremely common “++ is equivalent to += 1” fallacy. You can see the statement is fallacious when, in the end, even the author herself admits that there are things that can’t be implemented in terms of “+= 1“; for instance, incrementing non-random-access iterators.

But look, there’s an even more direct argument: ++ and -- are not even “one” operator each. There is a prefix and a postfix variation of them that do slightly different things. The usual semantics is that the prefix forms evaluate to the already-modified value, while the postfix forms yield the original value. This can be very convenient: in different contexts, code might need the value of one state or the other, so having both versions can lead to more concise code and fewer off-by-one errors. Therefore, one can’t simply say that “++ is equivalent to += 1“, as it is simply not true: it’s not equivalent to at least one of them. In particular, in C, it’s not equivalent to the postfix increment operator.

What’s Wrong with Braces and Spaces? (And Semicolons Too)

Now, let’s pull out the big guns. The article argues that brace-delimited blocks and indentation-based statement grouping are equivalent, and that having both is redundant. It also claims that terminating semicolons fall into the same category and should be replaced with newlines. This is very much not true, though. Spaces are for the human eye, while braces are for the compiler. In a language where whitespace is significant, automatic re-indentation becomes literally impossible, which is very annoying. I’m no Python expert, but I’ve run into hard-to-debug errors several times because I cut and pasted some code into a function during refactoring, and it misbehaved because its indentation didn’t happen to match the then-current indent level.

With regard to semicolons: you can’t just interpret every newline as if it were a semicolon, because newlines become context-sensitive that way. For example, after a function declaration, a newline doesn’t imply the end of a statement. And now lexing has become context-sensitive too, and entangled with parsing, and it’s a pain in the ass to write, let alone to write correctly. It’s very confusing to see the author advocate for such a misfeature just after having argued that type-first syntax makes parsing hard.

Treating newlines as statement endings is hard for humans to remember correctly, too, however appealing it might be at first glance. For example, what should this piece of code do?

return
3

Should it return unit, or should it return 3? It depends on whether the newline is taken as a statement separator in this case. Blargh.

TL;DR: the designers of C weren’t idiots. Yes, there are many aspects in the design of the language that reflect the 40-year-old need for “speed above everything”, and which are consequently obsolete nowadays. But it would clearly be a mistake to shove every design decision into this one pigeonhole. The three points I’ve emphasized above are clearly not results of short-sightedness – on the contrary, they show great insight into real-life problems programmers encounter in their daily work.

138 thoughts on “Let’s Stop Bashing C”

  1. William says:

    It seems bizarre to me that having two nearly identical operators (prefix and postfix ++) that do almost but not quite the same thing would lead to *fewer* rather than *more* off-by-one errors

    1. H2CO3 says:

      I was *sure* someone was going to pick a quarrel about that 😉 The only thing I can suggest you to do is to actually go program in C for some years, write some good software, and you will see what I mean.

      1. yakcyll~ says:

        The sentiments in Eevee’s blog stand to contrast your last sentiment I think. If you have to code good software for a few years to understand why C has done some things right, then it’s probably a matter of getting used to it and not it being a good design decision in the first place. I think the idea is that C is more of a tool rather than a medium of expressing thoughts and Eevee tried to point out which parts of the language made it less intuitive and constrained than one could expect from a modern language.

        Then there’s this sentiment: ‘If only I had a computer to take care of such tedium for me!’, which makes me think they have to teach kids the gun-foot joke at the programming 101 courses.

        1. Marcel Kincaid says:

          “If you have to code good software for a few years to understand why C has done some things right, then it’s probably a matter of getting used to it and not it being a good design decision in the first place.”

          That doesn’t follow at all. It’s also a strawman … the discussion is specifically about how *omitting* postfix increment/decrement from a language produces more off-by-one errors. That beginners might not understand why this is the case is no argument that it’s not the case after all.

          1. emallnotvalid says:

            “That doesn’t follow at all.” — of course it does

            the author makes the correct statement “what doesn’t confuse beginners” — but then he says “go program in C for some years, write some good software, and you will see what I mean.” —

            so he’s basically saying you have to code for years for you not to be considered a beginner —
            I agree being a good coder takes experience, but YEARS just to understand a BASIC feature of the language ??

            “It’s also a strawman” — no, its not

            because the discussion is ACTUALLY about how the increment/decrement operators are badly designed in C and should not be copied by other languages

          2. Marcel Kincaid says:

            “of course it does”

            Wrong.

            “so he’s basically saying you have to code for years for you not to be considered a beginner”

            Wrong.

            “no, its not”

            Wrong.

            “because the discussion is ACTUALLY about how the increment/decrement operators are badly designed in C and should not be copied by other languages”

            Just asserting a position is not a “discussion”. How about a discussion about what a worthless loser you are … don’t bother denying it, because that’s not what the discussion is.

      2. Hello says:

        I bet you wish we could all be so perfect as you, Mr. Carbonic Acid. Sure, just tell someone who read your silly article that they aren’t qualified to mention any point because they first need to go and write some good software.

        Shame on you.

        1. Marcel Kincaid says:

          “Shame on you.”

          Shame on you for insulting the author based on your complete misrepresentation of what they wrote.

        2. H2CO3 says:

          > “I bet you wish we could all be so perfect as you, Mr. Carbonic Acid.”

          I did not claim perfection anywhere in this article.

          > “they aren’t qualified to mention any point because they first need to go and write some good software”

          You are absolutely qualified to mention all sorts of points, that’s why your comment has been approved. However, I reserve the right to disagree with the opinions of those who have little or no experience with the technology being discussed.

      3. Chippiewill says:

I agree, as someone who regularly programs in both Python and C++, when I program in Python it’s a real PITA not to be able to increment a variable as part of another statement

      4. FU says:

        You are rude and condensending and presenting your opinion as fact.

Your case is so weak you have to refer to calling the OP an “amateur” and yourself the “expert” based on no information. It doesn’t qualify in the slightest as an argument.

        Also Mr Good Programmer, you dumbass blog doesn’t allow some actual valid email addresses

        1. Marcel Kincaid says:

          “You are rude and condensending and presenting your opinion as fact.”

          A radical example of projection if ever there was one.

        2. H2CO3 says:

          > “You are rude and condensending and presenting your opinion as fact.”

          I don’t see any rude, let alone condescending, parts in my blog post. I do not believe in insult-based reasoning.

          > “Your case is so weak you have to refer to calling the OP an “amateur” and yourself the “expert””

          The only context in which the word “expert” appears in my post is that “I’m no Python expert”. The word “amateur” never appears in the text, nor do its synonyms. In case you are referring to the part about beginners (which is most definitely not the same as amateurs!): my point is that the fact that beginners complain about a language feature does not qualify as an argument, since beginners, by nature, don’t understand a lot of things, and they complain about perfectly reasonable things more often than not, out of sheer misunderstanding. Therefore, “it confuses beginners” is not a good case against anything at all.

        3. N. Haughton says:

          And you, FU, are an ignorant idiot (I am deliberately descending to your level, to give you a taste of what it’s like to be on the receiving end).

      5. Raster says:

        Indeed. Do C for a few years and oddly these “beginner mistakes” don’t happen. You even get good at not having pointer/memory errors and discover your errors just become logic ones that would happen in any language (missed an if case you didn’t think would happen for example, or forgot to walk all elements in your list and aborted the walk too early etc.) I would say most of the complaints are from people with very “rough” skills in C who have just learned enough to be dangerous but not enough to be really good. 🙂

        1. emallnotvalid says:

          so your claim is that there is NO software out there at the moment that is written by experienced C developers that doesn’t have “beginner mistakes” or pointer/memory errors ??

          HA HA HA

        2. Jerry says:

          Oh, I don’t know. I’ve been programming in C for over 30 years, having written hundreds of thousands of lines of C code. I still make “beginner mistakes” like if (a=b). The difference is I make them less often and find them much faster 🙂

      6. Chris says:

I can’t believe folks would be upset about the postfix and prefix operators. Maybe I fall into the “that’s how I’ve always done it” situation, but I use ++ just about every day in my coding. If you limit your programming language features to only things new developers can understand then you hamstring mid to senior developers. You are only a new developer for a little while; everything is hard at first, then it isn’t.

      7. weberc2 says:

        I have, and it took *years* for me to get comfortable with this, and I *still* made lots of errors conflating these. :/

        1. Marcel Kincaid says:

          “I have, and it took *years* for me to get comfortable with this, and I *still* made lots of errors conflating these. :/”

If you conflate “in front of” with “behind”, perhaps you are dyslexic and it’s not the fault of the programming language. I first learned of pre/post increment when writing PDP-11 assembler, and I think it took me about 5 minutes to become comfortable and familiar with it.

      8. Henrique says:

        Well, this works for any feature/bug in any language:
        work in language X for some years, write some good software, and after that you will be able to skip all the bad parts.

        1. nimbiotics says:

          Looks like you are talking from (in) experience…

      9. RKM says:

        Having spent a few years writing C code, as well as many many years writing non-C code that still likewise supports pre and post increment operators, I can say that the number of times I’ve used either increment operator where it made a difference whether pre or post increment was used could be counted on one hand with plenty of fingers left over. For a feature that’s used in such a vanishingly small capacity the added complexity and cognitive load of having two nearly identical operators just feels entirely unjustified. In fact for the purposes of writing this comment I actually had to go lookup to make sure that I was remembering the semantics of the pre/post increment operators properly, that’s how little they’re actually used. It’s also true that in each of the cases where it is being used you could fairly trivially replace that usage with something that’s equivalent, easier to read, and easier to reason about.

        1. Marcel Kincaid says:

          “In fact for the purposes of writing this comment I actually had to go lookup to make sure that I was remembering the semantics of the pre/post increment operators properly, that’s how little they’re actually used.”

          That YOU don’t use them and aren’t familiar with them doesn’t mean that they aren’t used.

          “It’s also true that in each of the cases where it is being used you could fairly trivially replace that usage with something that’s equivalent, easier to read, and easier to reason about.”

          No, that absolutely is NOT true of postfix operators.

        2. Sarain says:

‘For a feature that’s used in vanishing small capacity’ – sorry, but I find it difficult to believe you’ve spent years writing C code and not come to respect the absolute flexibility these operators give. Not having pre and post increment/decrement operators is a real shortcoming in many modern languages, and a reason C prevails.

        3. nimbiotics says:

          “I can say that the number of times I’ve used either increment operator where it made a difference whether pre or post increment was used could be counted on one hand with plenty of fingers left over. For a feature that’s used in such a vanishingly small capacity the added complexity and cognitive load of having two nearly identical operators just feels entirely unjustified.”

          That only shows you’ve been good at denying yourself using an excellent feature of the language.

          Oh, and by the way; you can certainly say you’ve written more lines of code than I have!

        4. Jerry says:

          Sorry, I’m with Marcel. I use both prefix and postfix operators rather regularly, especially when stepping though arrays (including strings). I find them very handy.

        5. Klaus-Werner Konrad says:

          while (*dest++ = *source++) ;

      10. xxxxxx says:

        Sure. The old assume lack of knowledge condescension argument.

        1. H2CO3 says:

          There’s no condescension argument, nor any assumptions. The original post explicitly argues against certain features using the “beginners are confused by this feature” non-argument. For the reasons stated above, this is not a valid point, since anything can confuse beginners. That beginners lack knowledge is not an assumption; it’s a fact based on the very definition of “beginner”.

      11. Joe says:

        What a shitty, non-answer. Fuck off.

        1. H2CO3 says:

          Have a good day, you too!

      12. iclicker2 says:

        My only experience with pre and postfix operators in computing was with Java in High School.

        What are their use cases, and what gives them their advantages?

I understand that it’s nice to read out an array by saying
i = 0;
while (i < array.length) {
    doSomething(array[i++]);
}

        but what are the other things that it's good for?

        1. Jerry says:

          I think that’s the main use in C – although it is a very useful tool. In OO languages such as C++ which allow operator overloading, they are also used for stepping iterators through many different types of collections.

      13. pablo says:

The same can be said of python syntax though. Go program in python for some years, try copying and pasting some code, and let’s see if you have trouble finding the proper indentation. Being proficient in something is no good measure of its adequacy. People used to hate computer keyboards because they were used to typewriters.

      14. Justin says:

It’s not that ++i and i++ are impossible to figure out, and I’m not saying the programming language needs to be a syntactic babysitter, but I don’t think anybody is getting help from code that looks like brainfuck.

        Source: Years of programming experience

        1. nimbiotics says:

          I bet you are a great Snap! programmer…

          (http://snap.berkeley.edu/)

      15. James says:

The argument that someone needs to be proficient at understanding and utilizing an operator in order to evaluate its semantics would render invalid any criticism of any language the critic has not used extensively.

To say that someone needs to “go program in C for some years” to be able to evaluate the semantics of C is fallacious, distracting, and indicative of this anti-critical philosophy. The prefix and postfix versions of the ++ and -- operators serve a purpose in the language, and familiarizing yourself with their use is obviously going to resolve the confusion that could cause off-by-one errors.

        But why have the confusion at all? Why not explicitly retain the value of an increment prior to performing it rather than having the operator do that work, isn’t that of higher readability? Doesn’t that reduce the likelihood that you’ll mess up and have an off-by-one error? If not, say why, rather than appealing to authority in a condescending manner.

        1. Marcel Kincaid says:

          “To say that someone needs to “go program in C for some years” to be able evaluate the semantics of C is fallacious, distracting, and indicative of this anti-critical philosophy.”

          I think I prefer the people who just scream rudely at the author.

          “But why have the confusion at all? Why not explicitly retain the value of an increment prior to performing it rather than having the operator do that work, isn’t that of higher readability? Doesn’t that reduce the likelihood that you’ll mess up and have an off-by-one error? If not, say why, rather than appealing to authority in a condescending manner.”

          Is this a millennial thing, this notion that it’s “condescending” to suggest that people are in a better position to evaluate something after they have experience with it? It’s certainly a bizarre response.

          Consider writing the equivalent of this code without using postfix:

while (i-- > 0)
          {
          …;
          }

          Here it is:

          while (true)
          {
          int tmp = i;
          i = i + 1;
          if (tmp <= 0) break;

          }

          The people attacking the author have no grasp that such a transformation is necessary in order to get the same behavior, and would be very likely to introduce an off-by-one error instead.

          1. Tomi says:

while (i > 0) {
    //do things
    i = i - 1;
}
i = -1;

            I also find the added explicitness nicer than some one line magic

          2. emallnotvalid says:

            while (true)
            {
            int tmp = i;
            i = i + 1;
            if (tmp <= 0) break;

            }

            when would tmp ever be less than zero ?
            unless it was less than zero to begin with ?

            "Is this a millennial thing, this notion that it’s 'condescending' to suggest that people are in a better position to evaluate something after they have experience with it? It’s certainly a bizarre response."

            LOL, talk about condescending — yeah, those stupid millennials

            its quite easy to understand

            if you spend years writing in brainfuck you will make many fewer mistakes than someone who has just spent a few months with the language and you will have a better understanding of the syntax

            does that mean brainfuck is a well designed language LOL

          3. Jerry says:

            @Tomi, which shows exactly why a postfix operator is better. Your code loops one time fewer than Marcel’s. His code with the postfix works, yours without does not.
            BTW – the fact you had to set i to -1 after your loop should have hinted at a problem.

          4. Marcel Kincaid says:

            “His code with the postfix works, yours without does not.”

No, Tomi’s code loops the same number of times, but it’s not equivalent if i starts out < 0, nor if there's a break within the loop.

            I won't bother to respond to notvalid's ignorant and inept drivel.

      16. altendky says:

        You can often (usually?) just put the +=1 on the previous or next line. I would suspect that not having ++ relates to wanting to discourage overly complex one liners in favor of expanded code. Anyways, when you are incrementing you should very often be overflow checking…

        1. Marcel Kincaid says:

          “You can often (usually?) just put the +=1 on the previous or next line. ”

          Not if the increment is part of a condition test.

          “Anyways, when you are incrementing you should very often be overflow checking…”

The increment is usually part of a limit test, e.g., for (i = 0; i < end; ++i) …
          Overflow is actually a bigger risk with additions that *aren't* increments.

      17. Fusty says:

Yep, you gotta know your language in order to make fewer mistakes.

        Not knowing about prefix/postfix increment/decrement in your language yet still using them is a recipe for disaster.

        That can be generalized. Using X feature without understanding X feature is a recipe for disaster.

      18. Simon says:

This comment is precisely what Eevee mentioned as “Blaming the programmer”, except more insidiously patronising. I wish people would stop writing this whole class of comments – there’s millions just like this out there right now. ++ and -- don’t do much except saving you the occasional line where you remember a value before incrementing it, and – more importantly – confusing people who aren’t within your mind and don’t know all the clever tricks you pulled at once with assignment and increments in one line. Don’t be clever and terse, be readable.

        1. Marcel Kincaid says:

          “This comment is precisely what Eevee mentioned as “Blaming the programmer”, except more insidiously patronising.”

No, it isn’t. You seem to have understood neither Eevee’s comment nor the author’s. The author’s statement that it takes experience to understand why having post-increment operators in the language results in fewer off-by-one errors than not having those operators does not blame anyone for anything … it certainly doesn’t blame programmers for their off-by-one mistakes. In fact, the author’s comment was *supportive* of Eevee’s point, suggesting that we give programmers the tools they need (such as post-increment) to easily write error-free software, rather than deprive them and then blame them for their mistakes because of one’s own quasi-religious notions about programming language design.

          “Don’t be clever and terse, be readable.”

          This is a false dichotomy. In many cases, terse IS readable.

        2. H2CO3 says:

          > “Don’t be clever and terse, be readable.”

          Terseness often strongly correlates with readability. I’m not in favor of “clever tricks”, but an operator (or two) that proves to be useful in many situations definitely does not count as a “clever trick”.

        3. Sarain says:

          Who wants to read two lines when one will do!

          1. H2CO3 says:

            It’s not about the number of lines, it’s about the structure of the code.

        4. HadEnuf? says:

Try telling a C++ programmer that ‘++’ and ‘--’ merely “save you the occasional line”: the postfix forms *may* imply *copy construction*, altering the original and yielding the copy.

          (This is actually the case in C, it simply happens to be limited to integral and pointer types.)

Furthermore, even in C, the binding into a single expression (whether or not it stands alone as an expression statement) mitigates the risk that the increment or decrement will be lost in maintenance, producing an abject hang: I have seen far more “hanging” problem reports in shops that tried to ban these operators. Keep in mind, C was designed for *systems* – not application – programming, which really is NOT “beginner’s territory”: most of these hangs meant a frozen, *physical* device, not a program one could simply interrupt, and often with no possibility of attaching a debugger!

          That said, I have carried a pet peeve with unnecessary use of the postfix forms for over thirty years: while it’s harmless in C (the semantics allow the “copy” to be elided), in C++ it can be deadly–so I see both ‘for (i = 0; i < LIMIT; i++) …' and 'i++' standing alone as [eldritch] abominations [that make Cthulhu look like a babe in diapers], compared to the more benign 'for (i = 0; i < LIMIT; ++i) …' and '++i'.

          Considering I find myself coming down routinely even on similarly-experienced developers who use the postfix forms "by default", when their semantics are NOT required, I think it might be fair to conclude we are doing a terrible job, collectively, of imparting the difference; and I suspect that if we actually TAUGHT this–rather loudly–to beginners, they might find the difference far less confusing!

      19. Willa says:

Do you not feel that when you were starting out, EVEN after you no longer considered yourself a beginner, you made a silly mistake somewhere concerning prefix/postfix? Ok, so you never felt that. Me either! But I have felt that the cognitive load of pattern-matching those statements is EXPENSIVE during a code review or when reading a new codebase.

If you never felt like this was a problem, I think you’re in the top 5% in skill level at software engineering. You have very high attention to detail and you don’t blindly accept something you don’t understand; you dig deep so you know WHEN the abstraction will break down. You probably figured out how to write good software yourself; you didn’t even need to look at many examples of well-structured, easy-to-reason-about code. I’m sure you understand how RAM and CPU caching affect your code and how they can fuck it up. You’ve probably installed your OS, multiple times. Again, all things from the 5%.

        If you’ve never seen the engineers around you make the type of mistakes you can EASILY make then you’ve been surrounded by that 5%. You kick ass and you’re on great teams.

        1. Please don’t throw the term “engineers” around for people who could not install their OS, or lack sufficient attention to detail to distinguish the pre and post increment operators. These are beginner coders. There is a coder shortage so these people can still get jobs.

      20. drpyser says:

That’s a convenient (for you) and almost unfalsifiable way to argue a position. Almost an appeal to authority as well.
The broader point, I think, is that the consistency of a language’s syntax and semantics is inversely proportional to the probability of mistakes. “Oops, I meant that to go before, not after” and “oops, I forgot those are not the same things even though they look the same” most certainly do happen. Sure, if you program in the same language for years, you’ll eventually learn the syntax and semantics well enough that inconsistencies don’t matter. Pretty much any language is good enough for most things, no matter how horrible, inconsistent, and counterintuitive it is, if you spend enough time learning it in detail. But a mark of a good, well-designed language is how much time and resources you have to put into learning it to actually be proficient in it. A good language should be intuitive, its syntax not too cryptic and its semantics predictable.

My favorite example is Lisp. People are put off by the parentheses, but the syntax is perhaps the simplest and most consistent. Everything is pretty much written the same, and everything is pretty much the same (there aren’t a whole lot of types, and many types, such as symbols, are used ver). The semantics is also quite predictable, at least if you know the principles of functional programming (in the case of functional-oriented Lisps like Scheme), and also quite consistent (usually, everything is an expression and has a value, most often first-class – you can pass it around and store it). So it’s easy to learn, easy to use, easy to parse… Hard to make a mistake in its syntax – you put parentheses around symbols and values and there you go… Hard to forget the order of operators or their semantics and effects – everything is prefix, most things are functions, with a few special forms whose semantics correspond to most programming languages’ special syntax for statements (e.g. “if”, “for”, etc.), and it’s all usually pretty predictable, especially in a functional style.

Anyway, for what it does, C does it well. Mostly, I think, because it has a few decades of optimization and mainstream usage in its favor. All in all, it’s not a bad language, and there are certainly others more deserving of criticism. But I think there is space for innovation and enhancements, be it in its syntax or in its consistency and predictability. And anyway, we’re stuck with it until we have a systems and high-performance programming language that is as efficient and mainstream.

        Thanks for this blog post!

        1. Dave Walker says:

          drpyser, I wish more people would post responses like yours. It’s a balanced argument that stresses the points you want to make without attacking previous posters and you provide examples to support your position. Hopefully, others will follow your pattern and not the pattern by posters such as joe, whose post added nothing to the discussion more substantive than “I’m angry and I don’t like what you said”.

          All of us, including me, are posting our opinions–not “the absolute truth”. For example, you use Lisp to point out that there are languages with simpler and cleaner syntax. It’s a good example, but one that doesn’t work for me. My first languages (1971) were Fortran and Compass (CDC 3600 assembly) and BASIC. In 1976, when I built my first PC, I was using BASIC and microprocessor assembly. Then, in 1985, I was introduced to Lisp. I was very impressed with what people were doing with it, but I couldn’t unlearn my previous programming experience enough to think in Lisp. So, to me, Lisp was useless. I didn’t know how to use it to solve problems. The simple syntax was wasted on me. This is not a problem with the language, it was a problem with me.
          And that is the point. Everyone–at least everyone who has written rational posts–has valid opinions and is telling it like it is. . .for them. As far as I can tell, there is no right or wrong here, just good opinions based on the experience of the posters.
          The value of these posts is not to reinforce our bias, but to allow us to push our boundaries and look at the world as others see it. Sometimes we will reject those opinions and sometimes we will learn something new. Maybe even both.

      21. Asday says:

        That’s a terrible argument. If it’s so self-evident, surely one such as yourself would trivially be able to post an irrefutable example.

      22. S Javeed says:

        For what it’s worth, the lack of nuanced understanding of the increment/decrement operators with respect to their prefix/postfix versions is what stood out in the original article for me. The language I code in these days is Ruby, which only has += and -=, and I miss the postfix operators all the time – especially in loops.

      23. GDFrank says:

        C was an assembler replacement … or at least that was one of its goals.

        As such, it tried its best to get as close to machine instructions as possible.

        And there are pre- and post-increment machine instructions that allowed assembler programmers to do two operations in one instruction.

        Though C has evolved these days, back then it was designed to be as close to the hardware as possible, to attract those assembler programmers.

        C emerged when there still was a lot of need to pack code into small memory and slow machines.

        Probably the two most executed instructions are the pre- and post-increment … as CPUs iterate through memory … using pointers.

    2. mcint says:

      At the time of writing code, when you (had better) know what you mean, you can think carefully about the state of the variable needed in your expression, and how its state relates to the value in the next expression or iteration of a loop.

      It allows (and if you can adapt your style and understanding, encourages) you to write code which tests a value once, rather than scattering access and updates across multiple lines.

      It’s possible to misuse, but its absence would make well-written (non-beginner) C code more verbose.

      From a tutorial written by Brian Kernighan, “++n is equivalent to n=n+1 but clearer, particularly when n is a complicated expression.”

      https://www.lysator.liu.se/c/bwk-tutor.html#increment

      1. Stan says:

        “++n is equivalent to n=n+1 but clearer, particularly when n is a complicated expression.”

        How can n be a complicated expression?

        1. H2CO3 says:

          Think ++myArrayOfMaps[42]["foo"] versus myArrayOfMaps[42]["foo"] = myArrayOfMaps[42]["foo"] + 1.

    3. Almamu says:

      It simplifies things, and the way it works is consistent in every usage, so once you know how both versions work you don’t really mistake them; it’s a language operator defined by the standard, after all.

    4. Brian says:

      For me it’s because I have to structure my whole bit of code around which of these choices I make, so the entire time I am writing I am thinking about which of these I have used and what the consequences are.

    5. heroic says:

      The reason prefix and postfix ++ and -- exist and are distinct in C is because many processors have distinct instructions for increment, either returning or not returning the result.

      1. Marcel Kincaid says:

        “The reason prefix and postfix ++ and -- exist and are distinct in C is because many processors have distinct instructions for increment, either returning or not returning the result.”

        No, that’s a myth.

    6. Travis C says:

      Well no it makes sense. If you mix them up, all your off-by-one errors become off-by-two errors!

      1. Marcel Kincaid says:

        “Well no it makes sense. If you mix them up, all your off-by-one errors become off-by-two errors!”

        That’s rather bizarre reasoning. You’re suggesting that you write code filled by off-by-one errors, and that if you moved all the auto-increment operators from the pre- to the post- position or v.v., that would double the number of errors. This is quite mistaken.

    7. Marcel Kincaid says:

      “It seems bizarre to me that having two nearly identical operators (prefix and postfix ++) that do almost but not quite the same thing would lead to *fewer* rather than *more* off-by-one errors”

      Then you aren’t thinking about it much, or clearly. The presence of both operators doesn’t produce more off-by-one errors because the different behavior is a matter of where the operator is placed; it’s not some subtle graphic difference in similar shapes, the semantics is tightly coupled to the appearance. And it SHOULD be obvious how removing the postfix operators results in more off-by-one errors: people will use the prefix operator where they needed the postfix operator, or they will do the awkward hand-coding necessary to get the same effect, but code it incorrectly.


    8. The surest place to use these operators is the hardest to replace – that of incrementing or decrementing an iterator. Other idioms actually look out of place, where “+1” does not have its customary arithmetic meaning.

    9. OldBikerPete says:

      Despite all the arguments for and against C, it remains that the most available toolkits for programming microcontrollers support C++ and, therefore, C. Some of the large ones (like Raspberry Pi and BeagleBone Black to name two of the cheapest) are branching out with Python etc. But C remains the language of choice for programmers who must work very closely with the hardware they are targeting. Usually speed WILL be a prime consideration in such development.
      Peter

  2. Philipp A. says:

    You’re arguing against JavaScript’s ASI, not Python’s style of statement termination, which is very simple and very robust: “Unless there’s an open bracket of some kind, a newline terminates the current statement.”

    No ambiguity, not hard to parse, easy to grok.

    > In a language where whitespace is significant, automatic indentation becomes literally impossible, which is very annoying

    I don’t understand what you mean: Automatic indentation while typing or pasting code is easy to do and implemented in many editors and all IDEs. And while automatic (re-)indentation of the whole file obviously can’t fix indentation errors (because they’re code errors), it can fix blocks that are (consistently) indented a few spaces too much or too few.

    1. Marcel Kincaid says:

      “Automatic indentation while typing or pasting code is easy to do and implemented in many editors and all IDEs.”

      When pasting code, there’s no way for the editor to know what level it should be at, and the existing level of indentation is likely to be incorrect.

      1. H2CO3 says:

        That’s the point.

  3. me says:

    I read Eevee’s post yesterday and was pretty unhappy with it. A lot of arguments in it are very subjective and as you have shown, some are wrong.

    Thank you!

  4. Luke Plant says:

    It seems you have missed the point about braces and semicolons. “Spaces are for the human eye, while braces are for the compiler.” Yes, that is exactly the problem. As Eevee says, this causes a mismatch between how humans interpret a program, and how the compiler does, causing bugs. You need to respond to this argument to have a case here.

    Of course if you copy-paste code from one context to another, you need to be aware of all kinds of differences and adjust the code. This is just as true in C as in Python. All Python developers for this reason will be able to indent or dedent a block of code with a keyboard shortcut, just as C developers will probably have a ‘reformat this block’ shortcut. The point is that once the code is visually correct in Python, it is correct, while not in C.

    Regarding semi-colons, I’d agree that the example you give is confusing in languages like Javascript with its crazy optional semicolons, forcing you to read ahead in order to work out whether the current statement is finished or not, and apply complex rules. (That’s why I, like Eevee, always add the semi-colons in Javascript). In Python, by contrast, you always know at the end of a line whether you’ve reached the end of a statement. If you have some open parentheses, you are not at the end of the statement. If you have a \ at the end, you are not at the end. Otherwise the newline is the end of the statement. (There are some other cases, e.g. function decorators, but the above rules cover most of it, and they do *not* require mixing up parsing and lexing – have a look at the Pygments Python lexer for proof). When my brain is in Python mode, I have no trouble recognizing your example as mistaken code – it returns nothing, and then has a statement after it which cannot be executed.

    In either case, you can to some extent get used to the quirks of a language. But time has shown that *invariably* developers want to use whitespace to help with comprehension. We *don’t* leave code left-aligned, and we *don’t* write statements one after the other with just semi-colons to separate them. This means you have redundancy and make room for more errors. In addition, *experienced* developers in C *are* tripped up by the way their brain treats whitespace as significant, while the compiler does not (“goto fail” etc.). There is no equivalent mistake that experienced Python developers make. The designers of C did not have the knowledge about decades of universal developer practices that we have today. Perhaps they would have made different decisions if they had done.

  5. Richard says:

    Nice article , agree with all your points.

  6. Leo Tindall says:

    The idea that an operator should exist to increment iterators is downright silly; why are .next() and .nth(n) and the like not good enough? They are far more semantically obvious. For example given that iter is some iterator type expected to have three elements:

    int var_1 = iter.next()
    int var_2 = iter.next()
    int var_3 = iter.next()

    This is far more explicit than

    int var1 = iter++;
    int var2 = iter++;
    int var3 = iter++;

    which – oh wait! – turns out is not correct anyway; the prefix increment needs to be used.

    To recap, the objections are: 1) ++a or a++ doesn’t read as nicely as a.next(), 2) having the same operator do two different things based on a very small change in position not in accordance with the order of operations is definitely confusing, and 3) there is no functionality for ++ other than += 1 that doesn’t have to be implemented in a method anyway, so that method should be appropriately named and exposed.

    1. Marcel Kincaid says:

      “which – oh wait! – turns out is not correct anyway; the prefix increment needs to be used.”

      Circular argument. In many cases you want the current value, not the next one.

      1. emallnotvalid says:

        and you ignore his point entirely

        1. Marcel Kincaid says:

          I shall ignore you, as you are dim and your comments are worthless.

  7. Annatar says:

    If one needs good speed (short of Fortran or assembler) and a portable assembler, C is the perfect tool for the job;
    if one needs to write a global stateless application capable of running across multiple nodes (horizontal scaling), ANSI Common LISP is the perfect tool for the job;
    if one needs to process a large amount of data as input and produce some output, in a classic input-output flow, and they need it done as fast as possible, as simple and as flexible as possible, AWK is the perfect tool for the job;
    and finally, if one needs to automate, shell is the perfect tool for the job.

    The mistake people often make nowadays goes along the lines of overthinking, and then overcomplicating. It’s still ones and zeros underneath; no need to overthink or overcomplicate. Stick to the languages I listed, keep it simple. The goal should be fewer languages and less programming, and more data structuring – not more languages and more programming.

  8. Paril says:

    The whole point of integer division is that it’s precise – if you used floating point for integer division you’d get some pretty crazy stuff at high numbers wouldn’t you?

  9. RoadieRich says:

    I think the tl;dr is “If you don’t like a language, don’t use it, but don’t call people who do like it names. There are plenty of people who don’t like your favourite language.”

  10. tim says:

    “…reflect the 40-year-old need for “speed above everything”, and which are consequently obsolete nowadays.”

    Sadly, it seems that the opposite attitude, “speed is unimportant”, prevails these days. My computer is many orders of magnitude faster than the computer I used back in the early 80s, yet many modern applications are actually slower executing than the equivalent old application. Admittedly there is often a lot more going on in the newer application. For example the UI is much more complicated, but even when that’s not a factor, modern programs crawl in comparison.

    The same is true of size. Very few people seem to care about how large a program is or how much memory it uses.

    As an embedded developer I still worry about these things every day, so my point of view is obviously different from most people’s. However, I do wonder what current computers would be capable of if code were optimized for speed and size instead of time to market and convenience to the developer. I often fantasize about writing an OS and applications on modern general-purpose hardware with speed and efficiency in mind. I suspect that booting a PC in milliseconds would be possible and that the hour-glass cursor could be a thing of the past.

    1. H2CO3 says:

      > Sadly, it seems that the opposite attitude, “speed is unimportant”, prevails these days.

      I did not imply that performance is not important; my point was that there are some features that we can implement nowadays much better than C did without sacrificing performance (cf. Rust and modern C++).

      I pretty much think that speed is very important, that’s exactly why I’ve been programming in C for 13 years. And that’s also why I have negative feelings when I read a blog post about “stopping copying C”. C has many core values that are crucial for achieving bare metal performance, and many contemporary languages just can’t achieve that because they weren’t designed for it. I think “copying” C, or at least the good, performance-friendly parts of C is not just a good idea for designing a high-performance programming language; it’s pretty much necessary, because they are so fundamental and universal.

      It’s just that C is not perfect and some parts could be done better.

    2. Toby says:

      Hear, hear!
      Most commentators pose arguments that are based solely on how a particular language is applied to their field of programming expertise. I write code for embedded microcontrollers too. There are real-time constraints and tiny amounts of RAM – normally not even enough to support a heap that can be used for dynamic memory allocation, as required by most modern languages. There needs to be close correspondence between the language syntax and the compiled machine code. I often have to look at compiler output listings when hand-optimising ISRs. NB The ++ operator is useful for providing (almost) free pointer incrementation when integrated into a much larger statement. Good programming? Maybe, maybe not, but when reviewing code it shows the author’s intent for needed execution conciseness without having to blindly trust the compiler to efficiently optimise.

  11. Brian says:

    Thanks for writing this. As someone who does C and Python, and loves both for different reasons, I can say that I just do not like the required indenting as opposed to semicolons. For me, C has always been about choice; which style to use, how careful are you going to be, how many different ways can you code this. Coming from an assembly language background on 8-bits, it seemed a logical step. As much as I like Python for its simplicity and clarity, some things about it seem like too much regulation.

  12. benjcooley says:

    These are trivial criticisms of C.

    The real C bashing has more to do with its inability, even in expert hands, to produce reliable and secure software. Its anachronistic over-focus on bare-metal performance, versus providing tools to help ensure safety and reliability, makes it not particularly well suited for modern software, where these attributes are far more important than they were in the past.

  13. Venkat Chandra says:

    Bravo.

  14. Leles says:

    Sometimes I think the problem people have with “++i” and “i++” is the same as with “your” and “you’re”, and yet both have their uses. (It’s not because “you” don’t have a use for it that it shouldn’t exist.)
    Removing braces is just foolish IMO; it makes code harder to read/debug, not easier as some say. And let’s not start on the eternal debate about using “tabs”, “2 spaces” or “whatever spaces”, which makes removing braces even more problematic.

  15. Chark says:

    Thank you for this. You raise some solid points. I also find it a poor argument to hint that a merely because something is old it must be obsolete. I think the power and minimalism of C is quite remarkable. As a programmer/designer working with C you often have a good grasp of exactly how your code maps to operations that will be taking place. Many other languages compare to C similar to how an electric razor does with a shaving blade. Also, all languages tend to have some caveats. The best course of action often is to stay consistent in how things are done, and to properly test new ways of doing things to avoid unexpected surprises.

  16. Kiaran says:

    C is a wonderful language. I agree with Eevee that we should “stop copying c”. Instead, we should just use it.

    C++ on the other hand… has gone a bit too far.

  17. WhyEvenBother says:

    Wow. Thought you were a bit above producing pulp bullshit for the Reddit circlejerk.

    1. H2CO3 says:

      I did not post my article on Reddit.

  18. Luke says:

    As a longtime developer in languages with ++ and --, I’d happily ditch these, or at least the pre- and post- distinctions. Especially in the C world I see too much “clever” stuff that makes programs harder to read:

    *myPtr++ = *(++yourPtr);

    Good job smartypants, you understand the deep semantics of pre- and post-increments and operator precedence, but save it for the obfuscated code contests. To me this is clearer, you don’t have to mentally decompress it:

    yourPtr++;
    *myPtr = *yourPtr;
    myPtr++;

    1. Dave Walker says:

      This is an excellent argument for keeping a focus on readability, but not necessarily for getting rid of ++ and --. I’ve been programming (and teaching) C++ for a long time, and I would have to stop and think what the first example was really doing, while the second example requires very little thought.

      However, if a programmer needed to perform that action a lot and used the code in the first example all the time, it would be as easy for her to read as the second example is for me. I suspect, but don’t have the data to prove it, that the code in the second example would be easier to read for most C++ programmers. I also suspect that the second example would be at least as easy for most experienced C++ programmers to read (and preferable to write) as:
      yourPtr = yourPtr + 1;
      *myPtr = *yourPtr;
      myPtr = yourPtr +1;

      1. Stan says:

        “myPtr = yourPtr +1;”

        This typo is another argument in favor of ++.
        🙂

    2. Marcel Kincaid says:

      As a seasoned C programmer, when I see *myptr++ = *++yourptr I understand it immediately. With your three lines, I have to stop and read them and try to figure out whether the right pointers are being incremented before and after. It’s unnecessarily verbose and violates DRY.

  19. ainmosni says:

    As a counterpoint to pasting code in Python: yes, this can indeed confuse beginners, but then again, what doesn’t confuse beginners? Once you’re used to it, it becomes a non-issue, and many Python IDEs and editors will correct it automatically. Furthermore, auto-formatters for Python are quite common.

    Just wanted to add that, for the rest, I actually quite like C and think the hate for it among many “modern” developers is irrational.

  20. luke says:

    https://github.com/apple/swift-evolution/blob/master/proposals/0004-remove-pre-post-inc-decrement.md

    Swift already deviates from C in that the =, += and other assignment-like operations returns Void (for a number of reasons). These operators are inconsistent with that model.

    Code that actually uses the result value of these operators is often confusing and subtle to a reader/maintainer of code. They encourage “overly tricky” code which may be cute, but difficult to understand.

    While Swift has well defined order of evaluation, any code that depended on it (like foo(++a, a++)) would be undesirable even if it was well-defined.

    sounds legit.

  21. Why says:

    Your blog is cool and all, but you didn’t need to advertise your bad opinions on a better opinion piece…….

    1. H2CO3 says:

      You do realize that what’s “good opinion” and “bad opinion” is your… um, opinion, right?

      Also, I reserve the right to write whatever I want on my own blog. You also have the right to not read it if what I write hurts you.

  22. Hillel says:

    One of the things I think most whitespace advocates miss is how dependent we are on our tooling to write good code. It’s easier to read indentation as blocks, but we have to be able to _trust_ the indentation. How do I know a line is explicitly outside of the scope, or whether it was just a typo? I could hope it’s correct, or I could spend time reading the codebase and confirming the logic. Or I can just reindent the entire file with gg=G and force the indentation to match the scopes.

    With brace languages like C or Ruby this is easy. For Python, though, I can’t do that.

    1. H2CO3 says:

      That’s exactly the point.

  23. Kakyo says:

    Pretty nice response, but…
    “(…) reflect the 40-year-old need for “speed above everything”, and which are consequently obsolete nowadays”
    Go tell that to game programmers.

    1. H2CO3 says:

      I did not imply that performance is not important; my point was that there are some features that we can implement nowadays much better than C did without sacrificing performance (cf. Rust and modern C++).

  24. cLover says:

    Thank you. I *love* C, but I know it is not perfect.

    Eevee, while I like most of her other stuff, went too harsh on C IMHO.

    The only thing I’d change in the language is the operator precedence, which, especially when it involves pointers, gets confusing. The book ‘Expert C Programming’ by Peter van der Linden describes this in Chapter 2 (“Some operators have the wrong precedence”).

  25. Nir Friedman says:

    Prefix and postfix operators are a disaster. There’s never been any code that’s been made clearer by using the return value of prefix and postfix. It’s like some rite of passage to save a single line instead of just incrementing the freaking variable before or after. “Oh, for C experts it’s just so easy and natural”: false. Even experienced people can easily make mistakes relating to sequence pointers that lead to UB.

    Bottom line is that there should only be one increment and one decrement operator, and it shouldn’t return anything.

    Note that I’m especially bitter because C++ took these from C, and the postfix operator is a mess when viewed in the broader context of iterators.

  26. Nir Friedman says:

    Should be “sequence points” not “sequence pointers”.

  27. Fallacious reasoning: the belief that the only two alternatives for numbers are integers and floats. The fact that floats are an entirely different kind of number is completely valid, but that doesn’t mean a float is the only possible result of integer division: rationals are a better answer. In particular, you say, «when one is working with integers, one often has a problem related to integers in the first place»: such a person does not then perform _division_, i.e., use the one operation that is most assuredly not closed under the integers. Expecting 1 / 2 to produce “half” is really the single most reasonable thing to expect: neither an integer nor a float.

    1. H2CO3 says:

      Fallacious reasoning: the belief that division has nothing to do with integers only holds until you try to write one, just one simple algorithm that operates on integers. Want to implement “printf”? You will have written something like
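
      A minimal sketch of the digit-printing core of such a routine (the helper name print_uint is illustrative):

```c
#include <stdio.h>

/* Print a non-negative integer one digit at a time (illustrative sketch). */
void print_uint(unsigned num) {
    if (num >= 10)
        print_uint(num / 10); /* integer division strips the last digit */
    putchar('0' + num % 10);  /* the remainder is the last digit */
}
```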

      This is just a trivial little piece of code, where explicitly having to take the floor of the rational resulting from num / 10 is just inconvenient and ugly; however, in more sophisticated code, it can quickly become unwieldy, and even impose a performance penalty if evaluated many times.

      Not every construction is nice or practical to use in everyday programming in the exact same manner that it’s described in mathematics.

  28. Vlad says:

    Huh

    The key point in the previous article was: “confusing for beginners”.
    That’s it.
    People hate something they don’t understand. Moreover, they _don’t want_ to understand.
    “Since C/C++ is too complex and we don’t understand it, and there is nice JS where we can successfully reinvent the wheel, let’s hate it.”

  29. Kirill says:

    What about the other parts? The abysmal module system which isn’t? What about the total lack of any memory safety features? Sure, there are extensions, but they’re not mandatory. What about the bloody annoying forward declarations? What about 1 < -1 evaluating to true?
    C was created in a world where a typical development machine had micro-generators in the keyboard (so typing more than 3 letters at once was actually painful) and had less computational power than my bloody watch. Back then, all the omissions allowed the compiler to be simpler, more portable and of course better performing. Nowadays, when the compiler could solve tons of issues with C without impacting run-time performance, the fact is still ignored by the standards committee. That’s what’s wrong with C.

    1. H2CO3 says:

      Are you asking why I’m not defending the lack of a module system or the lack of safety? Well, because they are actual mistakes, unlike the behavior of some operators or the semicolon as a statement separator. In this post, I’m trying to refute some arguments that I think were unjust, mistaken or too subjective. I’m not trying to argue for language features that contain actual, important design errors.

      1. HadEnuf? says:

        I sometimes have to count the treatment of blocks in code context as a terminating form as something of a misfeature – typically every time I have to correct a platform-full of idiot macros that expand to terminating forms (a macro invocation itself is a non-terminating form if not expanded), AND get people to understand that it’s a matter of syntactic termination, not semantic scope, which can prevent compilers and static analyzers from detecting constructs that are likely NOT to produce the behavior the programmer intended.

  30. KBZX5000 says:

    Good article.
    I enjoyed Eevee’s article, but had some qualms with the same points you brought up.
    Glad I’m not the only one who found it a bit too much C bashing.

    But.. all in all, better we all rant and discuss about what we like and dislike in programming nowadays.
    We all teach the new generation in one way or another, so it’s better that we gather and compare many perspectives along the way.

    (Although I could do without the ECMAScript zealots. And perhaps also the entire Swift/Go/Hack corporate language nonsense.)

  31. Carlos Leite says:

    If a programmer can’t understand integer division and gets confused by the pre/postfixing of some operators… programming is not for him. These concepts aren’t nearly as complicated as the simplest business logic that we deal with every day.

    1. H2CO3 says:

      Exactly my thoughts.

  32. Eric Wilson says:

    When I read the “Let’s Stop Copying C” article I was mildly disappointed. Your article reflects my thoughts on languages, especially the treatment of whitespace in Python. I like Python as a scripting language, but for anything larger than 100 lines or so, I find it increasingly less attractive. All the block-structured languages (the ALGOL family) have made things significantly better for the programmer and the compiler, and with the advent of editors that understand language structure, indenting your code is a no-brainer (literally): just hit reformat and boom! Nice pretty-printed source code that fits your style guide.

    Thanks for writing this.

    1. H2CO3 says:

      > “for anything larger than a 100 lines or so, I find it increasingly less attractive.”

      Exactly. I think Python is an excellent language for quickly solving problems and also as an educational language, partly because of its strict indentation. (Almost no students bother to properly indent their code unless misindentation is a compiler error…)

      But for a large project, I’d pick a whitespace-insensitive, statically-typed language over whitespace-sensitive, dynamically-typed languages any day.

  33. Angry Bob says:

    I no longer code in VB.Net; I now code in C# and C++ – not too different. But why waste time bashing VB? It is a language with many virtues, especially if written strictly. The implied parts of the language make for quick construction of business-critical systems. C# is perhaps as much as 10% less productive in coding speed. The name of the game is to pick your tools, because not doing so may make you the butt of many jokes. C languages are great, but so is Visual Basic in the right place. Stop wasting time and make points of merit in a context where the point is valid in a real-world situation. So pre or post ++, while interesting, is a learn-once-and-never-worry-again issue. I would be more interested in an article on when to use different languages or design patterns.

  34. C# FTW says:

    While I agree integer division is just fine in C and that braces over indentation is a far superior way of delimiting blocks of code in any language, I really hate the prefix and postfix duo.

    However, I don’t think the original article from Eevee was meant to be read as “ways in which C sucks shit and I hope it does in horrible pain”.

    The article was saying what should not be copied from C in modern languages. For example, the above code snippet used as an argument by Mr Soda for integer division in its current form in C is exactly the kind of code you would write in C, not in a more modern language.

    Yes, C should act just like C, but nobody would die from writing “num = num div 10” instead of “num /=10” in C# for example.

    1. C# FTW says:

      *dies in horrible pain

  35. Sergey says:

    There is no point in trying to defend these kinds of things, sadly.
    Each new generation of ‘software engineers’ is dead set on the idea that everything old is bad and everything new is good.
    It is only partially true, of course – we do learn from our mistakes (some of us, at least).
    But people don’t get it and never will. This is a general attitude, not only in software engineering.

    There is nothing to be done about it. It is the natural flow of things, given the flawed human mind and the chase for the big buck (for more food and women in old times – and actually still now; we aren’t changing much) and the general attitude of “I know better and I can do better”.
    This attitude is actually the reason for progress as much as for regress, unfortunately.
    If it were moderated so that ‘smart asses’ weren’t allowed to reinvent bicycles with all sorts of different wheels, we could concentrate on more important problems and not lose time on pointless flaming, conversions, porting code, and so on.

    I could write more on the subject, but I don’t see any reason to.

    1. H2CO3 says:

      > “The new generations of ‘software engineers’ are always dead determined that everything old is bad and new is good.”

      Quite true, sadly.

    2. Angry Bob says:

      Angry Bob says: Yay, Sergey! My point was, and still is: what a waste of time! PS: 20+ years in the industry.

  36. Olumide says:

    It’s been said that “If C++ did not exist, it would have to be invented.”

    I agree, and I’d add that this is just as true for C.

  37. ScottM says:

    “Spaces are for the human eye, while braces are for the compiler.”

    One reason I disagree with Eevee’s post is the same reason I disagree with your statement here. Braces absolutely and unequivocally can make errors easier for humans to detect when used in certain ways. For example, if you place each brace on its own line and utilize consistent indentation, then a missing brace is extremely easy to spot. And since your code won’t even compile with mismatched braces, it’s a matter of tracking the missing one down, which, again, is quite easy — especially if you have small functions. Eevee’s complaint that the compiler shows the error at the end of the file is irrelevant; you know that’s most likely incorrect anyway, and you’ve created an environment that minimizes the time it takes to find the real culprit.

    Indentation issues, however, are a completely different animal. There may not be any obvious visual clue that an indentation error is actually an error. Even worse, the compiler likely won’t notice and you end up with a bug that could be very difficult to track down. Eevee’s primary rebuttal to this is that it rarely happens because people are more careful with indentation in languages like Python. But relying on people to be more careful completely undermines the argument that indentation is objectively better.

    No, I prefer braces precisely because they work so well as a complement to a strong indentation system. C-based languages not only let me do that, but they make it easy to adapt code from others who may not write as cleanly as I do, because respacing code is trivial in most IDEs.
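    The convention described above might look like this (a small sketch using the Allman brace style, where each brace gets its own line and indentation mirrors nesting, so a missing brace immediately breaks the visual pattern):

    ```cpp
    #include <cassert>

    // Allman-style braces: every brace on its own line, indentation matching
    // the nesting depth. A missing brace is visually obvious, and the compiler
    // will refuse to build until it is restored.
    int clamp_to_ten(int x)
    {
        if (x > 10)
        {
            return 10;
        }
        return x;
    }

    int main()
    {
        assert(clamp_to_ten(42) == 10);
        assert(clamp_to_ten(3) == 3);
        return 0;
    }
    ```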

    1. H2CO3 says:

      I don’t even think we disagree here. I too find missing braces easy to spot; it’s just that indentation helps me more. Braces also help me understand the structure of the code.

      > “I prefer braces precisely because they work so well as a complement to a strong indentation system”

      We are on the same side here.

  38. Havard says:

    “…the 40-year-old need for “speed above everything”, and which are consequently obsolete nowadays.”

    Not if you do anything with audio or video – there, it is actually the only viable approach.

    1. H2CO3 says:

      Again, you seem to have misunderstood my words. It’s not that speed is irrelevant nowadays; maybe it’s more important than ever. But the industry has developed zero-cost abstractions that are superior in terms of safety – prime examples are many of the abstractions found in modern C++ and in Rust.
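      As a minimal illustration of what “zero-cost abstraction” means here (a sketch, not anything from the original reply), consider `std::unique_ptr` and `std::accumulate` in modern C++: both compile down to what one would hand-write, while being far harder to misuse:

      ```cpp
      #include <cassert>
      #include <memory>
      #include <numeric>
      #include <vector>

      int main() {
          // unique_ptr adds ownership safety (no leaks, no double-free) with no
          // runtime overhead over a raw pointer plus a manual delete.
          auto p = std::make_unique<int>(42);
          assert(*p == 42);

          // accumulate compiles down to the same loop one would write by hand,
          // but there is no index arithmetic to get wrong.
          std::vector<int> v{1, 2, 3, 4};
          assert(std::accumulate(v.begin(), v.end(), 0) == 10);
          return 0;
      }   // p's memory is released here automatically
      ```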

  39. austo says:

    I rarely comment on blog posts, but it just seems funny to me that people are spending more time and effort debating the validity of a language including pre- and post-fix increment operators than it would take to just learn how they work (http://stackoverflow.com/questions/7031326/what-is-the-difference-between-prefix-and-postfix-operators).

    Is anyone here a better programmer than Dennis Ritchie (or Guido van Rossum)? Certainly not me.

    Maybe these languages are trying to teach us something.
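    For readers who land here before clicking through to the linked Stack Overflow question, the whole difference fits in a few lines (C++ here, but the semantics are the same in C):

    ```cpp
    #include <cassert>

    int main() {
        int i = 5;
        int a = i++;  // postfix: yields the old value, then increments
        assert(a == 5 && i == 6);

        int j = 5;
        int b = ++j;  // prefix: increments first, then yields the new value
        assert(b == 6 && j == 6);
        return 0;
    }
    ```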

  40. […] Let’s Stop Bashing C. Answer on previous blogpost. […]
