100 comments
  • mihaic9m

    When I first heard the maxim that an intelligent person should be able to hold two opposing thoughts at the same time, I was naive to think it meant weighing them for pros and cons. Over time I realized that it means balancing contradictory actions, and the main purpose of experience is knowing when to apply each.

    Concretely related to the topic, I've often found myself inlining short pieces of one-time code that made functions more explicit, while at other times I'll spend days just breaking up thousand line functions into simpler blocks just to be able to follow what's going on. In both cases I was creating inconsistencies that younger developers nitpick -- I know I did.

    My goal in most cases now is to optimize code for the limits of the human mind (my own in low-effort mode) and like to be able to treat rules as guidelines. The trouble is how can you scale this to millions of developers, and what are those limits of the human mind when more and more AI-generated code will be used?

    • tetha9m

      I had exactly this discussion today, in an architectural review of an infrastructure extension. As our newest team member noted, we planned to follow the reference architecture of a system in some places, and chose not to follow it in other places.

      And this led to a really good discussion pulling the reference architecture of this system apart and understanding what it optimizes for (resilience and fault tolerance), what it sacrifices (cost, number of systems to maintain) and what we need. And yes, following the reference architecture in one place and breaking it in another place makes sense.

      And I think that understanding the different options, as well as the optimization goals setting them apart, allows you to make a more informed decision and allows you to make a stronger argument why this is a good decision. In fact, understanding the optimization criteria someone cares about allows you to avoid losing them in topics they neither understand nor care about.

      For example, our CEO will not understand the technical details why the reference architecture is resilient, or why other choices are less resilient. And he would be annoyed about his time being wasted if you tried. But he is currently very aware of customer impacts due to outages. And like this, we can offer a very good argument to invest money in one place for resilience, and why we can save money in other places without risking a customer impact.

      We sometimes follow rules, and in other situations, we might not.

      • mandevil9m

        Yes, and it is the engineering experience/skill to know when to follow the "rules" of the reference architecture, and when you're better off breaking them, that makes someone a senior engineer/manager/architect, whatever your company calls it.

      • jschrf9m

        Your newest team member sounds like someone worth holding onto.

    • ragnese9m

      > My goal in most cases now is to optimize code for the limits of the human mind (my own in low-effort mode) and like to be able to treat rules as guidelines. The trouble is how can you scale this to millions of developers, and what are those limits of the human mind when more and more AI-generated code will be used?

      I think the truth is that we just CAN'T scale that way with the current programming languages/models/paradigms. I can't PROVE that hypothesis, but it's not hard to find examples of big software projects with lots of protocols, conventions, failsafes, QA teams, etc, etc that are either still hugely difficult to contribute to (Linux kernel, web browsers, etc) or still have plenty of bugs (macOS is produced by the richest company on Earth and a few years ago the CALCULATOR app had a bug that made it give the wrong answers...).

      I feel like our programming tools are pretty good for programming in the small, but I suspect we're still waiting for a breakthrough for being able to actually make complex software reliably. (And, no, I don't just mean yet another "framework" or another language that's just C with a fancier type system or novel memory management)

      Just my navel gazing for the morning.

      • twh2709m

        I think the only way this gets better is with software development tools that make it impossible to create invalid states.

        In the physical world, when we build something complex like a car engine, a microprocessor, or a bookcase, the laws of physics guide us and help prevent invalid states. Not all of them -- an upside-down bookcase still works -- but a lot of them.

        Of course, part of the problem is that when we build the software equivalent of an upside down bookcase, we 'patch' it by creating trim and shims to make it look better and more structurally sound instead of tossing it and making another one the right way.

        But mostly, we write software in a way that allows for a ton of incorrect states. As a trivial example, expressing a person's age as an 'int', allowing for negative numbers. As a more complicated example, allowing for setting a coupon's redemption date when it has not yet been clipped.
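
        A minimal sketch of that idea in Python (the `Age` and `Coupon` types here are hypothetical examples, not from any real codebase): the constructors simply refuse to produce the invalid states described above.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional


class CouponState(Enum):
    UNCLIPPED = "unclipped"
    CLIPPED = "clipped"
    REDEEMED = "redeemed"


@dataclass(frozen=True)
class Age:
    """An age that cannot be constructed with a negative value."""
    years: int

    def __post_init__(self) -> None:
        if self.years < 0:
            raise ValueError(f"age cannot be negative: {self.years}")


@dataclass(frozen=True)
class Coupon:
    """A coupon whose redemption date can only exist once it is redeemed."""
    state: CouponState
    redeemed_at: Optional[datetime] = None

    def __post_init__(self) -> None:
        # A redemption date on a non-redeemed coupon is an invalid state,
        # so it is rejected at construction time.
        if (self.redeemed_at is not None) != (self.state is CouponState.REDEEMED):
            raise ValueError("redeemed_at must be set exactly when state is REDEEMED")
```

        With this shape, downstream code never has to re-check those invariants; the constructors have already ruled the bad states out.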

        • bunderbunder9m

          John Backus's Turing Award lecture meditated on this idea, and concluded that the best way to do this at scale is to simply minimize the creation of states in the first place, and be careful and thoughtful about where and how we create the states that can't be avoided.

          I would argue that that's actually a better guide to how we manage complexity in the physical world. Mechanical engineers generally like to minimize the number of moving parts in a system. When they can't avoid moving parts, they tend to fixate on them, and put a lot of effort into creating linkages and failsafes to try to prevent them from interacting in catastrophic ways.

          The software engineering way would be to create extra moving parts just because complicated things make us feel smart, and deal with potential adverse interactions among them by posting signs that say "Careful, now!" without clearly explaining what the reader is supposed to be careful of. 50 years later, people who try to stick to the (very sound!) principles that Backus proposed are still regularly dismissed as being hipsters and pedants.
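
          A toy Python sketch of the contrast (a hypothetical running-balance example, not from Backus's lecture): the one unavoidable piece of state is confined to a single fold over pure functions, rather than spread across several mutable variables.

```python
from functools import reduce


def apply_transaction(balance: float, txn: float) -> float:
    """Pure: the new balance is a function of its inputs and nothing else."""
    return balance + txn


def final_balance(transactions: list) -> float:
    # The only "moving part" is the running balance threaded through the
    # fold; there are no other mutable variables left to interact badly.
    return reduce(apply_transaction, transactions, 0.0)
```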

        • james_marks9m

          To determine what states should be possible is the act of writing software.

      • bluGill9m

        I don't think we will ever get the breakthrough you are looking for. Things like design patterns and abstractions are our attempt at this. Eventually you need to trust that whoever wrote the other code you have to deal with is sane. This assumption is false (and it might be you who is insane, thinking they could/would make it work the way you think it does).

        We will never get rid of the need for QA. Automated tests are great, I believe in them (note that I didn't say unit tests or integration tests). Formal proofs appear great (I have never figured out how to prove my code), but as Knuth said, "Beware of bugs in the above code; I have only proved it correct, not tried it". There are many ways code can meet the spec and yet be wrong, because in the real world you rarely understand the problem well enough to write a correct spec in the first place. QA should understand the problem well enough to say "this isn't what I expected to happen."

      • austin-cheney9m

        I suppose that depends on the language and the elegance of your programming paradigm. This is where primitive simplicity becomes important, because when your foundation is composed of very few things that are not dependent upon each other you can scale almost indefinitely in every direction.

        Imagine you are limited to only a few ingredients in programming: statements, expressions, functions, objects, arrays, and operators that are not overloaded. That list does not contain classes, inheritance, declarative helpers, or a bunch of other things. With a list of ingredients so small no internal structure or paradigm is imposed on you, so you are free to create any design decisions that you want. Those creative decisions about the organization of things is how you dictate the scale of it all.

        Most people, though, cannot operate like that. They claim to want the freedom of infinite scale, but they just need a little help. The more help is supplied by the language, framework, or whatever, the less freedom you have to make your own decisions. Eventually there is so much help that all you do as a programmer is contend with that helpful goodness, without any chance to scale things in any direction.
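
        A sketch of what that stripped-down style can look like in Python (a hypothetical account example): nothing but plain functions, dicts, and lists, with all of the organization left to the author.

```python
# No classes, no inheritance -- just functions operating on plain data.
def make_account(owner: str) -> dict:
    return {"owner": owner, "deposits": []}


def deposit(account: dict, amount: int) -> dict:
    # Return a new dict instead of mutating, so the pieces stay independent.
    return {**account, "deposits": account["deposits"] + [amount]}


def balance(account: dict) -> int:
    return sum(account["deposits"])


# Composition is just function application, in whatever shape you choose.
acct = deposit(deposit(make_account("ada"), 10), 5)
```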

      • DSMan1952769m

        > protocols, conventions, failsafes, QA teams, etc, etc that are either still hugely difficult to contribute to (Linux kernel, web browsers, etc)

        To be fair here, I don't think it's reasonable to expect that once you have "software development skills" it automatically gives you the ability to fix any code out there. The Linux Kernel and web browsers are not hard to contribute to because of conventions, they're hard because most of that code requires a lot of outside knowledge of things like hardware or HTML spec, etc.

        The actual submitting part isn't the easiest, but it's well documented if you go looking, I'm pretty sure most people could handle it if they really had a fix they wanted to submit.

        • ragnese9m

          There are multiple reasons that contributing to various projects may be difficult. But, I was replying to a specific comment about writing code in a way that is easy to understand, and the comment author's acknowledgement that this idea/practice is hard to scale to a large number of developers (presumably because everyone's skills are different and because we each have different ideas about what is "clear", etc).

          So, my comment was specifically about code. Yes, developing a kernel driver requires knowledge of the hardware and its quirks. But, if we're just talking about the code, why shouldn't a competent C developer be able to read the code for an existing hardware driver and come away understanding the hardware?

          And what about the parts that are NOT related to fiddly hardware? For example, look at all of the recent drama with the Linux filesystem maintainer(s) and interfacing with Rust code. Forget the actual human drama aspect, but just think about the technical code aspect: The Rust devs can't even figure out what the C code's semantics are, and the lead filesystem guy made some embarrassing outbursts saying that he wasn't going to help them by explaining what the actual interface contracts are. It's probably because he doesn't even know what his own section of the kernel does in the kind of detail that they're asking for... That last part is my own speculation, but these Rust guys are also competent at working with C code and they can't figure out what assumptions are baked into the C APIs.

          Web browser code has less to do with nitty gritty hardware. Yet, even a very competent C++ dev is going to have a ton of trouble figuring out the Chromium code base. It's just too hard to keep trying to use our current tools for these giant, complex, software projects. No amount of convention or linting or writing your classes and functions to be "easy to understand" is going to really matter in the big picture. Naming variables is hard and important to do well, but at the scale of these projects, individual variable names simply don't matter. It's hard to even figure out what code is being executed in a given context/operation.

      • knodi9m

        > I feel like our programming tools are pretty good for programming in the small, but I suspect we're still waiting for a breakthrough for being able to actually make complex software reliably. (And, no, I don't just mean yet another "framework" or another language that's just C with a fancier type system or novel memory management)

        Readability is a human optimization: for your future self, or for other readers, to aid comprehension of the code. We need a new way to visualize and comprehend code that doesn't involve heavy reading, or depend on the reader's personal capability for syntax parsing and comprehension.

        This is something we will likely never be able to get right with our current man machine interfaces; keyboard, mouse/touch, video and audio.

        Just a thought. As always I reserve the right to be wrong.

        • skydhash9m

          Reading is more than enough. What’s often lacking is the why. I can understand the code and what it’s doing, but I may not understand the problem (and sub-problems) it’s solving. When you can find explanations for that (links to PR discussions, archives of mail threads, and forum posts), it’s great. But some don’t bother, or it’s buried somewhere in chat logs.

      • madisp9m

        The Calculator app on the latest macOS (Sequoia) has a bug today: if you write FF_16 AND FF_16 in programmer mode and press =, it'll display the correct result, FF_16, but the history view displays 0_16 AND FF_16 for some reason.

      • JadeNB9m

        > macOS is produced by the richest company on Earth and a few years ago the CALCULATOR app had a bug that made it give the wrong answers...

        This is stated as if surprising, presumably because we think of a calculator app as a simple thing, but it probably shouldn't be that surprising--surely the calculator app isn't used that often, and so doesn't get much in-the-field testing. Maybe you've occasionally used the calculator in Spotlight, but have you ever opened the app? I don't think I have in 20 years.

        • ragnese9m

          I think this is backwards. A calculator app should be a simple thing. There's nothing undefined or novel about a calculator app. You can buy a much more capable physical calculator from Texas Instruments for less than $100 and I'm pretty sure the CPU in one of those is just an ant with some pen and paper.

          You and I only think it's complex because we've become accustomed to everything being complex when it comes to writing software. That's my point. The mathematical operations are not hard (even the "fancy" ones like the trig functions). Formatting a number to be displayed is also not hard (again, those $100 calculators do it just fine). So, why is it so hard to write the world's 100,000th calculator app that the world's highest paid developers can't get it 100% perfect? There's something super wrong with our situation that it's even possible to have a race condition between the graphical effects and the actual math code that causes the calculator to display the wrong results.

          If we weren't forced to build a skyscraper with Lego bricks, we might stand a better chance.

        • smrq9m

          Constantly, to keep the results of a calculation on screen. It's fallacious to assume that your own usage patterns are common. Hell, with as much evidence as you (none), I would venture that more people use the Calculator app than know that you can type calculations in Spotlight at all.

      • mgsouth9m

        We've been there, done that. CRUD apps on mainframes and minis had incredibly powerful and productive languages and frameworks (Quick, Quiz, QTP: you're remembered and missed). Problem is, they were TUI (terminal UI), isolated, and extremely focused; i.e. limited. They functioned, but would be like straitjackets to modern users.

        (Speaking of... has anyone done a 80x24 TUI client for HN? That would be interesting to play with.)

    • lifeisstillgood9m

      I often bang on about “software is a new form of literacy”. And this, I feel, is a classic example - software is a form of literacy that not only can be executed by a CPU but at the same time is a way to transmit concepts from one human's head to another (just like writing).

      And so asking “will AI generated code help” is like asking “will AI generated blog spam help”?

      No - companies with GitHub copilot are basically asking how do I self-spam my codebase

      It’s great to get from zero to something in some new JS framework, but for your core competency it’s like outsourcing your thinking - always comes a cropper

      (Book still being written)

      • davidw9m

        > is a way to transmit concepts from one human's head to another (just like writing)

        That's almost its primary purpose in my opinion... the CPU does not care about Ruby vs Python vs Rust, it's just executing some binary code instructions. The code is so that other people can change and extend what the system is doing over time and share that with others.

        • rileymat29m

          I get your point, but often the binary code instructions between those are vastly different.

      • debit-freak9m

        I think a lot of the traditional teachings of "rhetoric" can apply to coding very naturally—there's often practically unlimited ways to communicate the same semantics precisely, but how you lay the code out and frame it can make the human struggle to read it straightforward to overcome (or near-impossible, if you look at obfuscation).

      • j7ake9m

        Computational thinking is more important than software per se.

        Computational thinking is mathematical thinking.

    • tomohawk9m

      What makes an apprentice successful is learning the rules of thumb and following them.

      What makes a journeyman successful is sticking to the rules of thumb, unless directed by a master.

      What makes a master successful is knowing why the rules of thumb exist, what their limits are, when to not follow them, and being able to make up new rules.

    • codeflo9m

      There’s also the effect that a certain code structure that’s clearer for a senior dev might be less clear for a junior dev and vice versa.

      • rob749m

        Or rather, senior devs have learned to care more about having clear code than about (over-)applying principles like DRY, separation of concerns etc., while juniors haven't (yet)...

        • JauntyHatAngle9m

          I know it's overused, but I do find myself saying YAGNI to my junior devs more and more often, as I find they go off on a quest for the perfect abstraction and spend days yak shaving as a result.

        • stahorn9m

          You usually learn this when, many years later, you have to go in and fix bugs in the "smart" solutions you thought you'd made.

        • orwin9m

          My 'principle' for DRY is: twice is fine, thrice is worth an abstraction (if you think it has a small to moderate chance of happening again). I used to apply it no matter what, so I guess it's progress...
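
          A toy illustration of that heuristic (with made-up formatting helpers): the second occurrence is tolerable duplication, and the third is the signal to extract.

```python
# Twice: leave it duplicated -- the two call sites can still evolve independently.
def format_user(name: str) -> str:
    return name.strip().title()


def format_city(city: str) -> str:
    return city.strip().title()


# The third occurrence earns the abstraction, so the rule now lives in one place.
def normalize_label(text: str) -> str:
    return text.strip().title()


def format_country(country: str) -> str:
    return normalize_label(country)
```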

        • zeroq9m

          As someone who recently had to go over a large chunk of code written by myself some 10-15 years ago, I strongly agree with this sentiment. Despite being a mature programmer already at that time, I found a lot of magic and gotchas that were supposed to be, and felt at the time, super clever, but now, without the context or a prior version to compare against, they are simply overcomplicated.

        • devjab9m

          I find that it’s typically the other way around as things like DRY, SOLID and most things “clean code” are hopeless anti-patterns peddled by people like Uncle Bob who haven’t actually worked in software development since Fortran was the most popular language. Not that a lot of these things are bad as a principle. They come with a lot of “okish” ideas, but if you follow them religiously you’re going to write really bad code.

          I think the only principle in programming that can be followed at all times is YAGNI (you aren’t going to need it). I think every programming course, book, whatever should start by telling you to never, ever abstract things until you absolutely can’t avoid it. This includes DRY. It’s a billion times better to have similar code in multiple locations that are isolated in their purpose, so that down the line, two hundred developers later, you’re not sitting with code where you’ll need to “go to definition” fifteen times before you get to the code you actually need to find.

          Of course the flip-side is that, sometimes, it’s ok to abstract or reuse code. But if you don’t have to, you should never ever do either. Which is exactly the opposite of what junior developers do, because juniors are taught all these “hopeless” OOP practices and they are taught to mindlessly follow them by the book. Then 10 years later (or like 50 years in the case of Uncle Bob) they realise that functional programming is just easier to maintain and more fun to work with because everything you need to know is happening right next to each other and not in some obscure service class deep in some ridiculous inheritance tree.

        • sgu9999m

          good devs*, not all senior devs have learned that, sadly. As a junior dev I've worked under the rule of senior devs who were over-applying arbitrary principles, and that wasn't fun. Some absolute nerds have a hard time understanding where their narrow expertise is meant to fit, and they usually don't get better with age.

      • kolinko9m

        I bumped into that issue, and it caused a lot of friction between me and 3 young developers I had to manage.

        Ideas on how to overcome that?

        • whstl9m

          Teaching.

          I had this problem with an overzealous junior developer and the solution was showing some different perspectives. For example John Ousterhout's A Philosophy of Software Design.

    • peepee19829m

      That's exactly what I try to do. I think it's an unpopular opinion though, because there are no strict rules that can be applied, unlike with pure ideologies. You have to go by feel and make continuous adjustments, and there's no way to know if you did the right thing or not, because not only do different human minds have different limits, but different challenges don't tax every human mind to the same proportional extent.

      I get the impression that programmers don't like ambiguity in general, let alone in things they have to confront in real life.

      • mr_toad9m

        > there are no strict rules that can be applied

        The rules are there for a reason. The tricky part is making sure you’re applying them for that reason.

        • peepee19829m

          I don't know what your comment has to do with my comment.

    • gspencley9m

      My intro to programming was that I wanted to be a game developer in the 90s. Carmack and the others at Id were my literal heroes.

      Back then, a lot of code optimizations were magic to me. I still just barely understand the famous inverse square root optimization in the Quake III Arena source code. But I wanted to be able to do what those guys were doing. I wanted to learn assembly, and to be able to drop down to assembly, and to know where and when that would help and why.

      And I wasn't alone. This is because these optimizations are not obvious. There is a "mystique" to them. Which makes it cool. So virtually ALL young, aspiring game programmers wanted to learn how to do this crazy stuff.

      What did the old timers tell us?

      Stop. Don't. Learn how to write clean, readable, maintainable code FIRST and then learn how to profile your application in order to discover the major bottlenecks and then you can optimize appropriately in order of greatest impact descending.

      If writing the easiest code to maintain and understand also meant writing the most performant code, then the concept of code optimization wouldn't even exist. The two are mutually exclusive, except in specific cases where they're not, and then it's not even worth discussing because there is no conflict.

      Carmack seems to acknowledge this in his email. He realizes that inlining functions needs to be done with careful judgment, and the rationale is both performance and bug mitigation. But that if inlining were adopted as a matter of course, a policy of "always inline first", the results would quickly be an unmaintainable, impossible to comprehend mess that would swing so far in the other direction that bugs become more prominent because you can't touch anything in isolation.

      And that's the bane of software development: touch one thing and end up breaking a dozen other things that you didn't even think about because of interdependence.

      So we've come up with design patterns and "best practices" that allow us to isolate our moving parts, but that has its own set of trade-offs which is what Carmack is discussing.

      Being a 26 year veteran in the industry now (not making games btw), I think this is the type of topic that you need to be very experienced to be able to appreciate, let alone to be able to make the judgment calls to know when inlining is the better option and why.

    • skummetmaelk9m

      That doesn't seem like holding two opposing thoughts. Why is balancing contradictory actions to optimize an outcome different to weighing pros and cons?

      • mihaic9m

        What I meant to say was that when people encounter contradictory statements like "always inline one-time functions" and "break down functions into easy-to-understand blocks", they try to pick one single rule, even if they consider the pros and cons of each.

        After a while they consider both rules useful, and will move to a more granular case-by-case analysis. Some people get stuck at rule-based thinking, though, and they'll even accuse you of being inconsistent if you try to do case-by-case analysis.

    • leoh9m

      You are probably reaching for Hegel’s concept of dialectical reconciliation

      • mihaic9m

        Not sure, didn't Hegel say that there should be a synthesis step at some point? My view is that there should never be a synthesis when using these principles as tools, as both conflicting principles need to always maintain opposites.

        So, more like Heraclitus's union of opposites maybe if you really want to label it?

        • greenie_beans9m

          the synthesis would be the outcome maybe? writing code that doesn't follow either rule strictly:

          > Concretely related to the topic, I've often found myself inlining short pieces of one-time code that made functions more explicit, while at other times I'll spend days just breaking up thousand line functions into simpler blocks just to be able to follow what's going on. In both cases I was creating inconsistencies that younger developers nitpick -- I know I did.

    • hnuser1234569m

      On a positive note, most AI-gen code will follow a style that is very "average" of everything it's seen. It will have its own preferred way of laying out the code that happens to look like how most people using that language (and sharing the code online publicly), use it.

    • SoftTalker9m

      > other times I'll spend days just breaking up thousand line functions into simpler blocks just to be able to follow what's going on

      Absolutely, I'll break up a long block of code into several functions, even if there is nowhere else they will be called, just to make things easier to understand (and potentially easier to test). If a function or procedure does not fit on one screen, I will almost always break it up.

      Obviously "one screen" is an approximation, not all screens/windows are the same size, but in practice for me this is about 20-30 lines.

    • JamesBarney9m

      My go-to heuristic for how to break up code: whiteboard your solution (or draw it up in Lucidchart) as if explaining it to another dev. If your methods don't match the whiteboard, refactor.

    • mjburgess9m

      To a certain sort of person, conversation is a game of arriving at these antithesis statements:

         * Inlining code is the best form of breaking up code. 
         * Love is evil.
         * Rightwing populism is a return to leftwing politics. 
         * etc.
      
      
      The purpose is to induce aporia (puzzlement), and hence make it possible to evaluate apparent contradictions. However, a lot of people resent feeling uncertain, and so, people who speak this way are often disliked.

    • j7ake9m

      To make an advance in a field, you must simultaneously believe in what’s currently known as well as distrust that the paradigm is all true.

      This gives you the right mindset to focus on advancing the field in a significant way.

      Believing in the paradigm too much will lead to only incremental results, and not believing enough will not provide enough footholds for you to work on a problem productively.

    • defaultcompany9m

      > My goal in most cases now is to optimize code for the limits of the human mind (my own in low-effort mode)

      I think you would appreciate the philosophy of the Grug Brained Developer: https://grugbrain.dev

    • xnx9m

      > I was creating inconsistencies that younger developers nitpick

      Obligatory: “A foolish consistency is the hobgoblin of little minds"

      Continued because I'd never read the full passage: "... adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do. He may as well concern himself with his shadow on the wall. Speak what you think now in hard words, and to-morrow speak what to-morrow thinks in hard words again, though it contradict every thing you said to-day. — 'Ah, so you shall be sure to be misunderstood.' — Is it so bad, then, to be misunderstood? Pythagoras was misunderstood, and Socrates, and Jesus, and Luther, and Copernicus, and Galileo, and Newton, and every pure and wise spirit that ever took flesh. To be great is to be misunderstood.” ― Ralph Waldo Emerson, Self-Reliance: An Excerpt from Collected Essays, First Series

    • grbsh9m

      > limits of the human mind when more and more AI-generated code will be used

      We already have a technology which scales infinitely with the human mind: abstraction and composition of those abstractions into other abstractions.

      Until now, we’ve focused on getting AI to produce correct code. Now that this is beginning to be successful, I think a necessary next step for it to be useful is to ensure it produces well-abstracted and clean code (such that it scales infinitely).

    • hinkley9m

      That’s undoubtedly a Zelda Fitzgerald quote (her husband plagiarized her shamelessly).

      As a consequence of the Rule of Three, you are allowed to have rules that have one exception without having to rethink the law. All X are Y except for Z.

      I sometimes call this the Rule of Two. Because it deserves more eyeballs than just being a subtext of another rule.

    • hibernator1499m

      Wait, isn't that just Doublethink from 1984? Holding two opposing thoughts is a sign that your mental model of the world is wrong and that it needs to be fixed. Where have you heard that maxim?

      • perrygeo9m

        No you've got it completely backwards. Reality has multiple facets (different statements, all of which can be true) and a mental model that insists on a singular judgement is reductionist, missing the forest for the trees. Light is a wave and a particle. People are capable of good and bad. The modern world is both amazing and unsustainable. etc.

        Holding multiple truths is a sign that you understand the problem. Insisting on a singular judgement is a sign that you're just parroting catchy phrases as a shortcut to thinking; the real world is rarely so cut and dried.

      • HKH29m

        It's not referring to cognitive dissonance.

  • ninetyninenine9m

    His overall solution, highlighted in the intro, is that he's moved on from inlining and now does pure functional programming. Inlining is only relevant for him during IO or state changes, which he does as minimally as possible and segregates from his core logic.

    Pure functional programming is the bigger insight here that most programmers will just never understand why there's a benefit there. In fact most programmers don't even completely understand what FP is. To most people FP is just a bunch of functional patterns like map, reduce, filter, etc. They never grasp the true nature of "purity" in functional programming.

    You see this lack of insight in this thread. Most responders literally ignore the fact that Carmack called his email completely outdated and that he mostly does pure FP now.

  • VyseofArcadia9m

    > That was a cold-sweat moment for me: after all of my harping about latency and responsiveness, I almost shipped a title with a completely unnecessary frame of latency.

    In this era of 3-5 frame latency being the norm (at least on e.g. the Nintendo Switch), I really appreciate a game developer having anxiety over a single frame.

  • gorgoiler9m

    > Inlining functions also has the benefit of not making it possible to call the function from other places.

    I’ve really gone to town with this in Python.

      def parse_news_email(…):
        def parse_link(…):
          …
    
        def parse_subject(…):
          …
    
        …
    
    If you are careful, you can rely on the outer function’s variables being available inside the inner functions as well. Something like a logger or a db connection can be passed in once and then used without having to pass it as an argument all the time:

      # sad
      def f1(x, db, logger): …
      def f2(x, db, logger): …
      def f3(x, db, logger): …
      def g(xs, db, logger):
        for x0 in xs:
          x1 = f1(x0, db, logger)
          x2 = f2(x1, db, logger)
          x3 = f3(x2, db, logger)
          yield x3
    
    
      # happy
      def g(xs, db, logger):
        def f1(x): …
        def f2(x): …
        def f3(x): …
        for x in xs:
          yield f3(f2(f1(x)))
    
    Carmack commented his inline functions as if they were actual functions. Making actual functions enforces this :)

    Classes and “constants” can also quite happily live inside a function but those are a bit more jarring to see, and classes usually need to be visible so they can be referred to by the type annotations.

  • BenoitEssiambre9m

    Here are some information theoretic arguments why inlining code is often beneficial:

    https://benoitessiambre.com/entropy.html

    In short, it reduces scope of logic.

    The more logic you have broken out to wider scopes, the more things will try to reuse it before it is designed and hardened for broader use cases. When this logic later needs to be updated or refactored, more things will be tied to it and the effects will be more unpredictable and chaotic.

    Prematurely breaking out code is not unlike using a lot of global variables instead of variables with tighter scopes. It's more difficult to track the effects of change.
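    As a rough sketch of the tighter-scope idea (hypothetical names, not from the linked article): a helper with exactly one caller can live inside that caller, where nothing else can grow a dependency on it before it has been designed for broader use:

```python
# A helper with exactly one caller can live inside that caller:
# nothing else can call it, so refactoring it later only affects
# this one function.
def summarize_orders(orders):
    def order_total(order):  # local scope: invisible to the rest of the module
        return sum(item["price"] * item["qty"] for item in order["items"])

    return {order["id"]: order_total(order) for order in orders}

orders = [
    {"id": 1, "items": [{"price": 2.0, "qty": 3}]},
    {"id": 2, "items": [{"price": 5.0, "qty": 1}, {"price": 1.0, "qty": 4}]},
]
print(summarize_orders(orders))  # {1: 6.0, 2: 9.0}
```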

    There's more to it. Read the link above for the spicy details.

  • dang9m

    Related:

    John Carmack on Inlined Code - https://news.ycombinator.com/item?id=39008678 - Jan 2024 (2 comments)

    John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=33679163 - Nov 2022 (1 comment)

    John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=25263488 - Dec 2020 (169 comments)

    John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=18959636 - Jan 2019 (105 comments)

    John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=14333115 - May 2017 (2 comments)

    John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=12120752 - July 2016 (199 comments)

    John Carmack on Inlined Code - https://news.ycombinator.com/item?id=8374345 - Sept 2014 (260 comments)

  • dehrmann9m

    Always read older stuff from Carmack remembering the context. He made a name for himself getting 3D games to run on slow hardware. The standard advice of write for clarity first, make sure algorithms have reasonable runtimes, and look at profiler data if it's slow is all you need 99% of the time.

  • low_tech_love9m

    Interesting: this is a 2014 post from Jonathan Blow reproducing a 2014 comment by John Carmack reproducing a 2007 e-mail by the same Carmack reproducing a 2006 conversation (I assume also via e-mail) he had with a Henry Spencer reproducing something else the same Spencer read a while ago and was trying to remember (possibly inaccurately?).

    I wonder what the actual original source is (from Saab, maybe?), and whether it indeed holds true.

  • donatj9m

    I have a coworker who LOVES to make these one- or two-line single-use functions, and it absolutely drives me nuts.

    Just from a sheer readability perspective being able to read a routine from top to bottom and understand what everything is doing is invaluable.

    I have thought about it many times, I wish there was an IDE where you could expand function calls inline.

  • adamrezich9m

    I find that when initially exploring a problem space, it's useful to consider functions as “verbs” to help me think through the solution, and that feels useful in helping me figure out a solution to my problem—I've isolated some_operation() into its own function, and it's easy to see at a glance whether or not some_operation() does the specific thing its name claims to do (and if so, how well).

    But then after things have solidified somewhat, it's good practice to go back through your code and determine whether those “verbs” ended up being used more than once. Quite often, something that I thought would be repeated enough to justify being its own function, is actually only invoked in one specific place—so I go back and inline these functions as needed.

    The less my code looks like a byzantine tangle of function invocations, and the more my code reads like a straightforward list of statements to execute in order, the better it makes me feel, because I know that I'm not unnecessarily hiding complexity, and I can get a better, more concrete feel for what my program's execution looks like.

  • Cthulhu_9m

    I feel like this style is also encouraged in Go and/or the clean/onion architecture / DDD, to a point, where the core business logic can and should be a string of "do this, then do that, then do that" code. In my own experience I've only had a few opportunities to do so (most of my work is front-end, which is a different thing entirely).

    One was application initialisation: create the logger, then connect to the database, then if needed initialize/migrate it, then if needed load test data. Then create the core domain services that use the database connection. Then create the HTTP handlers that interface with the domain services. Then start the HTTP server. Then listen for an end-process command and shut down gracefully.

    The other was pure business logic (read the database, transform, write to file, but "database" and "file" were abstract concepts that could be swapped out easily). You don't really get that in front-end programming though; it's all event driven.

  • torginus9m

    "Typically I am there to rail against the people that talk about using threads and an RTOS for such things, when a simple polled loop that looks like a primitive video game is much more clear and effective. "

    Yess, I finally feel vindicated. I've been having this argument with embedded people since forever. I was of the opinion that if million line big boy PC apps can make do with just one thread, having fifteen threads and synchronizing between them using mutexes and condition variables on a microcontroller with 64kb RAM is just bonkers.

    For some reason, the statement that a while(true) loop + ISRs + DMA can do everything an RTOS like FreeRTOS can do, can rile up embedded folks to no end.
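    A toy model of that pattern (sketched in Python for brevity; on real hardware the flag-setters would be ISRs and the loop would poll forever): interrupt handlers only set flags, and a single loop does all the work in a fixed order.

```python
# Toy model of a polled "superloop": interrupt handlers only set flags,
# and the single main loop does all the work in a fixed order.
flags = {"uart_rx": False, "timer_tick": False}
events_handled = []

def isr_uart():   # on a microcontroller this would run in interrupt context
    flags["uart_rx"] = True

def isr_timer():
    flags["timer_tick"] = True

def superloop(iterations):
    # In firmware this would be `while (true)`; bounded here so it terminates.
    for _ in range(iterations):
        if flags["uart_rx"]:
            flags["uart_rx"] = False
            events_handled.append("rx")
        if flags["timer_tick"]:
            flags["timer_tick"] = False
            events_handled.append("tick")

isr_timer()
superloop(1)
isr_uart()
isr_timer()
superloop(1)
print(events_handled)  # ['tick', 'rx', 'tick']
```

    No mutexes, no condition variables: the loop is the only writer of application state, so there is nothing to synchronize.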

  • otikik9m

    > I have gotten much more bullish about pure functional programming, even in C/C++ where reasonable: (link)

    The link is no longer valid, I believe this is the article in question:

    https://www.gamedeveloper.com/programming/in-depth-functiona...

  • djha-skin9m

    This largely concurs with clean architecture[1], especially considering his foreword containing hindsight.

    Clean architecture can be summarized thusly:

    1. Bubble up mutation and I/O code.

    2. Push business logic down.

    This is how it's stated in [1]:

    > The concentric circles represent different areas of software. In general, the further in you go, the higher level the software becomes. The outer circles are mechanisms. The inner circles are policies.

    Inlining as a practice is in service of #1, while factoring logic into pure functions addresses #2, noted in the foreword:

    > The real enemy addressed by inlining is unexpected dependency and mutation of state, which functional programming solves more directly and completely. However, if you are going to make a lot of state changes, having them all happen inline does have advantages; you should be made constantly aware of the full horror of what you are doing. When it gets to be too much to take, figure out how to factor blocks out into pure functions (and don't let them slide back into impurity!).

    1: https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-a...
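    A minimal sketch of the two rules (hypothetical names; not taken from [1]): the business logic sits at the bottom as a pure function, and all I/O is injected from the outermost layer.

```python
# Rule 2: business logic pushed down into a pure function --
# no I/O, no mutation of its inputs, trivially testable.
def apply_discount(prices, rate):
    return [round(p * (1 - rate), 2) for p in prices]

# Rule 1: mutation and I/O bubbled up to the outermost layer,
# passed in as callables so the core never touches them directly.
def main(read_prices, write_prices, rate=0.1):
    prices = read_prices()                       # I/O at the edge
    write_prices(apply_discount(prices, rate))   # pure core in the middle

out = []
main(read_prices=lambda: [10.0, 25.0], write_prices=out.extend)
print(out)  # [9.0, 22.5]
```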

  • physicsguy9m

    I think when developing something from scratch, it's actually not a terrible strategy to do this and pick out boundaries when they become clearer. Creating interfaces that make sense is an art, not a science.

  • nuancebydefault9m

    > The function that is least likely to cause a problem is one that doesn’t exist, which is the benefit of inlining it.

    I think that summarizes the case pro inlining.

  • exodust9m

    For some reason this quote by Carmack stands out for me:

    > "it is often better to go ahead and do an operation, then choose to inhibit or ignore some or all of the results, than try to conditionally perform the operation."

    I'm not the audience for this topic; I do javascript from a designer-dev perspective. But I get in the weeds sometimes, maxing out my abilities and getting bogged down by conditional logic. I like his quote; it feels liberating... "just send it all for processing and cherry-pick the results". Lightbulb moment.
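    One way to read that quote, as a minimal sketch (the brightness example is hypothetical): compute unconditionally for every input, then inhibit the results you don't want, so the same code path always runs.

```python
def brightness(pixel):
    r, g, b = pixel
    return (r + g + b) / 3

pixels = [(255, 255, 255), (10, 10, 10), (90, 210, 60)]
threshold = 50

# "Do always": perform the operation for every pixel...
all_results = [brightness(p) for p in pixels]
# ...then inhibit/ignore the results below the threshold.
kept = [v for v in all_results if v >= threshold]
print(kept)  # [255.0, 120.0]
```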

  • wruza9m

    I wish languages had the following:

      let x = block {
         …
         return 5
      } // x == 5
    
    And the way to mark copypaste, e.g.

      common foo {
        asdf(qwerty(i+j));
        printf("%p", write);
        bar();
      }
      …(repeats verbatim 20 times)…
      …
      common foo {
        asdf(qwerty(i+k));
        printf("%d", (int)write); // cast to int
        bar();
      }
      …
    
    And then you could `mycc diff-common foo` and see:

      <file>:<line>: common
      <file>:<line>: common
      …
      <file>:<line>:
        @@…@@
        -asdf(qwerty(i+j));
        +asdf(qwerty(i+k));
        @@…@@
        -printf("%p", write);
        +printf("%d", (int)write); // cast to int
    
    With this you can track named common blocks (which allows using surrounding context like i, j, k) without them being functions and subject to the functional entanglement $subj discusses. Most common code gets found out, and divergences stand out. IDE support for immediate highlighting, snippeting and auto-common-ing similar code would be very nice.

    Multi-patching common parts with easily reviewing the results would also be great. Because the bugs from calling a common function arise from the fact that you modify it and it suddenly works differently for some context. Well, you can comment a common block as fragile and then ignore it while patching:

      common foo {
        // @const: modified and fragile!
        …
      }
    
    You still see differences, but it isn't included in a multi-patch dialog.

    Not expecting it to appear anywhere though, features like that are never considered. Maybe someone interested can feature it in circles? (without my name associated)

  • kazinator9m

    In my opinion, there is value in functions that have only one caller: it's called functional decomposition. The right granularity of functional decomposition can make the logic easier to understand.

    To prevent unintended uses of a helper function in C, you can make it static. Then at least nothing from outside of that translation unit can call it.

  • rcv9m

    > The fly-by-wire flight software for the Saab Gripen (a lightweight fighter) went a step further...

    I would love to hear some war stories about the development of flight software. A lot of it is surely classified, but I'm fascinated by how those systems are put together.

  • IshKebab9m

    I think the major problem with this is scope. Now a variable declared at the top of your function is in scope for the entire function.

    Limiting scope is one of the best tools we have to prevent bugs. It's one reason why we don't just use globals for everything.

  • endlessmike899m

    Link to the Wayback Machine cache/mirror, in case you're also experiencing a "Bad Gateway/Connection refused" error

    https://web.archive.org/web/20241009062005/http://number-non...

  • roeles9m

    > No bug has ever been found in the “released for flight” versions of that code.

    I thought that at least his crash was a result of bad constants in flight software: https://www.youtube.com/watch?v=SWZLmVqNaQc

    The first comment appears to agree with me.

  • lencastre9m

    I’m not even pretending I understood Carmack’s email/mailing list post, but if more intelligent/experienced programmers than me care to help me out, what exactly is meant by this, which he wrote in 2007:

    _If a function is called from multiple places, see if it is possible to arrange for the work to be done in a single place, perhaps with flags, and inline that._

    Thanks,

  • easeout9m

    Come to think of it, the execute-and-inhibit style described here is exactly what's going on when, in continuous deployment, you run the same pipeline many times a day with small changes and gate new development behind feature flags. We're familiar with the confidence derived from frequently repeating the whole job.

  • sylware9m

    I have been super picky about what JC says since he moved the id engine from plain and simple C99 to C++.

  • 9m
    [deleted]
  • shortrounddev29m

    Can someone explain what inlined means here? It was my assumption that the compiler will automatically inline functions and you don't need to do it explicitly. Unless it means something else in this context

  • rossant9m

    (2014)

  • fabiensanglard9m

    How does a program work when "backward branches" are disallowed? Same thing with "subroutine calls": how do you structure a program without them?

  • randomtoast9m

    My browser says "The connection to number-none.com is not secure". Guess it is only a matter of time until HTTPS becomes mandatory.

  • 9m
    [deleted]
  • ydnaclementine9m

    > do always, then inhibit or ignore strategy

    can anyone expound on this? I'm not sure what he's exactly referring to here

  • 9m
    [deleted]
  • Ono-Sendai9m

    There is actually a major problem with long functions: they take a long time to compile, because compilation time grows superlinearly with function length. In other words, breaking up a large function into smaller functions can greatly reduce compile times.

  • gdiamos9m

    How much of this is specific to control loops that execute at 60hz?

  • jjallen9m

    One benefit that I can think of for inlined code is the ability to "step" through each time step/tick/whatever and debug the state at each step of the way.

    And one drawback I can think of is that when there are more than something like ten variables, finding a particular variable's value in an IDE debugger gets pretty difficult. It's at this point that I would use "watches", at least in JetBrains's IDEs.

    But then yeah you can also just log each step in a custom way verifying the key values are correct which is what I am doing as we speak.

  • rickreynoldssf9m

    The clean code people are losing their collective minds reading that. lol

  • oglop9m

    Oh good, a FP post. I love watching people argue over nothing.

    Here’s the actual rule: do what works and ships. Don’t posture. Don’t lament. Don’t idealize. Just solve the fucking problem with the tool and method that fits and move on.

    And do not try to use this comment thread to understand FP. Too many cooks, and most of them are condescending douchebags. Go look at Wikipedia or talk with an AI about it. Don’t ask this place; it’s all just lectures and nitpicks.

  • mellosouls9m

    (2014)

    Ten years ago - a long time in coding.

  • Myrmornis9m

    This is the real religious war among programmers -- it's a genuinely consequential question: someone who favors abstraction and modularity is going to absolutely hate working in a codebase with pervasively inlined code.

    It's clear that Carmack's article is addressing a particular sort of C++ codebase that might be familiar to game developers, but isn't familiar to a lot of us here who work on web applications and backend distributed systems. His "functions" aren't really what we think of as functions: they're clearly mutating huge amounts of global state. They sound more like highly undisciplined methods on large namespaces. You can see that from the following quotes:

    > There might be a FullUpdate() function that calls PartialUpdateA(), and PartialUpdateB(), but in some particular case you may realize (or think) that you only need to do PartialUpdateB(), and you are being efficient by avoiding the other work. Lots and lots of bugs stem from this. Most bugs are a result of the execution state not being exactly what you think it is.

    > if a function only references a piece or two of global state, it is probably wise to consider passing it in as a variable.

    In the world of many people here, i.e. away from Carmack's C++ game dev codebases of the 2000s with huge amounts of global mutable state, the standard common sense still applies: we invented structured programming with functions for profoundly important reasons: modularity and abstraction. Those reasons haven't gone away; use functions.

    - In a large codebase you do not need or want to read the full tree of implementation in one go. Use functions: they have return types; you know what they do. A substantial piece of implementation should be written as a sequence of calls to subfunctions with very carefully chosen names that serve as documentation in themselves.

    - Make your functions as pure as possible subject to performance considerations etc.

    - This brings a huge advantage to helper functions over inlining: it is now easy to see which variables in the top-level function are being mutated.

    - The implementation is much harder to understand in a single function with 10 mutable variables, than in two functions with 5 mutable variables. I think ultimately that's just a fact of combinatorics; not something we can hold opinions about.

    - But sure, if the 10 mutable variables cannot be decomposed into two independent modules then don't create spurious functions.

    - A separate function is testable; a block inside a function is not. It wasn't really clear that the sort of test suites that many of us here work with were part of Carmack's codebases at all!

    - It is absolutely fine to use a function if it improves modularity / readability even if it only called once.
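    The point about helpers making mutation visible can be sketched like this (hypothetical example): when the helper is pure, the only state changes in the caller appear as assignments at the call sites, so a reader never has to scan the helper's body to learn what got mutated.

```python
# Pure helper: everything it needs comes in as arguments, everything
# it produces comes out as the return value -- no hidden mutation.
def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def process(raw):
    cleaned = [v for v in raw if v is not None]   # the only place `cleaned` is set
    scaled = normalize(cleaned)                   # the only place `scaled` is set
    return scaled

print(process([0, None, 5, 10]))  # [0.0, 0.5, 1.0]
```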

  • atulvi9m

    who read this in John Carmack's voice?

  • 9m
    [deleted]