104 comments
  • AnarchismIsCool1y

    Some concepts that you'll need to get familiar with:

    Real time operating systems. Less fancy than they sound, but the devil is in the details. Robots need things to happen at a certain speed and at the right time, so we have a type of scheduler (which can be patched into the Linux kernel) that sacrifices absolute throughput to try to guarantee tasks start inside a particular window. Funnily enough, if you've done game development and recognize that everything needs to happen inside 1/60th of a second or better, you already know some of the hard parts here.
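
    To make "guarantee tasks start inside a particular window" concrete: on a Linux box with the PREEMPT_RT patch, a control loop typically asks for the real-time FIFO scheduling class. A minimal sketch (the priority value 80 is an arbitrary assumption; real systems budget priorities carefully):

        #include <sched.h>
        #include <cstdio>

        int main() {
            // Ask the kernel to run this process under SCHED_FIFO, the
            // fixed-priority real-time policy (needs root or CAP_SYS_NICE).
            sched_param sp{};
            sp.sched_priority = 80;
            if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
                std::perror("sched_setscheduler");
                return 1;
            }
            // ... the control loop that must hit its deadlines goes here ...
        }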

    Memory mapped addresses. C is scary but ultimately fairly simple. Once you get the hang of doing silly things with pointers and arrays, the next step is dealing with microcontrollers. You probably wonder how they do anything without an operating system, and the answer is memory mapped IO. They have a fully flat memory space, starting at 0x0 and going up from there. That space usually contains basically everything: your stack, heap, flash storage, and all the peripherals like GPIO, I2C, SPI, serial, and so on. You can literally do things like volatile int* x = (volatile int*)0x12345678; *x = 0x1; to turn on an LED, because the peripheral hardware decodes writes to that address and sets the output state.
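
    Spelled out a little less compressed, that LED example looks like the sketch below. The address and bit are made up for illustration; on a real part you'd pull them from the reference manual (e.g. an STM32 GPIO output data register):

        #include <cstdint>

        // Hypothetical address of a GPIO output data register.
        constexpr std::uintptr_t GPIO_ODR = 0x40020014;

        int main() {
            // volatile tells the compiler every access has a side effect,
            // so the write to the register is never optimized away.
            auto* odr = reinterpret_cast<volatile std::uint32_t*>(GPIO_ODR);
            *odr |= (1u << 5);  // set bit 5: the pin goes high, LED turns on
        }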

    There's a ton of other stuff, but these are the gateways to understanding the space you're dealing with at a basic level.

    • iamcreasy1y

      Ah, I remember Casey Muratori talking about having a desktop OS where everything is memory mapped.

      I remember reading a book that explains the low level Arduino schematic paired with C code that does exactly what you are describing. Is there any such book for modern microcontrollers?

      • AnarchismIsCool1y

        https://www.st.com/resource/en/programming_manual/pm0214-stm...

        I'm only half kidding, the programming manuals are pretty good these days. Honestly there's more in common between an Arduino and a modern uC than there is different. If you want to use the knowledge in anger you need to learn linker scripts, makefiles, gcc, gdb, and JTAG so you can deploy and debug code on boards that don't have self-flashing bootloaders and FTDI baked in. There's also some tribal knowledge about ISRs, like keeping them as short as possible (no print statements), but I don't know which books cover that.
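
        The "keep ISRs short" advice usually cashes out as: set a flag in the handler, do the slow work in the main loop. A sketch of the pattern (the vector name below is hypothetical; it varies by toolchain and MCU):

            #include <cstdint>

            volatile bool button_pressed = false;  // shared with the ISR

            extern "C" void EXTI0_IRQHandler() {   // hypothetical vector name
                button_pressed = true;             // no printf, no malloc
            }

            int main() {
                for (;;) {
                    if (button_pressed) {
                        button_pressed = false;
                        // ... handle the event at leisure here ...
                    }
                }
            }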

  • msadowski1y

    My journey is a little bit the opposite of the author's: I studied robotics and taught myself programming to create software for robots. I have two observations that may be useful to people making the software->robotics journey.

    * Agile for software-hardware is hard if not impossible. For software alone it can be reasonable, but it's really hard to iterate on both hardware and software at the same time. It's easier if the hardware design is locked or the process is waterfall on the hardware side.

    * I often found that people who come from pure software to robotics don't have controls experience, and something that could easily be solved with a PID controller ends up as custom code that's way more complicated than needed (see the sketch below).
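
    For reference, a whole textbook PID controller is about a dozen lines. A bare sketch; the gains and the 100 Hz loop rate below are assumptions you'd tune per system:

        struct Pid {
            double kp, ki, kd;      // gains, tuned per system
            double integral = 0.0;
            double prev_error = 0.0;

            // Call at a fixed rate with the loop period dt in seconds.
            double update(double setpoint, double measured, double dt) {
                double error = setpoint - measured;
                integral += error * dt;
                double derivative = (error - prev_error) / dt;
                prev_error = error;
                return kp * error + ki * integral + kd * derivative;
            }
        };

        // e.g. at 100 Hz: Pid pid{1.0, 0.1, 0.05};
        //                 double cmd = pid.update(target, measured, 0.01);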

    • fake-name1y

      I'd go one further and say that agile in hardware is fundamentally impossible, at least in the way people commonly practice agile.

      If you have any custom hardware, you are basically stuck with a turnaround time of at minimum a day or two for any change (often more; weeks-plus for new PCBs is common if you're not throwing huge amounts of money at people).

      In this context, any process that depends on rapid small iterations is basically impossible, because each iteration just takes too much time.

      • chillingeffect1y

        I've done firmware for a long time. Platforms vary considerably and most have surprises. Debugging facilities are limited. Reference examples are sparse, and ChatGPT hasn't got much info on niche OSes like Zephyr and SafeRTOS, beyond some FreeRTOS.

        Pieces of code interact more heavily than on a Linux machine. Testing requires more hand-holding/baby-sitting. Cross-platform architectures don't scale down well. There are many types of comms buses with no/few standard embeddings.

        Some teams know how to make it a lot easier. Some CTOs know this, but most find out the hard way. Embedded practices lag webdev by 5-10 or more years because they were good enough for a long time, or for small projects. Expectations are rising, but there is less leverage than adtech, so salaries are OK but not explosive.

        • jsjohnst1y

          > Reference examples are sparse and chatgpt hasnt got much info

          Chicken, meet egg. If a web crawler can’t find plenty of reference examples, then LLMs trained on web crawled data aren’t going to be very useful.

      • nostrademons1y

        I wonder (as someone who's basically always been in the pure software land) if the way to get around this is to overbuild your hardware prototype. Throw on more sensors, actuators, and motors than you actually need, and parameterize the physical properties of the hardware (like mass, power, and center of gravity). Then do your iterations in software. You can always ignore a signal coming from a sensor, but it takes weeks to add a new sensor and get a new prototype. So work out all the behavior in software, where you can iterate on a minutes -> hours timetable.

        Then once you know how it works and have done most of your optimizations, you can trim down the BOM and get rid of hardware that proved useless. You'd probably want another round of tuning at this point, but at least you know basically how it works, and have tried out many different combinations of hardware in your software iterations.

        • calamari40651y

          It's unfortunately not that simple. For most parts you can get breakout boards or dev kits that implement the chip and all its support circuitry. You can drop those into a simple PCB or just a breadboard and get going pretty quick. This is appallingly expensive for more than a handful of prototypes, but it does work. IME, this is how software people approach electronics.

          The real trouble comes during the next step. You have to integrate all these disparate boards into one unit. You can copy schematics and layouts from the dev boards, but that takes a lot of time. Once you've done that, you need to re-validate your entire stack, electronics, firmware, and software. You will inevitably find that you've missed a connection or the routing introduced some noise or you got this component value wrong. So you spin a new board, wait a week, and do it all over again several more times.

          Debugging electronics is a lot like debugging software. Except that the code is physical objects and the bugs are emergent properties of the whole system and are inherently chaotic. You (generally) can't step through the system like you would with code, you can only observe a small number of nodes in the system at any one time.

          It's harder than you expect because you're dealing with invisible forces and effects of the fundamental laws of physics. It requires either very deep knowledge or a lot of time in iterations to solve some of these problems.

          • JoeCortopassi1y

            To expand on this, if software were like hardware:

                - when your function calls don't have enough white space between them, they'll sometimes mix up their return values (crosstalk)
                - the more deeply your if/else statements are nested, the more random their results end up being (voltage/IR drop)
                - when the user makes a selection, your bound function is called somewhere between once and a dozen times, each time with a different value (bounce)
                - there is no `true` or `false`. You are just given a float between zero and one, where one is true and zero is false. Sometimes `true` is as low as 0.3 and sometimes `false` is as high as 0.7
                - the farther apart your variable declaration is from its use, the less it represents what you actually declared. If it's two lines apart, a 'foo' string is still 'foo'. A hundred lines apart, though, and 'foo' might become 5.00035 (attenuation)
            • calamari40651y

              If you want your program to execute as fast as possible, you have to worry about the speed of light! Signals travel roughly 15 cm per nanosecond in FR4, so a trace a few millimeters longer than its partner adds tens of picoseconds of skew, which at gigabit rates can easily corrupt your data!

              And don't forget that you have to balance the physical shape and arrangement of components and traces with frequency and surrounding components otherwise you've created a transmitter spewing out noise at tens of MHz.

              Or the corollary: if you aren't careful you can receive radio signals that will corrupt your data.

              Oh, you think your wire can handle the 0.5A your widget needs? Let me tell you about transients that spike to tens of amps for a few hundred nanoseconds. But it's okay, that problem can be solved with a bit of trigonometry.

              On the plus side, if you forget to connect your ADC to something, you now have a surprisingly decent random number source.

              I love the absolute chaotic insanity of electronics. On the surface things make sense, but one level deeper and nothing makes sense. If you go further than that, at the bottom you'll find beautiful and pure physics and everything makes sense again.

              I feel the same way about software. It's a hot mess, but under everything there's this little clockwork machine that simply reads some bits, then flips some other bits based on a comparison to another set of bits. There's no magic, just pure logic. I find it a very beautiful concept.

              • GianFabien1y

                Not so fast, some alpha particles from a distant galaxy strike your memory chips and some bits flip. If the CPU gets too hot or too cold it starts misinterpreting opcodes, branches, etc.

                The reality is that computers are composed of several PCBs carrying thousands of multi-GHz signals. So all of the foregoing engineering design principles had to be observed to make our systems as reliable as they are.

          • c_o_n_v_e_x1y

            I came here to say what you just did. Hardware modularity is incredibly useful in the face of unstable product requirements. Integration and form-factor optimization can come once the requirements are locked in (if such a condition ever exists, lol).

        • Animats1y

          It's common to design PC boards that have holes and traces for components that aren't installed. If you need three motor controllers now, design a board with space for six, plus a prototyping area of plain holes. Allow for extra sensors, inputs, and outputs. It's easy to take the extras out of the design later when you make the board for the production product.

        • throwup2381y

          That's what dev kits are in electronics. They usually even come with schematics/PCB layouts so an engineer can quickly copy a working design and remove the stuff they don't need. There's still a huge gulf between those prototypes and production, and there are plenty of mistakes to make, requiring multiple revisions.

          • bbarnett1y

            It was the same situation before internet distribution of software.

            No updates. No bugfixes. What you pressed to CD (as much as a buck per CD at qty 1,000) was what your customers got. Period.

            And it hurt to republish, because those updates went out by mail: the box/sleeve, the labelling, the labour, the shipping.

            Devs today have no idea how easy their push is.

            Get it right or you fail.

      • brailsafe1y

        I'd go one step further and say that Agile might as well be tossed away entirely since it's common enough for companies to treat it like waterfall anyway, making the practice of many small iterations or iteration at all unwelcome. If your software team embraces iteration and incremental improvement, any presence of agile is probably redundant; if they aren't, or the nature of the work doesn't facilitate it, then Agile gets in the way regardless of the domain.

      • gregwebs1y

        When hardware design can be software-emulated, iteration can be more agile. There's a story about how NVIDIA was the first to do this for GPUs; it was done out of desperation because they were out of money and had to ship quickly. They didn't have the time or money to do any revisions, so they just shipped what they had validated in software emulation, even though some of the features were defective. https://www.acquired.fm/episodes/nvidia-the-gpu-company-1993...

        • FirmwareBurner1y

          Absolutely all chip designs get software emulated/simulated for rigorous testing before being sent to the fab for production.

          It seems like the publication turned what is industry standard into a sensationalistic article.

          The only thing Nvidia did differently was rolling directly to tapeout with their simulated design, without any intermediate prototypes, which was indeed a risky move but not unheard of for cash-strapped semi startups.

  • bemusedthrow751y

    This is very interesting.

    But quite a lot of CS people -- well, at least I hope it is not just me -- end up on the web development trajectory despite, or because of, having a less-than-adequate understanding of e.g. trigonometry, geometry, and calculus... all of which start to matter a lot when you start making things, especially things that move and consume electricity.

    I am a good programmer but a weak mathematician. I have a 3D printer now, and I'm starting to learn FreeCAD and microcontroller stuff actively rather than just reading about it, and run up against my maths weakness all the time.

    I have recently discovered the three books Joan Horvath and Rich Cameron wrote:

    https://www.makershed.com/products/make-trigonometry-print

    https://www.makershed.com/products/make-calculus-print

    https://www.makershed.com/products/make-geometry

    • bouk1y

      Some trig and geometry is definitely useful, but it's quite limited to be honest. For most things you can use existing algebra libraries, and someone else has usually figured it out before!

      • tonyarkles1y

        I don’t know that I agree, even at the simple project level. If the problem you’re trying to solve has been solved exactly before and you can find a good reference to copy then you might be ok (for example the inverted pendulum problem). But in my experience you end up solving a related problem instead and that’ll often require re-deriving the equations of motion with whatever quirk is required.

        The other part that'll catch you pretty hard if you don't have strong trig and linear algebra going for you is that debugging is going to be really hard. I admittedly have been deep in positioning-and-attitude control for 5 years now, but the ability to look at a transformation matrix or quaternion and mentally grok exactly what it means is going to be incredibly useful when you're trying to figure out why your system isn't doing what you think it should be.
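
        To make "reading" a transform concrete: in a planar rigid-body transform, the rotation columns are the images of the x and y axes, and the last column is the translation. A small illustration of my own (not from the parent comment):

            #include <array>
            #include <cmath>

            // | cos t  -sin t  tx |   columns 1-2: rotated x/y axes
            // | sin t   cos t  ty |   column 3: translation
            // |   0       0     1 |
            using Mat3 = std::array<std::array<double, 3>, 3>;

            Mat3 make_transform(double t, double tx, double ty) {
                return {{{std::cos(t), -std::sin(t), tx},
                         {std::sin(t),  std::cos(t), ty},
                         {0.0,          0.0,         1.0}}};
            }

        Being able to glance at such a matrix and see "rotated about 90 degrees, shifted 2 m in x" is the skill being described, and it scales up to 3D transforms and quaternions.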

      • bemusedthrow751y

        Indeed. But I think I would like to be oriented better, and not feel completely outside it.

        My calculus is strongest; I can basically deal with that. My geometry is so poor that I am surprised I passed my maths A-level.

        These books excite me because they give back that, er, constructivist thing that has always appealed to me -- Piaget, Papert, Minsky.

        I enjoyed your article, thank you for writing it.

      • DragonStrength1y

        It's still just used to gatekeep. It justifies hiring the new-grad master's student with a similar background over someone who has actually written the code before.

        • bemusedthrow751y

          I guess this may be true. But my lack of knowledge of these things is on a different plane to the "knows well-enough" vs "expert in" distinction that might be useful for gatekeeping.

          My grasp of trig has lapsed back to a 13-year-old's level. It's an embarrassment!

    • dartos1y

      Web dev just has the best pay/perks for the amount of work.

      • bemusedthrow751y

        It was exciting thirty, twenty-five, even twenty years ago. I loved it.

        But the pay/perks situation depends on putting up with a lot of corporate BS; if you're a solo developer, or the only developer in a small team, web stuff is hard, the work shrinks over time as things commoditise and you become more dependent on hosted offerings, and it pays relatively poorly. I am tired and seeking a direction change before it is too late.

        Literally the only topic on which I agree with Elon Musk is that more developers need to be persuaded to do something more interesting than getting dragged into web dev BS. And I wish I'd paid a lot, lot more attention in maths lessons, especially on topics that didn't have to do with logic.

        (It is, nevertheless, amusing that it is web dev BS where Musk is getting his posterior handed to him)

        • dartos1y

          I’m with you. I’d love to work in not web dev, but I don’t even know how to do that.

          Feels like everything is server programming.

          • bemusedthrow751y

            I currently have the short-term luxury to rethink.

            I am steering myself away from web dev tools (I'm a Laravel/PHP/Rails/Node.js/Vue/React/GraphQL/DB/server/Perl/HTML/CSS/front-end/back-end/every-bloody-end guy, jack-of-all-trades, master-of-some).

            I've been doing it too long and I'm a solo/design agency worker now. I've built some big stuff (and some early semi-famous stuff) and I've also ended up doing the crappy-little-WP-site shuffle.

            I am aiming specifically towards tools that reach out a little further.

            Basically I am finally, after all these years, properly learning Python, which I have never once actually needed. Because it reaches further than the programming languages on that list. Into desktop app development. Into scientific computing. Into embedded. More towards 3D printing, GIS etc.

            And it still works for all the web things, so I can bridge.

            And perhaps most importantly, more towards teaching and education, so I can actually be some use before I die.

            • dartos1y

              Learning different areas of programming is all well and good.

              I started in game mechanics programming and graphics programming when I was a kid.

              But that isn’t the same as knowing how to get a job in those fields.

  • eschneider1y

    Nice article. As someone who's also gotten into robotics from the software side, I'd also suggest learning a bit about the hardware side. If you're doing anything with custom boards and board bring-up, you'll need to be able to read a schematic and data sheet at a basic level. Being able to use an oscilloscope and logic analyzer is also _very_ useful. None of this stuff is terribly hard, but it's nice to know what to expect.

    • Animats1y

      Yes.

      The number of people who both know C++ (not C) and how to use an oscilloscope is surprisingly small.

      Electronics at robotic speeds isn't that hard. Most signals are audio bandwidth or lower. You rarely need all the elaborate design techniques required when you get into the MHz-GHz range.

      • FirmwareBurner1y

        >The number of people who both know C++ (not C) and how to use an oscilloscope is surprisingly small.

        Statements like these make my eyes roll.

        The difficult part in embedded, and what makes the difference between a n00b spending 2 months finding a hardware/firmware issue and a graybeard finding it in under 2 days, is not how to use an oscilloscope or how to write "clean" C++. It's knowing where exactly to put the oscilloscope probes, what exactly to look for on the oscilloscope screen, and how to make the bug/edge case reproducible.

        You don't have unlimited scope probes, trace memory, and time to probe every single signal on the board and stare at blank signals for weeks, not knowing what to look for, while the clock is ticking.

        Tinkering with STM32s and Raspberry Pis at home for fun doesn't prepare you for the issues you'll encounter debugging production devices (especially battery-powered or RF ones) that you'll need to ship on time and on budget.

        Knowing how to use a scope and how to write C++ is only 10% of a successful product. The other 90% is blood, sweat, and tears, hunched for hours or days over boards and breakpoints to find out your race condition comes from a cheap crystal oscillator operating out of spec and not from your software.

        Especially since at small companies and start-ups you don't have the budgets and HW lab equipment that the likes of Apple or Qualcomm can afford to brute-force their way to a solution, you need to be very shrewd and clever with your debugging to make the most of your limited resources.

    • shafyy1y

      What can you recommend to start experimenting at home with robotics? Raspberry Pi?

      • Galanwe1y

        I guess that depends on your software background. If C/C++ is not a problem, I would recommend Arduino. I found it simpler to just upload your program as the sole brain of your MCU, rather than working with a Raspberry Pi with Linux and all the abstractions, devices, etc that goes with it.

        • sokoloff1y

          I’d recommend starting with this as well. The setup() and loop() model gets you started quickly and can take you through a lot of small projects before you want timers or other advanced features.
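
          The canonical first sketch shows the model: setup() runs once, loop() runs forever. (LED_BUILTIN, pinMode, digitalWrite, and delay are standard Arduino API.)

              void setup() {
                pinMode(LED_BUILTIN, OUTPUT);    // runs once at reset
              }

              void loop() {                      // then repeats forever
                digitalWrite(LED_BUILTIN, HIGH);
                delay(500);                      // milliseconds
                digitalWrite(LED_BUILTIN, LOW);
                delay(500);
              }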

          Plenty of tutorials and libraries (often “working but not high-quality”) are available as well.

          Even if C/C++ scares you a little, I'd still start there (I came from a long C/C++ background, but I've seen my kids be able to do it with not nearly as much help as I expected to give them).

          Dirt cheap and readily available as well. Clones are fine; you don’t need the name brand Arduino boards.

        • jacquesm1y

          Raspberry Pi will work with the Arduino environment as well.

      • jacquesm1y

        Raspberry Pi, Teensy, Micro:BIT, ESP32, Arduino. Some steppers, some servos and some sensors. $200 will get you more gear than you'll be able to use in the first six months or so, it is incredible how cheap this stuff has become.

      • GianFabien1y

        No, no - I have half a dozen RPis of varying revisions. Since they run Linux and the SD cards are less than robust, any time you accidentally trip the power (which for me happens several times during a hardware debugging session) you risk scrambling the rootfs and need to reflash the SD card. Some SD cards get damaged outright.

        I recommend using Arduino and/or Wokwi (https://wokwi.com/) to get started.

      • eschneider1y

        Arduinos or Raspberry Pi's are both fine. I'd definitely pick up some motors and figure out how to do precision control. Lots of good info online for that. :)

  • AYBABTME1y

    I often feel the same urge to work on soft/hard stuff. Not sure why I don't actually do it. Although the current stuff I work on is very cool on its own and not Candy Crush. But it's really a scourge on this world that the top tier pay in the industry is to shove ads in front of people.

    So I do physical things in my personal projects and then come to realize that the potential return on investment (time spent vs. expected money) would be peanuts compared to pure software. I would love to know of hardware-related work that has decent margins, or at least enough margin to justify paying a good salary to enough engineers to sustain them.

    • bemusedthrow751y

      > But it's really a scourge on this world that the top tier pay in the industry is to shove ads in front of people.

      This cannot be said enough, IMO. The web software world's endless quest for the shallow and lucrative is exhausting.

  • NordSteve1y

    If you have a team nearby, go work with a FIRST Robotics Competition team as a mentor/volunteer. Great community, you'll learn a lot, and make great connections if you want to get into this area as a career.

    • bouk1y

      Really wish this existed in the Netherlands when I was growing up!

  • kaycebasques1y

    The article itself was a bit too holier-than-thou for me, but I want more robotics content here on HN, so let's hijack this thread and share robotics passion!!

    My own shitty first foray into robotics is an RPi that I can talk to [2].

    If you didn't see "the coolest robot I've ever built" you gotta watch that... so inspiring [3]

    Latent Space has a robotics demo day coming up, pretty curious to see what comes out of that [4]

    Some stream-of-conscious thoughts about why I'm drawn to robotics:

    * The maker / hacker / homebrew communities that are basically just using robots to express art. Maker Faire, Burning Man, etc.

    * The satisfaction of writing code and seeing something physical happen. Last week I was trying to figure out how to get a shitty third party Amazon robot hat [5] to actually do something useful so I was iterating through the GPIOs and I somehow made it actually smoke. I'm weirdly proud of messing up so badly that my hardware actually smoked!

    * The joy of demystifying hardware and learning all the layers of abstraction just within hardware

    [2] https://www.biodigitaljazz.net/blog/STTTGTS.html

    [3] https://news.ycombinator.com/item?id=38162881

    [4] https://lu.ma/latent-space-final-frontiers

    [5] A Xmas present from my sweet wife, really touching that she's encouraging me to actually pursue my interest in robotics

  • mdorazio1y

    Really wish the author had commented on salary. I personally think the reason more people don't end up on the hardware/robotics side is that the questionable-value fintech/adtech/socialtech pure-software side of the world pays so much better, and most developers follow the money.

    • DoctorDabadedoo1y

      Robotics is very niche and the market is dominated by early-stage startups (since most of them go out of business a few years in), so salaries are average unless you are working specific jobs for FAANG. Job hopping usually means moving elsewhere, since being close to the hardware makes the work much easier, which in turn means it's sometimes hard to get a good picture of what a competitive salary is.

      source: I work in robotics. AMA.

      • badpun1y

        Thanks for the AMA! Do you see a lot of opportunities for people utilizing computer-vision based SLAM (aka Structure from Motion)? I quite like that technique, but it seems too niche to make a consistent career out of it.

        • DoctorDabadedoo1y

          SLAM and localization in general are early bottlenecks in a mobile robotics solution; unless the core business of the product is localization itself (e.g. Waymo, Cruise, etc.), it's not uncommon for them to take a back seat later in product development. In my experience lidar-based solutions are more common out there due to their "simplicity" and safety for industrial applications (i.e. PL-rated).

          You certainly can work with visual slam in the industry, but the job pool for that is not very big, I would say.

        • 0xfaded1y

          I work for a self-driving company doing mapping. There are definitely roles out there; more broadly, look into other areas of state estimation.

          It gets interesting when these systems need to be productionized and scaled. A strong software engineer who can speak the math is invaluable. My first year on the job was basically removing N^2 loops and making 3D visualizations.

        • jpace1211y

          Computer vision based SLAM definitely has use cases, and you can definitely make money applying it at the right companies to the right set of problems. But building a whole career around a single technique, any technique, is probably not gonna work. You need to be broader than that.

      • penjelly1y

        how would you recommend a software dev break into the industry over the next 3-5 years? Preferably without going back to university. What sorts of projects are best to focus on initially?

        • AlotOfReading1y

          Not the person you asked, but the main qualifications are having a pulse and knowing any of C, C++, and Python. The main obstacle you're going to run into from an educational perspective is that people won't want to hire you for controls-specific roles, but those are a niche within a niche. Interview questions will often involve talking about things like RTOSes, writing a queue in C, handling ISRs, and sending messages on a bus (e.g. CAN/I2C/SPI).
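
          For instance, "write a queue" usually means a fixed-size ring buffer with no dynamic allocation. A sketch of the kind of answer expected:

              #include <array>
              #include <cstddef>
              #include <optional>

              template <typename T, std::size_t N>
              class RingBuffer {
                  std::array<T, N> buf_{};
                  std::size_t head_ = 0, tail_ = 0, count_ = 0;
              public:
                  bool push(const T& v) {
                      if (count_ == N) return false;        // full: drop
                      buf_[head_] = v;
                      head_ = (head_ + 1) % N;
                      ++count_;
                      return true;
                  }
                  std::optional<T> pop() {
                      if (count_ == 0) return std::nullopt; // empty
                      T v = buf_[tail_];
                      tail_ = (tail_ + 1) % N;
                      --count_;
                      return v;
                  }
              };

          Follow-up questions tend to probe what changes when one end lives in an ISR (volatile vs. atomics, and which operations are safe to interrupt).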

        • DoctorDabadedoo1y

          Robotics is a broad field and is a confluence of many specialties: mechanical engineering, hardware engineering, software engineering, control, machine learning, computer vision, anything in between is a good entrance.

          Coming from software, if you are interested, I would suggest either:

          - Backend platform development (Python, C++ as main programming languages with a strong focus on ROS[1]).

          - Frontend development (nothing too different from what's out there).

          As small projects, I would suggest playing with ROS to learn it and getting a running simulation with a simple robot that you can teleoperate; most of the stack already exists, it's just a matter of connecting everything together [2].
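
          To give a flavor of what that looks like, here is a minimal ROS 1 (roscpp) node that publishes velocity commands for a teleop-style exercise. A sketch only; "cmd_vel" is the conventional topic name and the speeds are arbitrary:

              #include <ros/ros.h>
              #include <geometry_msgs/Twist.h>

              int main(int argc, char** argv) {
                  ros::init(argc, argv, "simple_teleop");
                  ros::NodeHandle nh;
                  ros::Publisher pub =
                      nh.advertise<geometry_msgs::Twist>("cmd_vel", 10);

                  ros::Rate rate(10);  // 10 Hz command loop
                  while (ros::ok()) {
                      geometry_msgs::Twist cmd;
                      cmd.linear.x = 0.2;   // creep forward (m/s)
                      cmd.angular.z = 0.0;  // no rotation (rad/s)
                      pub.publish(cmd);
                      rate.sleep();
                  }
              }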

          Another avenue is open source contribution [1], to get known within the community and potentially attract interest from companies. ROS has multiple packages, from cloud infrastructure to drivers and simulation; if you see anything there you could contribute to, they will gladly take contributions.

          In general robotics greatly benefits from good technologies from other areas; if there is a tool we use that you believe could be better, or a lack of good tooling in a specific area, it will get noticed.

          So this would be my suggested path: learn C++/Python if you're not familiar with them, learn ROS, and watch which specialties appear most often in robot-related job posts [3]. If you are really invested, maybe go to a robotics conference such as ROSCon to meet other enthusiasts, see which companies are engaged with the community, etc.

          Good luck!

          Note: not everything robot related is done in ROS, but it's almost a standard within the field save for a few exceptions.

          [1]: https://www.ros.org/

          [2]: http://wiki.ros.org/ROS/Tutorials

          [3]: https://discourse.ros.org/c/jobs/15

          • Animats1y

            > not everything robot related is done in ROS, but it's almost a standard within the field save for a few exceptions.

            In academia, yes. ROS is a piece of middleware for passing messages around, and a standard for talking to it. Funding agencies pushed academic robotics projects to talk to ROS so that results from different projects could interoperate. Which they sort of do. You get a lot of tooling for logging, user interfaces, wiring things up, etc. Think of it as a solderless breadboard for robotics software. The final product probably doesn't use it.

            • DoctorDabadedoo1y

              > The final product probably doesn't use it.

              I disagree here. Projects that are not keen on relying on ROS end up re-implementing a lot of the groundwork (shared memory, message distribution, parameter server, logging, etc.). Usually they are older products (10 or 15+ years old) from before ROS was really a thing, and migrating to ROS after implementing their own tailored stack is not worth it.

      • moffkalast1y

        What kind of robotics? It's a pretty wide field.

        • DoctorDabadedoo1y

          Mobile robotics, slam/localization and robotics backend.

          • moffkalast1y

            Ah neat, then I do have an actual question for you. If you're using ROS 2, which version and middleware? And does it work reliably in your application?

            I feel like lots of companies have ditched ROS 1 only because support was cut, plus cargo-culting about how ROS 2 is better for unclear reasons, but in practice it feels anything but production-ready to me. Everyone talks big about how Zenoh will solve everything, but the thing is a damn prototype, and it really goes against the whole idea that having a DDS will somehow be better instead of just adding absurd overhead in both CPU and networking.

            • DoctorDabadedoo1y

              I'm using ROS 1 Noetic on Ubuntu Focal at my current job.

              I touched ROS 2 back in 2019, during the Crystal Clemmys era, but it still felt experimental back then (and I believe it was still considered beta). I plan to re-evaluate later this year/early 2025 how stable ROS 2 is now.

              ROS 2 is better in some respects that weren't accounted for back when ROS 1 was developed (it started as a research platform), mainly distributed systems (multi-robot environments) and good tech developed elsewhere that the system could benefit from.

              My personal take is that there is a lot of hit and miss between versions, with no guarantee that what works now will work on the next one, due to the decentralized nature of the community and the fact that the team developing all this is relatively small. I would love to have a way to do small iterations on my own stack to bring it to the latest and greatest ROS release, but the combo of ROS 2 + a major Ubuntu upgrade every 2 years is maybe too much for my own peace of mind.

              • moffkalast1y

                That makes sense; Noetic is rock solid compared to any ROS 2 release so far, imo. I hope that this year's LTS will be more reliable (again with the whole promise of the Zenoh rmw), but I remain sceptical, since all the major fixes always get backported and Humble is not exactly a beacon of stability.

                > mainly distributed systems (multi-robot environments) and good tech developed elsewhere that the system could benefit from

                Yeah, that was the theory anyway; I'm not sure how much of that has really worked out as intended. Automatic discovery without a roscore has proven... unfeasible. Especially over wifi, since the required amount of multicasts basically runs a continuous DDoS attack, and now we're moving back to the same old concept with FastDDS Discovery Server, Zenohd, and the ros2 daemon. Sure, the DDS technically has encryption now, but I've yet to hear of anybody using it. Overall it's just a mess.

                When you're doing that evaluation, take a good look at Python nodes and their performance compared to Noetic. Currently it's bad, like horrendously bad, 30x less efficient in some cases that I've tested. And they've also ported the CLI parts from bash to Python, so tab autocomplete and parameter fetching take like actual seconds. I just don't know...

                Maybe there's lots of systems out there already running on Cyclone or FastRTPS that make it all worthwhile, but I haven't really heard of any such cases.

                > the combo of ROS 2 + major Ubuntu upgrade every 2 years is maybe too much for my own peace of mind

                Yeah 2 years seems a long time, but it never turns out that way. It really pains me to see useful packages made obsolete and unusable again and again for no reason but moving to the next release which usually brings effectively nothing by itself.

                • DoctorDabadedoo1y

                  > Currently it's bad, like horrendously bad, 30x less efficient in some cases that I've tested.

                  Ouch. Back when I played with it, efficiency wasn't even under discussion, as the basics had only just gotten there (topic/service/action support). The last I paid attention to this was due to some performance issues/inconsistencies in FastDDS (during Foxy, maybe?), after which the default was changed to Cyclone, but I must admit I'm a bit out of the loop.

                  > Yeah 2 years seems a long time, but it never turns out that way. It really pains me to see useful packages made obsolete and unusable again and again for no reason but moving to the next release which usually brings effectively nothing by itself.

                  Yup. Two years makes sense for cloud, but with hardware involved and a fleet deployed in varying conditions around the world, 2 years to migrate a major version of ROS and Ubuntu is maybe too short (not even considering packages you depend on that might not be ported at all). I would love to see other distros with longer support spans (Alma/RHEL/Debian) becoming 1st class citizens, and ROS releases turning into incremental versions running on the same platform until they reach EOL; at "robotics pace" that would be fantastic.

    • edge171y

      Definitely agree here. I have been making this transition and currently work on life science lab automation robotics. It definitely takes intentionality because if you're good at software it's easy to get steered towards lucrative but higher level places in the stack.

    • okr1y

      What is questionable value? People pay for questionable things; that brings value to them, though the value is questionable? I don't get it. Though I understand the personal resentment.

      • mdorazio1y

        Value to society, which is different than value to companies or individuals.

    • bouk1y

      There are certainly hardware companies that value software enough to pay well

      • FirmwareBurner1y

        But few companies do, usually American ones, and their open positions are usually fewer in number than software ones and concentrated in certain regions. If you don't live there, you're shit out of luck.

      • giancarlostoro1y

        I am assuming anything mission critical.

  • chris_st1y

    I've occasionally thought it would be fun (not necessarily productive!) to do something like the experiments folks have done with Genetic Algorithms, having simulated robots learn to walk, etc. in a simulated physics environment. The interesting bit would be to do it with real legs, sensors, servos, etc., to try to build up a "naive physics" library that can deal with balance, etc., by learning from physical experience rather than starting with equations of motion.

    That may be the most number of times I've typed "etc." in a paragraph.
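
    For anyone who hasn't seen the trick, the core of those experiments is surprisingly small: a (1+1) evolution strategy that mutates a parameter vector (say, gait parameters) and keeps the mutant if it scores better. A sketch, where the fitness function standing in for "run the legs and measure distance walked" is supplied by you:

        #include <functional>
        #include <random>
        #include <utility>
        #include <vector>

        using Params = std::vector<double>;
        using Fitness = std::function<double(const Params&)>;

        Params evolve(Params best, const Fitness& evaluate, int generations) {
            std::mt19937 rng{42};
            std::normal_distribution<double> noise{0.0, 0.1};
            double best_score = evaluate(best);
            for (int g = 0; g < generations; ++g) {
                Params mutant = best;
                for (double& p : mutant) p += noise(rng);  // Gaussian mutation
                if (double s = evaluate(mutant); s > best_score) {
                    best = std::move(mutant);              // keep improvements
                    best_score = s;
                }
            }
            return best;
        }

    The hard part, of course, is the fitness call: on real hardware every evaluation costs wall-clock time and wear, which is exactly what makes the "learn from physical experience" idea interesting.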

    • mdorazio1y

      You're describing something similar to what TRI has been working on for several years and seems to be making progress on: https://www.tri.global/news/toyotas-robots-are-learning-do-h...

      • amelius1y

        Can anyone explain why diffusion policies are so powerful compared to just attaching a bunch of actuators to the output layer of a deep neural network?

    • FooBarBizBazz1y

      I remember articles about a competition DARPA had to this effect, using an off-the-shelf simulator (maybe Bullet?). At first, the simulated robots learned to hack the sim. Some did a vibration thing that messed with the friction model, sort of like a vibrating cell phone moving across a table. Others learned to hack collision detection/response, doing a sort of jump / pole-vault thing that would explosively hurl them forward. I assume they later tweaked the sim and the rewards until they actually got the walking behaviors they wanted.

      • mkoubaa1y

        I'd seen that. The assumption seems to be that simulations are simplistic, and most of what remains shows practical ways to make real-world training work (while acknowledging that it's hard to do).

    • Avicebron1y

      My university had a program that did this; I took it as a sophomore years ago while studying "evolutionary robotics", although typically we would simulate the robot in a 3D virtual environment to test the reward mechanism for learning to walk.

    • coderenegade1y

      System identification is a fairly common strategy for developing a controller. Essentially, you learn the dynamics model from recorded data and then use it as a simulator to iteratively improve the controller. The devil is in the details, though, as you can run into issues with simulation fidelity (due to missing data), which can produce something that doesn't translate to real-life performance.
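
      In its simplest form this is just regression. A toy sketch: fit a 1-D linear model x[k+1] = a*x[k] + b*u[k] to logged state/input data by solving the 2x2 normal equations directly (real systems are multi-dimensional and need regularization plus validation on held-out data):

          #include <cstddef>
          #include <vector>

          // Least-squares fit of a and b in x[k+1] = a*x[k] + b*u[k].
          void fit_model(const std::vector<double>& x,
                         const std::vector<double>& u,
                         double& a, double& b) {
              double sxx = 0, sxu = 0, suu = 0, sxy = 0, suy = 0;
              for (std::size_t k = 0; k + 1 < x.size(); ++k) {
                  double y = x[k + 1];  // next state is the target
                  sxx += x[k] * x[k];   sxu += x[k] * u[k];
                  suu += u[k] * u[k];
                  sxy += x[k] * y;      suy += u[k] * y;
              }
              double det = sxx * suu - sxu * sxu;  // assumes well-conditioned data
              a = (suu * sxy - sxu * suy) / det;
              b = (sxx * suy - sxu * sxy) / det;
          }

      With a and b in hand you can roll the model forward as a cheap simulator and tune the controller against it, which is the loop described above.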

    • flutas1y

      I've thought about something similar, but in reverse (if I'm understanding you correctly).

      Using simulation to train a DIY Spot on something novel and seeing how good I could get the performance in the real world.

      Of course that's a dream of mine one day and not an active project.

  • ih1y

    I'm just getting into robotics with a background in software, and I ended up choosing the Isaac Sim platform/ecosystem (https://docs.omniverse.nvidia.com/isaacsim/latest/index.html) and Jetbot (https://jetbot.org/master/), since they seem suited for neural networks and reinforcement learning. So far getting up and running with the simulation side hasn't been too bad, and there is a model for the Jetbot in Isaac Sim since it's also by Nvidia. I haven't started on the hardware side so can't speak to that. The downside for some might be that it's proprietary, and the Jetbot itself is a bit out of date and nearing end-of-life support.

    • m00x1y

      You'll be better served by a more capable platform. You can either 3D print one, or you can pick up something like the WaveRover from Waveshare. It's a solid metal frame with geared motors and differential drive. For ~$100 you won't get much better.

      You just need to pop 3x 18650 batteries into the chassis and connect a Jetson Nano to the UART pins. It has an ESP32 inside, so you can also program that, but it comes programmed out of the box.

      You can control it with this python library https://github.com/msanterre/wave_rover_serial

      Isaac Sim is a terrific simulator; you won't get much better.

      • ih1y

        Cool, thanks for the advice!

  • manuel_w1y

    This is completely unrelated to the article the OP posted, other than also being about robotics. But it's the coolest robot project I've found recently, and since readers of this thread are likely to be interested in robotics (I am!), it might be of interest to some:

    https://www.allesblinkt.com/projects/round-about-four-dimens...

    Hope this is not considered too off-topic here.

    • kaycebasques1y

      The original post is kinda lame so we should just hijack this thread into a general robotics appreciation discussion.

      That cube is amazing. Thanks for sharing. The aesthetic is so polished. Some true 21st century art right there.

      This made the rounds a few months ago but worth posting here again. So inspiring. https://www.youtube.com/watch?v=bO-DWWFolPw

  • andrew_eu1y

    What would be a good starting platform for a programmable drone?

    The article suggests playing around with a Raspberry Pi Pico, but this is a bit bare bones. Are there any kits, or entry level programmable drones on the market with a reasonable toolchain to program sequences?

    • kaycebasques1y

      This one seems to have completely open source hardware / firmware / software:

      https://www.bitcraze.io/products/crazyflie-2-1/

      https://www.bitcraze.io/documentation/repository/

      It was mentioned in this IEEE Spectrum article: https://spectrum.ieee.org/drone-quadrotor-2667196800

    • jacquesm1y

      A Raspberry Pi Pico has a large multiple of the compute that cruise missiles have, so it should be more than plenty for a programmable drone.

      • andrew_eu1y

        While true, it also doesn't include any of the hardware needed to move around. I'm sure it would be possible to build a drone that includes it as the microcontroller, but that creates a substantial barrier to actually getting something off the ground.

        I have looked at the Ryze Tello [0] before, and it seems like a decent entry-level device, but I'm not sure whether it's actually a huge pain to develop for. I'd like something to be extendable _like_ a Pico, for instance if I want to install additional sensors, but all of these platforms are quite foreign to me.

        [0] https://www.ryzerobotics.com/tello

        • jacquesm1y

          With the price of these you could stick a few in there (there would be a weight penalty; they are not exactly zero grams, though for their weight the amount of compute is impressive, and you would also need more power).

          Depending on what kind of drone you want to build (powered fixed-wing, glider, quad, hex, or octocopter) that might be more or less of an issue. Given the power draw and weight of an octocopter, I don't think it would be much of an issue there, but on a quadcopter, especially a very small one, it might be prohibitive. But I'd focus on getting it to work before optimizing for size and weight.

    • lijaf1y

      Have a look at the Crazyflie https://www.bitcraze.io/

    • xrd1y

      I came here to ask the same question. Anyone?

  • tonmoy1y

    As someone who works on making physical things, I disagree with the premise that the world of bits and bytes does not have impact. We will always need to manage resources, including humans, at large scale, and web apps are the best way to do that.

  • choonway1y

    The best way to get into it without investing money in all that hardware is to play with a simulator. My preference is CoppeliaSim, due to its user-friendliness.

    https://www.coppeliarobotics.com/

    But take your pick there are many to choose from.

    https://www.sciencedirect.com/science/article/abs/pii/S15691...

  • blah-yeah1y

    Make: magazine has a great issue on getting into robotics via BattleBots-style robots.

    I think it was an issue from 2022 or 2023 (not enough energy currently to track down the issue in their catalog).

    Here's an article on their site about the topic:

    https://makezine.com/tag/battlebots/

  • tamimio1y

    As long as software folks don't bring their shenanigans (the agile/scum (ahem, scrum) approach, daily stand-ups, ticket-based management, shipping products within 2 weeks, among others) into the robotics/engineering world, you are welcome. I have seen many cases of software "project or engineering managers" ruining a whole department by trying to force such approaches. It's always ironic seeing some try to force agile, for example, even though the whole idea of agile is being... agile.

  • tester7561y

    If only anything closer to hardware wasn't paid this badly...

    A lot of SEs leave industries like semiconductors for web dev jobs that pay waay better.

  • Animats1y

    This might be a good time to get into robotics. For a long time, industrial robots were really dumb. Then there was a round of false enthusiasm for "intelligent robots" (Rethink Robotics went down that rathole.) Now, at last, compute power, vision, and AI are cheap enough that you can get something done.

    The components are much better. Batteries are better. Motors are better. Radio communications work. Cameras are cheap. Short-range LIDAR is affordable. Navigation systems work. Robotics work used to require using a lot of time building custom solutions for those problems. Now you can just order components.

    Here's an idea I'd like to see revisited. Back in the 1980s, someone built a pair of small forklifts that operated as a team. These were little things, about half a meter cubed, with maybe 50cm of lift. Individually, they couldn't do much. But a pair working together could pick up and move a couch, with one robot lifting each end. In the 1980s, the researchers had trouble coordinating two mobile robots. Communications alone were a big problem. Not today.

    There are many material handling tasks where one small robot isn't enough, and a big machine the size of a forklift is too bulky. But teams of small robots might work.

    I still have a small robot arm on my desk, but it's not connected to anything.

  • moffkalast1y

    > Also, if you’re not already, you should get into writing Rust. The powerful type system and safety is incredible. The build tooling and ecosystem is much better than C++ et al. Rust for embedded is not there yet, but improving all the time.

    Meh, maybe in a decade. ROS has no official support for it either. Robotics is still very much a C++ town and will probably remain so for a long while, as long as people remain stubbornly set in their ways. If you apply as a Rust-only dev to a robotics company, nobody will take you seriously.

  • hcks1y

    “You should base your life decisions on the koolaid flavour you prefer”


  • richrichie1y

    “ An app is not going to house people, feed them or put them on Mars. We can’t solve climate change with smarter software”

    Yes, it can, and we can.

    Software has been a critical component of much progress we have made over the last few decades.

    I understand the difficulty many people have with the abstract. Some may think Hilbert spaces are not real. For many they are as real as any space around them. Some live in them!

    • AnarchismIsCool1y

      Ad-tech isn't feeding the poor or curing cancer, it's just giving everyone depression.