  • rkangel22h

    > MCU-class footprint (fits in 16 MB RAM)

    That is absolutely not an MCU class footprint. Anything with an "M" when talking about memory isn't really an MCU. For evidence I cite the ST page on all their micros: https://www.st.com/en/microcontrollers-microprocessors/stm32...

    Only the very very high performance ones are >1MB of RAM.

    • marci20h

      For squeezing erlang in KiB sized RAM, the AtomVM project is probably a better fit.

      https://github.com/atomvm/AtomVM

    • PinguTS20h

      I see their board uses a daughter board from Phytec, a German company too. It's based on a very high-performance NXP MCU, the i.MX 6UL, with additional external DDR RAM.

      • LeifCarrotson19h

        It's a $212 SBC. They've got more L2 cache than most microcontrollers have flash memory. The fact that it's got an L2 cache at all, much less external LPDDR3 DRAM, is a bit ridiculous. In most parameters - cost, RAM, frequency, storage, power consumption - it's approximately 2 orders of magnitude beyond the specifications of a normal microcontroller.

      • magicalhippo19h

        NXP calls[1] it an application processor, and it's based on a Cortex-A7, not a Cortex-M series microcontroller core.

        That said, these nomenclatures are a bit fuzzy these days.

        [1]: https://www.nxp.com/products/i.MX6UL

    • derefr16h

      Espressif calls the ESP32 an MCU, and at least 1/3 of ESP32 models have >1MiB of onboard PS ("pseudo-static") RAM (i.e. DRAM with its own refresh circuit.) At least 20 of the ESP32 models do have 16MiB.

      (And I would argue that the ESP32 is an MCU even in this configuration — mostly because it satisfies ultra-low-power-on-idle requirements that most people expect for "pick up, use, put down, holds a charge until you pick it up again" devices.)

      So, sure, if you mean the kind of $0.07 MCU IC you'd stuff in a keyboard or mouse, that's definitely not going to be running Nerves (or any other kind of dynamic runtime; you need to go full bare-metal on those).

      But if you mean the kind of $2–$8 MCU IC you'd stuff in a webcam, or a managed switch, or a battery-powered soldering iron, or a stick vacuum cleaner with auto suction-level detection, or a kitchen range/microwave/etc with soft-touch controls and an LCD — use-cases where there's more-than-enough profit margin to go around — then yeah, those'll run Nerves just fine.

      • ACCount3715h

        Even ESP32, the quintessential "punches above its weight" MCU, only packs 520KB of RAM by default. At the time of its release, that was a shitton of RAM for an MCU to have!

        If you ship MCUs with 16MB of RAM routinely, you're either working with graphics or are actually insane.

        • defen14h

          The MCU I'm currently working with has 12KB of RAM and it feels luxurious.

          • ACCount3713h

            Ah, the cultural shock of going from 8 bit cores with 512 bytes to an actual modern chip.

      • the__alchemist14h

        ESP32s are on the high end of flash and RAM counts; you are pointing out that there is variance. The kinds of $2-8 MCUs I've used generally have 512k-2Mb[it] onboard flash and <512k SRAM (STM32G4, H7, ESP32-C3, etc.).

        The sub-$1 parts you refer to will have <100k of these (STM32C0, G0, etc.).

        ST is actually moving away from the 2Mb MCUs, instead offering ~1Mb with OCTOSPI. I believe the intent is to use off-board flash if you want more (e.g. the newer H7 variants).

      • dlcarrier14h

        Espressif's ESP32 line uses an MCU IC, either sold by itself (https://www.espressif.com/sites/default/files/documentation/...) or with flash memory and RAM ICs, all packaged together in a system-in-package footprint. (e.g. https://www.espressif.com/sites/default/files/documentation/...) There are various options for which flash and RAM ICs are packaged together, but the ESP32 die itself is very much an MCU and has only 520 KB of SRAM.

        A managed switch is very compute-heavy, and does usually run a microprocessor with a full RTOS, if not Linux itself, which probably costs in the multi-dollar range. It's also not something most people have at home, outside of the switch built into their router.

        Everything else you mentioned usually runs on microcontrollers with under 1 MiB of RAM. For example, Infineon's CYUSB306X series ASICs for webcams come in two RAM sizes, 256 KiB or 512 KiB, despite handling gigabits per second of data, and having an MCU at all isn't even necessary. (https://www.latticesemi.com/usb3#_CDED461DE15245C78A2973D4A4...) The Pinecil's Bouffalo BL706 MCU has 123 KiB of RAM, despite being a low-volume product where design time matters more than component cost. (https://wiki.pine64.org/wiki/Pinecil, https://en.bouffalolab.com/product/?type=detail&id=8)

        Microwave ovens are so high-volume that they often don't even use packaged microcontrollers, mounting the die directly on the PCB with an epoxy blob protecting it, and there's no way any would splurge on megabytes of SRAM. The most advanced microwave oven I've seen was from the 90s and definitely didn't splurge on a microprocessor. (https://www.youtube.com/watch?v=UiS27feX8o0)

        A slow MCU with external memory could run anything, like booting Linux on an AVR (https://www.avrfreaks.net/s/topic/a5C3l000000BrFREA0/t392375), but it's going to be extremely slow and impractical for any commercial product, which, if produced in any volume, will have as little RAM as possible.

    • TrueDuality21h

      You don't necessarily need on-package RAM for this. I'm not sure I'd build a project around this, but 16MiB of RAM would hardly be a BOM killer.

      • PinguTS20h

        Actually, it is. If you want to build a cheap sensor or actuator, then any additional component gets expensive. Remember, it's not only the external component; it's also the PCB space, the production, and the testing after production. It all adds to the cost.

        When you use a µC to keep things cheap, you don't want additional components.

    • cmrdporcupine21h

      Eh. It's getting blurry and has been for some time. To me these days the differentiators are: does it have an MMU? Address lines for external memory? Do you write for an OS or for "bare metal" / RTOS kit? Are there dedicated pins for GPIO?

      If you choose some arbitrary memory amount as the criterion it will be out of date by next year.

    • jdndnc21h

      RAM on MCUs is getting cheaper by the minute.

      A couple of years ago it was measured in bytes. Before the RP2040 it was measured in dozens of KiB; now it's measured in MiB.

      While I agree that 16 MiB is on the larger side for now, it will only be a couple of years before mainstream MCUs have that amount on board.

      • jbarberu21h

        Also curious what MCUs you're working with to give you this impression?

        RP2040 is 264k, RP2350 is 520k.

        I use NXP's rt1060 and rt1170 for work, and they have 1M and 2M respectively, still quite far away from 16M and those are quite beefy running at 500MHz - 1GHz.

        • tonyarkles19h

          While I generally agree with you, the RT106x line does support external SDRAM as well. I've got an MIMXRT1060-EVKB sitting here on my desk that has 32MB of SDRAM alongside the on-die 1MB of SRAM.

          • theamk12h

            Those specs come at $50 for the compute module - a very non-trivial cost.

        • 151559h

          > NXP's rt1060 and rt1170 for work

          These both have FlexSPI controllers capable of interfacing with $3-5 in PSRAM at 8M or 16M.

      • FirmwareBurner21h

        >RAM on MCUs is getting cheaper by the minute.

        It really isn't. The RP2040 has 264KB RAM. Far away from 16MB.

        >now it's measured in MiB

        Where? Very few so far, mostly for image-processing applications, and they cap out at less than 8MB. And those are already bordering on SoCs instead of MCUs.

        For applications where 8MB or more is needed, designers already use SoCs with external RAM chips.

        >it will only be a couple of years for mainstream MCUs having that amount on board

        Doubt very much. Clip it and let's see in 2 years who's right.

      • pessimizer19h

        Bigger processors with more RAM have always been available. The question has always been whether you're going to use a $20 processor when you could do the job with a 50¢ one. It's the difference between your product being cheap and disposable, and you getting to choose your margin based on your strategy; and not being able to move a unit without losing money, hoping to sell yourself to someone who knows how to do more with less.

        I'm an Erlang fanatic, and have been since forever, paid for classes when it was Erlang Training & Consulting at the center of things, flew cross-country to take them, have the t-shirt, hosted Erlang meetups myself in downtown Chicago. I'm not prototyping a microcontroller application in Erlang if I can get it done any other way. It's committing to losing from the outset.

        edit: I've always been hopeful for some bare-metal implementation that would at least almost work for cheap µcs, and there have been promising attempts in the past, but they seem to have gone nowhere.

        • toast016h

          AtomVM runs on the ESP32, right? It's not an ultra-cheap microcontroller, but it's pretty cheap. AtomVM isn't BEAM either, though, and I have no experience with it... it didn't seem like a good fit when I was building something with an ESP32 (I didn't see anything about outputting to LCDs, and that was reasonable with Arduino libraries... I also saw a library for calendars, thought that would work for my needs, and then got dragged into making it work better). It would have worked for the stuff I was doing with the ESP8266, but I didn't know about it when I was shopping for boards, so I didn't want to pay extra.

  • hoppp22h

    Pretty cool. I am a fan of everything Erlang. Managing large clusters of IoT devices running BEAM sounds like a good idea, not just because of fault tolerance but for hot-swapping code.
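    A minimal sketch of the hot code swapping mentioned above (the module name and return atoms are made up for illustration): recompiling a module in a running VM replaces its code without restarting any process, and fully-qualified calls pick up the new version.

```elixir
# First version of a module, compiled into the running VM.
defmodule Greeter do
  def hello, do: :v1
end

v1 = Greeter.hello()

# Redefining the module (as a release upgrade or a remote shell would)
# hot-loads the new code; no process restarts required.
defmodule Greeter do
  def hello, do: :v2
end

v2 = Greeter.hello()
```

    Long-lived processes that loop via fully-qualified calls migrate to new code the same way, which is what makes hot upgrades on live clusters practical.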

    • garbthetill20h

      I'm the same but for Elixir; the BEAM is awesome, and I always wonder why it still hasn't caught on despite all the success stories. The actor model just makes programming feel so simple.

      • zwnow18h

        For me it's the complete opposite of simple. I am a fan of BEAM and OTP, but I'm a horrible programmer. I have a constant fear of having picked the wrong restart strategy in a supervisor, or of ghost processes, or whatever. I have no mentors and learn everything myself, so I have no way of actually checking whether my implementations are good. With my skills I'd manage to make an Elixir system brittle, because it's not clear to me what happens at all times.

        • toast017h

          WhatsApp did what it did and we didn't hire anyone who had experience with OTP until 2013 I think. One person who was very experienced in Erlang showed up for a week and bounced.

          We were doing all sorts of things wrong and not idiomatically, but things turned out ok for the most part.

          The fun thing with restart strategies is that if your process fails quickly, you get into restart escalation, where your supervisor restarts because its child restarted too many times, and so on, until BEAM shuts down. But that happens once or twice and you figure out how to avoid it (I usually put a 1-second sleep at startup in my crashy processes, lol).
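          That restart-escalation setup can be sketched in Elixir (the module name, intensity numbers, and startup sleep here are illustrative, not from the thread):

```elixir
defmodule SlowStarter do
  use GenServer

  def start_link(_), do: GenServer.start_link(__MODULE__, :ok)

  @impl true
  def init(:ok) do
    # The one-second startup sleep mentioned above: if this process later
    # crash-loops, the sleep keeps it from blowing through the supervisor's
    # restart-intensity window instantly.
    Process.sleep(1_000)
    {:ok, %{}}
  end
end

# :one_for_one with a tight intensity: more than 3 restarts within 5 seconds
# escalates the failure, crashing this supervisor too (and, if it propagates
# up the tree, eventually the whole VM).
{:ok, sup} =
  Supervisor.start_link([SlowStarter],
    strategy: :one_for_one,
    max_restarts: 3,
    max_seconds: 5
  )
```

          Tuning max_restarts/max_seconds per supervisor is usually a better fix than the sleep, but the sleep is the quick hack.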

          Ghost processes are easy-ish to find. erlang:processes() lists all the pids, and then you can use erlang:process_info() to get information about them... We would dump stats on processes to a log once a minute or so, with some filtering to avoid massive log spew. Those kinds of things can be built up over time... the nice thing is the debug shell can see everything, but you do need to learn the things to look for.
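          A rough Elixir version of that process-dumping approach (the stat keys and take-5 cutoff are arbitrary choices, not WhatsApp's actual filtering):

```elixir
# List every process, grab a few stats, and surface the worst offenders
# by message-queue length (a stuck process usually has a growing queue).
top =
  Process.list()
  |> Enum.map(fn pid ->
    {pid, Process.info(pid, [:registered_name, :message_queue_len, :memory])}
  end)
  # Process.info/2 returns nil for processes that died mid-scan.
  |> Enum.reject(fn {_pid, info} -> is_nil(info) end)
  |> Enum.sort_by(fn {_pid, info} -> -info[:message_queue_len] end)
  |> Enum.take(5)
```

          Dumping something like this to a log once a minute, as described above, gives you a cheap trend line for leaked or wedged processes.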

        • pton_xd13h

          > With my skills I'd manage to make an Elixir system brittle because it's not clear to me what happens at all times.

          What's so cool about BEAM is you can connect a repl and debug the program as it's running. It's probably the best possible system for discovering what's happening as things are happening.
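          For example, from an attached IEx shell you can peek at a running process's state without stopping it (the :stats agent here is a made-up example):

```elixir
# Start a toy stateful process (an Agent is a GenServer underneath).
{:ok, _pid} = Agent.start_link(fn -> %{hits: 0} end, name: :stats)
Agent.update(:stats, fn s -> %{s | hits: s.hits + 1} end)

# :sys.get_state/1 reads the live state non-destructively; the process
# keeps serving requests while you inspect it.
state = :sys.get_state(:stats)
```

          The same :sys calls work on any OTP process in a production node you've attached a remote shell to.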

          • zwnow3h

            Yeah, IEx is pretty cool; that's how I test while programming, as I don't write tests for everything.

      • AnEro20h

        Same. My personal theory is that where it excels and overachieves, there are already really fleshed-out and oversaturated developer ecosystems (and experienced developer pools) with a lot of legacy software that organizations have built on them. I think it will gain momentum as we see more need for distributed LLM agents and tooling picks up. (Or when people need extreme cost savings on front-facing APIs/endpoints that run simple operations.)

    • worthless-trash21h

      Is this something you do regularly?

  • barbinbrad20h

    Huge fan of Elixir, and I definitely have some dumb questions.

    In some of the realtime architectures I've seen, certain processes get priority, or run at a certain Hz, but I've never seen this with the BEAM. AFAIK it "just works", which is great most of the time. I guess you can do Process.flag(:priority, :high), but I'm not sure if that's good enough?

    • toast019h

      BEAM only promises soft realtime. When switching processes, runnable high-priority tasks are chosen before runnable normal- or low-priority tasks, and within each queue all runnable tasks run before any task runs again. But BEAM isn't really preemptive; a normal- or low-priority task that is running when a high-priority task becomes runnable won't be paused; the normal task continues until it hits its reduction cap or blocks. There's also a chance you hit an operation that is time-consuming and has no yield points; most of ERTS has yield points in time-consuming operations, but maybe you find one that doesn't, or maybe you have a misbehaving NIF.

      Without real preemption, consistently meeting strict timing requirements probably isn't going to happen. You might possibly run multiple beams and use OS preemption?
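      The priority mechanics above can be seen from the shell (a minimal sketch; :erlang.spawn_opt/2 accepts {:priority, Level} among its options):

```elixir
parent = self()

# Spawn a high-priority process; calling Process.flag(:priority, :high)
# inside the process body would have the same effect.
:erlang.spawn_opt(
  fn -> send(parent, Process.info(self(), :priority)) end,
  [{:priority, :high}]
)

# The spawned process reports its own scheduler priority back to us.
level =
  receive do
    {:priority, level} -> level
  after
    1_000 -> :timeout
  end
```

      High priority only changes which runnable process a scheduler picks next; as noted above, it does not preempt a normal-priority process mid-slice.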

      • heeton17h

        I spoke with Peer (the creator of Grisp) about this at Elixirconf earlier in the year, and I'm not an expert here so I hope I don't misrepresent his comments:

        Grisp puts enough controls on the runtime that soft-realtime becomes hard-realtime for all intents and purposes, outside of problems that also cause errors in hard-realtime systems.

        (Also, thanks Peer for being tremendously patient with a new embedded developer! That kind of friendly open chat is a huge draw to the Elixir community)

        • cyberpunk16h

          I did a similar workshop with him some years ago; very nice and patient guy. I can recommend attending if anyone is curious how microelectronics actually work :}

  • whalesalad20h

    Sounds like nerves to me? But with soft realtime added in?

    • thenewwazoo19h

      Nerves is Erlang-as-init on Linux. GRISP is Erlang with RTEMS on metal.

    • toast019h

      My tl;dr: GRiSP is BEAM on an RTOS; Nerves is BEAM on a minimal Linux; but they also have grisp allow and grisp forge, which are BEAM on Linux. Any of these gives you soft realtime.

  • Zaphoos22h

    What about Gleam?

  • juped22h

    I'm interested in the claimed real-time capabilities, but it's hard to find anything about them written there. Still, I like the hardware integration.

    • garbthetill20h

      Yeah, the claim is ambiguous because the BEAM itself only guarantees soft real-time; leaving it open-ended might make people think hard real-time, especially since it's hardware.

      • elcritch16h

        They support writing RTOS tasks in C as I understand it.

    • Joel_Mckay13h

      Real-time is some of the most misused jargon in modern history.

      In general, most JITs or VMs can't even claim guaranteed latency. People who mix these concepts betray their ignorance while seeming intelligent.

      FreeRTOS is small and feasible.

      VxWorks if your budget is unconstrained.

      LinuxRT kernel (used in LinuxCNC) with external context clocking, and/or an FPGA memory DMA overlap module (Zynq SoC, etc.)

      Real-time is a specialized underpaid area, and most people have too abstract of an understanding of hardware to tackle metastability problems. =3

  • thelastinuit16h

    Would it be possible to take my 90s computers and run Erlang/Elixir for a crypto node... or some version of it?

    • dlcarrier14h

      The page says their implementation requires 16 MB of RAM, so a late 90's computer could run it, but even mid-90's computers, like early Pentium models, often came with less RAM than that. If it shipped with Windows 98 or later, it should have 16 MB.

    • asa40016h

      Yes - Erlang/Elixir wouldn't be the limiting factor here. 90s hardware is plenty for them. They were designed for far less.