Removing nommu feels wrong to me. Being able to run Linux on hardware simple enough that anybody sufficiently motivated could write an emulator for it helps us, as individuals, remain in control. The more complex things are, the less freedom we have.
It's not a well-argued thought, just a nagging feeling.
Maybe we need a simple POSIX OS that would run on simple, open, dedicated hardware that can be comprehended by a small group of human beings. A system that would allow communication, simple media processing, and productivity.
These days it feels like we are at a tipping point for open computing. It feels like being a frog in hot water.
I don't think software emulation is very important.
Let's look at the lowest-end chip in the discussion. Almost certainly the SAM9x60: a $5 ARMv5 chip with an MMU, supporting DDR2/LPDDR/DDR3/LPDDR3/PSRAM, i.e. a variety of embedded RAM, 'old desktop RAM', and mobile RAM.
Yes, it's 32-bit, but it runs at 600MHz and supports gigabits of RAM. And you can seriously mass-produce a computer under $10 with this chip (so long as you can handle 4-layer PCBs that break out the 0.75mm-pitch BGA). As in, the reference design with DDR2 RAM is a 4-layer design.
There are a few Rockchips and such in (rather large) TQFP packages that are arguably easier. But since DDR RAM is BGA, I think it's safe to assume BGA-level PCB layout as the baseline for simplicity.
---------
Everything smaller than this category of 32-bit / ARMv5 chips (be it Microchip SAM9x60, or competing Rockchips or AllWinner) is a microcontroller wholly unsuitable for running Linux as we know it.
If you cannot reach 64MB of RAM, Linux is simply unusable, even for embedded purposes. You really should be using something like FreeRTOS at that point.
---------
Linux drawing the line at 64MB hardware built within the last 20 years is.... reasonable? Maybe too reasonable. I mean I love the fact that the SAM9x60 is still usable for modern and new designs but somewhere you have to draw the line.
ARMv5 is too old to even compile something like Node.js. I'm serious when I say this stuff is old. It's an environment already alien to typical Linux users.
Out by a factor of five or more.
A $1 Linux capable ARM: https://www.eevblog.com/forum/microcontrollers/the-$1-linux-...
I'd expect there are even cheaper processors now, since that was eight years ago.
We need accessible open hardware. Not shoehorning proprietary hardware to make it work with generic standards they never actually followed.
Open source is one thing, but open hardware - that's what we really need. And not just a Framework laptop or a System76 machine. I mean a standard 64-bit open source motherboard, peripherals, etc. that aren't locked down with binary blobs.
> I mean a standard 64-bit open source motherboard, peripherals, etc that aren’t locked down with binary blobs.
The problem here is scale. Having fully-open hardware is neat, but then you end up with something like that Blackbird PowerPC thing which costs thousands of dollars to have the performance of a PC that costs hundreds of dollars. Which means that only purists buy it, which prevents economies of scale and prices out anyone who isn't rich.
Whereas what you actually need is for people to be able to run open code on obtainium hardware. This is why Linux won and proprietary Unix lost in servers.
That might be achievable at the low end with purpose-built open hardware, because then the hardware is simple and cheap and can reach scale because it's a good buy even for people who don't care if it's open or not.
But for the mid-range and high end, what we probably need is a project to pick whichever chip is the most popular and spend the resources to reverse engineer it so we can run open code on the hardware which is already in everybody's hands. Which makes it easier to do it again, because the second time it's not reverse engineering every component of the device, it's noticing that v4 is just v3 with a minor update or the third most popular device shares 80% of its hardware with the most popular device so adding it is only 20% as much work as the first one. Which is how Linux did it on servers and desktops.
> pick whichever chip is the most popular and spend the resources to reverse engineer it
is this even doable?
Not even hypothetical. See the ATmega328P: it has no business being an actively supplied chip if all we cared about were the technological supremacy of architectures and/or chip construction. Or the countless 'e8051' chips based on the Intel 8051 microcontroller, which are the https://xkcd.com/2347/ of USB.
Doable yes. Economical no.
Bunnie Huang has been doing a lot of work on this:
Open hardware you can buy now: https://www.crowdsupply.com/sutajio-kosagi/precursor
The open OS that runs on it: https://betrusted.io/xous-book/
A secret/credential manager built on top of the open hardware and open software: https://betrusted.io
His blog section about it: https://www.bunniestudios.com/blog/category/betrusted/precur...
"The principle of evidence-based trust was at work in our decision to implement Precursor’s brain as an SoC on an FPGA, which means you can compile your CPU from design source and verify for yourself that Precursor contains no hidden instructions or other backdoors. Accomplishing the equivalent level of inspection on a piece of hardwired silicon would be…a rather expensive proposition. Precursor’s mainboard was designed for easy inspection as well, and even its LCD and keyboard were chosen specifically because they facilitate verification of proper construction with minimal equipment."
Lots of SoCs are "open" in the sense that complete documentation, including programming manuals, is available. With a couple of man-centuries of developer time each, you could port Linux to those SoCs, but that doesn't count as being "open". On the other hand, there is a lot of straight-up proprietary hardware that is considered "open", like the Raspberry Pi.
Which means, "open" has nothing to do with openness. What you want is standardization and commoditization.
There is practically no x86 hardware that requires model-specific custom images to boot. There is practically no non-x86 hardware that doesn't require model-specific custom images to boot. ARM made a perceptible amount of effort in that segment with the Arm SystemReady Compliance Program, which absolutely nobody in any serious business cares about, and which would only concern ARM machines even if it worked.
IMO, one of the problems with efforts coming in from the software side is the over-bloated nature of desktop software stacks, plus the bad experiences many have had with UEFI. People aren't going to upgrade RAM to adopt over-bloated software that is bigger than the application itself just because that is the new standard.
Until we have affordable photolithography machines (which would be cool!), hardware is never really going to be open.
> affordable photolithography machines
We'll likely never have "affordable" photolithography, but electron beam lithography will become obtainable in my lifetime (and already is, DIY, to some degree.)
Depends on what one means by affordable, but DIY versions have been successfully attempted:
https://www.youtube.com/watch?v=IS5ycm7VfXg
Making transistors at home, or even small-scale integrated circuits, is not exceedingly difficult.
However, making a useful microcontroller or FPGA at home would require not only an electron-beam lithography machine, but also an ion-implantation machine, a diffusion furnace, a plasma-etch machine, a sputtering machine, and a lot of other chemical equipment and measurement instruments.
All the equipment would have to be enclosed in a sealed room, with completely automated operation.
A miniature mask-less single-wafer processing fab could be made at a cost several orders of magnitude less than a real semiconductor fab, but the cost would still be many millions of dollars.
With such a miniature fab, one might need a few weeks to produce a batch of ICs worth maybe $1000, so the cost of the equipment would never be recovered, which is why nobody does such a thing for commercial purposes.
In order to have distributed semiconductor fabs serving small communities around them, instead of having only a couple of fabs for the entire planet, one would need a revolution in the fabrication of the semiconductor manufacturing equipment itself, like SpaceX has done for rockets.
Affordable small-scale but state-of-the-art fabs would be possible only if the semiconductor manufacturing equipment were itself the product of completely automated mass production, which would reduce its cost by 2 or 3 orders of magnitude.
But such an evolution is contrary to everything the big companies have done during the last 30 years: smaller competitors have been eliminated, production has become concentrated in quasi-monopolies, and for non-consumer products the companies now offer ever more expensive models each year, affordable only to other big companies and not to individuals or small businesses.
> However, making at home a useful microcontroller or FPGA would require not only an electron-beam lithography machine, but also a ion-implantation machine, a diffusion furnace, a plasma-etch machine, a sputtering machine and a lot of other chemical equipment and measurement instruments.
University nanofabs have all of these things today. https://cores.research.asu.edu/nanofab/
> but the cost would still be of many millions of $.
A single set of this equipment costs only single-digit millions today, commercially.
Using something like this for prototyping/characterization or small-scale analog tasks is where the real win is.
That ASU NanoFab has indeed almost everything that is needed.
It is weird that they do not have any ion implantation machine, because there are devices that are impossible to make without it. Even for simple MOS transistors, I am not aware of any other method for controlling the threshold voltage with enough precision. Perhaps whenever they need ion implantation they send the wafers to an external fab, with which they have a contract, to be done there.
Still, I find it hard to believe that all the equipment they have costs less than $10 million, unless it was bought second-hand. There is indeed a market for slightly obsolete semiconductor manufacturing equipment that has been replaced in first-tier fabs and is now available at significant discounts for those who are content with it.
> one would need a revolution in the fabrication of the semiconductor manufacturing equipment itself, like SpaceX has done for rockets.
some revolution. still not even on the moon yet
https://en.wikipedia.org/wiki/Moore%27s_second_law: “Rock's law or Moore's second law, named for Arthur Rock or Gordon Moore, says that the cost of a semiconductor chip fabrication plant doubles every four years”
Wafer machines from the 1970s could be fairly cheap today, if there were sufficient demand for chips from the 1970s (~1MHz, no power states, 16 bit if you’re lucky, etc), but that trend would have to stop and reverse significantly for affordable wafer factories for modern hardware to be a thing.
The next 3D-print revolution: photolithographing your own chip wafers at home. Now that would be something!
I doubt anyone here has a clean enough room.
Jeri Ellsworth has that covered.
https://www.youtube.com/watch?v=PdcKwOo7dmM
Insane… I thought I was smart, she proves me wrong.
Peter Norvig, Fabrice Bellard, etc. The list of ultra smart people is quite long. A friend of mine thought he was pretty smart (and I would have happily agreed with him). Then he went to work for Google (early days). It didn't take long for him to realize that the only reason he seemed very smart was that he simply wasn't seeing a large enough slice of humanity.
> I doubt anyone here has a clean enough room.
Jordan Peterson has entered the building...
"Clean your rooms, men!" Starts sobbing
Maybe if he cleaned his own room, he’d find his copy of the Communist Manifesto in time to read it for a scheduled debate.
https://www.youtube.com/watch?v=qsHJ3LvUWTs
>> Until we have affordable photolithography....
If that comes to pass, we will want software that runs on earlier nodes and 32-bit hardware.
Why not run on an FPGA?
That's being tried: https://www.crowdsupply.com/sutajio-kosagi/precursor
"The principle of evidence-based trust was at work in our decision to implement Precursor’s brain as an SoC on an FPGA, which means you can compile your CPU from design source and verify for yourself that Precursor contains no hidden instructions or other backdoors. Accomplishing the equivalent level of inspection on a piece of hardwired silicon would be…a rather expensive proposition. Precursor’s mainboard was designed for easy inspection as well, and even its LCD and keyboard were chosen specifically because they facilitate verification of proper construction with minimal equipment."
See also: https://betrusted.io
This is somewhere in the 10x-100x more expensive and consumes much more power, for lower effective clock speeds. It's not a production solution.
In addition to what the other comments have already highlighted, there's also the fact that you'd be back to using extremely opaque, even less "open source" hardware than regular CPUs/MCUs. Almost every FPGA that could even conceivably be used to run general-purpose software is locked behind super proprietary stacks.
We kinda have this with IBM POWER 9. Though that chip launched 8 years ago now, so I'm hoping IBM's next chip can also avoid any proprietary blobs.
Indeed with the OpenPOWER foundation.
Let’s hope some of that trickles down to consumer hardware.
Unlikely: POWER10 required blobs, and there's no sign that'll change for Power 11.
> Open source is one thing, but open hardware - that’s what we really need
This needs money. It is always going to have to pay the costs of being niche, lower performance, and cloneable, so someone has to persuade people to pay for that. Hardware is just fundamentally different. And that's before you get into IP licensing corner cases.
I would love to work with hardware; if you can foot my bill, I'd be happy to do that. Open source software is one thing, but open source hardware needs considerable investment that you can't ignore from the start.
Also, this is what happened to Prusa: everyone just takes the design and outsources the manufacturing to somewhere in China. Which is fine, but if everybody does that, there are no funds to develop the next iteration of the product (someone has to foot the bill).
And sadly there is not enough of that; we live in reality, after all.
Those operating systems already exist. You can run NetBSD on pretty much anything (it currently supports machines with a Motorola 68k CPU, for example). Granted, many of those machines still have an MMU IIRC, but everything is still simple enough to be comprehended by a single person with some knowledge of systems programming.
NetBSD doesn't support any devices without an mmu.
I think people here are misunderstanding just how "weird" and hacky trying to run an OS like linux on those devices really is.
Yeah, a lot of what defines "operating system" for us nowadays is downstream of having memory isolation.
Not having an MMU puts you more into the territory of DOS than UNIX. There is FreeDOS but I'm pretty sure it's x86-only.
Mmm... I would beg to differ. I have ported stuff to NOMMU Linux and almost everything worked just as on a "real" Linux. Threads, processes (except only vfork, no fork), networking, priorities, you name it. DOS gives you almost nothing. It has files.
The one thing different to a regular Linux was that a crash of a program was not "drop into debugger" but "device reboots or halts". That part I don't miss at all.
This was interesting. It reminded me how fork() is so weird and I found some explanation for its weirdness that loops back to this conversation about nommu:
"Originally, fork() didn't do copy on write. Since this made fork() expensive, and fork() was often used to spawn new processes (so often was immediately followed by exec()), an optimized version of fork() appeared: vfork() which shared the memory between parent and child. In those implementations of vfork() the parent would be suspended until the child exec()'ed or _exit()'ed, thus relinquishing the parent's memory. Later, fork() was optimized to do copy on write, making copies of memory pages only when they started differing between parent and child. vfork() later saw renewed interest in ports to !MMU systems (e.g: if you have an ADSL router, it probably runs Linux on a !MMU MIPS CPU), which couldn't do the COW optimization, and moreover could not support fork()'ed processes efficiently.
Other source of inefficiencies in fork() is that it initially duplicates the address space (and page tables) of the parent, which may make running short programs from huge programs relatively slow, or may make the OS deny a fork() thinking there may not be enough memory for it (to workaround this one, you could increase your swap space, or change your OS's memory overcommit settings). As an anecdote, Java 7 uses vfork()/posix_spawn() to avoid these problems.
On the other hand, fork() makes creating several instances of a same process very efficient: e.g: a web server may have several identical processes serving different clients. Other platforms favour threads, because the cost of spawning a different process is much bigger than the cost of duplicating the current process, which can be just a little bigger than that of spawning a new thread. Which is unfortunate, since shared-everything threads are a magnet for errors."
https://stackoverflow.com/questions/8292217/why-fork-works-t...
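For the curious, here's a minimal sketch of spawning a child without fork(), the approach the quoted answer attributes to Java 7. In common libcs posix_spawn() is built on vfork()/clone() underneath, so the same pattern works on no-MMU Linux. The spawned command is just an example:

    #include <spawn.h>
    #include <stdio.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void)
    {
        pid_t pid;
        char *argv[] = { "echo", "hello from child", NULL };

        /* No address-space copy happens here, unlike classic fork(). */
        if (posix_spawnp(&pid, "echo", NULL, NULL, argv, environ) != 0) {
            perror("posix_spawnp");
            return 1;
        }
        waitpid(pid, NULL, 0);  /* reap the child */
        return 0;
    }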
That's fair. If so, then you can still have things like drivers and a HAL and so on. However, there are no hard security barriers.
How do multiple processes actually work, though? Is every executable position-independent? Does the kernel provide the base address(es) in register(s) as part of vfork? Do process heaps have to be constrained so they don't get interleaved?
There are many options. Executables can be position-independent, or relocated at run-time, or the device can have an MPU or equivalent registers (for example 8086/80286 segment registers), which is related to an MMU but much simpler.
Executables in a no-MMU environment can also share the same code/read-only segments between many processes, the same way shared libraries can, to save memory and, if run-time relocation is used, to reduce that work.
The original design of UNIX ran on machines without an MMU, and they had fork(). Andrew Tanenbaum's classic book which comes with Minix for teaching OS design explains how to fork() without an MMU, as Minix runs on machines without one.
For spawning processes, vfork()+execve() and posix_spawn() are much faster than fork()+execve() from a large process in no-MMU environments though, and almost everything runs fine with vfork() instead of fork(), or threads. So no-MMU Linux provides only vfork(), clone() and pthread_create(), not fork().
Thanks! I was able to find some additional info on no-MMU Linux [1], [2], [3]. It seems position-independent executables are the norm on regular (MMU) Linux now anyway (and probably have been for a long time). I took a look under the covers of uClibc and it seems like malloc just delegates most of its work to mmap, at least for the malloc-simple implementation [4] (a toy version of that idea is sketched below). That implies to me that different processes' heaps can be interleaved (without overlapping), but the kernel manages the allocations.
[1]: https://maskray.me/blog/2024-02-20-mmu-less-systems-and-fdpi...
[2]: https://popovicu.com/posts/789-kb-linux-without-mmu-riscv/
[3]: https://www.kernel.org/doc/Documentation/nommu-mmap.txt
[4]: https://github.com/kraj/uClibc/blob/ca1c74d67dd115d059a87515...
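A toy version of the malloc-simple idea (not uClibc's actual code, just the concept): every allocation is its own anonymous mmap(), so the kernel tracks where blocks live, and heaps from different processes can interleave without a userspace heap manager:

    #include <stddef.h>
    #include <sys/mman.h>

    void *simple_malloc(size_t size)
    {
        size_t total = size + sizeof(size_t);   /* room to remember the length */
        void *p = mmap(NULL, total, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return NULL;
        *(size_t *)p = total;                   /* stash the length up front */
        return (char *)p + sizeof(size_t);
    }

    void simple_free(void *ptr)
    {
        if (!ptr)
            return;
        char *base = (char *)ptr - sizeof(size_t);
        munmap(base, *(size_t *)base);          /* hand the whole block back */
    }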
Under uClinux, executables can be position independent or not. They can run from flash or RAM. They can be compressed (if they run in RAM). Shared libraries are supported on some platforms. All in all it's a really good environment and the vfork() limitation generally isn't too bad.
I spent close to ten years working closely with uClinux (a long time ago). I implemented the shared library support for the m68k. Last I looked, gcc still included my additions for this. This allowed execute-in-place for both executables and shared libraries -- a real space saver. Another guy on the team managed to squeeze the Linux kernel, a reasonable userspace and a full IPsec implementation into a unit with 1MB of flash and 4MB of RAM, which was pretty amazing at the time (we didn't think it was even possible). Better still, from power-on to login prompt was well under two seconds.
The original UNIX literally swapped processes: as in, write all of a process's memory to disk and read another program's state from disk into memory. It could only run as many processes as the ratio of swap size to core size, which would be a wholly unacceptable design nowadays.
It also supported overlays on the PDP-11, although not in the beginning. I do not think that anybody makes use of the overlays anymore.
> The original design of UNIX ran on machines without an MMU, and they had fork().
The original UNIX also did not have the virtual memory as we know it today – page cache, dynamic I/O buffering, memory mapped files (mmap(2)), shared memory etc.
They all require a functioning MMU, without which the functionality would be severely restricted (but not entirely impossible).
Those features don't require an MMU.
The no-MMU version of Linux has all of those features except that memory-mapped files (mmap) are limited. These features are the same as in MMU Linux: page cache, dynamic I/O buffering, shared memory. No-MMU Linux also supports other modern memory-related features, like tmpfs and futexes. I think it even supports io_uring.
mmap is supported in no-MMU Linux with limitations documented here: https://docs.kernel.org/admin-guide/mm/nommu-mmap.html For example, files in ROM can be mapped read-only.
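To make that concrete, here's the kind of mapping those docs describe as well supported on no-MMU: a private, read-only file mapping (the kernel may satisfy it by copying into an anonymous buffer rather than truly sharing pages). The path is just an example:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/etc/hostname", O_RDONLY);  /* example file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);

        /* PROT_READ + MAP_PRIVATE is the well-supported case on no-MMU;
         * writable shared file mappings generally aren't. */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        fwrite(p, 1, st.st_size, stdout);
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }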
That is not how a VMM subsystem works, irrespective of the operating system, be it Linux, or Windows, or a BSD, or z/OS. The list goes on.
Access to a page that is not resident in memory results in a trap (an interrupt), which is handled by the MMU – the CPU has no ability to do it by itself. Which is the whole purpose of the MMU and was a major innovation of BSD 4 (a complete VMM overhaul).
You're right about the VMM.
But three out of those four features: page cache, dynamic I/O buffering and shared memory between processes, do not require that kind of VMM subsystem, and memory-mapped files don't require it for some kinds of files.
I've worked on the Linux kernel and at one time understood its mm intimately (I'm mentioned in kernel/futex/core.c).
I've also worked on uClinux (no-MMU) systems, where the Linux mm behaves differently to produce similar behaviours.
I found most userspace C code and well-known CLI software on Linux, and nearly all drivers, networking features, storage, high-performance I/O, graphics, futex, etc. run just as well on uClinux without source changes, as long as there's enough memory (with some more required, because uClinux suffers from a lot more memory fragmentation due to needing physically contiguous allocations).
This makes no-MMU Linux a lot more useful and versatile than alternative OSes like Zephyr for similar devices, but the limitations and unpredictable memory fragmentation issues make it a lot less useful than Linux with an MMU, even if you have exactly the same RAM and no security or bug concerns.
I'd always recommend an MMU now, even if it's technically possible for most code to run without one.
In an embedded scenario where the complete set of processes that are going to be running at the same time is known in advance, I would imagine that you could even just build the binaries with the correct base address in advance.
A common trick to decrease code size in RAM is to link everything into a single program, then have the program check its argv[0] to know which program to call (sketched below).
With the right filesystem (certain kinds of read-only), the code (text segment) can even be mapped directly, and no loading into RAM need occur at all.
These approaches save memory even on regular MMU platforms.
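A minimal sketch of the argv[0] trick (the BusyBox approach; the applet names here are made up):

    #include <libgen.h>
    #include <stdio.h>
    #include <string.h>

    static int applet_hello(void) { puts("hello"); return 0; }
    static int applet_bye(void)   { puts("bye");   return 0; }

    int main(int argc, char **argv)
    {
        (void)argc;
        const char *name = basename(argv[0]);  /* the name we were invoked as */

        /* Install hardlinks/symlinks named "hello" and "bye" pointing at
         * this one binary; dispatch on how we were called. */
        if (strcmp(name, "hello") == 0) return applet_hello();
        if (strcmp(name, "bye") == 0)   return applet_bye();

        fprintf(stderr, "unknown applet: %s\n", name);
        return 1;
    }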
To clarify: NetBSD has never supported non-MMU systems, or at least hasn't for decades. As opposed to something they removed recently(-ish).
FWIW, Linux is not the only OS looking into dropping 32bit.
FreeBSD is dumping 32 bit:
https://www.osnews.com/story/138578/freebsd-15-16-to-end-sup...
OpenBSD has this quote:
>...most i386 hardware, only easy and critical security fixes are backported to i386
I tend to think that means the days of 32-bit, at least on x86, are numbered.
https://www.openbsd.org/i386.html
I think DragonFly BSD never supported 32-bit.
For 32bit, I guess NetBSD may eventually be the only game in town.
> Maybe we need a simple posix os that would run on a simple open dedicated hardware that can be comprehended by a small group of human beings.
Simple and POSIX would be a BSD like NetBSD or OpenBSD.
This is why I gravitated to Plan 9. Overall it's a better design for a networked world and can be understood by a single developer. People can and have maintained their own forks. It's very simple and small, and cross-platform support was baked in from day one. 9P makes everything into an I/O socket organized as a tree of named objects. Thankfully it's not POSIX, which IMO is not worth dragging along for decades; you can port Unix things with libraries. It also abandons the typewriter terminal and instead uses graphics. A fork, 9front, is not abandoning 32-bit any time soon AFAIK. I netboot an older industrial computer that is a 400MHz Geode (32-bit x86) with 128MB RAM, and it runs 9front just fine.
It's not perfect and lacks features, but that stands to reason for any niche OS without a large community. Figure out what is missing for you and work on fixing it - patches welcome.
nommu is a neat concept, but basically nobody uses it, and I don't see that as likely to change. There's no real use case for using it in production environments. RTOSes are much better suited for use on nommu hardware, and parts that can run "real" Linux are getting cheaper all the time.
If you want a hardware architecture you can easily comprehend - and even build your own implementation of! - that's something which RISC-V handles much better than ARM ever did, nommu or otherwise.
There are plenty of use cases for Linux on microcontrollers that will be impossible if nommu is removed. The only reason we don't see more Linux on MCUs is the lack of RAM. The RP2350 is very close! Running Linux makes it much easier to develop than a plain RTOS.
Linux, or any other full OS is simply a waste of that hardware. It makes no sense at all.
It's a 5 gallon pail of compute which is all used up in OS overhead so you can do a cup of work.
If the job were so small that it fits in the remainder, then you could and should have just used 1-cent hardware instead of 1-dollar hardware.
Many Adafruit boards come with MicroPython, which could also be seen as a waste of resources. Yet for low-volume semi-pro applications, the ease of development warrants the overhead. Linux has solid network, WiFi, and Bluetooth stacks and a rich box of tools that might be very nice to tap into without requiring something as big as an RPi.
> Many Adafruit boards come with micropython which could also be seen as a waste of resources. Yet for low volume semi pro applications, the ease of development warrants the overhead.
As a reality check: MicroPython can run in 16 KB of RAM; a typical development board has 192 KB. µCLinux requires at least 4 - 8 MB of RAM just to boot up, and recommends 32 MB for a "serious product" [1].
> Linux has solid network, WiFi, Bluetooth stacks and a rich box of tools that might be very nice to tap into without requiring something as big as an RPi.
I would absolutely not count on any of those tools being available and functional in a µCLinux environment.
[1]: https://www.emcraft.com/imxrt1050-evk-board/what-is-minimal-...
> µCLinux [...] recommends 32 MB for a "serious product"
My point exactly. There's currently a hole between ten-cent MCUs requiring an RTOS and the $5+ RPi that can run Linux. Taking nommu out of the kernel would make any 64MB "super-MCU" a non-starter.
I used Ethernet, USB and some serial thingy (I2C? can't remember) without issue.
> Running Linux makes it much easier to develop than a plain RTOS.
I'm not convinced that's true. All of the microcontroller tooling I've seen has been built around RTOS development; I've never seen anything comparable for µCLinux.
Setting up and maintaining a custom Linux build for any hardware is pretty complicated. There's just so much complexity hidden under config options. The landscape of Linux for embedded computers is a huge mess of unmaintained forks and hacky patch files.
That's all worth it to have an application processor that can run your Python/Java app. It's probably worth it to have a consistent operating system across multiple devices.
Would you have many of those benefits if you were using Linux on a micro though? I can't imagine much 3rd party software would work reliably given the tiny amount of RAM. You'd basically just be using it as a task scheduler and wrapper over drivers. You could get most of the benefits by using an RTOS with a memory allocator.
There's no evidence that significant numbers of people are actually doing that, though.
> Running Linux makes it much easier to develop than a plain RTOS.
What's missing? What would it take to make a plain RTOS that's as easy to develop on/for/with as Linux?
A "plain RTOS" is the better idea most of the time.
That may change. There are some very powerful MCUs appearing, with astonishing features, including hardware virtualization (hypervisors on an MCU), multicore superscalar designs, heterogeneous CPU cores with high-performance context switching, "AI" co-processors with high-throughput buses, and other exotic features.
At some point, it might start making sense to level up the OS to (nommu) Linux on these devices. When the applications get complex enough, people find themselves wanting a full blown network stack, full featured storage/file systems that are aligned with non-embedded systems, regular shells, POSIX userland, etc.
All of the architectures I have in mind are 32 bit and "nommu"[1]: Cortex-R52/F, Infineon TriCore, Renesas RH850, NXP Power e200. Then you have RISC-V MCU Cambrian Explosion underway.
I qualify all this with mays and mights: it hasn't happened yet. I'm just careful not to discount the possibility of a <50mA RP Pico 3 booting uLinux, running Python and serving web pages being a big hit.
[1] They all have various "partition" schemes to isolate banks of RAM for security, reliability, etc., but real MMUs are not offered.
I've spent quite a lot of the last year setting up an embedded Linux device for mass deployment - so I've seen a lot of its downsides first hand.
When you have a fleet of embedded devices you want pre-compiled disk images, repeatable builds, read only filesystems, immutable state (apart from small and well controlled partitions), easy atomic updates, relatively small updates (often devices are at the other end of a very slow cell connection) and a very clear picture of what is running on every device in your fleet.
Linux can do all that, but it's not the paradigm that most distros take. Pretty much the entire Linux world is built around mutable systems which update in place. So you're left to manage it yourself. If you want to do it yourself, you end up in the hacky fragile world of Yocto.
Compared to that, using an RTOS or application framework like Zephyr is fairly easy - at the expense of app development time, you just need to worry about getting a fairly small compiled binary onto your device.
I do agree that there's some really powerful parts available which would benefit from the shared drivers and consistent syscalls a standardised operating system offers. But building and maintaining a Linux system for any part isn't a simple undertaking either - and so the complexity of that needs to be considered in total development time.
Generally, MCU people want predictable behavior and small TCB.
Linux is too unorthogonal for them.
There are some other open OSs, like Zephyr, NuttX and Contiki - so maybe they're the right thing to use for the nommu case rather than Linux?
Zephyr is not an OS in the conventional sense, it's more a library you link to so the application can "go".
Zephyr is an "OS" in pretty much every conventional sense. What you're saying, I think, is that a default build of Zephyr on a no-MPU device is a single shared .text segment where all the threads and other OS-managed objects are app-created by C code. So... sure, it's a library and an OS. And even when running on a system with an MPU/MMU, Zephyr tries hard to present that same API (i.e. function calls into the kernel become syscalls automatically, etc...), so the illusion is a fairly strong one.
And given the target market (even the biggest Zephyr targets have addressable RAM in the megabytes, not gigabytes), there's no self-hosting notion. You build on a big system and flash to your target, always. So there's no single filesystem image (though it supports a somewhat limited filesystem layer for the external storage these devices tend to see) containing "programs" to run (even though there's an ELF-like runtime linker to use if you want it).
If you want it to look like Linux: no, that's not what it's for. If you want to target a Cortex-M or ESP32 or whatever on which Linux won't run, it gives you the tools you expect to see.
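To illustrate the "library and an OS" point, this is roughly what a Zephyr app looks like (a hedged sketch; the names and numbers are illustrative): the kernel objects are created by macros inside your own linked image, not by a loader.

    #include <zephyr/kernel.h>

    #define STACK_SIZE 1024
    #define PRIORITY   5

    static void blink_thread(void *a, void *b, void *c)
    {
        ARG_UNUSED(a); ARG_UNUSED(b); ARG_UNUSED(c);
        while (1) {
            printk("tick\n");
            k_msleep(1000);  /* kernel-managed sleep */
        }
    }

    /* The thread object lives in the same .text/.data as the app:
     * no separate kernel image, no process spawn. */
    K_THREAD_DEFINE(blink, STACK_SIZE, blink_thread, NULL, NULL, NULL,
                    PRIORITY, 0, 0);

    int main(void)
    {
        printk("app and kernel are one linked image\n");
        return 0;
    }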
xv6 already runs on RISC-V.
Wow, I am somewhat ashamed to admit that I had never heard of "xv6" until your comment! I found the MIT homepage here: https://pdos.csail.mit.edu/6.1810/2024/xv6.html
Two things stand out to me: (1) It was written in 2006, but RISC-V was not released until 2010 (so says Google). I guess it was ported from x86? (2) Russ Cox is one of the contacts listed on the MIT homepage. That guy's digital footprints are enormous.
Yes, it was ported from x86. And no, xv6 is not really an OS you want to use in production. There are a lot of design decisions that optimize for pedagogy instead of performance or even reliability (e.g. the kernel is more than happy to panic if/when you get off the "intended path").
Why do you need a full-blown Linux for that? Many of the provided features are overkill for such embedded systems. Both NuttX and Zephyr provide POSIX(-like) APIs, and NuttX has an API quite similar to the Linux kernel's, so it should be somewhat easier to port missing stuff (I have not tried to do that; the project I was working on got cancelled).
If you want a POSIX OS, nommu Linux already isn't it: it doesn't have fork().
Just reading about this...turns out nommu Linux can use vfork(), which unlike fork() shares the parent's address space. Another drawback is that vfork's parent process gets suspended until the child exits or calls execve().
Typically you always call vfork() + execve(); vfork is pretty useless on its own.
Think about it like CreateProcess() on Windows. Windows is another operating system which doesn't support fork(). (Cygwin did unholy things to make it work anyway, IIRC.)
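The canonical vfork() + execve() pattern looks like this (a sketch; /bin/ls is just an example). The child shares the parent's address space and the parent is suspended until the child execs or exits, so the child must do almost nothing before execve():

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Build argv before vfork(): the child shares our memory and
         * should touch as little as possible before execve(). */
        char *argv[] = { "/bin/ls", "/", NULL };
        char *envp[] = { NULL };

        pid_t pid = vfork();
        if (pid == 0) {
            execve("/bin/ls", argv, envp);  /* child: exec or bail out */
            _exit(127);                     /* must _exit(), never return */
        }
        /* Parent resumes only after the child has exec'd or exited. */
        waitpid(pid, NULL, 0);
        return 0;
    }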
You're not alone, I feel the same way. I think if Linux really does need to remove nommu, the future would be a fork. I'm not sure there's the community for that, though.
There are plenty of FOSS POSIX-likes for such systems.
Most likely I won't be around this realm when that takes shape, but I predict the GNU/Linux explosion replacing UNIX was only a phase in computing history; eventually, when everyone responsible for its success fades away, other agendas will take over.
It is no accident that the alternatives I mention are all based on copyleft licenses.
This is a foreseeable cataclysm for me, as I retire next year. The core of our queueing system is 64-bit clean (K&R), as it compiled on Alpha, but our client software is very much not.
This is a young man's game, and I am very much not.
I don't think it makes sense to run Linux on most nommu hardware anymore. It'd make more sense to have a tiny unikernel for running a single application, because on nommu, you don't typically have any application isolation.
> on nommu, you don't have any application isolation
That isn't necessarily the case. You can have memory protection without an MMU - for instance, most ARM Cortex-M parts have an MPU which can be used to restrict a thread's access to memory ranges or to hardware. What it doesn't get you is memory remapping, which is necessary for features like virtual memory.
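For a flavor of what MPU-based protection looks like, here's a hedged sketch against the ARMv7-M architectural MPU registers; the base address, region size, and the choice to make SRAM read-only are all illustrative. Note no remapping happens: the MPU only checks accesses.

    #include <stdint.h>

    /* ARMv7-M MPU registers (architecturally fixed addresses) */
    #define MPU_CTRL  (*(volatile uint32_t *)0xE000ED94)
    #define MPU_RNR   (*(volatile uint32_t *)0xE000ED98)
    #define MPU_RBAR  (*(volatile uint32_t *)0xE000ED9C)
    #define MPU_RASR  (*(volatile uint32_t *)0xE000EDA0)

    void protect_sram_region(void)
    {
        MPU_RNR  = 0;                  /* select region 0 */
        MPU_RBAR = 0x20000000;         /* base: start of SRAM (example) */
        MPU_RASR = (0x6u << 24)        /* AP=0b110: read-only for everyone */
                 | (14u  << 1)         /* SIZE: 2^(14+1) = 32 KB */
                 | 1u;                 /* region enable */
        MPU_CTRL = (1u << 2) | 1u;     /* PRIVDEFENA | ENABLE */
        __asm volatile ("dsb\n\tisb"); /* ensure the new map takes effect */
    }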
Yeah, nommu absolutely doesn't imply zero memory isolation. I have a kernel port to an architecture with a nonstandard way of doing memory isolation and the existing nommu infrastructure is the only reason it can exist.
Virtual memory as in swap is one, but IMO the bigger one is memory-mapped files.
ELKS can still run on systems without an MMU (though not on microcontrollers, AFAIK).
ELKS runs on 16-bit x86, including the 8086.
Note ELKS is not Linux.
There's also Fuzix.
Supporting 32-bit is not 'simple', and the difference between 32-bit hardware and 64-bit hardware is not big.
The industry has a lot of experience doing so.
In parallel, the old hardware is still supported, just not by the newest Linux kernel. Which should be fine, because either you are not changing anything on that system anyway, or you have your whole tool stack available to just patch it yourself.
But the benefit would be an easier and smaller Linux kernel, which would probably benefit a lot more people.
Also, if our society is no longer able to produce chips commercially and we lose all the experience people have, we probably have much bigger issues as a whole society.
That said, I don't want to deny that having the simplest possible way of making a small microcontroller yourself (it doesn't have to be fast or super easy, just doable) would be very cool, and it could already solve a lot of issues if we ever needed to restart society from Wikipedia.
The comment you're responding to isn't talking about 32 vs 64 bit, but MMU vs no MMU.
Removing nommu makes the kernel simpler and easier to understand.
Nothing prevents you from maintaining nommu as a fork. The reality of things is, despite your feelings, people have to work on the kernel, daily, and there comes a point where your tinkering needs do not need to be supported in main. You can keep using old versions of the kernel, too.
Linux remains open source, extendable, and someone would most likely maintain these ripped out modules. Just not at the expense of the singular maintainer of the subsystem inside the kernel.
> there comes a point where your tinkering needs do not need to be supported in main.
Linux's master branch is actually called master. Not that it really matters either way (hopefully most people have realised by now that it was never really 'non-inclusive' to normal people), but it pays to be accurate.
This seems like a dumb thing to be pedantic about, actually
What is meant by inclusive here? I'm having trouble following this comment other than the clarification to the name.
The context is that many people (and especially public repositories from US companies) started changing the name to ‘main’ out of a misguided inclusivity push that comes from a niche political theory that says even if words aren’t slurs (which obviously should be avoided), if a word has a meaning that could possibly be connected to anything that could possibly be in any way sensitive to anybody, then we need to protect people from seeing the word.
In this case, out of dozens of ways the word is used (others being like ‘masters degree’ or ones that pretty closely match Git’s usage like ‘master recording’ in music), one possible one is ‘slavemaster’, so some started to assert we have to protect people from possibly making the association.
I think the amount of whining proved the inclusivity people right. FWIW, in Git the main branch doesn't dominate the other branches, and it is not an expert, so it is not a master. It is main in the same way the main street in a city is main.
I wish there were strong enough words to tell you about how few shits I give about the name of a branch.
> Not that it really matters either way
But you still decided to make it your personal battle, commenting to remind people that the evil inclusivity people did a no-no by forcing you to rename your branches from master to main, yes.
>pays to be accurate.
No, it doesn't. Source: you knew exactly what I was talking about when I said main. You also would have known what I was talking about had I said trunk, master, develop or latest.