ziofill 3 days ago

https://www.zdnet.com/article/20-years-later-real-time-linux...

This article explains well why PREEMPT_RT is a big deal and why it was so much work to get it into the kernel.

  • 7e 3 days ago

    That article claims that without NO_HZ Linux wouldn’t be running in datacenters. Um, no. Many hyperscalers still haven’t enabled it! Stopped reading there.

    • jpgvm 2 days ago

      Yeah, that is hyperbole, but NO_HZ_FULL and friends are extremely useful for latency-sensitive workloads, e.g. busy-loop polling workloads that need to rip things out of DMA buffers as fast as possible.

      However, I would say the more useful features are the less extreme versions in cpusets and the various tools for dealing with IRQ pinning. These are in extremely wide use at this point and make a world of difference even if you aren't trying to squeeze every last cycle out of your cores.
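      For the mechanics: pinning an IRQ means writing a CPU bitmask to /proc/irq/<N>/smp_affinity. A minimal sketch of building that mask, assuming a hypothetical 4-core box where cores 2-3 are reserved for the polling workload (the core numbers and function name are mine, not from the thread):

```python
def irq_affinity_mask(cpus):
    """Hex bitmask in the format /proc/irq/<N>/smp_affinity expects:
    bit i is set if the IRQ may be delivered to CPU i."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

# Keep device IRQs on housekeeping cores 0-1, leaving
# cores 2-3 free for the latency-sensitive busy-loop.
print(irq_affinity_mask({0, 1}))  # -> 3
```

      (Writing the mask back, e.g. echo 3 > /proc/irq/42/smp_affinity, needs root, and irqbalance may need to be told to leave those IRQs alone.)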

  • amelius 3 days ago

    Are there any good books or other resources yet on realtime programming on Linux?

simonask 2 days ago

As a musician, I'm glad to see audio production mentioned as an important use case.

Even a 10 ms delay is noticeable when playing an instrument, and a lot of processing is required to produce a sound: receive the input over a hardware interface, send it into userspace, determine what sound to play (usually a very complicated calculation with advanced plugins, potentially touching many megabytes of raw sample data), apply chains of effects, mix it all together, send megabytes of uncompressed audio back to the kernel, and push it out through an audio interface.

The more predictable the kernel is, the more advanced the audio processing can be, and the better the music that comes out. Every single microsecond counts.

Modern software instruments can emulate acoustic instruments with a high degree of precision and realism, and a huge range of expressive freedom, but that takes a lot of processing power in real time.

  • guenthert 2 days ago

    I dunno. "The main aim of the PREEMPT_RT patch is to minimize the amount of kernel code that is non-preemptible" is nice and all, but there are still no guarantees, which (in some interpretations of "real time") is exactly what an RTOS is about. For some applications the effort might suffice, but others will insist on those guarantees. For music recording, I (perhaps naively) would expect a decent audio card with its own processor and (RT) firmware to yield better results.

    • jpc0 2 days ago

      > For music recording, I (perhaps naively) would expect a decent audio card with its own processor and (RT) firmware to yield better results.

      And that is indeed commonly used in high-end professional audio settings...

      I'm not speaking against the above comment. Audio is hard, but purely because of the deadlines that come with extremely low buffer sizes. It's pretty common to run at buffer sizes of 256 samples or lower. However, on a modern processor that's very low latency: at 48,000 samples per second, a 256-sample buffer gives you a latency of roughly 5 ms, give or take. But that also gives you only about 5 ms to do all the processing for that buffer, which can be significant, as mentioned above.
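      The arithmetic is just buffer size divided by sample rate; a quick sketch of the deadline math (numbers from this thread, function name mine):

```python
def buffer_deadline_ms(frames, sample_rate_hz):
    """How much time one audio buffer represents -- which is also
    the deadline for finishing all processing on that buffer."""
    return frames / sample_rate_hz * 1000.0

print(buffer_deadline_ms(256, 48000))  # -> ~5.33 ms
print(buffer_deadline_ms(64, 48000))   # -> ~1.33 ms
```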

      Honestly though, in my experience FPGAs are the way to go for super-low-latency audio; most modern implementations measure their processing stages in microseconds.

      The problem is that you have to do all the processing on the FPGA, since any round trip to the CPU would take eons by comparison.

      Avid HDX is pricey, but it's the industry standard for studio and post-production work. There are other options, but because most people don't need them they aren't popular. Most people are fine with restricting processing during recording, which is when latency matters; when producing and mixing, latency can be several hundred ms and you don't care...

      Source: me, professional audio "engineer" with over a decade in the industry who has recorded multiple commercially successful albums, ad spots, generic corporate videos, radio spots and entire radio shows etc...

    • cwillu 2 days ago

      I can easily manage several instruments at 1.3 ms latency while still using my machine for gaming and such on this patchset, without more than the occasional xrun, whereas without it the same machine can barely manage 10 ms latency even if I don't have _anything_ running but the audio applications.

      Something doesn't have to be perfect to enable a usage.

hamandcheese 3 days ago

The Pi 5 support is surprising, or rather that it's only landing now.

  • haukem 3 days ago

    The Raspberry Pi Foundation does not do a good job of upstream support. It is not very bad, but it's not good either. They should start adding support for their new chips to the upstream projects before the chips reach the market, like Intel does, for example. They could add support for new IP cores without revealing when and how they will be used; then, when the product comes out, they only need to add small patches linking the code together, like device tree files.

    It would also be good if they released all the closed-source firmware files needed for their devices under a redistributable license in the linux-firmware repository. For some time it was not permitted to redistribute the binary Wi-Fi firmware needed for Raspberry Pi devices, and Linux distributions need that permission to package the binaries.

    I hope they do this only because of a lack of resources and not intentionally to lock people into their own Linux distribution.

    • rbanffy a day ago

      > They should start adding support for their new chips in the upstream projects before they get into the market, like Intel does it for example

      This is even savvy marketing - it always creates some buzz around features of future chips.

    • jakjak123 3 days ago

      It's not awful, but it's not good either. The Pi 4 has been a looooong slog, but they also use components that can't be released under the GPL, so there is not much they could upstream for those components.

  • seba_dos1 2 days ago

    Virtually nobody uses mainline Linux on Raspberry Pis, so I'm not sure why it would be so surprising.

  • master_crab 3 days ago

    If I went and checked commits, I'm sure I'd find a lot of business-critical IoT/controller-type processes that now rely on Raspberry Pi, and therefore engineers backing support right out of the gate.

    • hamandcheese 3 days ago

      But it's not out of the gate, it's been almost a year.

      • master_crab 3 days ago

        All relative I guess. “Out the gate” for fairly conservative industrial controllers can probably be measured in months to years.

        You never catch them using anything near latest in an application.

xattt 3 days ago

Still waiting on SR-IOV for Xe graphics to make it…

Modules are available, but it’s hit-and-miss with updates.

  • beeflet 3 days ago

    Isn't Intel SR-IOV getting mainlined with the Xe driver in 6.12, or am I mistaken?

    • xattt 3 days ago

      It’s a fairly significant feature, but it’s not mentioned in the 6.12 pull notes. Various sites claim it’s “definitely, for sure, this time” mainlined in the next release, but I haven’t seen anything definitive.

      I’m just an end user, so I don’t know if it’s been mentioned in a newsgroup somewhere.

  • vamega 3 days ago

    Yeah, went through a whole journey getting this to work on a NixOS guest with a Proxmox host recently.

    You reminded me to write up the process, and publish a flake for NixOS users who want the kernel.

eblanshey 3 days ago

So Linux now officially supports RTOS capabilities, without out-of-tree patches, which is pretty cool. I wonder, realistically, how many applications that were originally designed around microcontrollers for real-time purposes could be migrated to Linux, which would vastly simplify development and lower its cost. And the ability to use high-level languages like Python significantly lowers the barrier to entry. Obviously certain applications require the speed of an MCU without an operating system, but how many projects really don't need dedicated MCUs?
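As a taste of the high-level-language angle: Linux exposes its real-time scheduling classes directly through the Python standard library. A hedged sketch (the function name is mine; succeeding requires root or CAP_SYS_NICE, and only a PREEMPT_RT kernel gives you the latency bounds to go with it):

```python
import os

def try_set_fifo(priority=50):
    """Request the SCHED_FIFO real-time class for this process.
    Returns True on success, False if we lack privileges."""
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except PermissionError:
        return False

if try_set_fifo():
    print("running under SCHED_FIFO")
else:
    print("no RT privileges; still under the default scheduler")
```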

  • elcritch 3 days ago

    Unfortunately, migrating real-time stuff to Linux _doesn't_ necessarily reduce costs or simplify real-time development. I've been doing embedded development for 5+ years at a few companies, and embedded Linux is still a slog. I prefer a good MCU running Nim or another modern language. Heck, there's even MicroPython nowadays.

    Especially for anything that needs to "just run" for multiple years. Linux means you must deal with a distro, or something like Yocto or Buildroot, both of which have major pain points.

    • eblanshey 3 days ago

      I would think the portability of, say, a Python application running on Linux is a nice benefit. Try switching from one MCU to a totally different one and you may have to start from scratch (e.g. try going from Microchip to STM). Can you describe why embedded Linux is still a slog? And what do you think it would take for the issues to be addressed?

      • makapuf 2 days ago

        I thought we were talking about real-time applications, which I'm not sure Python is suited for (even with GC tuning). But if we're talking about the difficulty of changing MCU families (remember, STM32 covers >1000 different chips), changing OS tooling is also difficult; even moving from Yocto to Buildroot can be a lot of pain on Linux.

      • wongarsu 3 days ago

        Doesn't Micropython already get you 95% of the way towards just running the same Python code on multiple MCUs?

        • eblanshey 3 days ago

          I'm not sure, I've never used it. But I think the issue is that the number of MCUs that support MicroPython is very small.

  • rcxdude 2 days ago

    I think there's still a wide range of devices for which a bare-metal or lightweight-RTOS approach is more cost-effective: anything simple enough that it doesn't need networking, a filesystem, or a display, for example. Especially considering that bare-metal embedded work seems to pay less than development on Linux. But yes, embedded Linux can address a huge part of the market, and RT expands that a lot (though, of course, most people for whom it's a good option are already using it; it was a well-supported patchset for a long time).

7e 3 days ago

Linux is boomer tech. It keeps up with the latest hardware, that’s about it. Even real time is decades old. I would rather hear about the latest crypto Ponzi scheme than the latest Linux release. At least that would be novel.

  • beeflet 3 days ago

    It's what a kernel's supposed to do. Also, cryptocurrency scams are not that novel in my experience; they're usually pretty derivative and low-effort.

resource_waste 3 days ago

Ubuntu/Debian/Mint family will get it, one day...

A reminder that the Debian family is outdated Linux that uses 'stable' as a marketing term, which has zero relation to the number of bugs.

Modern Linux like Fedora already has those bugs fixed.

I say this as a warning. I thought we were stuck with Windows 11 and 'Linux' (meaning the Debian family). No, it turns out that is outdated Linux, and modern Linux is amazing.

I'm a bit reluctant to lump Fedora in with 'Linux', because it's not fair to Fedora.

  • rcxdude 2 days ago

    Stability is more about not introducing new bugs than reducing the number of bugs in total. i.e. if a system is working well enough for you now, because you can work around or are unaffected by the bugs that currently exist, and your requirements don't change, it will continue to work for you in the future, because they are much less likely to disrupt your setup with new bugs or even just changes that require your intervention. Debian is great for that 'throw some infrastructure on a box and leave it chugging away for years' kind of situation, and conversely terrible for the 'actively developing a new product' situation.

  • yjftsjthsd-h 2 days ago

    > Reminded that Debian-family is outdated linux that uses the marketing 'stable' which has 0 relations to the number of bugs.

    Correct: It is stable, which is to say that things that work today will work tomorrow and the system does its best to stay out of your way. It doesn't, say, expect you to jump major versions every 6 months like other distros you might name.

    > I thought we were trapped with Windows 11 and 'Linux'(meaning Debian-family).

    I mean, yeah, the family of Linux distros alone is huge; conflating "Linux" and "Debian-family" is a significant mistake.

    > No, turns out that is outdated linux and Modern linux is amazing.

    And less-bleeding-edge Linux is also amazing. Unless you need the new stuff, moving at a slightly more sedate pace is perfectly fine. Which way the tradeoffs go depends on you and your usecases; some people want all the new shiny features right right now and benefit from riding the bleeding edge (like Debian Sid, Arch, or yes Fedora), and some people want to set things up and then not have to think about that machine for another few years (funny enough, I'd actually prefer that Debian supported each release for an even longer time, but that takes resources the project doesn't have). Bashing either approach is myopic.

  • exe34 2 days ago

    I don't think I've ever had a Debian install crash. When they say stable, they mean stable.

  • kcb 2 days ago

    Fedora and Ubuntu release at the same frequency...

  • jakjak123 3 days ago

    Yeah, I have been saying this for years now. Debian- and Ubuntu-type distros do custom patches and cherry-pick changes into 3-year-old software just to keep it chugging along. Sounds like insanity to me. Just upgrade so you are not 3-4 years behind; by keeping the amount of custom patching lower, you have a simpler time fixing bugs.

    • nineteen999 2 days ago

      This is fine if you are say, only running a web site that sells internet widgets and the only downside to your site going down is a few stroppy stockholders.

      If you are running a mission critical service, oh say for example wide-area emergency services dispatch, the downside is people might die and you will look like a dickhead for insisting on living on the bleeding edge.