adrian_b 6 hours ago

Looking at the sparse documentation of openrsync does not give me any confidence that it can be an acceptable substitute for rsync.

In my opinion, any program that is supposed to copy files, but which is not able to make perfect copies, i.e. copies that do not lose any bit of data or metadata that was present in the original file, is just unusable garbage.

Unfortunately, most copying programs available in UNIX-like operating systems (and also many archiving programs) do not make perfect file copies with their default options, and many of them are never able to make perfect copies, regardless of what options are used.

I have not looked recently at the scp command of ssh, but at least until a few years ago it was not possible to make perfect file copies with scp, especially when the copies were done between different operating systems and file systems. That is why I never use scp, but only rsync over ssh.

Rsync is the only program I have seen that is able (with the right options) to make perfect file copies even between different operating systems and file systems (for instance between FreeBSD with UFS and Linux with XFS), preserving also metadata like extended file attributes, access control lists and high-precision file timestamps (some copying programs and archiving programs truncate high-precision timestamps).

The current documentation of openrsync does not make any guarantee that it can make complete file copies, so by default I assume that it cannot, so for now it is a program that I consider useless.

Besides rsync for copying, one of the few Linux archiving programs that can archive perfect file copies is bsdtar (when using the pax file format; the ancient tar and cpio file formats cannot store all modern file metadata).

(FYI: I always alias rsync to '/usr/bin/rsync --archive --xattrs --acls --hard-links --progress --rsh="ssh -p XXX -l YYYYYYY"')

(With the right CLI options, "cp" from coreutils can make perfect file copies, but only if it has been compiled with appropriate options; some Linux distributions compile coreutils with wrong options, e.g. without extended file attributes support, in which case "cp" makes only partial file copies, without giving any warnings or errors.)
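
For illustration, here is the kind of quick check I mean (a sketch, assuming GNU coreutils plus the attr tools and an xattr-capable file system; the file and attribute names are made up):

    # create a file with a nanosecond timestamp and an extended attribute
    touch -d '2020-01-02 03:04:05.123456789' original.txt
    setfattr -n user.comment -v 'important note' original.txt

    # plain cp drops the xattr and sets a fresh mtime; cp --archive should keep both
    cp original.txt plain-copy.txt
    cp --archive original.txt full-copy.txt

    # compare what survived
    getfattr -d plain-copy.txt full-copy.txt
    stat -c '%n %y' original.txt plain-copy.txt full-copy.txt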

  • inglor 5 hours ago

    As a contrast to your take - I work for a backup company, and I was really surprised to discover that most of our customers (big enterprises) really do not care about 99% of the metadata being restored correctly and are fine with just restoring the data.

    (We restore everything super carefully but sometimes I feel like we're the only ones who care)

    • nolok 5 hours ago

      I'm willing to bet a decent number "don't care" until they do care because their permissions don't work or their time based script screws up or whatever else nobody thinks about when they're in panic mode about "I lost my data".

      • ExoticPearTree 23 minutes ago

        In case of a complete disaster recovery, the fact that a script or two might fail is super OK. That's why after recovery there's always the cleanup phase where you fix stuff that broke during recovery.

    • mmcnl 5 hours ago

      They don't care because you care, so they never experienced the misfortune of not caring.

    • mohas 4 hours ago

      I'm with you on this. I think that data is 99% of what is important and the rest can be recreated or improvised; if your system relies too much on file metadata, you need more engineering.

      • ForHackernews 3 hours ago

        If that information drives operational processes then you can argue it is data, not metadata.

        • wyclif 2 hours ago

          The output of the command the OP mentions

            $ /usr/bin/rsync --version
          
          ...doesn't return anything referencing openrsync. I'm on Sequoia 15.3.1.

          • luckman212 2 hours ago

            The change was made in 15.4

  • dcow 5 hours ago

    > The current documentation of openrsync does not make any guarantee that it can make complete file copies, so by default I assume that it cannot, so for now it is a program that I consider useless.

    Is it possible this is just a documentation style-tone mismatch? My default assumption would be that openrsync is simply a less restrictively licensed rsync, and I wouldn’t assume it works any differently. Have you verified your strong hypothesis? Or are you just expressing skepticism? It’s hard to tell exactly.

    Edit: I read the openrsync readme. It says it’s compatible with rsync and points the reader to rsync’s docs. Unless extended file attributes, ACLs, and high resolution timestamps are optional at the protocol level, it must support everything modern rsync supports to be considered compatible, right? Or are you suggesting it lies and accepts the full protocol but just e.g. drops ACLs on the floor?

    • wkat4242 3 hours ago

      From the article:

      > The openrsync command line tool is compatible with rsync, but as noted in the documentation openrsync accepts only a subset of rsync’s command line arguments.

  • WhyNotHugo 2 hours ago

    OpenRsync is from the OpenBSD project. This is typically an indicator of good quality and a good focus on security. However, in this case, even the official website indicates:

    > We are still working on it... so please wait.

  • graemep 6 hours ago

    This is a licensing issue for Apple, and only a small proportion of their users will care about this, and those users will just install rsync.

    • adrian_b 6 hours ago

      You are right, but I wrote my comment precisely to make those users aware of this problem.

      I consider this a very serious problem, because most naive users will assume automatically that when they give a file copy command they obtain a perfect duplicate of the original file.

      It is surprising for them to discover that this is frequently not true.

    • sneak 39 minutes ago

      And the rsync that has historically come with macOS was always way out of date, so we end up installing a newer one anyway. This doesn’t change much.

  • karel-3d an hour ago

    The rsync version Apple currently ships is from 2006. It predates the iPhone.

  • scrapheap 6 hours ago

    What do you mean by perfect copies here? Do you mean the file content itself or are you also including the filesystem attributes related to the file in your definition?

    • adrian_b 6 hours ago

      A file consists of data and various metadata, e.g. file name, timestamps, access rights, user-defined file attributes.

      By default, a file copy should include everything that is contained in the original file. Sometimes the destination file system cannot store all the original metadata, but in such cases a file copying utility must give a warning that some file metadata has been lost, e.g. like when copying to a FAT file system or to a tmpfs file system as implemented by older Linux kernels. (Many file copy or archiving utilities fail to warn the user when metadata cannot be preserved.)
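
      As a rough illustration (a sketch; /mnt/usb stands for a hypothetical FAT-formatted mount):

          $ cp --preserve=all notes.txt /mnt/usb/    # a good tool should warn here about anything vfat cannot hold
          $ stat -c '%n mode=%a mtime=%y' notes.txt /mnt/usb/notes.txt    # compare what actually survived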

      Sometimes you may no longer need some of the file metadata, but the user should be the one who chooses to lose that information; it should not be the default behavior, especially when this unexpected behavior is not advertised anywhere in the documentation.

      The origin of the problem is that the old UNIX file systems did not support many kinds of modern file metadata, i.e. they did not have access control lists or extended file attributes and the file timestamps had a very low resolution.

      When the file systems were modernized (XFS was the first Linux file system supporting such features, then slowly the other file systems were modernized too), most UNIX utilities were not updated until many years later, and even then the additional features remained disabled by default.

      Copying between different computers, as rsync does, creates additional problems: even if e.g. both Windows and Linux have extended file attributes, access control lists and high-resolution file timestamps, the APIs used for accessing file metadata differ between operating systems, so a utility like rsync must contain code able to handle all such APIs, otherwise it will not be able to preserve all file metadata.

      • scrapheap 5 hours ago

        But what you're referring to here are the attributes that the file system stores about the file, not the file itself. By default I wouldn't expect a copy of a file to have identical file system attributes, just an identical content for the file. I would expect some of the file system attributes to be copied, but not all of them.

        Take the file owner, for example: if I make a copy of a file, then by default I should be the owner of that copy, as it's my copy of the file and not the original owner's copy.

        An alternative way of looking at it: if I have created a file on my local machine that's owned by root and has the setuid bit set in its file permissions, then there's no way that I should be able to copy that file up to a server with my normal user account and have those attributes still set on the copy.

        • bayindirh 4 hours ago

          As a counterpoint, many daemons or programs (e.g.: sshd, ssh, slurm, munge to name a few) expect their files to have specific users, groups and modes for security and behavioral guarantees, and flat out refuse to run if these requirements are not met.

          When installing these things from archives or moving/distributing relevant files to large fleets, I expect the file contents and all metadata incl. datestamps to be carried the way I want, because all of that data is useful for me and the application which uses the file.

          If the user doing the copying has no right to copy the file exactly, I either expect a loud warning or an error depending on the situation.

          • op00to an hour ago

            Should the SELinux context of a file always be copied from the source when moving or copying it? Or should it typically inherit the context defined by policy for the destination directory structure?

            For example, copying a file from a user's home directory (perhaps user_home_t) into /var/www/html/ usually requires it to get the httpd_sys_content_t context (or similar) to be served by the webserver correctly and securely. Blindly copying the original user_home_t context would likely prevent the webserver from accessing the file.

            Doesn't this suggest that some metadata, specifically the SELinux context, often shouldn't be copied verbatim from the source but rather be determined by the destination and the system's security policy?

            • bayindirh 41 minutes ago

              What if the tool accessing the file is malicious, and can copy the file, but can't change the context of said file? SELinux shall be strict in its behavior even if it's a detriment to user convenience.

              SELinux contexts shall be sticky, and need to be manually (re)set after copying.

              This is the default behavior, BTW. SELinux contexts are not (re)set during copy operations in most cases, from my experience. You need to change/fix the context manually.
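
              A rough sketch of that manual fix-up (assuming a stock targeted policy; the path and file name are just examples):

                  ls -Z /var/www/html/report.html             # show the label the copy ended up with
                  restorecon -v /var/www/html/report.html     # reset it to whatever the policy defines for that path
                  restorecon -Rv /var/www/html                # or relabel the whole tree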

          • prmoustache an hour ago

            That is not what most file copying tools do by default. They usually only do that when you specify it and for good reasons.

            When foo copies a file from user bar and puts it in his home dir, the last thing he wants is for it to be owned by the foo user.

            Your expectations are unrealistic.

            • bayindirh 43 minutes ago

              > That is not what most file copying tools do by default.

              Yes, and that's OK.

              > When foo copies a file from user bar and puts it in his home dir, the last thing he wants is for it to be owned by the foo user.

              It depends.

              > Your expectations are unrealistic.

              No, rsync can do this (try -avSHAX) and tar does this by default, and we're talking about rsync here.
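
              Roughly what those flags ask for (a sketch; the destination is a placeholder, and the tar line assumes GNU tar built with ACL and xattr support):

                  # -a archive (perms, owners, times, symlinks, ...), -v verbose, -S sparse files,
                  # -H hard links, -A ACLs, -X extended attributes
                  rsync -avSHAX src/ user@host:/backup/src/

                  # roughly equivalent preservation when creating a tar archive
                  tar --acls --xattrs -cpf backup.tar src/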

        • LoganDark 5 hours ago

          "File" means an entry in the file system, and so includes the metadata. It is not only the data.

          When you copy a file you will be the owner, because the new copy is your copy. Other attributes, however, like the modification date, will remain the same. It's not as if you wrote the contents of the file anew, especially not for copy-on-write architectures like Apple's APFS.

          • scrapheap 5 hours ago

            So you also would expect some of the file system attributes to be copied, but not all of them. :D

            • LoganDark 3 hours ago

              I expect all of them to be copied except for specifically the owner and group. Created date, modified date, ACLs, extended attributes, eeeverything else.

              My expectations are more specific than "not all of them", so please don't misrepresent them.

              • scrapheap an hour ago

                Out of interest, why wouldn't you expect the created timestamp for a file that you've created by copying another file to be the point in time which the copy was made? After all, before that moment the file didn't exist, and after that moment it did.

                • brulard 7 minutes ago

                  For some contexts you may want the new file creation time, but if I copy a folder of backups, for example, I don't want every file to have its date set to today. I'd lose the possibility to filter files based on creation date, which is very useful for such a use case. I don't remember ever needing a copy to have its creation date reset.

                • LoganDark 25 minutes ago

                  macOS has "date added" for this, which is the date the file was added to its containing folder. It's not the exact same as the date created that you're talking about, though.

                  I honestly don't have a strong preference either way on this. I don't use date created except for misbehaving media downloaders that think the file modified date is a good place to put the video publication date. I'm sure there's a flag somewhere that I don't care enough to find.

      • prmoustache an hour ago

        The cp command does copy the file data but not the metadata. There is a reason we have come up with two words to distinguish them.

        Rsync only copies the metadata when you specifically ask it to, anyway. I haven't had a look at the openrsync man page, but I would assume it is the same in the case of the latter.
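
        A quick way to see that default for yourself (a sketch using GNU stat; the file names are placeholders):

            cp report.pdf copy.pdf            # plain cp: fresh mtime, owned by whoever ran it
            cp -p report.pdf copy-p.pdf       # -p: preserve mode, ownership and timestamps
            rsync -a report.pdf copy-a.pdf    # rsync preserves metadata only when asked, e.g. with -a

            stat -c '%n owner=%U mode=%a mtime=%y' report.pdf copy.pdf copy-p.pdf copy-a.pdf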

    • fhars 3 hours ago

      It means that if you copy a file from NTFS to ext4, ext4 will magically sprout support for alternate data streams.

      • johnisgood 3 hours ago

        And all files from NTFS have +x. :|

thrdbndndn 12 hours ago

As a relatively new Linux user, I often find the "versioning" of bundled system utilities also to be a bit of a mess, for lack of a better word.

A classic example, at least from my experience, is `unzip`. On two of my servers (one running Debian and the other an older Ubuntu), neither of their bundled `unzip` versions can handle AES-256 encrypted ZIP files. But apparently, according to some Stack Overflow posts, some distributions have updated theirs to support it.

So here is what I ran into:

1. I couldn't easily find an "updated" version of `unzip`, even though I assume it exists and is open source.

2. To make things more confusing, they all claim to be "version 6.00", even though they obviously behave differently.

3. Even if I did find the right version, I'm not sure if replacing the system-bundled one is safe or a good idea.

So the end result is that some developer out there (probably volunteering their time) added a great feature to a widely used utility, and yet I still can’t use it. So in a sense, being a core system utility makes `unzip` harder to update than if it were just a third-party tool.

I get that it's probably just as bad if not worse on Windows or macOS when it comes to system utilities. But I honestly expected Linux to handle this kind of thing better.

(Please feel free to correct me if I’ve misunderstood anything or if there’s a better way to approach this.)

  • adwf 11 hours ago

    In the specific case here, 7z is your friend for all zips and compressed files in general; I'm not sure I've ever used unzip on Linux.

    Related to that, the Unix philosophy of simple tools that each do one job and do it well also applies here a bit. A more typical workflow would be one utility to tarball something, then another utility to gzip it, then finally another to encrypt it, leading to file extensions like .tar.gz.pgp, all from piping commands together.
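
    Something like this, as a sketch (the file names and GPG recipient are placeholders):

        # archive, compress, encrypt: three tools glued together by pipes
        tar -cf - project/ | gzip -9 | gpg --encrypt --recipient alice@example.com > project.tar.gz.gpg

        # and back again
        gpg --decrypt project.tar.gz.gpg | gunzip | tar -xf -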

    As for versioning, I'm not entirely sure why your Debian and Ubuntu installs both claim version 6.00, but that's not typical. If this is for a personal machine, I might recommend switching to a rolling-release distro like Arch or Manjaro, which at least give up-to-date packages on a consistent basis, tracking the upstream version. However, this does come with its own set of maintenance issues and an increased expectation of managing it all yourself.

    My usual bugbear complaint about Linux (or rather OSS) versioning is that people are far too reluctant to declare v1.00 of their library. This leads to major useful libraries and programs being embedded in the ecosystem but only reaching something like v0.2 or v0.68 and staying that way for years on end, which can be confusing for people just starting out in the Linux world. They are usually very stable and almost feature complete, but because they aren't finished to perfection according to the original design, people hold off on that final v1 declaration.

    • Squossifrage 4 hours ago

      Info-Zip Unzip 6.00 was released in 2009 and has not been updated since. Most Linux distros (and Apple) just ship that 15-plus-year-old code with their own patches on top to fix bugs and improve compatibility with still-maintained but non-free (or less-free) competing implementations. Unfortunately, while the Info-Zip license is pretty liberal when it comes to redistribution and patching, it makes it hard to fork the project; furthermore, anyone who wanted to do so would face the difficult decision of either dropping or trying to continue to support dozens of legacy platforms. Therefore, nobody has stepped up to take charge and unify the many wildly disparate mini-forks.

    • setopt 8 hours ago

      > Related to that, the Unix philosophy of simple tools that do one job and do it well, also applies here a bit. More typical workflow would be a utility to tarball something, then another utility to gzip it, then finally another to encrypt it. Leading to file extensions like .tar.gz.pgp, all from piping commands together.

      I do this for my own files, but half of the time I zip something, it’s to send it to a Windows user, in which case zip is king.

    • aragilar 3 hours ago

      The issue in this case is that upstream is dead, so there are random patches. The same thing happened to screen for a bit.

    • tecleandor 3 hours ago

      Was there any problem with 7z some years ago? I feel like I've been actively avoiding it because I have the feeling I read something bad about it, but I can't remember what. I could have mixed it up with something else; that sometimes happens to me.

      • oblio 3 hours ago

        Hard to say for sure, did SourceForge put malware in their installers many millennia ago?

    • DonHopkins 11 hours ago

      The "Unix Philosophy" is a bankrupt romanticized after the fact rationalization to make up excuses and justifications for ridiculous ancient vestigial historic baggage like the lack of shared libraries and decent scripting languages, where you had to shell out THREE heavyweight processes -- "[" and "expr" and a sub-shell -- with an inexplicable flurry of punctuation [ "$(expr 1 + 1)" -eq 2 ] just to test if 1 + 1 = 2, even though the processor has single cycle instructions to add two numbers and test for equality.

      • chubot 10 hours ago

        ??? This complaint seems more than 20 years too late

        Arithmetic is built into POSIX shell, and it's universally implemented. The following works in basically every shell, and starts 0 new processes, not 2:

            $ bash -c '[ $((1 + 1)) = 2 ]; echo $?'
            0
            $ zsh -c '[ $((1 + 1)) = 2 ]; echo $?'
            0
            $ busybox ash -c '[ $((1 + 1)) = 2 ]; echo $?'
            0
        
        YSH (part of https://oils.pub/ ) has a more familiar C- or JavaScript-like syntax:

            $ ysh -c 'if (1 + 1 === 2) { echo hi }'
            hi
        
        It also has structured data types like Python or JS:

            $ echo '{"foo": 42}' > test.json
            $ ysh
            ysh-0.28$ json read < test.json
            ysh-0.28$ echo "next = $[_reply.foo + 1]"
            next = 43
        
        and floats, etc.

            $ echo "q = $[_reply.foo / 5]"
            q = 8.4
        
        https://oils.pub/release/latest/doc/ysh-tour.html (It's probably more useful for scripting now, but it's also an interactive shell)
        • DonHopkins 10 hours ago

          20 years doesn't even get you back to the last century, it's more like 48 years since 1977 when Bourne wrote sh. As one of the authors of the Unix Haters Handbook, published relatively recently in 1994, and someone who's used many versions of Unix since the 1980's, of course I'm fully aware that those problems are hell of a lot more than 20 years old, and that's the whole point: we're still suffering from their "vestigial historic baggage", arcane syntax and semantics originally intended to fork processes and pipe text to solve trivial tasks instead of using shared libraries and machine instructions to perform simple math operations, and people are still trying to justify all that claptrap as the "Unix Philosophy".

          Care to explain to me how all the problems of X-Windows have been solved so it's no longer valid to criticize the fallout from its legacy vestigial historic baggage we still suffer from even today? How many decades ago did they first promise the Year of the Linux Desktop?

          The X-Windows Disaster: This is Chapter 7 of the UNIX-HATERS Handbook. The X-Windows Disaster chapter was written by Don Hopkins.

          https://medium.com/@donhopkins/the-x-windows-disaster-128d39...

          Why it took THREE processes and a shitload of context switches and punctuation that we are still stuck with to simply test if 1 + 1 = 2 in classic Unix [TM]:

            [ "$(expr 1 + 1)" -eq 2 ]
          
          Breakdown:

            expr 1 + 1
          
          An external program used to perform arithmetic.

            $(...) (Command substitution)
          
          Runs expr in a subshell to capture its output.

            [ ... ]
          
          In early shells, [ (aka test) was also an external binary.

          It took THREE separate processes because:

          Unix lacked built-in arithmetic.

          The shell couldn't do math.

          Even conditionals ([) were external.

          Everything was glued together with fragile text and subprocesses.

          All of this just to evaluate a single arithmetic expression by ping-ponging in and out of user and kernel space so many times -- despite the CPU being able to do it in a single cycle.

          That’s exactly the kind of historical inefficiency the "Unix Philosophy" retroactively romanticizes.

          • op00to an hour ago

            > The X-Windows Disaster: This is Chapter 7 of the UNIX-HATERS Handbook. The X-Windows Disaster chapter was written by Don Hopkins.

            This gave me a big laugh, I love the UNIX-haters Handbook despite loving UNIXy systems. Thank you for decades of enjoyment and learning, especially in my late-90s impressionable youth.

          • chubot 10 hours ago

            I'm aware it used to be that way, but it's long been fixed

            It's fine to hate Unix, but you should update your examples :)

          • wazoox 6 hours ago

            I love "the Unix Haters Handbook", just as I love "Worse is Better", but this ship has sailed 30 years ago as you mentioned. Your "old man yelling at clouds" rant reminds me of Bjarne Stroustrup's quip, "there are two type of languages, those everyone complains about and those nobody uses". I mean run your nice, coherent, logical LISP machine or Plan9 system of whatever is that you prefer, but let us enjoy our imperfect tools and their philosophy :)

            • DonHopkins 3 hours ago

              The Unix philosophy really comes down to: "I have a hammer, and everything is a nail."

              ESR's claptrap book The Art of Unix Programming turns Unix into philosophy-as-dogma, where flaws are reframed as virtues. His book romanticizes history and ignores inconvenient truths. He's a self-appointed and self-aggrandizing PR spokesperson, not a designer, and definitely not a hacker, and he overstates and over-idealizes the Unix way, as well as his own skills and contributions. Plus he's an insufferable unrepentant racist bigot.

              Don't let historical accident become sacred design. Don’t confuse an ancient workaround with elegant philosophy. We can, and should, do better.

              Philosophies need scrutiny, not reverence.

              Tools should evolve, not stagnate.

              And sometimes, yelling at clouds stirs the winds of change.

              https://en.wikipedia.org/wiki/Unix_philosophy#Criticism

              >In a 1981 article entitled "The truth about Unix: The user interface is horrid" published in Datamation, Don Norman criticized the design philosophy of Unix for its lack of concern for the user interface. Writing from his background in cognitive science and from the perspective of the then-current philosophy of cognitive engineering, he focused on how end-users comprehend and form a personal cognitive model of systems—or, in the case of Unix, fail to understand, with the result that disastrous mistakes (such as losing an hour's worth of work) are all too easy.

              Donald A. Norman: The truth about Unix: The user interface is horrid:

              http://www.ceri.memphis.edu/people/smalley/ESCI7205_misc_fil...

              >In the podcast On the Metal, game developer Jonathan Blow criticised UNIX philosophy as being outdated. He argued that tying together modular tools results in very inefficient programs. He says that UNIX philosophy suffers from similar problems to microservices: without overall supervision, big architectures end up ineffective and inefficient.

              On the Metal: Jonathan Blow:

              https://archive.org/details/on-the-metal-jonathan-blow

              >Well, the Unix philosophy for example it has been inherited by Windows to some degree even though it's a different operating system, right? The Unix philosophy of you have all these small programs that you put together in two like Waves, I think is wrong. It's wrong for today and it was also picked up by Plan Nine as well and so -

              >It's micro services, micro services are an expression of Unix philosophy, so the Unix philosophy, I've got a complicated relationship with Unix philosophy. Jess, I imagine you do too, where it's like, I love it, I love a pipeline, I love it when I want to do something that is ad hoc, that is not designed to be permanent because it allows me- and you were getting inside this earlier about Rust for video games and why maybe it's not a fit in terms of that ability to prototype quickly, Unix philosophy great for ad hoc prototyping.

              >[...] All this Unix stuff, it's the sort of the same thing, except instead of libraries or crates, you just have programs, and then you have like your other program that calls out to the other programs and pipes them around, which is, as far from strongly typed as you can get. It’s like your data coming in a stream on a pipe. Other things about Unix that seemed cool, well, in the last point there is just to say- we've got two levels of redundancy that are doing the same thing. Why? Get rid of that. Do that do the one that works and then if you want a looser version of that, maybe you can have a version of a language that just doesn't type check and use that for your crappy spell. There it is.

              >[...] It went too far. That's levels of redundancy that where one of the levels is not very sound, but adds a great deal of complexity. Maybe we should put those together. Another thing about Unix that like- this is maybe getting more picky but one of the cool philosophical things was like, file descriptors, hey, this thing could be a file on disk or I could be talking over the network, isn't it so totally badass, that those are both the same thing? In a nerd kind of way, like, sure, that's great but actually, when I'm writing software, I need to know whether I'm talking over the network or to a file. I'm going to do very different things in both of those cases. I would actually like them to be different things, because I want to know what things that I could do to one that I'm not allowed to do to another, and so forth.

              >Yes, and I am of such mixed mind. Because it's like, it is a powerful abstraction when it works and when it breaks, it breaks badly.

              • skydhash 12 minutes ago

                No tool is perfect. The unix philosophy is a philosophy, not a dogma. It serves well in some use cases. And in the other use cases, you're perfectly fine to put the whole domain in a single program. The hammer has been there for millennia, but once we invented the screw, we had to invent the screwdriver.

          • exe34 6 hours ago

            [flagged]

            • eesmith 6 hours ago

              Based on the account name, bio, and internal evidence you should assume this is Don Hopkins. His Wikipedia entry at https://en.wikipedia.org/wiki/Don_Hopkins includes:

              > He inspired Richard Stallman, who described him as a "very imaginative fellow", to use the term copyleft. ... He ported the SimCity computer game to several versions of Unix and developed a multi player version of SimCity for X11, did much of the core programming of The Sims, ... He is also known for having written a chapter "The X-Windows Disaster" on X Window System in the book The UNIX-HATERS Handbook.

              I hope this experience helps you realize that jumping immediately to contempt can easily backfire.

              • exe34 2 hours ago

                Nice, I'll put some bandaids on the stump that used to be my foot :-D

      • verandaguy 10 hours ago

            > TWO heavyweight processes
        
        If you're going to emphasize that it's two processes, at least make sure it's actually two processes. `[` is a shell builtin.

            > `eval` being heavy
        
        If you want a more lightweight option, `calc` is available and generally better-suited.

            > inexplicable flurry of punctuation
        
        It's very explicable. It's actually exceptionally well-documented. Shell scripting isn't syntactically easy, which is an artifact of its time plus standardization. The bourne shell dates back to 1979, and POSIX has made backwards-compatibility a priority between editions.

        In this case:

        - `[` and `]` delimit a test expression

        - `"..."` ensure that the result of an expression is always treated as a single-token string rather than splitting a token into multiple based on spaces, which is the default behaviour (and an artifact of sh and bash's basic type system)

        - `$(...)` denotes that the expression between the parens gets run in a subshell

        - `-eq` is used for numerical comparison since POSIX shells default to string comparison using the normal `=` equals sign (which is, again, a limitation of the type system and a practical compromise)
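
        For example, the quoting rule above is the one that bites most often (a throwaway sketch; `x` is just an example variable):

            x='two words'
            [ $x = "two words" ]        # breaks: $x splits into two tokens and test complains about too many arguments
            [ "$x" = "two words" ]      # works: the quotes keep the expansion as a single token
            [ "$(expr 1 + 1)" -eq 2 ]   # -eq compares the captured output numerically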

            > even though the processor has single cycle instructions to add two numbers and test for equality
        
        I don't really understand what this argument is trying to argue for; shell scripting languages are, for practical reasons, usually interpreted, and in the POSIX case, they usually don't have to be fast since they're usually just used to delegate operations off to other code for performance. Their main priority is ease of interop with their domain.

        If I wanted to test if one plus one equals two at a multi-terabit-per-second bandwidth I'd write a C program for it that forces AVX512 use via inline assembly, but at that point I think I'd have lost the plot a bit.

        • DonHopkins 9 hours ago

          I was quite clear that this is HISTORICAL baggage whose syntax and semantics we're still suffering from. I corrected it from TWO to THREE and wrote a step by step description of why it was three processes in the other comment. That's the whole point: it was originally a terrible design, but we're still stuck with the syntactic and semantic consequences even today, in the name of "backwards compatibility".

          > they usually don't have to be fast since they're usually just used to delegate operations off to other code for performance

          Even now you're bending over backwards to make ridiculous rationalizations for the bankrupt "Unix Philosophy". And you're just making my point for me. Does the Unix Philosophy say that the shell should be designed to be slow and inefficient and syntactically byzantine on purpose, or are you just making excuses? Maybe you don't think YOUR shell scripts have to be fast, or easy to write, read, and maintain, or perform simple arithmetic, or not have arsenals of pre-loaded foot guns, but speak for yourself.

          • verandaguy 8 hours ago

            I actually didn't mention the Unix philosophy once in my comment, I just explained why the shell snippet you posted is the way it is. As far as I can tell, nobody in this thread's making long-winded ideological arguments about the Unix philosophy except you.

            I think it's a perfectly reasonable assessment to think of shell scripts as a glue layer between more complex software. It does a few things well, including abstracting away stuff like pipelining software, navigating file systems, dispatching batch jobs, and exposing the same interface to scripts as you'd use to navigate a command line as a human, interactively.

                > Maybe you don't think YOUR shell scripts have to be fast, or easy to write, read, and maintain, or perform simple arithmetic, or not have arsenals of pre-loaded foot guns, but speak for yourself.
            
            This is the opinion of the vast majority of sysadmins, devops people, and other shell-adjacent working professionals I've encountered during my career. None of them, including myself when I'm wearing a sysadmin hat, deny the shortcomings of bash and friends, but none of us have found anything as stable or ubiquitous that fits this domain remotely as well.

            I also reject the idea that faster or more full-featured alternatives lack footguns, pre-loaded or otherwise.

            - C has a relatively limited type system by modern standards, no memory safety, no bounds checking, a slew of non-reentrant stdlib functions, UB, and relies on the user to account for all of that to benefit from its speed.

            - C++ offers some improvements, but, being a near superset of C, it still has the footguns of its predecessor, to say nothing of the STL and the bloat issues caused by it.

            - Rust improves upon C++ by miles, but the borrow checker can bite you in nontrivial ways, the type system can be obtuse under some circumstances, cargo can introduce issues in the form of competing dependency versions, and build times can be very slow. Mutable global state is also, by design, difficult to work with.

            - Python offers ergonomic and speed improvements over POSIX shells in some cases, and a better type system than anything in POSIX shells, but it can't compete with most serious compiled languages for speed. It's also starting to have a serious feature bloat issue.

            Pick your poison. The reality is that all tools will suck if you use them wrong enough, and most tools are designed to serve a specific domain well. Even general-purpose programming languages like the ones I mentioned have specializations -- you can use C to build an MVC website, yes, but there are better tools out there for most real-world applications in that domain. You can write an optimizing compiler in Ruby, but if you do that, you should reevaluate what life choices led you to do that.

            Bash and co. are fine as shell languages. Their syntax is obtuse but it's everywhere, which means it's worth learning, because a bash script that works on one host should, within reason, work on almost any other *nix host (plus or minus things like relying on a specific host's directory structure or some such). I'd argue the biggest hurdle when learning is the difference between pure POSIX shell scripting idioms and bashisms, which are themselves very widely available, but that's a separate topic.

            • pjmlp 7 hours ago

              C was already limited by 1960's standards when compared to PL/I, NEWP and JOVIAL, 1970's standards when compared to Mesa and Modula-2, .....

              It got lucky riding the UNIX adoption wave, an OS that got adopted over the others thanks to having its source available for almost the symbolic price of a tape copy, along with a book commenting its source code. Had it been available as a commercial AT&T product at VMS, MVS, et al. price points, no one would be talking about the UNIX philosophy.

            • johnisgood 6 hours ago

              > - C has a relatively limited type system by modern standards, no memory safety, no bounds checking, a slew of non-reentrant stdlib functions, UB, and relies on the user to account for all of that to benefit from its speed.

              That is a feature, not a bug. Add your own bounds checks if you want them, or use Ada or other languages that add a lot of fluff (Ada has options to disable the addition of bounds checks, FWIW).

              I am fine with Bash too (and I use shellcheck all the time), but I try to aim to be POSIX-compliant by default. Additionally, sometimes I just end up using Perl or Lua (LuaJIT).

              • verandaguy 24 minutes ago

                I never said it wasn't a feature. There was a time, and there are still certain specific domains, where bit bashing the way C lets you is a big benefit to have. But bug or not, I think it's reasonable to call these limitations as far as general-purpose programming goes.

                My argument was that C puts the onus on the user to work within those limitations. Implementing your own bounds checks, doing shared memory management, all that stuff, is extra work that you either have to do yourself or know and trust a library enough to use it, and in either case carry around the weight of having to know that nonstandard stuff.

          • wpm 9 hours ago

            We’re stuck with plenty of non-optimal stuff because of path dependency and historical baggage. So what? Propose something better. Show that the benefits of following the happy path of historical baggage don’t outweigh the outrageously “arcane” and byzantine syntax of…double quotes, brackets, dollar signs, and other symbols that pretty much every other language uses too.

        • DonHopkins 3 hours ago

          >I don't really understand what this argument is trying to argue for; shell scripting languages are, for practical reasons, usually interpreted, and in the POSIX case, they usually don't have to be fast since they're usually just used to delegate operations off to other code for performance. Their main priority is ease of interop with their domain.

          DDT is a hell of a lot older than the Bourne shell, is not interpreted, does have full efficient access to the machine instructions and operating system, and it even features a built-in PDP-10 assembler and disassembler, and lets you use inline assembly in your login file to customize it, like I described here:

          https://news.ycombinator.com/item?id=43609418

          And even the lowly Windows PowerShell is much more recent, and blows Bourne shell out of the water along so many dimensions, by being VASTLY more interoperable, powerful, usable, learnable, maintainable, efficient, and flexible, with a much better syntax, as I described here:

          https://news.ycombinator.com/item?id=43609571

          >When even lowly Windows PowerShell blows your Unix shell out of the water along so many dimensions of power, usability, learnability, maintainability, efficiency, and flexibility, you know for sure that your Unix shell and the philosophy it rode in on totally sucks, and self imposed ignorance and delusional denial is your only defense against realizing how bankrupt the Unix Philosophy really is.

          >It's such a LOW BAR to lose spectacularly to, and then still try to carry the water and make excuses for the bankrupt "Unix Philosophy" cargo cult. Do better.

      • pjmlp 7 hours ago

        The "Unix Philosophy" is some cargo cult among FOSS folks that never used commercial UNIX systems, since Xenix I haven't used any that doesn't have endless options on their man pages.

        • anthk 6 hours ago

          Well, if we went by your "Windows philosophy" (and forget NT being a VMS rehash), we would still be using the crappy W9x designs with DOS crap back and forth.

          Even Risc OS seems to do better, even if it doesn't have memory protection either (I think it doesn't; I didn't try it for more than a few days).

          • pjmlp 5 hours ago

            The thing is, there is no "Windows philosophy" cargo cult, and I don't worship OSes or languages; all have their pluses and minuses, I use any of them when the situation calls for it, and it is a disservice to oneself to identify with technology stacks like football club memberships given at birth.

            • anthk an hour ago

              Neither am I solely a Unix user; I have RISC OS Open (Apache 2.0?) on an RPi to experiment with something beyond Unix/C.

              But Windows is too heavyweight; from 8 onward it has been a disaster. And the NT kernel plus Explorer can be really slim (look at ReactOS, or XP, or a debloated W7).

              The problem is that Apple and MS (and Red Hat) are just selling shiny turds that waste tons of cycles to do trivial tasks.

              Worse, you can't slim down your install so that it behaves like a sane system in 1GB of RAM.

              I can watch 720p@30FPS videos on an N270 netbook with MPV, something even native players for WXP can't do well enough with low-level DirectDraw calls.

              The post-XP Windows philosophy among Red Hat and Apple is: bloat and crap up our OSes with unnecessary services and XML crap (and interpreted languages such as JS and C#) for the desktop until hardware vendors idolize us, so the average user has to buy new hardware to do the same tasks over and over.

              Security? Why the fuck does Gnome 3 need JS in the first place? Where's Vala, which could shine here, so Mutter could get a big boost and memory leaks could be a thing of the past?

              • skydhash 3 minutes ago

                While I’m not bothered by Gnome UI design choices, I was surprised by the choice of JS for the implementation.

          • DonHopkins 4 hours ago

            Even an operating system as brain damaged as Windows still has PowerShell, which lets you easily and efficiently perform all kinds of operations, dynamically link in libraries ("cmdlets") and call them directly, call functions with typed non-string parameters, pipe live OBJECTS between code running in the SAME address space without copying and context switching and serializing and piping and deserializing everything as text.

            PowerShell even has a hosting api that lets you embed it inside other applications -- try doing that with bash. At least you can do that with python!

            When even lowly Windows PowerShell blows your Unix shell out of the water along so many dimensions of power, usability, learnability, maintainability, efficiency, and flexibility, you know for sure that your Unix shell and the philosophy it rode in on totally sucks, and self imposed ignorance and delusional denial is your only defense against realizing how bankrupt the Unix Philosophy really is.

            It's such a LOW BAR to lose spectacularly to, and then still try to carry the water and make excuses for the bankrupt "Unix Philosophy" cargo cult. Do better.

            https://en.wikipedia.org/wiki/PowerShell

            >Pipeline

            >PowerShell implements the concept of a pipeline, which enables piping the output of one cmdlet to another cmdlet as input. As with Unix pipelines, PowerShell pipelines can construct complex commands, using the | operator to connect stages. However, the PowerShell pipeline differs from Unix pipelines in that stages execute within the PowerShell runtime rather than as a set of processes coordinated by the operating system. Additionally, structured .NET objects, rather than byte streams, are passed from one stage to the next. Using objects and executing stages within the PowerShell runtime eliminates the need to serialize data structures, or to extract them by explicitly parsing text output.[47] An object can also encapsulate certain functions that work on the contained data, which become available to the recipient command for use.[48][49] For the last cmdlet in a pipeline, PowerShell automatically pipes its output object to the Out-Default cmdlet, which transforms the objects into a stream of format objects and then renders those to the screen.[50][51]

            >Because all PowerShell objects are .NET objects, they share a .ToString() method, which retrieves the text representation of the data in an object. In addition, PowerShell allows formatting definitions to be specified, so the text representation of objects can be customized by choosing which data elements to display, and in what manner. However, in order to maintain backward compatibility, if an external executable is used in a pipeline, it receives a text stream representing the object, instead of directly integrating with the PowerShell type system.[52][53][54]

            > Hosting

            >One can also use PowerShell embedded in a management application, which uses the PowerShell runtime to implement the management functionality. For this, PowerShell provides a managed hosting API. Via the APIs, the application can instantiate a runspace (one instantiation of the PowerShell runtime), which runs in the application's process and is exposed as a Runspace object.[12] The state of the runspace is encased in a SessionState object. When the runspace is created, the Windows PowerShell runtime initializes the instantiation, including initializing the providers and enumerating the cmdlets, and updates the SessionState object accordingly. The Runspace then must be opened for either synchronous processing or asynchronous processing. After that it can be used to execute commands. [...]

            • anthk an hour ago

              9front is the truest Unix philosophy since Unix v6, and it does it much better: proper devices and network connections as files, plus namespaces, aux/listen and friends. It makes AWK better than Perl, and rc is much simpler, without the bullshit of sh. You only have functions, not aliases, and the syntax is much saner.

              On PowerShell/C#: TCL/Tk might not be as powerful, but it works under Windows XP with IronTCL, unlike MS's own newest C# implementations (>= 4.5). Double irony there. TCL can help you write some useful software, such as a Gopher/Gemini client with embedded TLS support, and the resource usage will still be far lower.

              On embedding, TCL wins here, hands down. It's everywhere.

              And JimTCL can run under a potato.

              • pjmlp an hour ago

                If we forget that the authors moved on to Inferno and Limbo, while redoing all the Plan 9 decisions they had to roll back, like Alef as the main userspace language.

                • anthk an hour ago

                  9front lives today and Inferno can run on top of it perfectly well.

                  Golang is almost a byproduct of Limbo, and a lot of the legacy of 9front's C compilers went into Golang too.

      • a-french-anon 6 hours ago

        I don't see what crusty implementation details have to do with a philosophy. In fact, UNIX itself is a poor implementation of the "UNIX" philosophy, which is why Plan 9 exists.

        The idea of small composable tools doing one thing and doing it well may have been mostly an ideal (and now pretty niche), but I don't think it was purely invented after the fact. Just crippled by the "worse is better".

      • whatnow37373 8 hours ago

        Shell != Unix (philosophy) as I’m sure you are aware. The unix philosophy is having a shell and being able to replace it, not its particular idiosyncrasies at any moment in time.

        This is like bashing Windows for the look of its buttons.

      • eesmith 6 hours ago

        I realized the hype for the Unix Philosophy was overblown around 1993 when I learned Perl and almost immediately stopped using a dozen different command-line tools.

        • whatnow37373 23 minutes ago

          I realized the hype for composing $thing$s was overblown around 1993 when I learned I could just have "A Grand Unified $thing$" and almost immediately stopped using a dozen different $thing$s.

          Then, a decade or two later, I realized the Grand Unified $thing$ was itself composed, but not by me so I had no control over it. Then I thought to myself, how great would it be if we decompose this Grand Unified $thing$ into many reusable $thing$s? That way we can be optimally productive by not being dependent on the idiosyncrasies of Grand Unified $thing$.

          And so it was written and so it was done. We built many a $thing$ and the world was good, excellent even. But then one of the Ancients realized we could increase our productivity dramatically if we would compose our many $thing$s into one Grand Unified $thing$ so we wouldn't have to learn to use all these different $thing$s.

          And so it was written and so it was done. Thus goes the story of the Ancients and their initiation of the most holy of cycles.

        • anthk 6 hours ago

          Ken Thompson and the Unix folks agree with you. The point is... Perl was a solution to the earlier Unix (BSD/GNU) bloat.

          When you have a look at Plan 9 (now 9front) with rc as a shell, awk, and the power of rio/acme scripting and namespaces along with aux/listen... Perl feels bloated, and with the same terse syntax as sh-derived shells.

          • eesmith 5 hours ago

            I've been using Python almost full time since 1998 so, to misquote Dijkstra, I am mentally mutilated beyond regeneration.

    • pxc 9 hours ago

      I came here to make the same recommendation. Just use p7zip for everything; no need to learn a bunch of different compression tools.

      • setopt 8 hours ago

        If you use `atool`, there is no need to use different tools either – it wraps all the different compression tools behind a single interface (`apack`, `aunpack`, `als`) and chooses the right one based on file extensions.
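
        For example (a sketch, assuming atool is installed; the archive names are placeholders):

            als archive.zip               # list contents, whatever the format
            aunpack archive.tar.xz        # extract (multi-file archives go into their own directory)
            apack backup.tar.gz notes/    # create an archive; the format is picked from the extension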

        • pxc 25 minutes ago

          I'll check this out. I actually don't love p7zip's CLI.

  • cogman10 10 hours ago

    Debian and Ubuntu tend to want to lock the versions of system tools to the version of the OS.

    Debian tends to have long release cycles, but is very stable. Everything will work perfectly together on stable (in fact, testing tends to be nearly as stable as other OSes' releases).

    Ubuntu is basically Debian with "but what if we released more frequently?".

    If you want the latest tools, then you'll have to settle for a less stable OS (sort of). Nix and Arch come to mind. Neither are super user friendly.

    If you want stable and the latest tools, Gentoo is the way to go. However, it's even more intimidating than Arch.

    If you want stability and simplicity, then the other way to go is sacrificing disk space. Docker/podman, flatpak, appcontainers, and snap are all contenders in this field.

    Windows and Mac both have the same problem. Windows solved this by basically just shipping old versions of libraries and dynamically linking them in based on what app is running.

    • chrismorgan 6 hours ago

      I find it funny calling Arch “less stable”, because I’m inclined to find it more stable, for my purposes, skills and attitudes.

      I’ve administered at least one each of: Ubuntu server (set up by another; the rest were by me), Ubuntu desktop at least ten years ago, Arch desktop, Arch server.

      The Arch machines get very occasional breakages, generally either very obvious, or signposted well. I did have real trouble once, but that was connected with cutting corners while updating a laptop that had been switched off for two years. (I’ve updated by more than a year at least two other times, with no problems beyond having to update the keyring package manually before doing the rest. The specific corners I cut this one time led to the post-upgrade hooks not running, and I simply forgot to trigger them manually in order to redo the initcpio image, because I was in a hurry. Due to boot process changes, maybe it was zstd stuff, can’t remember, it wouldn’t boot until I fixed it via booting from a USB drive and chrooting into it and running the hooks.)

      Now Ubuntu… within a distro release it’s no trouble, except that you’re more likely to need to add external package sources, which will cause trouble later. I feel like Ubuntu release upgrades have caused a lot more pain than Arch ever did. Partly that may be due to differences in the sorts of packages that are installed on the machines, and partly it may be due to having used third-party repositories and/or PPAs, but there were reasons why those things had to be added, whether because software or OS were too old or too new, and none of them would have been needed under Arch (maybe a few AUR packages, but ones where there would have been no trouble). You could say that I saw more trouble from Ubuntu because I was using it wrong, but… it wouldn’t have been suitable without so “using it wrong”.

    • odo1242 9 hours ago

      Fedora strikes a pretty good tradeoff on the “is user friendly” and “has latest tools regardless of system version” balance, I would say.

      • rurban 8 hours ago

        Exactly. Much more stable and much more up to date than Debian derivatives. But far fewer packages, too.

    • thayne 10 hours ago

      "stable" as used to describe debian (and Ubuntu) means "does not change", which includes adding new functionality.

      • damentz 2 hours ago

        Correct, another way of looking at it is from a programming angle. If Debian fixes a bug that breaks your tool, then Debian is unstable. Therefore, to maintain stability, Debian must not fix bugs unless they threaten security.

        The term "stable" is the most polluted term in Linux, it's not something to be proud of. Similar to how high uptime was a virtue, now it just means your system probably has been pwned at some point.

    • jjayj 8 hours ago

      The other option here is "pick an OS and when necessary install newer packages from source."

      We've been doing this for a long time at my current workplace (for dev containers) and haven't run into any problems.

  • tame3902 8 hours ago

    unzip is a special case: upstream development has basically stopped. The last release was in 2009[0]. (That's version 6.0.) Since then, multiple issues have been discovered, and it lacks some features. So everybody patches the hell out of that release[1]. The end result is that you have very different executables with the same version number.

    [0]: https://infozip.sourceforge.net/UnZip.html

    [1]: here is the build recipe from Arch, where you can see the number of patches that are applied: https://gitlab.archlinux.org/archlinux/packaging/packages/un...

    • blueflow 6 hours ago

      I maintain a huge number of git mirrors of git repositories, and I have some overview of activity there. Many open source projects have stopped activity and/or do not make any new releases, like syslinux, which seems to be in a similar situation to unzip. And some projects, like Quagga, went completely AWOL and don't even have a functional git remote.

      So unzip is not really that special; it's a more general problem of waning interest.

      • tame3902 2 hours ago

        I wasn't trying to imply that unzip is the only one.

        But the way I learned that unzip is unmaintained was pretty horrible. I found an old zip file I had created ages ago on Windows. Extracting it on Arch caused no problem, but on FreeBSD, filenames containing non-ASCII characters were not decoded correctly. Well, they probably use different projects for unzip; this happens. Wrong: they use the same upstream, but each decided to apply different patches to add features, and some of the patches address nasty bugs.

        For something as basic as unzip, my expectation as a user is that when a tool has so many issues, it either gets removed completely or gets forked. The most reliable way I found to unzip a zip archive consists of a few lines of Python.
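
        Roughly like this (a sketch; the file names are placeholders, and the metadata_encoding knob needs Python 3.11+):

          # plain extraction via the stdlib CLI (any Python 3)
          python3 -m zipfile -e stubborn.zip out/
          # if the names come out mangled, pick the codepage explicitly; cp437 is the
          # zip default, swap in whatever the archive was actually created with
          python3 -c 'import sys, zipfile; zipfile.ZipFile(sys.argv[1], metadata_encoding="cp437").extractall(sys.argv[2])' stubborn.zip out/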

        • blueflow an hour ago

          I think you only got unlucky with unzip because you noticed. Distributions heavily patching software is the norm rather than the exception.

          As an example, look how Debian patches the Linux kernel: https://udd.debian.org/patches.cgi?src=linux&version=6.12.21... . And the kernel is a very active project.

          Funnily, this makes recording the version number for an SBOM pretty useless.

      • erinnh 3 hours ago

        Quagga got forked though and is actively being developed.

        FRRouting is the fork.

  • __MatrixMan__ 12 hours ago

    It is a mess. My suggestion is to just rely on the built-in stuff as little as possible.

    Everything I do gets a git repo and a flake.nix, and direnv activates the environment declared in the flake when I cd to that dir. If I write a script that uses grep, I add the script to the repo and I add pkgs.gnugrep to the flake.nix (also part of the repo).

    This way, it's the declared version that gets used, not the system version. Later, when I hop from MacOS to Linux, or vice versa, or to WSL, the flake declares the same version of grep, so the script calls the same version of grep, again avoiding whatever the system has lying around.
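
    The glue for that is small; something like this is all it takes (a rough sketch, assuming flakes are enabled and nix-direnv is installed):

      nix flake init              # drops a starter flake.nix; add pkgs.gnugrep etc. to its devShell
      echo "use flake" > .envrc   # nix-direnv hook: load the flake's environment for this directory
      direnv allow                # opt in once; afterwards, cd into the dir and the declared tools are on PATH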

    It's a flow that I rather like, although many would describe nix as unfriendly to beginners, so I'm reluctant to recommend it outright. The important part is: declare your dependencies somehow and use only declared dependencies.

    Nix is one way to do that, but there's also docker, or you could stick with a particular language ecosystem. python, nodejs, go, rust... they all have ways to bundle and invoke dependencies so you don't have to rely on the system being a certain way and be surprised when it isn't.

    A nice side effect of doing this is that when you update your dependencies to newer versions, that ends up in a commit, so if everything breaks you can just check out the old commit and use that instead. And these repos don't have to be for software projects; they can just be for "all the tools I need when I'm doing XYZ". I have one for a patio I'm building.

  • soraminazuki 12 hours ago

    Distros are independent projects, so that's to be expected IMO. Though some level of interoperability is nice, it's good to have diverse options available.

    That said, most distros have bsdtar in their repositories so you might want to use that instead. The package might be called libarchive depending on the distro. It can extract pretty much any format with a simple `bsdtar xf path/to/file`. AES is also supported for zips.
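
    For instance (the passphrase option is the one in the bsdtar man page; the archive names are placeholders):

      bsdtar xf backup.tar.zst                            # format and compression are auto-detected
      bsdtar xf secrets.zip --passphrase 'correct horse'  # AES-encrypted zip entries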

    macOS includes it by default and Windows too IIRC, in case you're forced to become a paying Microsoft product^Wuser.

  • dazzawazza an hour ago

    One of the many reasons I switched to FreeBSD over 20 years ago. Kernel and user space developed together. No surprises, just consistent productivity.

  • NoboruWataya 4 hours ago

    I use Arch on my personal laptop daily but have Debian installed on a VPS, and this is one aspect of Debian that bugs me (though I totally understand why they do it). I am so used to having the latest version of everything available to me very quickly on Arch, I am quite commonly stung when I try to do something on my VPS only to find that the tools in the Debian repos are a few versions behind and don't yet have the features I have been happily using on Arch. It's particularly frustrating when I have been working on a project on my personal laptop and then try to deploy it on my VPS only to find that all of the dependencies are several versions behind and don't work.

    Again, not a criticism of Debian, just a friction I noticed moving between a "bleeding edge" and more stable distro regularly.

    • everfrustrated 3 hours ago

      If you want the latest version of everything, you are looking for Debian Unstable.

  • wmf 9 hours ago

    If I want to mess around with something without endangering the system I put it in ~/bin. You could compile unzip from source and rename it something like ~/bin/newunzip. If it doesn't work just delete it.
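
    A rough sketch of that, assuming you've fetched the unzip 6.0 source tarball from the Info-ZIP page linked elsewhere in the thread (the "generic" target is the one its bundled docs suggest for an unknown Unix):

      tar xzf unzip60.tar.gz && cd unzip60
      make -f unix/Makefile generic               # builds ./unzip inside the source tree
      mkdir -p ~/bin && cp unzip ~/bin/newunzip
      ~/bin/newunzip -o some-archive.zip          # try it; if it misbehaves, just rm ~/bin/newunzip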

  • aragilar 3 hours ago

    unzip 6.0 is from 2009 (see the manpage or https://infozip.sourceforge.net/UnZip.html). I suspect there are patches floating around (so YMMV as to which patches are applied), or someone has aliased/symlinked some other implementation as "unzip" (like Apple has done here with rsync, though unlike unzip, rsync is maintained).

    Try using atool (which wraps the various options for different archives and should hopefully fix your problem) or the tools provided by https://libzip.org/documentation/.

    Practically, what you're hitting is the problem when upstream is dead, and there is no coordination between different distros to centrally take over maintenance.

  • procaryote 6 hours ago

    Compressing and encrypting as separate operations would bypass this issue.

    A symmetrically encrypted foo.zip.gpg or foo.tgz.gpg would work in a lot more places than a bleeding-edge zip version. Also, you get better-tested and audited encryption code.
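
    Something along these lines (a sketch; the file names are placeholders):

      tar czf - project/ | gpg --symmetric --cipher-algo AES256 -o project.tgz.gpg   # compress, then encrypt
      gpg -d project.tgz.gpg | tar xzf -                                             # decrypt, then unpack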

  • sneak 37 minutes ago

    I feel there is an opportunity for a modern Go or Rust utility that does compression/decompression in a zillion different formats with a subcommand interface: “z gzip -d” or “z zstd -9” or “z zip -d” or “z cpio -d” or similar.

    Maybe I’ll write it.

  • lukan 12 hours ago

    No idea, I feel your confusion. I just use 7z and it has handled my zip needs so far (there are always a million ways to do anything on Linux).

    But I assume you should be able to update unzip without issues. And if no critical service depends on it, just update and see.

    • DonHopkins 11 hours ago

      [flagged]

      • jdwithit 10 hours ago

        Settle down, Beavis. Not everyone is running Linux in a 24/7 production environment. I hear some people even fart around with it at home for fun.

        I've been in pager rotations for most of the last 20 years so I'm sympathetic to that. If some genius symlinked unzip to 7z with no testing in production and caused an incident I'd be real mad. But uh I don't think that's remotely what OP was suggesting here.

        • lukan 6 hours ago

          Indeed. Which is why I said:

          "if no critical service depends on it, just update and see"

          It did not sound like OP was running a hospital infrastructure. And I never did either, nor intend to. I try to have a linux that does what I want on my computer. 7z was helpful to me, so I shared it, that's it.

      • eli 10 hours ago

        Why the hostile comments?

        • donnachangstein 10 hours ago

          This guy has been ranting and raving here for longer than I can remember, or than I've had an account, so I assume he is HN royalty and that's why it's tolerated. That said, it doesn't really bother me if I understand the circumstances.

          • sbuk 3 hours ago

            “This guy” is Don Hopkins who, amongst a long list of achievements in the field of computer science specializing in human-computer interaction and computer graphics, is one of the authors of The UNIX-Haters Handbook - specifically the extremely prescient chapter 7, "The X-Windows Disaster", published when Linux was in its infancy. You don't have to like what he is saying, but he has decades of experience and research behind what he says. Know where your field came from. The longer you can look back, the farther you can look forward - sadly something a vocal minority of the community refuses to do.

            https://www.donhopkins.com/home/resume.html

  • neckro23 9 hours ago

    It is even worse on MacOS, because Apple bundles the BSD versions of common Unix utilities instead of the (generally more featureful) GNU versions. So good luck writing a Bash script that works on both MacOS and Linux...

    • everfrustrated 3 hours ago

      First thing anyone doing dev on MacOS should do is install brew. Second is to use brew to install the coreutils and bash packages to get a Linux-compatible GNU environment.
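
      Roughly like so (the paths are the stock Homebrew layout; `brew info coreutils` prints the exact caveat for your machine):

        brew install coreutils bash
        # coreutils installs the GNU tools with a "g" prefix (gls, gcp, ...); to get the
        # plain names ahead of the BSD ones, prepend the gnubin dir it ships:
        export PATH="$(brew --prefix coreutils)/libexec/gnubin:$PATH"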

    • pjmlp 7 hours ago

      Do like in the good old days of portable UNIX scripts: write a POSIX sh script instead, or use Perl or Python.
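
      A tiny sketch of what that means in practice: stick to POSIX constructs and the same script runs under macOS /bin/sh, dash and the BSDs.

        #!/bin/sh
        # POSIX only: no [[ ]], no arrays, no ${var,,} -- just test, case and for
        for f in *.zip; do
            [ -e "$f" ] || continue    # skip the literal "*.zip" when nothing matches
            bsdtar xf "$f" || exit 1
        done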

    • pathartl 7 hours ago

      Just use PowerShell!

      Half sarcastic with that one

      • papichulo2023 6 hours ago

        I used to think like this, but PS is kinda slow. Nowadays bunjs seems to be the best one imo.

    • petre 9 hours ago

      Just use zsh on MacOS.

      • ElectricalUnion 8 hours ago

        Using zsh will not fix the fact that other, non-shell POSIX utilities will not suddenly have useful GNU extensions.

        Also, zsh is not installed by default on most distros.

        • everfrustrated 3 hours ago

          I would argue that POSIX is long dead. The real standard is Linux (GNU) compatibility and has been for a while now.

          • bentley 2 hours ago

            As an OpenBSD developer who frequently fixes portability issues in external software, this doesn’t match my experience. Upstream developers are typically happy to merge patches to improve POSIX compliance; often the result is simpler than their existing kludges attempting to support desired platforms like MacOS, Alpine/Musl, Android, Dash-as-sh, and various BSDs. It turns out a lot of people find value in relying on an agreed‐upon behavior that’s explicitly documented, rather than “this seems to work at the moment on the two or three distros I’ve tested.”

        • petre 4 hours ago

          MacOS userspace was forked from FreeBSD; that's why its tools lack the GNU extensions. Also, the FreeBSD userspace has since then incorporated GNUisms.

          Why they went with Bash 2 as the default shell is beyond me. I always switched to and used Zsh, which had a more recent version. Now I'm also using it on Linux and FreeBSD, because I want a consistent shell.

          • Squossifrage 4 hours ago

            The macOS userspace was never forked from FreeBSD or any other BSD. If anything, it was forked from NeXTSTEP. In actual practice, it is a collection of individual components taken from a variety of sources. When development of Mac OS X began in 1999, most command-line tools and a large part of libc were derived from either NetBSD or OpenBSD via NeXTSTEP. Over the years, there has been a shift toward FreeBSD. Apple maintains a collection of GitHub repositories of their open source components where you can see the evolution from one release to the next. Most of them have XML metadata indicating the origin of each individual component.

          • wkat4242 3 hours ago

            Apple no longer ships bash 2. They moved to zsh also a few years ago.

            The reason was the same as here: bash moved to GPL v3.

  • mistrial9 11 hours ago

    Forthright point of view, and more power to that... however, in this case the weight falls on one small bit there: the same version number. There is information missing somewhere, somehow.

duskwuff 16 hours ago

On one hand, it's a little annoying that openrsync doesn't support some features that rsync does.

On the other hand, it's great that there are multiple independent implementations of rsync now. It means that it's actually being treated as a protocol, not just a piece of software.

  • varenc 15 hours ago

    I'm excited about this too. It becoming more like a protocol makes me optimistic we'll see binary diff API points based on the rsync algorithm.

    Fun fact: Dropbox internally used rsync binary diffs to quickly upload small changes to large files. I assume they still do. But their public API endpoints don't offer this, so a small change to a large file means the whole file must be uploaded again.
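
    librsync's rdiff tool already exposes that split as three standalone steps, which is roughly the shape such an endpoint would take (from memory, see the rdiff man page; file names are placeholders):

      rdiff signature old-copy.bin old.sig                # receiver: describe the blocks it already has
      rdiff delta old.sig new-copy.bin changes.delta      # sender: diff the new file against that signature
      rdiff patch old-copy.bin changes.delta rebuilt.bin  # receiver: reconstruct the new file locally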

    • zmj 15 hours ago

      I implemented rsync's binary diff/patch in .NET several years ago: https://github.com/zmj/rsync-delta

      It's a decent protocol, but it has shortcomings. I'd expect most future use cases for that kind of thing to reach for a content-defined chunking algorithm tuned towards their common file formats and sizes.

    • andrewflnr 15 hours ago

      > binary diff API points based on the rsync algorithm

      Now that's an idea I never considered. Nice.

      • nine_k 9 hours ago

        Now consider applying it to git. How about clean semantic diffs to your .xlsx files? To your .PNG files?

        • andrewflnr 8 hours ago

          ...that's rather a different question, I think. Rsync doesn't claim to use a semantic diff.

  • drob518 14 hours ago

    librsync, anyone?

    • edoceo 13 hours ago

      LGPL

      • mattl 13 hours ago

        librsync is distributed under the GNU LGPL v2.1

        I can see no reason why Apple wouldn't be fine with that.

      • DrillShopper 13 hours ago

        Maybe Apple should stop leeching off Free Software then

        • p_ing 13 hours ago

          BSD license allows/intends for this. The basic netutils in Windows come from BSD.

  • chungy 15 hours ago

    The website says "We are still working on it... so please wait."

    rsync has a lot of features, surely this will take a good amount of time.

  • candiddevmike 15 hours ago

    How does this mean rsync is a protocol?

    • somat 14 hours ago

      It was always a protocol; however, it is never good when the protocol is defined by its only implementation.

      My understanding is that this is the whole reason for the existence of openrsync. The people doing work on the RPKI standards wanted to use rsync for one type of transfer, and the standards body (IETF?) balked with a concern that the rsync protocol had only one implementation, so the OpenBSD folks, specifically Kristaps Dzonsons, stepped up and wrote a second implementation. It does not do everything rsync does, but it interoperates enough for the RPKI project.

      https://man.openbsd.org/rpki-client

      • josephg 12 hours ago

        > it is never good when the protocol is defined by it's only implementation

        I don't know that I'd go that far. The benefit of having only one implementation of a protocol is that the protocol can evolve much faster. You don't have to have committee meetings to tweak how it works. And as a first pass, the more iterations you make of something, the better the result.

        Rsync is mature enough to benefit from multiple implementations. But I'm glad it had some time to iterate on the protocol first.

        • throw0101d an hour ago

          > The benefit of having only one implementation of a protocol is that the protocol can evolve much faster.

          Or you design the protocol to allow non-standard extensions, like with SSH, so you can have foo@example.com implemented by one product (and others can look for it if useful), and bar@example.org by another product. And if enough folks find the feature(s) useful, they can be standardized with tweaks that fix issues found through operational experience.

          Lots of IETF standards mention an "x-" prefix for private / proprietary extensions.

          • josephg 8 minutes ago

            Sure; but there's a limit to how much you can sensibly do with an extension mechanism. You can't, for example, change a text-based protocol into a binary protocol using an extension mechanism. If you're in control of both client and server, you can change everything.

      • superkuh 11 hours ago

        >however it is never good when the protocol is defined by it's only implementation

        One counter-example to this is desktop GUI environments. You want one single strong reference implementation there, for stability and consistent expectations of what will run. Pretty much everything that runs on the eleventh X protocol will work with X.org's X11 everywhere. Whereas the core Wayland protocol is not feature-complete and the reference implementation, Weston, is weak. So every Wayland compositor implements what should be core Wayland protocol features with its own choice of third-party lib or custom code, like libei vs libinput vs no support at all (Weston) for normal keyboard/mouse features. Software that works on one Wayland compositor won't work on others.

        My point here is that strong single reference implementations prevent fragmentation. And sometimes that's important. This is not one of those cases and I'm glad to see more rsync protocol implementations.

    • bombela 15 hours ago

      Think ssh, http etc

watersb 13 hours ago

Patches to mainline rsync added support for extended attributes, particularly for supporting macOS metadata.

Bombich "Carbon Copy Cloner" is a GUI app that wraps it.

https://support.bombich.com/hc/en-us/articles/20686446501143...

I started following Mike Bombich from his posts on macOS Server sysadmin boards; see

https://web.archive.org/web/20140707182312/http://static.afp...

Nathaniel Gray created a testing tool to verify the fidelity of backups; files with multiple streams, extended attributes and ACLs, all the good stuff... Backup Bouncer:

https://github.com/n8gray/Backup-Bouncer

See also this SwiftUI app that wraps rsync, RsyncX.

https://github.com/rsyncOSX/RsyncOSX

We used to really care about this stuff, back when we were still running software from "Classic" macOS on top of our new UNIX systems.

https://web.archive.org/web/20161022012615/http://blog.plast...

secure 6 hours ago

I looked at openrsync when I was writing my own https://github.com/gokrazy/rsync implementation (in Go!) and it’s good code :)

It’s a shame that openrsync is not 100% compatible with rsync — I noticed that Apple was starting to switch to openrsync because my own tests broke on macOS 15.

jeroenhd 16 hours ago

So, anyone got a good resource on why Apple is so afraid of GPLv3? Surely this shouldn't be a problem as long as they statically compile the executables?

  • ninkendo 15 hours ago

    GPL3 closes what was considered a loophole, where device makers would ship a product derived from GPL’d code, and release the source, but provide no ability for users to actually compile and run that source on the device (this was called “tivo-ization” at the time, because TiVo did it.)

    So for iOS, it’s pretty obvious why they don’t use gplv3… because it would violate the terms.

    For macOS they could certainly get away with shipping gplv3 code, but they do a lot of code sharing between iOS and macOS (and watchOS/tvOS/visionOS/etc) and it doesn’t make much sense to build on a gplv3 foundation for just one of these operating systems and not the others. So it’s simpler to just not use it at all.

    It also means they’re more free to lock down macOS from running your own code on it in the future, without worrying about having to rip out all the gpl3 code when it happens. Better to just not build on it in the first place.

    • mappu 13 hours ago

      > this was called “tivo-ization” at the time, because TiVo did it.

      It's not widely known but what TiVo actually did was something different than this, and both RMS and the SFC believe that both the GPLv2 and GPLv3 allow what TiVo actually did. Some discussion and further links via https://lwn.net/Articles/858905/

      • imcritic 3 hours ago

        I'm just curious: do you have that link bookmarked?

    • duskwuff 15 hours ago

      Current versions of macOS use a signed system volume [1], much like iOS - under a standard system configuration, the user can't replace system executables or other files, even as root. Unlike iOS, the user can disable SSV, but I'm not certain that's sufficient for GPLv3 - and I can't imagine Apple feels comfortable with that ambiguity.

      [1]: https://support.apple.com/guide/security/signed-system-volum...

      • ezfe 14 hours ago

        By the GNU website it would be sufficient. The website says:

        > GPLv3 stops tivoization by requiring the distributor to provide you with whatever information or data is necessary to install modified software on the device

        By my reading of this, there is not a requirement that the operating system is unlocked, but the device. Being able to install an alternate operating system should meet the requirement to "install modified software on the device."

        > This may be as simple as a set of instructions, or it may include special data such as cryptographic keys or information about how to bypass an integrity check in the hardware.

        As you've mentioned with disabling SSV, and as Asahi Linux has shown, Apple Silicon hardware can run 3rd party operating systems without any problems.

        • WD-42 10 hours ago

          The hardware might be open for now but you can imagine Apple would like to keep the possibility of closing it off on the table, thus the allergy to gplv3.

          Edit: "without any problems" is definitely a stretch.

        • rtpg 14 hours ago

          I also imagine that quite simply saying "look you can compile this binary as an alternative and run it on the machine" would fit the requirements, even if it doesn't entirely capture the spirit of anti-tivoisation

          • philistine 13 hours ago

            Still doesn't change the fact that Darwin is the basis for iOS, tvOS, watchOS etc.

            Can't install Asahi Linux on those!

      • chongli 14 hours ago

        Sure, though there's little point in replacing executables such as rsync when you can install your own version (perhaps through a package manager and package repository / database such as Homebrew [1] or MacPorts [2]) and use the PATH environment variable to decide which version of the executable you'd like to use in which context.
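
        Concretely, it's just (the prefix is /opt/homebrew on Apple Silicon, /usr/local on Intel):

          brew install rsync
          export PATH="/opt/homebrew/bin:$PATH"   # put the Homebrew build first for this shell
          command -v rsync                        # now Homebrew's rsync; /usr/bin/rsync stays untouched on the sealed volume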

        [1] https://brew.sh

        [2] https://www.macports.org

        • __float 14 hours ago

          This might be true for the most part as an end user, but from a licensing perspective regarding the original binaries, this is irrelevant.

          You must be able to modify and change the code, not merely append to the PATH:

          > Tivoization: Some companies have created various different kinds of devices that run GPLed software, and then rigged the hardware so that they can change the software that's running, but you cannot.

          from https://www.gnu.org/licenses/quick-guide-gplv3.en.html

          • duskwuff 10 hours ago

            I'd advise looking at the actual language of the GPL, not the FSF's (non-binding) statements about what they intended it to mean. The relevant text is at the end of section 6 of https://www.gnu.org/licenses/gpl-3.0.txt - search for the words "Installation Information". I am not a lawyer, but my reading of the text suggests that:

            1) The so-called anti-Tivoization clauses are scoped to "consumer products". Don't ask me why, but the language is very deliberately constructed to limit these terms to products "which are normally used for personal, family, or household purposes" - if you're building hardware for commercial or industrial use, none of this applies.

            2) These clauses are also scoped to object code which is conveyed "as part of a transaction" in which the user purchases or rents a consumer product which the code is intended for use with. The intent was to limit this to software which was incorporated in the device; however, it accidentally ends up applying to any consumer transaction where the user purchases (e.g.) both a computer and a piece of software which includes GPLv3 code - regardless of who's selling them. So, in practice, this actually applies to any GPLv3 software, regardless of whether it's part of a device's firmware or not.

            3) The end result of these clauses is to require that any software distributed under these conditions (which is to say, any GPLv3 software) be distributed with "Installation Information". It's somewhat ambiguous what precisely this encompasses, but it's quite possible that, if Apple distributed GPLv3 software, some of their internal software signing keys and/or build processes would be considered part of that Installation Information.

          • chongli 13 hours ago

            My claim is entirely from the end user perspective. We should not really care which tool Apple includes for their licensing purposes. If we have a dependency on a particular tool then we have the ability to install and use it ourselves. The signed system volume does not interfere with our ability to do that.

        • kuschku 11 hours ago

          I'm not sure that'd qualify, as many tools shipped with the system would continue to use the preinstalled version, not yours.

    • p0w3n3d 3 hours ago

      > they’re more free to lock down macOS from running your own code on it in the future, without worrying about having to rip out all the gpl3 code when it happens. Better to just not build on it in the first place.

      That's actually quite scary what you wrote there.

      That's even more scary to me, as I am really watchful for such restrictions, which IMO can happen in current OSes any time now...

    • harry8 13 hours ago

      > So for iOS, it’s pretty obvious why they don’t use gplv3… because it would violate the terms.

      Apple using "openrsync" because they want to close the code more than the rsync license lets them.

      • mattl 13 hours ago

        I’m not sure they care about rsync’s code, they probably just don’t want to maintain an old fork of rsync under GPLv2.

    • jitl 15 hours ago

      > It also means they’re more free to lock down macOS from running your own code on it in the future, without worrying about having to rip out all the gpl3 code when it happens. Better to just not build on it in the first place.

      how does locking down macOS have anything to do w/ GPL compliance? Apple is free to do whatever BS with the OS they ship in terms of terminal access, user permission level, etc regardless of GPL of any code on the device. I could ship a GPLv3 system tomorrow that disallows user root access and as long as I make the OS source freely available and redistributable, it's fine.

      • ninkendo 14 hours ago

        If you make a device which uses GPL’d code, and provide all the covered source code you used, but prevent users from putting any modified code on the device, you are in violation of GPLv3, but not GPLv2. That means this sentence:

        > I could ship a GPLv3 system tomorrow that disallows user root access and as long as I make the OS source freely available and redistributable, it's fine.

        Is not true for gpl3. It’s called the “tivo-ization” loophole, and it’s one of the principal reasons the GPL3 was made in the first place. I think you’re just wrong.

        (Note: I’m not claiming Apple would be in violation for shipping e.g. a GPLv3 bash on macOS today, only that they would be in violation for doing that on iOS today, or, if in the future they locked down macOS in the same way as iOS, then for macOS too.)

    • Someone 7 hours ago

      > For macOS they could certainly get away with shipping gplv3 code

      Even limiting that to “in the USA” I would never say certainly for a license for which so little jurisprudence exists.

      Once you add in multiple countries, it doesn’t get clearer.

      And yes, that applies to GPLv2, too, but that ship has sailed. I also don’t see them adding much new GPLv2-licensed software.

      For GPLv3, they also may be concerned about patents. If, to support some MacOS feature, they change a GPLv3 licensed program that uses one of their patents, GPLv3 gives others the rights to use those patents in versions of the tool that run on other platforms.

    • KerrAvon 15 hours ago

      No, this doesn't quite scan, because there's no reason they couldn't ship a current version of `bash` or any number of other GPL3 things. Aurornis is probably closest to the mark: it is legally ambiguous, and Apple probably does not want to be a test case for GPL3 compliance.

      • ninkendo 15 hours ago

        If they shipped a gpl3 version of bash on iOS, they would be in violation. This isn’t really a question: gpl3 requires you to not only provide the source if you use it in a product, but the ability to modify it and run your modified version. Which iOS doesn’t let you do.

        Now, macOS would be fine shipping a gpl3 bash. But not iOS. (Yes, iOS has bash. Or at least it used to; they may be all on zsh now, I’m not sure.)

        So, the question becomes to Apple, do we ship different bash versions for different devices, and treat macOS as being different, and have to worry about only using newer bash features on macOS? Or do we keep the same old version on all platforms, and just eschew the new bash everywhere? It’s a pretty simple decision IMO, especially because users can just use brew on macOS and put their own bash on there if they want.

        Others are pointing out that gpl3 is less tested in court and that lawyers are just more uncertain/afraid of gpl3 than gpl2, especially with respect to patents… but I don’t think these are mutually exclusive. It’s clear that they can’t ship gpl3 on 4 out of their 5 operating systems. macOS is an outlier, and from an engineering standpoint it’s a lot simpler to just keep them all the same than it is to ship different scripts/etc for different platforms. It can be both reasons.

  • Aurornis 15 hours ago

    My perspective on GPL and related licenses changed a lot after working with lawyers on the topic. Some of the things I thought to be completely safe were not as definitive to the lawyers.

    I don’t know Apple’s reasoning, but I know that choosing non-GPL licenses when available was one of the guiding principles given to us by corporate lawyers at another company.

    • cosmic_cheese 15 hours ago

      A lot of it is indeed the legal murkiness.

      On the engineering level, other licenses likely get selected because it’s easy. You don’t need to consult the legal department to know how to comply with licenses like MIT, BSD, etc., so you just pull the thing in, make any required attributions, and continue on with your day. It’s a lot less friction, which is extremely attractive.

      • butchlugrod 10 hours ago

        I work at a large corporation, but one that only has 6% of Apple’s annual revenue. Even the emails we send to end users get a review from the legal team prior to us hitting send.

        Yeah, there are some assumptions which can be made about licenses and their suitability for our purposes, but no serious organization is touching that code until there has been a full audit of those license terms and the origin of every commit to the repository.

      • pjmlp 7 hours ago

        The kind of places I usually work for, you do need to consult with legal regardless of the license.

        And to prevent your scenario, CI/CD systems are usually gapped to internal repos; unless dependencies are validated and uploaded into those repos, the build is going to break.

      • KerrAvon 15 hours ago

        Yes, although even for the more liberal licenses you actually still want legal review at a sufficiently large company to ensure that your engineering read of the license is accurate. What if someone changed the wording slightly in some way that turns out to be legally significant, etc.

        • cosmic_cheese 15 hours ago

          That might apply in a handful of cases, but the vast majority will check out when a quick diff against a reference license file shows that the only changes are party names.

          • KerrAvon 15 hours ago

            I think it's very unlikely to happen, in general. I'm just saying a large corporation will want to check every time because they cannot really afford to do otherwise.

            • arccy 14 hours ago

                You don't have to be a large corporation; there are a bunch of automated tools that help you check your dependencies' licenses and flag anything non-standard.

    • palata 15 hours ago

      > but I know that choosing non-GPL licenses when available was one of the guiding principals

      Sure, but in this case Apple has chosen, for 20 years, to not go with GPLv3 when there was no alternative.

      • sbuk 12 hours ago

        You could also say the same of the Linux kernel too. After all, they have chosen, for 20 years, to not go with GPLv3…

        • palata 5 hours ago

          It's different. You are talking about the Linux kernel changing their licence to GPLv3. We were talking about macOS shipping a GPLv3 program.

        • stephen_g 9 hours ago

          Which is a fair choice, since so much of Linux development and driver development is driven by commercial interests - there would very likely be a fork from the last GPLv2 commit which all the vendors would switch to...

    • giantrobot 15 hours ago

      This was basically the justification I was told when I was at Apple. The GPLv3 is too viral for the liking of Apple's legal department. They do not want to be the test case for the license.

      • quotemstr 15 hours ago

        The funny thing is that the rest of the world has moved on and is no longer afraid of the GPLv3. The reality that people aren't, as Apple's legal people predicted, being legally obliterated hasn't changed Apple legal's stance. Doomsday cults actually get stronger when doomsday fails to arrive.

        • kmeisthax 10 hours ago

          The reason why doomsday never came is that the GPLv3 bomb was never dropped. Linux, Android, and busybox all rejected v3, because it's basically a ban on embedded development[0], and that's all the FOSS most embedded developers care about using.

          Likewise, if you don't do any embedded, you don't need to worry about v3, it's functionally identical to v2 except the compliance story is slightly easier (you don't immediately lose your license if you fuck up a source release).

          There's very few companies that have their fingers in both the embedded and desktop markets; those are the ones that need to worry about GPLv3 doomsday. AFAIK that's only Apple and Microsoft[1], both of which have very hostile attitudes towards v3 as a result.

          [0] To be clear, when you hear "embedded development", think "TiVoization". The business model of embedded development is putting your proprietary software in a box to sell. GPLv3 wants to make it so that if you do that, you can't stop someone from modifying the OS around the software by making the software detect that and break. But that also makes it significantly harder to defend your business model. Remember: the embedded landscape is chock full of very evil DRM schemes, many of which would break trivially if the app had to support running on arbitrarily modified OSes or with arbitrarily modified libraries.

          [1] Microsoft controls the signing keys for UEFI, and while they are willing to sign stuff to let Linux boot, they will not sign GRUB because that's GPLv3 and they worry signing any v3 software will obligate them to release their signing keys.

        • hnfong 8 hours ago

          The rest of the world has moved on and is no longer using GPLv3.

          In the early 2000s all the miscellaneous small projects on sourceforge used GPLv2 (v3 was not out yet).

          These days you'll be hard pressed to find any new projects using GPLv3, except the ones with close ties to the GNU or FSF.

          The GPL is getting more irrelevant and easier to avoid. That's why nobody is afraid of GPLv3 any more.

          • rs186 2 hours ago

            Exactly. I am surprised this isn't talked about more.

            The web stack is such an example. Almost everything you use -- Chrome, webpack, Electron, Babel, React, etc. -- adopted permissive licenses.

            Not quite so for other areas, but I can count on one hand the number of GPLv3 licenses I have seen in new projects.

        • arccy 14 hours ago

          I think the rest of the world is very much moving in Apple's direction: look at what Ubuntu is doing, and any big open source project with more than a single corporate backer (i.e. not just using open source as a marketing channel) isn't using GPL.

          • pama 14 hours ago

            Not sure what you mean about ubuntu… there is tons of GPL there. https://ubuntu.com/legal/open-source-licences?release=jammy

            • anonfordays 12 hours ago

              Replacing GPL coreutils with Rust reimplementations. The conspiracy theorists say that's the reason behind the huge RiiR push. There's effectively zero GPL'ed Rust software.

              • quotemstr 10 hours ago

                It makes me sad to realize that the GPL may have been necessary to bootstrap free software culture, and that we no longer need it now that we've won.

                • johannes1234321 6 hours ago

                  Is there a win?

                  One large side of the industry is turning to managed services. They run free/libre software, but build lock-in at a higher level and avoid giving direct access.

                  On the other market, the desktop, free/libre software won, as with Android and the free/libre parts of MacOS/iOS.

                  However, they don't do that to benefit free/libre software in any way, but to get software cheap or even for free.

                  Given how much this flows in only one direction, there isn't a win.

                • pabs3 6 hours ago

                  We definitely have not won, locked-down consumer device vendors like Apple are the prime example of how we lost.

                • pjmlp 5 hours ago

                  MIT and BSD predate it, and the GPL only had a go at it for two reasons:

                  1 - Sun decided to innovate by splitting UNIX into user and developer SKUs, thus making the until-then-irrelevant GCC interesting to many organisations not willing to pay for the UNIX development SDK.

                  2 - AT&T tried to get control back over UNIX's destiny, and made BSD's future uncertain.

          • applied_heat 14 hours ago

            What is Ubuntu doing?

            • WD-42 10 hours ago

              There was an Ubuntu engineer recently talking about using the Rust coreutils, which are permissively licensed, instead of the old GPL ones. But he made it clear it was more about “Rust is better” than anything to do with the license.

        • giantrobot 13 hours ago

          Most organizations don't have many billions of dollars at stake. I doubt you'll find many Fortune 500 companies with a flippant attitude towards the GPLv3. You don't even see the GPLv3 used much by the "we love Open Source" crowd. Most externally released FOSS is under non-viral Open Source licenses.

          No big company wants to spend millions of dollars defending itself from an NPE with an East Texas mailbox in a frivolous licensing suit. Worst case is a judge deciding the license infects their proprietary code because they're built on the same cluster.

          The rest of the world has hardly moved on. I've heard of multiple companies with the same GPLv3 policy as Apple for largely the same reasons.

    • jillesvangurp 4 hours ago

      I've had similar training back in the day. This was when my employer (Nokia) was making Linux based phones and they needed to educate their engineers on what was and wasn't legally dodgy to stay out of trouble. Gplv2 was OK with permission (with appropriate measures to limit its effect). Particularly with Java, you had to be aware of the so-called classpath exception Sun added to make sure things like dynamic linking of jar files would not get you into trouble. Permissive licenses like Apache 2.0, MIT, and BSD were not considered a problem. GPLv3 was simply a hard no. You'd get no permission to use it, contribute to it, etc.

      Apple, Nokia, and many other large companies employ lawyers that advise them to steer clear of things like GPLv3. The history of that particular license is that it tried to tighten a few things relative to GPLv2, which had unintentionally allowed things like commercial Linux distributions mixing closed and open source. That's why Android exists and is Linux based, for example. That could not have happened without the loopholes in GPLv2. In a way that was a happy accident and definitely not what the authors of that license had in mind when they wrote the GPL.

      It's this intention that is the problem. GPLv3 might fail to live up to its intentions in some respects because of untested (in court), ambiguous clauses, etc. like its predecessor. But the intention is clearly against the notion of mixing proprietary and OSS code. Which, like it or not, is what a lot of big companies do for a living. So, Apple is respecting licenses like this by keeping anything tainted by it at arms length and just not dealing with it.

    • ants_everywhere 12 hours ago

      I'm curious if you remember any of the specifics.

      At a big company I worked for, GPL licenses were strictly forbidden. But I got the vibe that was more about not wanting to wind up in a giant court case because of engineers not being careful in how they combined code.

      I'd be super curious if there are explicit intentional acts that people generally think are okay under GPL but where lawyers feel the risk is too high.

      • squiggleblaz 5 hours ago

        Linking against GPL code on a backend server which is never distributed, neither in source nor binary form. (Because what might happen tomorrow? Maybe now you want to offer enterprise on-prem.)

    • ndiddy 14 hours ago

      > Some of the things I thought to be completely safe were not as definitive to the lawyers.

      Can you elaborate?

  • ndegruchy 15 hours ago

    In all likelihood they just don't want to broach the idea of having to fight (and potentially lose) the GPL3 in court. Given the case history on the GPL2, it seems like more work than it's worth. They can just replace the parts that are "problematic" in their eyes and avoid a whole class of issues.

  • toast0 14 hours ago

    They're respecting the terms of the license.

    Especially when a piece of software changes from GPLv2 to GPLv3, it's asking Apple to stop updating, and they do as asked.

  • pjmlp 7 hours ago

    Not only Apple, everyone.

    I have never worked at any company that allows GPLv3 dependencies, and even GPLv2 ones aren't welcome unless validated by the legal team first.

  • m463 9 hours ago

    "Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software." -- https://www.gnu.org/licenses/gpl-3.0.en.html

  • Arnt 15 hours ago

    Apple doesn't say. IMO you should not trust other people's statements about Apple's reasoning.

  • banqjls 15 hours ago

    The TiVo clause.

    • rbanffy 15 hours ago

      It wouldn’t apply to the kernel. Also, a lot of the command line tools are not distributed as part of the OS.

      • jhasse 15 hours ago

        [flagged]

        • mistrial9 14 hours ago

          From the point of view of the GPL side of the aisle, yes, agreed, they are evil. Shareholders who want returns are on the other side of the aisle, so to speak, and definitely see "risk" and "no" when it comes to anything close to GPL. OK, no problem, except that the code Apple Computer profits mightily from substantially originates with the former.

          Recall the John Gilmore camp handing out "don't tread on me Apple" buttons 35 years ago... it has been going on that long. Apple knows very well what they are doing.

        • Eggpants 14 hours ago

          lol. You must be a hoot at parties.

  • quotemstr 15 hours ago

    Companies develop idiosyncratic cultures and either learn to live with them or die. Apple's learned to live with a legal culture deathly afraid of the GPLv3. Some influential director or someone made a decision 20 years ago and the GPLv3 superstition became self perpetuating, reality be damned. Outside incentives never became strong enough to override it.

    Every company has its stupid superstitions.

  • WD-42 10 hours ago

    Probably because they are working towards a future where they don’t have to worry about releasing source code for anything, while being free to make any modifications they want. They just need time to code around all the FOSS they’ve leeched off of over the last couple of decades, or to wait for BSD-licensed projects like this to pop up and do that work for them.

Symbiote 14 hours ago

> openrsync is written as part of the rpki-client(1) project, an RPKI validator for OpenBSD. openrsync was funded by NetNod, IIS.SE, SUNET and 6connect.

Could anyone suggest why these organizations would want to fund this development?

https://github.com/kristapsdz/openrsync?tab=readme-ov-file#p...

  • jimsmart 13 hours ago

    This comment explains the reason for its existence quite well:

    https://news.ycombinator.com/item?id=43605846

    Companies fund things because they're useful or necessary. My guess is that some of the companies listed might use BSD — and perhaps wanted/needed an implementation of rsync that was not GPL3 licensed.

    And/or they simply have an interest in funding Open Source projects / development.

    • Squossifrage 3 hours ago

      Three out of four aren't even companies. SUNET is the Swedish NREN, NetNod is a non-profit that manages Internet infrastructure services (like DNS and NTP) in Sweden, IIS is the non-profit that manages the Swedish TLDs.

abotsis 12 hours ago

I continue to be happy that Apple keeps enhancing and embracing the POSIX side of macOS rather than gradually stripping it away in some kind of attempt to make it more like iOS.

0x0 4 hours ago

I recently ran into an issue with this: building an iOS .ipa from the command line with xcodebuild apparently ends up shelling out to rsync to copy some files between local directories. Because I had Homebrew rsync earlier in $PATH, it would end up running that one, but xcodebuild passed an openrsync-only command line argument, "--extended-attributes", which Homebrew rsync doesn't understand, so it would exit with a failure.
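
A workaround I'd expect to help, though I haven't verified it (the scheme name is a placeholder): pin the system rsync for just that invocation.

  PATH="/usr/bin:$PATH" xcodebuild -scheme MyApp archive   # Apple's openrsync wins the PATH lookup for this one command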

emmelaich 13 hours ago

For a while (up to and including Sequoia 15.3), both rsync_samba and rsync_openrsync were available, via /var/select/rsync or the env variable CHOSEN_RSYNC.

One particular annoyance of openrsync is that it claims to support the /./ magic path element for --relative but doesn't handle it correctly. I sent a bug report to Apple about this about a month ago.

rsync_samba is gone as of Sequoia 15.4.

I've installed rsync from homebrew.

fmajid 15 hours ago

Just like they replaced bash with zsh. Most Big Tech firms are allergic to GPL3.

  • 7e 15 hours ago

    GPLv3 is a legal landmine. In fact, the GPL itself is wildly unpopular compared to more permissive licenses. The FSF is getting what it deserves here. Open source predates the FSF and will remain long after the FSF is dead.

    • wanderingmind 15 hours ago

      Can you show examples of impactful open software that predates fsf and stallman?

      • donnachangstein 14 hours ago

        BSD predates the Stallman Utilities (kernel sold separately) by about a decade.*

        * in "shared source" form

        • hollerith 14 hours ago

          The BSD releases did not form a complete OS and were not runnable except in combination with source code from ATT Unix, which was emphatically proprietary software. The first release of BSD that was unequivocally legal for anyone to acquire and run without getting ATT's permission was 4.4BSD-Lite in June 1994. (Yes, organizations did create OSes from BSD Networking Release 2 (Net/2) released in June 1991, but legal uncertainty hung around them for years.)

          In contrast, by 1984, Stallman had already formed a close working relationship with a competent lawyer (Eben Moglen) to devise a legal strategy to maximize the probability that everyone will continue to enjoy a list of freedoms (chosen by Stallman) around any software put under the GPL.

          • mustache_kimono 11 hours ago

            > The BSD releases did not form a complete OS and were not runnable except in combination with source code from ATT Unix, which was emphatically proprietary software.

            Is that the measure: a complete OS? When exactly did GNU ship a complete OS?

            IMHO none of the above is relevant to the question of which was first. IMHO neither was first. IBM, among others, was shipping source code with its products, until it didn't. OSS is and was a reaction to an object-code-only model. And there were seeds at Berkeley and MIT.

            And Stallman isn't strictly responsible for the MIT strain. As Keith Packard said in his "A Political History of X", the X11 project chose not to use the GPL license because Stallman was simply too annoying.

            • hollerith 5 hours ago

              >Is that the measure: a complete OS?

              The fact that BSD was incomplete is relevant because it illustrates the fact that the only people who could run BSD were shops that had a source-code license for the proprietary AT&T Unix.

              • mustache_kimono 2 hours ago

                So... >> When exactly did GNU ship a complete OS?

                > the only people who could run BSD were shops that had a source-code license for the proprietary AT&T Unix.

                So -- finally! -- that's the measure of OSS? It must run on non-proprietary systems? Not simply the source code? OSS that runs on Windows or MacOS or VMS is not actually OSS?

                You figure that Linux is the first non-proprietary system in 1991? Not 4.3BSD released in 1989?

                I think you can understand my and others' reluctance to state definitively that Stallman was first, when by a dozen different metrics he wasn't. I'm still trying to understand what he was supposedly first at. First to find a lawyer?

                Linux is important. GNU is important. BSD is important. And they remain important. I don't think any of them are made more important by distinguishing only one and not the others. Like -- as much as it pains me to say it, because of how I loathe Stallman and the FSF -- GCC was more than important to the entire ecosystem for years. Until LLVM, it was required. Etc, etc.

                • dagw 2 hours ago

                  >> When exactly did GNU ship a complete OS?

                  I want to say around 2006 or 2007 was the first time a 'normal' *nix hacker could install and boot[0] a complete GNU OS[1] and get something resembling work done (ie edit and compile C code in vi or emacs). (yes I know the question was rhetorical)

                  [0] without having to do a bunch of bootstrapping steps and other hackery

                  [1] Technically 'shipped' by Debian rather than GNU/FSF

            • anthk 6 hours ago

              GNU+Linux was good enough. Meanwhile, BSD in the early '90s was rotting until the 4.4BSD forks arose.

              • mustache_kimono an hour ago

                > GNU+Linux was good enough. Meanwhilke, BSD in early 90's was rotting until the BSD 4.4 forks arise.

                What exactly are we arguing about?

          • donnachangstein 14 hours ago

            > but legal uncertainty hung around them for years.

            I mean, if we're going to split hairs and play this game, SCO claimed ownership of alleged Unix code in Linux, which wasn't initially resolved until 2008 or so (and further continued for another decade). That never stopped anybody.

            • hollerith 14 hours ago

              Yes, but not having a copy of the source code for ATT Unix stopped everyone from using BSD or any system based on BSD till 1991. Again, before then BSD was very far from being a complete OS.

              So BSD has severe shortcomings as an answer to the question that started this thread, namely, "Can you show examples of impactful open software that predates fsf and stallman?"

        • ndiddy 14 hours ago

          The first time any BSD code was made publicly available was Networking Release 1 (just contained the networking stack) in 1989, or around 5 years after Stallman started the GNU project. It took until Networking Release 2 in 1991 for the code for a runnable BSD operating system to be made publicly available. Prior to that, BSD was based on proprietary UNIX source code, and anyone who wanted to run it had to purchase a source code license from AT&T.

          • donnachangstein 14 hours ago

            > or around 5 years after Stallman started the GNU project.

            So 5 years after he started with an empty repo and some political ramblings?

            GNU did not have a working system until Linus released Linux in 1992. They had pieces and components which were worthless on their own.

            There is a lot of deliberate ignorance of public domain code being posted on BBSes at the time. I'm not discounting anything Richard did but let's not rewrite history here.

            • ndiddy 13 hours ago

              > So 5 years after he started with an empty repo and some political ramblings?

              Or around 4 years after the first public GNU Emacs release, 4 years after the first public GNU Bison release, 3 years after the first public GDB release, and 2 years after the first public GCC release.

              > GNU did not have a working system until Linus released Linux in 1992. They had pieces and components which were worthless on their own.

              Correct, just like how the initial public BSD release was just the networking stack (worthless on its own).

              > There is a lot of deliberate ignorance of public domain code being posted on BBSes at the time.

              Not sure where you got that from. Nobody claims that Stallman was the first one to come up with publicly releasing source code. I will say that a lot of the "public domain" software from back then lacks the uniformity you see from later movements like free software or open source. Some of it isn't even public domain, and has a license like "this is copyright me, any modified copies must have my copyright statement preserved, this software may not be used for commercial purposes".

            • squiggleblaz 4 hours ago

              > GNU did not have a working system until Linus released Linux in 1992. They had pieces and components which were worthless on their own.

              People were installing GNU onto existing Unix systems because the GNU tools were better than the ones those systems were distributed with. Maybe they did that with components of BSD Net/1 - no one has ever told me they did, but it probably happened - but that was definitely post-GNU.

              Anyway, I'm not sure if this matters so much to the debate. Stallman was reacting to a change. He rambled politically and wrote some code to back it up because he used to be able to do things, and now he could only do them if he would write some code and win some allies.

            • pessimizer 13 hours ago

              > GNU did not have a working system until Linus released Linux in 1992. They had pieces and components which were worthless on their own.

              Whether or not GNU had an OS or would ever have an OS has nothing to do with anything, though. What are you trying to illustrate? Those "pieces and components" are some of the most used pieces of software in history.

              • mistrial9 13 hours ago

                agree - portability across *nix was the point, not a complete product

      • emmelaich 13 hours ago

        Sharing of software utilities (typically via tape) used to be very common in every user group from the start (1960s). It was just the culture, and expected. Especially among IBM mainframe users and DEC VMS users.

        Of course the answer to your question depends on the definition of 'open source' and 'impactful'.

    • mcstafford 14 hours ago

      Whose popularity do you champion, and what sorts of motives bring deservedness into the discussion?

    • anthk 6 hours ago

      Thanks to the FSF we have cheap Unix clones with easy installs. Even Android should thank the FSF for its existence.

    • handsclean 6 hours ago

      “Pesticide wildly unpopular with pests.”

    • man4 13 hours ago

      [dead]

larusso 6 hours ago

How is it these days for other developers who actually bank on certain tools being present and working? What I mean is that Apple isn't shipping rsync etc. to help us developers, but because the system needs it. It was already mentioned that this issue also exists for the other CLI tools, because Apple ships the BSD rather than the GNU versions. Which brings me to POSIX, which was introduced back then to tackle exactly this: make sure that a set of tools has defined options, behavior, etc. It seems to me that we have lapsed here, because more and more systems ship very custom setups which are not compatible. Or look at Linux and its binary compatibility issues, and the state of Flatpak vs Snap vs others. I fear it becomes harder and harder to create cross-platform solutions using the system-provided packages. Writing a portable shell script is already a challenge.
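
To give a flavor of that challenge, here is a minimal sketch of the kind of guard a cross-platform script ends up needing (the gsed name is just the usual Homebrew/MacPorts convention for GNU sed, not something macOS ships by default):

    # Prefer GNU sed if it is installed, otherwise fall back to the BSD sed
    # that ships with macOS.
    if command -v gsed >/dev/null 2>&1; then
        SED=gsed
    else
        SED=sed
    fi
    # "-i.bak" is the one in-place form accepted by both GNU and BSD sed.
    "$SED" -i.bak 's/foo/bar/' config.txt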

NelsonMinar 15 hours ago

Does openrsync work?

The problem with Apple's ancient userspace is that so many of the utilities are outdated and don't support things like files bigger than 4GB. So switching to a tool updated in the last 19 years may be an improvement. But then, rsync is such a standard: is openrsync 100% compatible?

The need to install and maintain Homebrew was a big part of why I switched from MacOS to Windows. WSL is a very good Unix environment, being just Ubuntu or Debian.

  • commandersaki 13 hours ago

    > don't support things like files bigger than 4GB

    citation needed

  • procaryote 6 hours ago

    I imagine it will kind of work, with some weird traps, much like the ancient bundled bash, the buggy bundled BSD grep, the weird bundled mktemp, etc.

    Mac OS userland is slowly rotting away because they're terrified of the GPL. It's strange, as installing a modern version of rsync seems like it would be specifically allowed by the GPL without "infecting" any other parts of the OS.

    • pasc1878 5 hours ago

      The important word here is "seems". Where is the case law that backs up your statement? Apple does not want to be part of a legal case re GPL.

  • alphabettsy 15 hours ago

    How is maintaining two operating systems simpler?

    • tymscar 14 hours ago

      To me this post could be filed under "dissonance" in the dictionary.

      Installing a package manager and a package from it is apparently harder than installing an OS and then installing a package from its package manager.

      And let's be honest here: it's not like Homebrew is a set-in-stone necessity.

      I use Nix, there's MacPorts, or you can build the package from source. All with less complexity than running what is, in the end, a whole OS in a VM.

      • hughw 11 hours ago

        I'm mystified at Homebrew's dominance. It seemed to come unglued for me every few months. I switched to MacPorts years ago and my CLI world has been stable and up to date.

        • pasc1878 5 hours ago

          I suspect it's because users and the original author did not understand multi-user UNIX, so they don't like the idea of having to use sudo, and they use Apple tools as much as possible rather than controlling the versions of the libraries they use, which is what commercial Unix users were doing 20+ years earlier. Homebrew also uses /usr/local, which is for locally compiled software, so you get in a mess if you have a locally compiled version of a library that is also in Homebrew.

          MacPorts, Nix, and Fink will build under a separate user ID and install as root, as on any other Unix. Thus the build can be controlled to use only known versions of other libraries.

          Homebrew installs as the current user. Try using it when you do have multiple users on a Mac (which is uncommon).

    • NelsonMinar 13 hours ago

      I don't think at all about the Windows host. It's purely a Linux system when I interact with it. Homebrew sort of gives you that too in its setup by virtue of putting all its stuff in a particular path.

    • emmelaich 12 hours ago

      You don't really do maintenance on either. It's just clicky stuff or apt|dnf update. Do your classic GUI app stuff in macOS and everything CLI-oriented or development-related in Linux.

procaryote 6 hours ago

I already replace the bundled rsync on the Mac with the proper one, as the bundled one is ancient and is missing some features I like. Same for grep, awk, sed, and find.

Mac OS is getting a bit worse every release, clearly trending towards an iOS world where we have to ask Apple for permission to run anything, even in a sandbox.

pjmlp 7 hours ago

As I keep saying, GNU/Linux had a lucky moment, sidestepping the whole issue of AT&T trying to get control of UNIX back; had that not taken place, everyone would be using classical UNIXes, with some BSD code running on them.

You see the same in embedded as well: all the new kids on the block, whether embedded FOSS OSes or bare-metal libraries, are either Apache or MIT licensed.

linsomniac 10 hours ago

Am I the only one that has had some hard to pin down problems with rsync? I'm excited about this because I'd love to have an alternative implementation. In particular, "rsync --compress" over SSH seems to have a rare and hard to track down issue. I've used rsync for decades doing hundreds of nightly system backups, and maybe once a month one of them goes out to lunch (IIRC it just hangs). The rarity of it makes it hard to isolate or come up with a reproduction. Removing the "--compress" resolves it. Anyone else ever come across something like that?
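
(For concreteness, a sketch of the shape of the workaround, with placeholder paths and host: dropping --compress resolves it, and letting ssh compress instead plus adding an I/O timeout is one possible mitigation, though I haven't verified that the timeout part catches the actual hang.)

    # Without rsync's --compress; let ssh compress the stream instead,
    # and give up if no data moves for 10 minutes rather than hanging forever.
    rsync -a --timeout=600 -e "ssh -C" /src/ backup@host:/dst/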

firecall 15 hours ago

I feel compelled to comment just to note the vintage WordPress Theme being used!

Worth a click just to see how we used to live!

INTPenis 5 hours ago

This hardly matters, as any power user will keep their own toolset maintained from brew, I guess.

And relying on open-source CLI tools on a Macintosh to help end users is not a good idea.

What this signals to me most of all is "oh, we can't steal from GNU anymore, so we'll steal from OpenBSD".

Because even if it is a negligible part of the appeal of Macintosh computers, they still make an effort to ship these tools with their OS, and they make a lot of money doing it.

yonran 8 hours ago

I ran into this issue too since I implemented an --rsh wrapper script (based on https://github.com/kubernetes/kubernetes/issues/13776) and the options passed to ssh are different (samba rsync passes in -l user host command, openrsync passes in user@host command).

> openrsync accepts only a subset of rsync’s command line arguments.

I have not upgraded to macOS Sequoia yet so I cannot verify, but from the source (https://github.com/apple-oss-distributions/rsync/blob/rsync-...) it appears that there is a wrapper: they ship both samba rsync and openrsync, and fall back to samba if you use an unsupported option?
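
For reference, the difference means an --rsh wrapper has to accept both calling conventions; a minimal illustrative sketch (not what Apple's wrapper does, and the real script needs whatever extra ssh options you rely on):

    #!/bin/sh
    # Hypothetical --rsh wrapper: accept both invocation styles.
    #   samba rsync calls:  wrapper -l USER HOST COMMAND...
    #   openrsync calls:    wrapper USER@HOST COMMAND...
    if [ "$1" = "-l" ]; then
        user="$2"; host="$3"; shift 3
        exec ssh "$user@$host" "$@"
    else
        exec ssh "$@"
    fi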

  • stephenr 5 hours ago

    Interestingly, I see evidence of that wrapper on my Sonoma machine, but `/usr/libexec/rsync/` only contains the `rsync.samba` binary.

    On my Sequoia machines, there is no `/usr/libexec/rsync`, and the `rsync` binary at /usr/bin seems to just be the regular `openrsync` binary.

SuperSandro2000 4 hours ago

Yeah, more MacOS utilities that accept slightly different arguments.

simongray 5 hours ago

I wonder if this is why Time Machine started taking up all my CPU resources after I upgraded? I had to shut off automated Time Machine backups because it literally makes my M1 MBA unusable for a few minutes every hour.

numbsafari 14 hours ago

I stopped treating Mac OS as a Unix and I started sleeping at night. It’s a great platform for running a Unix in a VM.

  • pjmlp 5 hours ago

    It helps not to mistake UNIX for GNU/Linux, as a first step toward tranquility.

wewewedxfgdf 16 hours ago

Good - MacOS had an old and crappy version of rsync

  • procaryote 6 hours ago

    They should just have updated to a recent version of rsync. Their GPL fears are overblown.

  • dima55 16 hours ago

    [flagged]

    • modeless 15 hours ago

      It's really not. The hardware is so good that I put up with it, but it is a bad OS in so many ways. My dream laptop would be a MacBook with a normal keyboard layout and running a well supported version of Linux.

      • wenc 15 hours ago

        macOS is a BSD.

        If you're used to Linux (I am), it feels like there are lots of quality-of-life differences, but I realized it's because I'm used to Linux.

        The OS itself is fine.

        • modeless 13 hours ago

          The problem is not the kernel. It's the anti-user hostility to open source (GPL3 utilities e.g. modern Bash and rsync etc) and open standards (e.g. OpenGL and Vulkan) that stem from an over-active legal department. And the GUI that's stuck in the past. The top menu hasn't made sense since screens got bigger than the original Macintosh. The Dock has always sucked. Window management is primitive and saddled with interminable animations. And then there are the random unconfigurable things like the stupid camera gesture recognition popup or the inability to use natural scrolling on touchpads without reversing the mouse wheel too. MacOS needs an overhaul.

        • echelon 13 hours ago

          The GUI / UX is horrible. The hardware is great, but I'd prefer to be in Gnome or KDE and Linux proper.

          Finder is annoying as hell. The icons / layouts do not snap to resizing, proper navigation requires arcane keyboard shortcuts, it's difficult to open new instances in the expected way, tabs suck, navigation sucks. Finder is made for non-power users.

          The preinstalled apps are annoying and can't be removed. Media apps constantly nag to be used or logged into, and they open with magic URLs despite your intention.

          Window management and virtual desktops are a pain. Plugins like SizeUp, Amethyst, and BetterTouchTool are awful hacks and feel like it.

          I do not want to "define" terms with the shitty built-in dictionary tool, yet that option eats up context menu space in every tool.

          If the DOJ breaks them up, I hope it's into a hardware company and software company. I'll buy their hardware, but I want far away from their software.

      • mrlonglong 15 hours ago

        They already do. Asahi Linux.

        • goosedragons 15 hours ago

          Still missing a bunch of features like USB-C displays. Isn't ready for newer CPUs yet either.

          • echelon 13 hours ago

            Not to mention that the leads are no longer working on it. Asahi Lina and Hector Martin are gone.

    • dghlsakjg 15 hours ago

      Why?

      This is an included package from a 3rd party that was kept at a previous version for licensing reasons.

      If you want the latest version of rsync, you can just install it.

      Are you upset that MacOs doesn’t include a copy of Libre Office, or every other bit of 3rd party software?

      • yjftsjthsd-h 14 hours ago

        > Are you upset that MacOs doesn’t include a copy of Libre Office, or every other bit of 3rd party software?

        I'd be kind of unhappy if my OS shipped an old version of LO.

    • rbanffy 15 hours ago

      It’s adequate. You can use MacPorts to install a more modern Unix environment.

      Much better than Windows.

    • ndegruchy 15 hours ago

      Eh, I swap between the big three every day and they're all terrible in their own unique manners. macOS certainly has problems, and Apple's adversarial relationship with open source is not helping anything, but I wouldn't call macOS bad, just not suited for everyone's needs.

p0w3n3d 3 hours ago

Strange, I've always thought of GPLv3 as an upgrade (i.e. a better license) over GPLv2.

  • wkat4242 3 hours ago

    It's an upgrade for us, yes. But not for the companies that want to pretend they're doing FOSS. Like Apple these days.

    They had a good run where they really were open, like when they created OpenCL and kept Darwin up to date. However, these days most of their APIs are closed and Darwin is always lagging behind. Not that anyone actually uses that anyway, but still...

system7rocks 9 hours ago

In general, Apple has had such a positive influence on both hardware and software that I welcome their particular approach. It may not be ideal from a pure Linux perspective, but it does open the door to a variety of approaches. And truly, that is the key - there should always be multiple licenses and approaches to the work of open source.

So, thank you, Apple.

But please open source System 7.

egorfine 4 hours ago

Thanks for bringing attention to it. Did `brew install rsync` immediately, problem solved, fuck Apple.

ndegruchy 16 hours ago

Huh, interesting. I hadn't noticed when I upgraded, but I don't use many of the features of `rsync` to begin with. I ended up installing the real `rsync` shortly thereafter.

  • jethro_tell 15 hours ago

    Why?

    • ndegruchy 2 hours ago

      Force of habit. I usually install the set of tools I need from Homebrew because I know that Apple ships either the BSD variants or old versions.

ikmckenz 14 hours ago

And yet I don't see Apple on the list of contributors to the OpenBSD Foundation (https://www.openbsdfoundation.org/contributors.html). Shame.

  • OsrsNeedsf2P 14 hours ago

    The BSD ecosystem benefits plenty from macOS. Apple doesn't need to be a monetary donor; I for one would be grateful if Apple used my tools at all.

    • WD-42 9 hours ago

      Have some self respect lol

wkat4242 3 hours ago

Lol. Apple's war on GPL 3 again. Same reason they replaced bash with zsh.

I'm glad I'm no longer using their stuff.

And yes I would need a complete implementation obviously.

gausswho 13 hours ago

Should I be embarrassed by my bash alias?

alias rsy="rsync -avP"

I do this with many Unix utils whose defaults (IMO) don't make sense.

  • emmelaich 13 hours ago

    I use -i (itemize) or even -ii (itemize everything) rather than -v. Also be aware that -a can conflict with other options -- in classic rsync gotcha fashion.
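
    For example (paths are placeholders; a quick sketch of those flags combined with a dry run):

        # Archive mode, dry run, itemize what would change rather than plain -v chatter.
        rsync -a -n -i /src/ /dst/
        # Giving -i twice (-ii) also lists files that are unchanged.
        rsync -a -n -ii /src/ /dst/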

DeathArrow 6 hours ago

I see most people are discussing BSD vs GPLv3, and some wonder why on earth we need more than one implementation of the rsync protocol.

My view is that having more than one choice is good. It is good for both people and companies that we have BSD and Linux. It is good we have both BSD and GPL.

Sometimes, having too many choices is bad because it leads to fragmentation, creates support and technical issues and leads to analysis paralysis and procrastination. But it's not the case here.

DeathArrow 6 hours ago

It's good to see BSD software thriving.

keepamovin 11 hours ago

Why can’t the developers just release a licensable corporate version and Apple just agree to pay the corporate license fee?

  • DeathArrow 6 hours ago

    Because for most open source packages there are thousands of contributors and all have to agree?

  • saagarjha 10 hours ago

    Why would they want to?

    • keepamovin 9 hours ago

      Why would who want to? I consider each of the participants below. But first let me answer generally:

      Why? Because it's right to. If you create good karma, the world will get better. If you do bad things, the world and your world (ie, your karma) will get worse. Paying for software you use extensively is good karma. Not doing so is bad karma that erodes the world (and your world), because it severs the exchange of value and erodes the justice that arises from that, which then reflects back on you inevitably.

      For the participants in this archetypal case:

      Apple - because it's not right to not pay the developers of software you use a lot, even if it was released under permissive licenses. Apple paying rsync producers for their software is just and right. Apple wants to be a good company, so they want to do this, too. Plus they could get a tailored custom license that works for them, and gives them standard good rsync.

      The rsync developers - so they get the just reward for the value they produce, as is right and absolutely correct. They can choose to allocate that however they want, which is them expressing their good interest. What's good for them, is good for what they produce. Everything gets better. Happy cycle.

      Everybody else - to participate in that just and right exchange of value, which nourishes the good of both the software, the developers, Apple, and everybody else, supporting the karma of the world, rather than participating in an exploitative abuse that erodes it.

      More generally, using software extensively that is permissively licensed is not piracy, but it has the effect of piracy in that the value consumed is severed from the value rewarded to the producers. This is fundamentally exploitative and abusive, and in the limit it leads to poor software quality by eroding productive capacity.

      One caveat is large well-organized ad-hocracies that maintain giant FOSS projects, like the FOSS or FOSS-like Linux distributions. These are sort of hybrid volunteer, corporate volunteer forces that are large enough to make such fossonomics work. But there's plenty of hyperuseful software built by tiny, single-person or single-company teams for whom those economies don't work as they don't have that scale nor fractional-corp-labor.

      To conclude: normalize improving the world and spreading good karma by normalizing paying for the software you use. Even if a given developer team is yet to realize how to bank the value they created for you, as a savvy and responsible software consumer it's your responsibility to seek out and initiate opportunities to pay them, and not to seek out what you can take and exploit. If they make payment available, use it.

      Basically, it's fairly simple. Don't be evil. And respond to and create opportunities to do good!

      • ranger207 8 hours ago

        The rsync project has set a higher price than Apple seems willing to pay: allow users to run whatever they'd like on the hardware they've bought. Apple is free to pay that price and use rsync, but chooses not to

shmerl 14 hours ago

To not like GPLv3 one has to be a DRM proponent. That checks out for Apple.

In practice, though, the authors of GPLv3 see it as a clarification of GPLv2, i.e. the two should have the same practical intent.

  • shagie 10 hours ago

    Linus’s views on GPLv3 would be something to watch https://youtu.be/PaKIZ7gJlRU?si=263GyZd9YaPu4-PC

    • shmerl 9 hours ago

      I agree that it could be a separate license, but that doesn't really contradict the point that GPLv2 was intended to prevent DRM scenarios that in practice violate the basic idea of being able to run your changes. It's a natural thing to want, even if Linus doesn't find it important.

palata 15 hours ago

Am I the only one who finds that "openrsync" sounds as if "rsync" were not open source? I find it a bit confusing, because rsync is GPL.

Just like I would find it weird for a project to be called openlinux or librelinux...

Still it's great to have multiple implementations, of course!

  • ronsor 15 hours ago

    It's called openrsync because it's developed for and by OpenBSD.

    • bentley 12 hours ago

      Fun fact: the "open" in OpenBSD doesn't refer to open-source licensing, but to an open development process, including the ability to anonymously check out the CVS repository without an account, which was a novelty in the 90s.

    • palata 15 hours ago

      I truly respect OpenBSD, but I hope they won't end up writing openopenssl :-)

      (I will admit: I had to check and openssh is actually "the OpenBSD Secure Shell" project, so I guess it makes sense :-) ).

      • yjftsjthsd-h 14 hours ago

        They called their OpenSSL fork LibreSSL - I assume because of exactly that naming conflict - but most of their "exports" follow the convention: OpenBSD, OpenSMTPD, OpenNTPD, OpenSSH. Possibly others that I don't know off the top of my head.

      • ronsor 15 hours ago

        Sorry, OpenBSD already wrote libressl and libtls

        • palata 6 hours ago

          Well "libressl" doesn't sound like an open-source rewrite of the proprietary "openssl" :-).

          I don't really get the point about libtls, though.

          But I get it, OpenBSD has been using Open* as a prefix for many projects, I didn't know it :-).

  • jitl 15 hours ago

    "free" != open

    open != "free"

    • thayne 12 hours ago

      The venn diagram of licenses that are "free software" and "open source software" is practically a circle.

      Rsync and openrsync are both free software and open source software.

    • palata 15 hours ago

      Are you trying to imply that GPL is not open source?

      • jitl 15 hours ago

        one is software anarchism, the other is software communism

        • palata 6 hours ago

          Can you give me one open-source license that is not "free software"?

tonetheman 15 hours ago

It has differences in command-line arguments and behavior. We discovered this last week.

brunorsini 7 hours ago

After decades of using rsync for my local backups, I recently switched to ChronoSync Express. It's simple to use, with a sensible GUI and well-laid-out customization options.

And btw, it's included on Setapp subscriptions.